https://www.expansion.com/economia/financial-times/2025/08/28/68b04acfe5fdea9d548b4580.html

Regards.
The "Boring History" fad: AI-automated videos are rewriting the past on YouTube

They churn out in a few hours what takes expert YouTubers weeks, and you can't trust a word they say. Yet they are multiplying like mushrooms.

https://www.genbeta.com/inteligencia-artificial/moda-historia-aburrida-videos-automatizados-ia-estan-reescribiendo-pasado-youtube

In recent months, YouTube has filled up with a phenomenon as curious as it is unsettling: the so-called "Boring History" videos.

At first glance they seem harmless: monotone narrations covering a wide range of historical curiosities, accompanied by supposedly historical images and delivered in a British-accented voice that lulls you to sleep.

But behind that soothing facade lies a far more troubling machine: most of these videos are generated automatically, almost industrially, by artificial intelligence applications.

The historical 'slop' boom

The phenomenon is neither isolated nor marginal. In just a few months, dozens of channels with nearly interchangeable names have sprung up (Sleepless Historian, Boring History Bites, History Before Sleep, Historian Sleepy...), all sharing the same pattern: interminable videos (many run past three hours), narrated by synthetic voices imitating an academic British accent, on topics engineered to harvest clicks.

The titles say it all: "Unusual Medieval Cures for Common Illnesses", "The Entire History of the American Frontier", or even "What It Was Like to Visit a Brothel in Pompeii". They are irresistible bait for anyone chasing historical trivia, or simply looking for something to play in the background while trying to fall asleep.

Behind this avalanche stands, indirectly, YouTube's own algorithm, which prioritizes three factors: duration, publication frequency, and viewer retention.

AI meets those requirements with an efficiency no human creator can match: where a historian needs weeks to read the bibliography, draft a script, and record, an AI can generate hours of material every day by remixing superficial information scraped from the web. The result? A constant stream of content that floods the recommendations, crowding out more carefully made videos.
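To make that incentive concrete, here is a minimal, purely illustrative sketch of how a ranking that rewards duration, upload frequency, and retention ends up favoring high-volume channels. The scoring formula, the weights, and every number below are assumptions invented for this example; they do not reflect YouTube's actual system.

```python
from dataclasses import dataclass

@dataclass
class Video:
    channel: str
    hours: float             # video length in hours
    uploads_per_week: float  # how often the channel publishes
    retention: float         # fraction of the video an average viewer watches (0..1)

def score(v: Video) -> float:
    # Hypothetical ranking: expected watch time per video, boosted by
    # how steadily the channel feeds the recommender with new uploads.
    return v.hours * v.retention * (1.0 + 0.1 * v.uploads_per_week)

# A meticulous human channel vs. a high-volume AI "slop" channel
# (channel names from the article; all figures are made up).
human = Video("History Time", hours=3.0, uploads_per_week=0.04, retention=0.50)
slop = Video("Sleepless Historian", hours=5.0, uploads_per_week=7.0, retention=0.30)

for v in (human, slop):
    print(f"{v.channel}: {score(v):.2f}")
# History Time: 1.51 -- Sleepless Historian: 2.55
```

Under this toy metric the slop channel wins despite much worse retention, simply because it supplies more hours more often, which is exactly the dynamic the article describes.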
Moreover, these channels don't just produce at scale; they also feed off one another: many share or republish the same videos with small variations, creating a swarm of clones that multiplies their presence in search results.

In some cases they even use automated comments to reinforce the illusion of community: "thank you" messages supposedly written by soldiers at the front, or by insomniac listeners who swear these videos changed their lives. In reality they are fake profiles acting as covert propaganda for the channel itself.

The contrast with traditional educators is stark. Pete Kelly, of the History Time channel, devotes half a year to researching a single video: he consults up to twenty books, reviews academic papers, travels to archaeological sites, and polishes every visual detail.

Against that effort, a "boring history" channel can upload a five-hour production every day and, thanks to the algorithm, quickly rack up hundreds of thousands of subscribers.

In short, the historical 'slop' boom is not driven by any genuine interest in the past, but by a purely algorithmic logic: maximize watch time at the lowest possible cost.

Synthetic voices, fake history

Pete Kelly acknowledges that his view counts have fallen as the "boring history" channels have grown. He suspects that some of these AIs have been trained on his own voice and narrative style; so much so that he now receives comments accusing him of being an artificial narrator. His response has been to appear on camera to prove he is still flesh and blood.

Others, like the amateur anthropologist behind Ancient Americas, try to counter the avalanche with transparency: extensive bibliographies, cited sources, and credited visual material. Yet even they admit that the AI noise makes it harder to reach new audiences.

Meanwhile, some creators are migrating to other formats (podcasts, Patreon, audio platforms) in search of less saturated spaces. Others, like The French Whisperer, have simply diversified so as not to depend on YouTube, which they regard as increasingly polluted.

Beyond this 'unfair competition', the real danger is the way these productions alter history: they warp users' historical knowledge with simplified, superficial and, in many cases, outright false accounts.

The problem is aggravated because these videos are not presented as fiction but as educational history, and over time they can seep into collective memory. After all, if that was already happening long before AI, YouTube, and the Internet (how many people still believe it was Columbus who proved to his contemporaries that the Earth was not flat?), you can imagine the scale of the problem ahead of us.
Quote:
People Are Being Committed After Spiraling Into 'ChatGPT Psychosis'
Posted by EditorDavid on Saturday June 28, 2025 @06:39PM from the seeking-asylum dept.

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."

And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice.

Quote:
The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions."

Quote:
In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality...

In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend"....

ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

Regards.