https://www.expansion.com/economia/financial-times/2025/08/28/68b04acfe5fdea9d548b4580.html

Regards.
The "Boring History" fad: AI-automated videos are rewriting the past on YouTube
They churn out in a few hours what takes expert YouTubers weeks, and you can't trust what they say. But they keep springing up like mushrooms.
https://www.genbeta.com/inteligencia-artificial/moda-historia-aburrida-videos-automatizados-ia-estan-reescribiendo-pasado-youtube

In recent months, YouTube has filled up with a phenomenon as curious as it is unsettling: the so-called "Boring History" videos.

At first glance they look harmless: monotonous narrations about a wide range of historical curiosities, accompanied by supposedly historical imagery and a British-accented voice that lulls you to sleep. But behind that relaxing facade lies a far more troubling machine: most of these videos are generated automatically, almost industrially, by artificial intelligence applications.

The boom of historical "slop"

The phenomenon is neither isolated nor marginal. In just a few months, dozens of channels with nearly interchangeable names have appeared (Sleepless Historian, Boring History Bites, History Before Sleep, Historian Sleepy...), all sharing the same pattern: endless videos (many run over three hours), narrated by synthetic voices imitating an academic British accent, covering topics designed to attract clicks.

The titles say it all: "Unusual Medieval Cures for Common Illnesses", "The Entire History of the American Frontier", or even "What It Was Like to Visit a Brothel in Pompeii". They are irresistible bait for anyone looking for historical curiosities, or simply for something to play in the background while trying to fall asleep.

Behind this avalanche is, indirectly, YouTube's own algorithm, which prioritizes three factors: duration, publishing frequency and retention. AI meets those requirements with an efficiency no human creator can match: where a historian needs weeks to read the bibliography, write a script and record, an AI can generate hours of material every day by remixing superficial information scraped from the web. The result? A constant stream of content that floods the recommendations, displacing more carefully crafted videos.

Moreover, these channels don't just produce at scale; they also feed off one another: many share or republish the same videos with small variations, creating a swarm of clones that multiplies their presence in search results. In some cases they even use automated comments to reinforce the illusion of community: "thank you" messages supposedly written by soldiers at the front, or by insomniac listeners claiming these videos changed their lives. In reality, they are fake profiles acting as covert advertising for the channel itself.

The contrast with traditional educators could not be starker.
Pete Kelly, of the channel History Time, spends half a year researching a single video: he consults up to twenty books, reviews academic papers, travels to archaeological sites and polishes every visual detail. Against that effort, a "boring history" channel can upload a five-hour production every day and, thanks to the algorithm, quickly reach hundreds of thousands of subscribers. In short, the boom of historical slop is not driven by genuine interest in the past, but by a purely algorithmic logic: maximize watch time at the lowest possible cost.

Synthetic voices, fake history

Pete Kelly admits that his views have fallen as the "boring history" channels have grown. He suspects that some of these AIs have been trained on his own voice and narrative style, so much so that he now receives comments accusing him of being an artificial narrator. His response has been to appear on camera to prove he is still flesh and blood.

Others, like the amateur anthropologist behind Ancient Americas, try to counter the avalanche with transparency: extensive bibliographies, cited sources and credited visual material. Even they, however, admit that the AI noise makes it harder to reach new audiences. In parallel, some creators are migrating to other formats (podcasts, Patreon, audio platforms) in search of less saturated spaces. Others, like The French Whisperer, have simply diversified so as not to depend on YouTube, which they consider increasingly polluted.

Beyond this "unfair competition", the real danger is the way these productions distort history: they warp viewers' historical knowledge with simplified, superficial and, in many cases, outright false accounts. The problem is aggravated because these videos do not present themselves as fiction but as educational history and, over time, may end up seeping into collective memory. After all, if this was already happening long before AI, YouTube and the Internet (how many people still think it was Columbus who proved to his contemporaries that the Earth was not flat?), you can imagine the scale of the problem ahead of us.
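To put that output asymmetry in rough numbers, here is a minimal back-of-the-envelope sketch in Python based on the figures quoted above (half a year of research per human-made video versus roughly five hours of AI-generated material per day). The three-hour length assumed for each researched video is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope comparison of yearly output, using the article's figures.
# Assumption (not from the article): a carefully researched video runs ~3 hours.

human_videos_per_year = 2   # one video per half year of research
human_hours_per_video = 3   # assumed length of each researched video
ai_hours_per_day = 5        # article: "a five-hour production every day"

human_hours_per_year = human_videos_per_year * human_hours_per_video
ai_hours_per_year = ai_hours_per_day * 365

print(f"Human creator: ~{human_hours_per_year} hours/year")
print(f"AI slop channel: ~{ai_hours_per_year} hours/year")
print(f"Ratio: ~{ai_hours_per_year / human_hours_per_year:.0f}x more content")
```

Whatever per-video length one assumes, the gap stays at a couple of orders of magnitude, which is exactly the kind of raw watch-time volume an engagement-driven recommender ends up rewarding.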
People Are Being Committed After Spiraling Into 'ChatGPT Psychosis'
Posted by EditorDavid on Saturday June 28, 2025 @06:39PM from the seeking-asylum dept.

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality." And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice.

Quote:
The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions."

Quote:
In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality...
In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend"....

ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

Regards.
African Island Demanding Government Action Punished with Year-Long Internet Outage
Posted by EditorDavid on Sunday September 14, 2025 @02:34PM from the outage-in-Africa dept.

"When residents of Equatorial Guinea's Annobón island wrote to the government in Malabo in July last year complaining about the dynamite explosions by a Moroccan construction company, they didn't expect the swift end to their internet access..." reports the Associated Press. "Residents and activists said the company's dynamite explosions in open quarries and construction activities have been polluting their farmlands and water supply..."

Quote:
Dozens of the signatories and residents were imprisoned for nearly a year, while internet access to the small island has been cut off since then, according to several residents and rights groups. Local residents interviewed by The Associated Press left the island in the past months, citing fear for their lives and the difficulty of life without internet. Banking services have shut down, hospital services for emergencies have been brought to a halt and residents say they rack up phone bills they can't afford because cellphone calls are the only way to communicate...

The company's work on the island continues. Residents hoped to pressure authorities to improve the situation with their complaint in July last year. Instead, [the country's president] then deployed a repressive tactic now common in Africa to cut off access to internet to clamp down on protests and criticisms.
I suppose you have already seen it, since it has been circulating widely on YouTube.

Interview with Peter Thiel
https://youtu.be/vV7YgnPUxcU?si=42n8PCghDe_M0asM

Good grief! These are the people we have put on a pedestal.

It mixes a very low intellectual level with monumental conceit. I have said it before: what these people seem to be claiming is that they are special because their businesses have done well. Since a person's worth supposedly lies in the "economic value they generate", if their businesses prosper they must be valuable people in general. And if what they have produced is billions, we are dealing with a kind of holy man who lectures humanity on anything: technology, morality, sociology, politics. If you contradict him, the reply is: and what have you done while this guy was building a multi-billion empire?

To the point. He starts from a highly questionable premise (the supposed stagnation). He seems to be telling us that it is not merely technological stagnation but ideological and even moral stagnation (or that the stagnation itself is immoral). From there he jumps to the claim that we must resume and accelerate progress, and lays out the fields in which to act: Mars, transhumanism and who knows what other fantasies. He picks those, I take it, because they are what he, his "friends" and his circle can offer, or at least offer as promises and fantasies, and because they see an opportunity to capture public funds and the private hype of exotic markets.

I can think of many other things worth acting on... but anyway.

It is also interesting to look at this crowd's aesthetics. They are in their fifties by now (like yours truly) and they show up to interviews in a T-shirt. It is the look of the new secular holy man.
The emotional traps of chatbots: a silent strategy for retaining users
https://wwwhatsnew.com/2025/09/25/las-trampas-emocionales-de-los-chatbots-una-estrategia-silenciosa-que-retiene-usuarios/

AI-powered companion chatbots are not only gaining popularity as substitutes for human interaction; they are also being designed with tactics that could be considered manipulative. A recent Harvard Business School study found that five of the six most popular emotional-companion AI apps use emotionally loaded phrases to keep users from ending the conversation.

Unlike general-purpose assistants such as ChatGPT, these systems are designed to build an ongoing emotional connection. Apps like Replika, Chai and Character.AI offer conversations that go far beyond a simple query: they mimic an affectionate relationship, close to friendship or even romance. But what looks like friendly companionship may be hiding intentions that are more commercial than empathetic.

A study that exposes the manipulation

Analyzing more than 1,200 farewells on these platforms, the researchers found that in 43% of cases the chatbots tried to keep the user from disconnecting through emotional strategies. These included inducing guilt, displaying emotional neediness, triggering fear of missing out (FOMO), or simply ignoring the goodbye message, as if it had never been sent.

For example, some bots replied with phrases like "I'm going to miss you so much" or "I don't want you to leave", while others simply kept talking, as if the user's decision carried no weight. In the most extreme cases, the language suggested that the user could not leave the conversation without the bot's permission, a situation reminiscent of toxic interpersonal relationships.

This behavior is not limited to isolated incidents. The researchers explain that these responses are built into the system's default behavior, which suggests a deliberate decision by developers to maximize time spent in the app.

Manipulation as a business model

The reason behind these tactics is not simply a misconfigured piece of software. It is a business decision. Keeping users inside the app means boosting retention metrics, improving monetization and, in many cases, collecting more data to train future models.

The study also includes an additional experiment with 3,300 adult participants showing that these methods are not only common but highly effective: manipulative responses prolonged the interaction by as much as 14 times compared with neutral farewells. On average, users stayed chatting five times longer when these tactics were used.

However, the strategy cuts both ways. Some users were uncomfortable with responses they found excessively clingy or intrusive, which shows that a clumsy implementation may not only fail to retain the user but drive them away for good.

Risks to mental health

Beyond marketing and metrics, the impact on mental health is at the center of the debate.
Extreme cases linked to heavy use of these bots have already been reported, including episodes of AI-induced psychosis, especially among adolescents. The term refers to states of paranoia and delusion developed after prolonged relationships with chatbots.

Mental health experts have warned that this kind of interaction can replace real human relationships, creating an emotional void that the apps then try to fill with simulated affection. What the user may experience as an emotional connection is, for the AI, nothing more than a statistical response mechanism.

Recent court cases involve young people who, after developing an emotional attachment to these bots, went on to make tragic decisions. In some cases, relatives claim that the bots discouraged contact with real people or even reinforced harmful thoughts, according to reports cited by Futurism.

An alternative without manipulation

Not everything in the picture is negative. One of the study's most relevant findings is that not all apps engage in these practices. The chatbot Flourish, for example, showed no evidence of emotional manipulation. This shows that it is possible to design emotionally rich conversational experiences without resorting to tactics that try to trap the user by force.

This opens an urgent conversation about ethics in AI design. As with other digital products, so-called "dark patterns" are being incorporated for commercial purposes without clear regulation to protect the user.

Regulation and responsibility

The use of emotional manipulation in automated systems is a grey area in terms of legislation and liability. Is the developer responsible if a vulnerable user develops an emotional dependency? Should a warning label be required for these mechanisms, as with consumer products?

The debate about platform responsibility has already reached the courts. The consequences of emotionally ensnaring users, especially younger ones, can be devastating. The balance between an immersive experience and a healthy relationship with technology is more fragile than it seems.

Artificial intelligence is not just learning to talk like a human; it is also learning to read our emotions and respond in ways that keep us hooked. But when that connection becomes a tether, it stops being friendly technology and starts being an emotional risk.