Author Topic: AGI (Read 28950 times)


Cadavre Exquis

Re:AGI
« Reply #150 on: May 01, 2023, 22:48:04 »
Quote
ChatGPT Will See You Now: Doctors Using AI To Answer Patient Questions
Posted by msmash on Monday May 01, 2023 @10:40AM from the closer-look dept.

Pilot program aims to see if AI will cut time that medical staff spend replying to online inquiries. From a report:

Quote
ChatGPT Will See You Now: Doctors Using AI to Answer Patient Questions
Pilot program aims to see if AI will cut time that medical staff spend replying to online inquiries

By Nidhi Subbaraman | April 28, 2023


Behind every physician’s medical advice is a wealth of knowledge, but soon, patients across the country might get advice from a different source: artificial intelligence.

In California and Wisconsin, OpenAI’s “GPT” generative artificial intelligence is reading patient messages and drafting responses from their doctors. The operation is part of a pilot program in which three health systems test if the AI will cut the time that medical staff spend replying to patients’ online inquiries.

UC San Diego Health and UW Health began testing the tool in April. Stanford Health Care aims to join the rollout early next week. Altogether, about two dozen healthcare staff are piloting this tool.

Marlene Millen, a primary care physician at UC San Diego Health who is helping lead the AI test, has been testing GPT in her inbox for about a week. Early AI-generated responses needed heavy editing, she said, and her team has been working to improve the replies. They are also adding a kind of bedside manner: If a patient mentioned returning from a trip, the draft could include a line that asked if their travels went well. “It gives the human touch that we would,” Dr. Millen said.

There is preliminary data that suggests AI could add value. ChatGPT scored better than real doctors at responding to patient queries posted online, according to a study published Friday in the journal JAMA Internal Medicine, in which a panel of doctors did blind evaluations of posts.


As many industries test ChatGPT as a business tool, hospital administrators and doctors are hopeful that the AI-assist will ease burnout among their staff, a problem that skyrocketed during the pandemic. The crush of messages and health-records management is a contributor, among administrative tasks, according to the American Medical Association.

Epic, the company based in Verona, Wis., that built the “MyChart” tool through which patients can message their healthcare providers, saw logins more than double from 106 million in the first quarter of 2020 to 260 million in the first quarter of 2023. Epic’s software enables hospitals to store patient records electronically.   

Earlier this month, Epic and Microsoft announced that health systems would have access to OpenAI’s GPT through Epic’s software and Microsoft’s Azure cloud service. Microsoft has invested in OpenAI and is building artificial intelligence tools into its products. Hospitals are piloting GPT-3, a version of the large language model that is powering ChatGPT.

ChatGPT has mystified computer scientists with its skill in responding to medical queries—though it is known to make things up—even passing the U.S. Medical Licensing Exam. OpenAI’s language models haven’t been specifically trained on medical data sets, according to Eric Boyd, Microsoft’s corporate vice president of AI Platform, though medical studies and medical information were included in the vast data set that taught it to spot patterns.

“Doctors working with ChatGPT may be the best messenger,” said John Ayers, a computational epidemiologist at the University of California, San Diego, and an author of the JAMA study.

The AI pilot has some healthcare staff buzzing, said Dr. Millen. “Doctors are so burnt out that they are looking for any kind of hope.” That hospital system saw patient messages jump from 50,000 messages a month before the pandemic, to over 80,000 a month after, with more than 140,000 messages in some pandemic months, Dr. Millen said.

Doctors and their teams are struggling to accommodate the extra load, she said. “I don’t get time on my schedule. My staff is really busy, too.” 

Now when Dr. Millen clicks on a message from a patient, the AI instantly displays a draft reply. In doing so, the AI consults information in the patient’s message as well as an abbreviated version of their electronic medical history, said Seth Hain, senior vice president of research and development at Epic. Medical data is protected in compliance with federal laws requiring patient privacy, he added.
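
For readers who want a concrete picture of the pipeline just described, here is a minimal sketch in Python. It illustrates the general pattern (the patient's message plus an abbreviated chart, turned into a prompt for a reviewable draft); it is not Epic's actual integration, whose internals are not public. Every name in it is hypothetical, and a real system would need strict privacy controls.

Code:
# Minimal sketch of a draft-reply pipeline like the one described above.
# All names here are hypothetical; Epic's real integration is not public,
# and a production system would need strict PHI/privacy safeguards.

def abbreviate_history(record: dict) -> str:
    """Reduce a chart to the few fields a draft might reference."""
    meds = ", ".join(record.get("medications", [])) or "none on file"
    return (f"Active medications: {meds}. "
            f"Last visit: {record.get('last_visit', 'unknown')}.")

def build_prompt(patient_message: str, record: dict) -> str:
    return (
        "You are drafting a reply for a physician to review. "
        "Do not give medical advice; flag such questions for the doctor.\n"
        f"Patient context: {abbreviate_history(record)}\n"
        f"Patient message: {patient_message}\n"
        "Draft reply:"
    )

def generate_draft(patient_message: str, record: dict, llm) -> str:
    # `llm` is any callable mapping a prompt string to a completion.
    # The returned draft is shown to the clinician to edit, send, or discard.
    return llm(build_prompt(patient_message, record))

if __name__ == "__main__":
    record = {"medications": ["metformin"], "last_visit": "2023-03-14"}
    print(build_prompt("Back from my trip, need a refill.", record))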


There is an option to start with the draft—editing or sending the message as-is, if it is correct—or to start with a blank reply. The AI's drafts reference the patient's medical record, for example mentioning their existing medication or the last time they were seen by their physician. “It is helping us get it started,” she said, saving several seconds that would be spent pulling up the patient’s chart.

For now, the San Diego team has stopped the AI from answering any query that seeks medical advice. Similarly, in Wisconsin, the 10 doctors at UW Health have enabled AI responses for a limited set of patient questions, including prescription requests and requests for documentation or paperwork, according to Chero Goswami, chief information officer at UW Health.

Administrators and doctors say the tool could be transformative—but only if it works. If the drafts require too much fact-checking or modification or demand too much time, then doctors will lose trust in it, said Patricia Garcia, a gastroenterologist at Stanford Health Care who is part of the team that aims to begin trialing GPT for messages next week. “They are only going to use this as long as it is making their life easier.”
Regards.

pollo

Re:AGI
« Reply #151 on: May 02, 2023, 03:18:04 »
Quote
Well, all things considered, I think Judge Marchena's position is fairly measured: AI can be used to speed up the most tedious parts of the process, but the final decisions must be taken by a judge. However, the same article notes that others have no such qualms and are already thinking of delegating civil proceedings to AIs, just as a first step toward broader adoption, even in criminal matters. So even if the substance of the article is rubbish, I wouldn't treat it as a joke.

But the thing is, they can't delegate anything, for the simple reason that the magical AI that interprets cases with all their implications and complexities does not exist, because that would take the blasted common sense that no AI has, for the simple reason that no training exists that produces it.

It's exactly the same problem as the Level 5 autonomous car, which according to many was going to arrive eight years ago.

An AI can't even deduce that putting one object inside another leaves the first inside the second, let alone deduce anything slightly more complicated.

And anyone who knows law knows that everything has to be to the letter, with every detail nailed down, or it becomes an opening through which the other side can slip in and defeat your claims.

Have a moderately convoluted conversation with an AI and the seams show right away.

I insist: if a truly intelligent AI is ever built, it won't be with this paradigm. This only solves the trivial, as in programming: it answers the typical questions already on Stack Overflow very well, but ask it for something custom and complex and you'll see it has no idea what's going on.

It's great for saving time on tedious paperwork (which is no small thing), but judging, applying laws, and filling in a judge's implicit knowledge (the kind that is never studied) of how the world and society work? I can't think of a swampier terrain than that.

And what goes for a judge goes for a plumber, a ten-year-old, or a homemaker. It makes no difference. This technique can do things we can't, much faster, but it can't do what we do trivially, and that's its great shortcoming.

At least in programming there's no ambiguity possible; it's a less ambitious goal.
« Last edit: May 02, 2023, 03:24:28 by pollo »

Cadavre Exquis

Re:AGI
« Reply #152 on: May 02, 2023, 07:55:35 »

Saturio

Re:AGI
« Reply #153 on: May 02, 2023, 09:32:44 »
Quote from: pollo on May 02, 2023, 03:18:04
[...]

This is most curious. Among both the champions and the detractors of ChatGPT (and the like) there are people who believe the bot understands what it says.

Let's be clear: the bot has no idea what it's saying. The bot has no self-perception (though it can simulate it) and therefore can't gauge how ignorant or illogical it's being. The bot has no awareness of its own limitations. The bot doesn't answer from any "knowledge" of things. It's like a slightly dim assistant that carries out whatever you ask to the letter even when it has no clue about the subject. "Summarize this book for me." And obviously it neither reads the book, nor understands it, nor places it in the context of prior knowledge; it goes to Rincón del Vago, or maybe somewhere a bit more reliable, and cobbles together a rehash.
When it hands in the work it smiles and wags its tail, satisfied, but its head is as empty as before it did the job. It doesn't care whether the work is good or bad.

I think the people developing these things are after shortcuts. They're trying to solve intellectual problems without the intellectual part. Mind you, sometimes it works. I remember an MIT experiment in which I was one of hundreds of thousands of internet volunteers. They had gathered data on various urban districts around the world (income, health, crime...). They would show you images of two different districts, taken from Street View, and ask: which district do you think has the higher income level, the one on the left or the one on the right? And you answered. It turned out we volunteers ranked the districts in close agreement with studies based on traditional survey methods. From that it was inferred that, to estimate the income level of any district in the world, once you had a few reference districts you only had to show Street View images to enough volunteers. That is, a handful of ignorant people who didn't really know what we were doing could produce studies on income, health, and crime in Southeast Asian cities. (See the sketch below for how such pairwise answers become a ranking.)

I'm afraid this kind of shortcut can't solve every problem.
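
For illustration only: one standard way to turn thousands of noisy pairwise answers ("the left one looks richer") into a full ranking is an Elo-style rating update, sketched below. The update rule is an assumption for the sketch, not necessarily what the MIT team (this sounds like their Place Pulse project, though the post doesn't name it) actually used, and the district names are made up.

Code:
# Minimal sketch: ranking districts from pairwise "which looks richer?" votes
# with an Elo-style update. The rule is illustrative, not the study's method.
from collections import defaultdict

K = 16  # learning rate: how much a single vote moves a rating

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the logistic (Elo) model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def rank_districts(votes):
    """votes: iterable of (winner, loser) district names."""
    rating = defaultdict(lambda: 1500.0)
    for winner, loser in votes:
        e = expected(rating[winner], rating[loser])
        rating[winner] += K * (1 - e)   # winner gains more for an upset
        rating[loser] -= K * (1 - e)
    return sorted(rating.items(), key=lambda kv: -kv[1])

# Hypothetical votes; highest rating = perceived richest district.
votes = [("Orchard Rd", "Geylang"), ("Orchard Rd", "Toa Payoh"),
         ("Toa Payoh", "Geylang")]
print(rank_districts(votes))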

puede ser

Re:AGI
« Reply #154 on: May 02, 2023, 12:31:56 »
https://blogs.elconfidencial.com/economia/la-mano-visible/2023-05-01/economia-de-la-inteligencia-artificial-ideas-basicas_3620152/

The start of a series of articles on AI. The first one is very clarifying and has some very juicy links.

(the earlier series on semiconductors was good too)

Benzino Napaloni

Re:AGI
« Reply #155 on: May 02, 2023, 13:08:35 »
Following up on what Marchena said, let me bring back a case we had already discussed here. A man uploads photos of his baby son in the bathtub to Google. Naked, of course. But nothing that smacked of child pornography, just an innocent, funny keepsake of the kind practically all parents make. And the photo was uploaded privately, not shared.

The "algorithm" tripped, raised the alarm, and now Google can't be bothered to review the case.

That's the future in store if everything is delegated to the algorithm. And AI isn't even new at this.

The smart approach is the same as always: let the machine do the heavy, repetitive work, and leave the human brain free and clear to think. Anything else is a mistake that ends up being paid for dearly. Then something fails, and who fixes it if no one with the knowledge is left?

Why do you think Mercedes-Benz kept Level 3 driver assistance in the drawer for so long, having achieved it well before Musk? Because they had to be 100% sure it worked, or the mess would land on them.

If hackers are already rubbing their hands at AI in programming, some lawyers in the justice system are doing the same. If proceedings already collapse over procedural defects, then the moment "Agapito" starts spouting inaccuracies, a clever lawyer can have a field day.

All this AI fuss is simply a combination of FOMO and the temptation to cut costs. At best it will end up a fiasco. At worst, it will end really badly.

saturno

Re:AGI
« Reply #156 on: May 03, 2023, 00:12:16 »
AI / deep learning, for programmers (in English)


The Little Book of Deep Learning
François Fleuret

https://fleuret.org/public/lbdl.pdf

Thanks to WolfgangK
« Last edit: May 03, 2023, 00:17:25 by saturno »
Rejoice: the structural transition, being fun, is revolutionary.

PPCC v/es: http://ppcc-es.blogspot


el malo

Re:AGI
« Reply #158 on: May 04, 2023, 11:39:00 »
Quote from: Benzino Napaloni on May 02, 2023, 13:08:35
[...]

For me, AI in its current state is the new "outsource to India". It looks great in PowerPoint, especially the slide about cost savings, the one that puts dollar signs in the CEO's pupils.

In real life we'll see the consequences once it takes a couple of companies down with it.

wanderer

Re:AGI
« Reply #159 on: May 04, 2023, 11:48:48 »
Quote from: Benzino Napaloni on May 02, 2023, 13:08:35
[...]

A comment that Saul Goodman (the crooked lawyer from Better Call Saul) would heartily endorse...  :troll:
"Whereof one cannot speak, thereof one must be silent" (L. Wittgenstein, Tractatus Logico-Philosophicus).

Cadavre Exquis

Re:AGI
« Reply #160 on: May 04, 2023, 20:36:18 »
Quote
First Empirical Study of the Real-World Economic Effects of New AI Systems
Posted by BeauHD on Thursday May 04, 2023 @09:00AM from the good-news-and-bad-news dept.

An anonymous reader quotes a report from NPR:
Quote
Back in 2017, Brynjolfsson published a paper (PDF) in one of the top academic journals, Science, which outlined the kind of work that he believed AI was capable of doing. It was called "What Can Machine Learning Do? Workforce Implications." Now, Brynjolfsson says, "I have to update that paper dramatically given what's happened in the past year or two." Sure, the current pace of change can feel dizzying and kinda scary. But Brynjolfsson is not catastrophizing. In fact, quite the opposite. He's earned a reputation as a "techno-optimist." And, recently at least, he has a real reason to be optimistic about what AI could mean for the economy. Last week, Brynjolfsson, together with MIT economists Danielle Li and Lindsey R. Raymond, released what is, to the best of our knowledge, the first empirical study of the real-world economic effects of new AI systems. They looked at what happened to a company and its workers after it incorporated a version of ChatGPT, a popular interactive AI chatbot, into workflows.

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.
Brynjolfsson and his colleagues described how an undisclosed Fortune 500 company implemented an earlier version of OpenAI's ChatGPT to assist its customer support agents in troubleshooting technical issues through online chat windows. The AI chatbot, trained on previous conversations between agents and customers, improved the performance of less experienced agents, making them as effective as those with more experience. The use of AI led to an, on average, 14% increase in productivity, higher customer satisfaction ratings, and reduced turnover rates. However, the study also revealed that more experienced agents did not experience significant benefits from using AI.

The findings suggest that AI has the potential to improve productivity and reduce inequality by benefiting workers who were previously left behind in the technological era. Nonetheless, it raises questions about how the benefits of AI should be distributed and whether it may devalue specialized skills in certain occupations. While the impact of AI is still being studied, its ability to handle non-routine tasks and learn on the fly indicates that it could have different effects on the job market compared to previous technologies.
Regards.

Cadavre Exquis

Re:AGI
« Reply #161 on: May 04, 2023, 23:17:49 »
Quote
Google Shared AI Knowledge With the World - Until ChatGPT Caught Up
Posted by msmash on Thursday May 04, 2023 @02:40PM from the times,-they-are-a-changin' dept.

For years Google published scientific research that helped jump-start its competitors. But now it's lurched into defensive mode. From a report:
Quote
In February, Jeff Dean, Google's longtime head of artificial intelligence, announced a stunning policy shift to his staff: They had to hold off sharing their work with the outside world. For years Dean had run his department like a university, encouraging researchers to publish academic papers prolifically; they pushed out nearly 500 studies since 2019, according to Google Research's website. But the launch of OpenAI's groundbreaking ChatGPT three months earlier had changed things. The San Francisco start-up kept up with Google by reading the team's scientific papers, Dean said at the quarterly meeting for the company's research division. Indeed, transformers -- a foundational part of the latest AI tech and the T in ChatGPT -- originated in a Google study.

Things had to change. Google would take advantage of its own AI discoveries, sharing papers only after the lab work had been turned into products, Dean said, according to two people with knowledge of the meeting, who spoke on the condition of anonymity to share private information. The policy change is part of a larger shift inside Google. Long considered the leader in AI, the tech giant has lurched into defensive mode -- first to fend off a fleet of nimble AI competitors, and now to protect its core search business, stock price, and, potentially, its future, which executives have said is intertwined with AI. In op-eds, podcasts and TV appearances, Google CEO Sundar Pichai has urged caution on AI. "On a societal scale, it can cause a lot of harm," he warned on "60 Minutes" in April, describing how the technology could supercharge the creation of fake images and videos. But in recent months, Google has overhauled its AI operations with the goal of launching products quickly, according to interviews with 11 current and former Google employees, most of whom spoke on the condition of anonymity to share private information.
Regards.

pollo

    • Ver Perfil
Re:AGI
« Reply #162 on: May 05, 2023, 00:16:34 »
https://www.motorpasion.com/futuro-movimiento/bomberos-san-francisco-se-estan-hartando-robotaxis-cada-vez-agresivos-nos-retrasan

Quote
San Francisco firefighters are getting fed up with robotaxis: "They're increasingly aggressive and they slow us down"

"No! Stay there!" the police officer shouts, flare in hand, at a Waymo Jaguar I-Pace with no one on board or at the wheel. No use: the car is determined to keep going. Every time the officer backs off, the car inches forward. And now it is in the middle of the intersection.

Behind the officer, the street is blocked off by firefighters trying to put out a fire, but the car threatens to run over one of their water hoses and carry on down the street, even with several fire trucks in the way.

It is the latest somewhat comical, surreal interaction to come to light showing how San Francisco police have to deal with the robotaxis roaming the Californian city.

[...]

"Intelligence"

Cadavre Exquis

Re:AGI
« Reply #163 on: May 07, 2023, 09:00:09 »
Quote
Geoffrey Hinton tells us why he’s now scared of the tech he helped build
“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

By Will Douglas Heaven | May 2, 2023

Linda Nylind / Eyevine via Redux

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI. 

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.   

At the start of our conversation, I took a seat at the kitchen table, and Hinton started pacing. Plagued for years by chronic back pain, Hinton almost never sits down. For the next hour I watched him walk from one end of the room to the other, my head swiveling as he spoke. And he had plenty to say.

The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears. “I'm getting too old to do technical work that requires remembering lots of details,” he told me. “I’m still okay, but I’m not nearly as good as I was, and that’s annoying.”

But that’s not the only reason he’s leaving Google. Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but—to him—very real danger that AI will turn out to be a disaster. 

Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”

That doesn’t mean Hinton is unhappy with Google by any means. “It may surprise you,” he says. “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”

Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.   

“These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”

Foundations

Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. In a nutshell, this is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models.

It took until the 2010s for the power of neural networks trained via backpropagation to truly make an impact. Working with a couple of graduate students, Hinton showed that his technique was better than any others at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.
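
To make the mechanism concrete, here is a toy backpropagation example: a tiny two-layer network learns XOR by gradient descent. It is a minimal sketch of the algorithm described above, not the large-scale recipe behind vision systems or language models.

Code:
# Toy backpropagation: a 2-4-1 sigmoid network learns XOR by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error back through each layer (backpropagation).
    d_out = (out - y) * out * (1 - out)      # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # gradient at the hidden layer
    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically converges to ≈ [0, 1, 1, 0]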

One of these graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. “We got the first inklings that this stuff could be amazing,” says Hinton. “But it’s taken a long time to sink in that it needs to be done at a huge scale to be good.”

Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols, such as words or numbers.

But Hinton wasn’t convinced. He worked on neural networks, software abstractions of brains in which neurons and the connections between them are represented by code. By changing how those neurons are connected—changing the numbers used to represent them—the neural network can be rewired on the fly. In other words, it can be made to learn.

“My father was a biologist, so I was thinking in biological terms,” says Hinton. “And symbolic reasoning is clearly not at the core of biological intelligence.

“Crows can solve puzzles, and they don’t have language. They’re not doing it by storing strings of symbols and manipulating them. They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”

A new intelligence

For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that’s changed: in trying to mimic what biological brains do, he thinks, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden flip.”

Hinton’s fears will strike many as the stuff of science fiction. But here’s his case.

As their name suggests, large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”

Compared with brains, neural networks are widely believed to be bad at learning: it takes vast amounts of data and energy to train them. Brains, on the other hand, pick up new ideas and skills quickly, using a fraction as much energy as neural networks do. 

“People seemed to have some kind of magic,” says Hinton. “Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.”

Hinton is talking about “few-shot learning,” in which pretrained neural networks, such as large language models, can be trained to do something new given just a few examples. For example, he notes that some of these language models can string a series of logical statements together into an argument even though they were never trained to do so directly.

Compare a pretrained large language model with a human in the speed of learning a task like that and the human’s edge vanishes, he says.
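
In practice, "few-shot" often means nothing more than placing a handful of worked examples in the prompt, with no weight updates at all. The sketch below shows that format; the `llm` argument stands for any text-completion function and is an assumption, not a particular vendor's API.

Code:
# Few-shot prompting sketch: the "training" is just examples in the prompt.
# `llm` stands for any text-completion callable; none is bundled here.

EXAMPLES = [
    ("The movie was a masterpiece.", "positive"),
    ("Two hours of my life I won't get back.", "negative"),
    ("Solid, if unremarkable.", "neutral"),
]

def few_shot_prompt(query: str) -> str:
    shots = "\n".join(f"Review: {text}\nSentiment: {label}"
                      for text, label in EXAMPLES)
    return f"{shots}\nReview: {query}\nSentiment:"

def classify(query: str, llm) -> str:
    # The pretrained model infers the task from the examples alone.
    return llm(few_shot_prompt(query)).strip()

print(few_shot_prompt("An instant classic."))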

What about the fact that large language models make so much stuff up? Known as “hallucinations” by AI researchers (though Hinton prefers the term “confabulations,” because it’s the correct term in psychology), these errors are often seen as a fatal flaw in the technology. The tendency to generate them makes chatbots untrustworthy and, many argue, shows that these models have no true understanding of what they say. 

Hinton has an answer for that too: bullshitting is a feature, not a bug. “People always confabulate,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a signature of human memory. These models are doing something just like people.”

The difference is that humans usually confabulate more or less correctly, says Hinton. To Hinton, making stuff up isn’t the problem. Computers just need a bit more practice. 

We also expect computers to be either right or wrong—not something in between. “We don’t expect them to blather the way people do,” says Hinton. “When a computer does that, we think it made a mistake. But when a person does that, that’s just the way people work. The problem is most people have a hopelessly wrong view of how people work.”

Of course, brains still do many things better than computers: drive a car, learn to walk, imagine the future. And brains do it on a cup of coffee and a slice of toast. “When biological intelligence was evolving, it didn’t have access to a nuclear power station,” he says.   

But Hinton’s point is that if we are willing to pay the higher costs of computing, there are crucial ways in which neural networks might beat biology at learning. (And it's worth pausing to consider what those costs entail in terms of energy and carbon.)

Learning is just the first string of Hinton’s argument. The second is communicating. “If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy,” he says. “But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.”
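
A toy version of that point: digital copies of the same network can pool what each has learned simply by copying or averaging weights, the same trick data-parallel training relies on, and something biological brains cannot do. The numbers below are made up; the sketch only shows why pooling 10,000 noisy learners beats any single one.

Code:
# Toy model of "10,000 copies share what they learn instantly".
import numpy as np

rng = np.random.default_rng(1)
target = rng.normal(size=8)               # the "ideal" weights to learn

# Each replica learns a noisy estimate of the target from its own data.
replicas = [target + rng.normal(scale=1.0, size=8) for _ in range(10_000)]

# Sharing = averaging the copies' weights (as in data-parallel training);
# every copy can then adopt the merged result instantly.
merged = sum(replicas) / len(replicas)

err_one = np.linalg.norm(replicas[0] - target)   # a single learner's error
err_all = np.linalg.norm(merged - target)        # the pooled error
print(f"single replica error {err_one:.2f}, pooled error {err_all:.2f}")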

What does all this add up to? Hinton now thinks there are two types of intelligence in the world: animal brains and neural networks. “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.”

That’s a huge claim. But AI is a polarized field: it would be easy to find people who would laugh in his face—and others who would nod in agreement.

People are also divided on whether the consequences of this new form of intelligence, if it exists, would be beneficial or apocalyptic. “Whether you think superintelligence is going to be good or bad depends very much on whether you’re an optimist or a pessimist,” he says. “If you ask people to estimate the risks of bad things happening, like what’s the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it’s guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they’re usually right.”

Which is Hinton? “I’m mildly depressed,” he says. “Which is why I’m scared.”

How it could all go wrong

Hinton fears that these tools are capable of figuring out ways to manipulate or kill humans who aren’t prepared for the new technology.

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”

He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars.

“Look, here’s one way it could all go wrong,” he says. “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”

Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”

There are already a handful of experimental projects, such as BabyAGI and AutoGPT, that hook chatbots up with other programs such as web browsers or word processors so that they can string together simple tasks. Tiny steps, for sure—but they signal the direction that some people want to take this tech. And even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.
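
In skeleton form, the "string together simple tasks" pattern behind projects like BabyAGI and AutoGPT reduces to a loop of this shape. The `llm` and `execute` callables below are placeholders, not the real APIs of those projects, which wire these steps to a language model and to tools such as browsers or word processors.

Code:
# Skeleton of an LLM agent loop in the BabyAGI/AutoGPT style.
# `llm` and `execute` are placeholders supplied by the caller.
from collections import deque

def run_agent(objective: str, llm, execute, max_steps: int = 10):
    tasks = deque([f"Plan the first step toward: {objective}"])
    done = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        result = execute(task)                      # act with a tool
        done.append((task, result))
        # Ask the model to propose follow-up subtasks given progress so far.
        followups = llm(
            f"Objective: {objective}\nCompleted: {done}\n"
            "List the next subtasks, one per line:"
        )
        tasks.extend(t.strip() for t in followups.splitlines() if t.strip())
    return done

if __name__ == "__main__":
    # Stub demo: a "model" that proposes nothing and a tool that echoes.
    demo = run_agent("tidy my notes",
                     llm=lambda prompt: "",
                     execute=lambda task: f"did: {task}")
    print(demo)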

“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”

Maybe not. But Yann LeCun, Meta’s chief AI scientist, agrees with the premise but does not share Hinton’s fears. “There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future,” says LeCun. “It’s a question of when and how, not a question of if.”

But he takes a totally different view on where things go from there. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” says LeCun. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”

“Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”

Yoshua Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, feels more agnostic. “I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about,” he says. But fear is only useful if it kicks us into action, he says: “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”

Just look up

One of Hinton’s priorities is to try to work with leaders in the technology industry to see if they can come together and agree on what the risks are and what to do about them. He thinks the international ban on chemical weapons might be one model of how to go about curbing the development and use of dangerous AI. “It wasn’t foolproof, but on the whole people don’t use chemical weapons,” he says.

Bengio agrees with Hinton that these issues need to be addressed at a societal level as soon as possible. But he says the development of AI is accelerating faster than societies can keep up. The capabilities of this tech leap forward every few months; legislation, regulation, and international treaties take years.

This makes Bengio wonder whether the way our societies are currently organized—at both national and global levels—is up to the challenge. “I believe that we should be open to the possibility of fairly different models for the social organization of our planet,” he says.

Does Hinton really think he can get enough people in power to share his concerns? He doesn’t know. A few weeks ago, he watched the movie Don’t Look Up, in which an asteroid zips toward Earth, nobody can agree what to do about it, and everyone dies—an allegory for how the world is failing to address climate change.

“I think it’s like that with AI,” he says, and with other big intractable problems as well. “The US can’t even agree to keep assault rifles out of the hands of teenage boys,” he says.

Hinton’s argument is sobering. I share his bleak assessment of people’s collective inability to act when faced with serious threats. It is also true that AI risks causing real harm—upending the job market, entrenching inequality, worsening sexism and racism, and more. We need to focus on those problems. But I still can’t make the jump from large language models to robot overlords. Perhaps I’m an optimist.

When Hinton saw me out, the spring day had turned gray and wet. “Enjoy yourself, because you may not have long left,” he said. He chuckled and shut the door.
Regards.

pollo

    • Ver Perfil
Re:AGI
« Reply #164 on: May 07, 2023, 12:49:44 »
Citar
Geoffrey Hinton tells us why he’s now scared of the tech he helped build
“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

By Will Douglas Heaven | May 2, 2023

Linda Nylind / Eyevine via Redux

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI. 

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.   

At the start of our conversation, I took a seat at the kitchen table, and Hinton started pacing. Plagued for years by chronic back pain, Hinton almost never sits down. For the next hour I watched him walk from one end of the room to the other, my head swiveling as he spoke. And he had plenty to say.

The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears. “I'm getting too old to do technical work that requires remembering lots of details,” he told me. “I’m still okay, but I’m not nearly as good as I was, and that’s annoying.”

But that’s not the only reason he’s leaving Google. Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but—to him—very real danger that AI will turn out to be a disaster. 

Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”

That doesn’t mean Hinton is unhappy with Google by any means. “It may surprise you,” he says. “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”

Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.   

“These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”

Foundations

Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. In a nutshell, this is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models.

It took until the 2010s for the power of neural networks trained via backpropagation to truly make an impact. Working with a couple of graduate students, Hinton showed that his technique was better than any others at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.

One of these graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. “We got the first inklings that this stuff could be amazing,” says Hinton. “But it’s taken a long time to sink in that it needs to be done at a huge scale to be good.” Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols, such as words or numbers.

But Hinton wasn’t convinced. He worked on neural networks, software abstractions of brains in which neurons and the connections between them are represented by code. By changing how those neurons are connected—changing the numbers used to represent them—the neural network can be rewired on the fly. In other words, it can be made to learn.

“My father was a biologist, so I was thinking in biological terms,” says Hinton. “And symbolic reasoning is clearly not at the core of biological intelligence.

“Crows can solve puzzles, and they don’t have language. They’re not doing it by storing strings of symbols and manipulating them. They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”

A new intelligence

For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that’s changed: in trying to mimic what biological brains do, he thinks, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden flip.”

Hinton’s fears will strike many as the stuff of science fiction. But here’s his case.

As their name suggests, large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”

Compared with brains, neural networks are widely believed to be bad at learning: it takes vast amounts of data and energy to train them. Brains, on the other hand, pick up new ideas and skills quickly, using a fraction of the energy that neural networks do. 

“People seemed to have some kind of magic,” says Hinton. “Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.”

Hinton is talking about “few-shot learning,” in which pretrained neural networks, such as large language models, can be trained to do something new given just a few examples. For example, he notes that some of these language models can string a series of logical statements together into an argument even though they were never trained to do so directly.
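As a sketch of what "given just a few examples" means in practice: the pretrained model's weights never change; the examples simply go into the prompt and the model continues the pattern. The task and examples below are invented for illustration, and the prompt could be sent through any completion API.

```python
# Few-shot (in-context) learning: no retraining, just a handful of
# worked examples placed directly in the prompt.
few_shot_prompt = """Decide whether the conclusion follows from the premises.

Premises: All birds have wings. A sparrow is a bird.
Conclusion: A sparrow has wings.
Answer: follows

Premises: Some cars are red. My car is fast.
Conclusion: My car is red.
Answer: does not follow

Premises: Every prime greater than 2 is odd. 13 is a prime greater than 2.
Conclusion: 13 is odd.
Answer:"""

# Sent to a capable pretrained model, this typically comes back
# "follows" -- the pattern was picked up from two demonstrations,
# with no gradient step involved.
print(few_shot_prompt)
```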

Compare a pretrained large language model with a human in the speed of learning a task like that and the human’s edge vanishes, he says.

What about the fact that large language models make so much stuff up? Known as “hallucinations” by AI researchers (though Hinton prefers the term “confabulations,” because it’s the correct term in psychology), these errors are often seen as a fatal flaw in the technology. The tendency to generate them makes chatbots untrustworthy and, many argue, shows that these models have no true understanding of what they say. 

Hinton has an answer for that too: bullshitting is a feature, not a bug. “People always confabulate,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a signature of human memory. These models are doing something just like people.”

The difference is that humans usually confabulate more or less correctly, says Hinton. To Hinton, making stuff up isn’t the problem. Computers just need a bit more practice. 

We also expect computers to be either right or wrong—not something in between. “We don’t expect them to blather the way people do,” says Hinton. “When a computer does that, we think it made a mistake. But when a person does that, that’s just the way people work. The problem is most people have a hopelessly wrong view of how people work.”

Of course, brains still do many things better than computers: drive a car, learn to walk, imagine the future. And brains do it on a cup of coffee and a slice of toast. “When biological intelligence was evolving, it didn’t have access to a nuclear power station,” he says.   

But Hinton’s point is that if we are willing to pay the higher costs of computing, there are crucial ways in which neural networks might beat biology at learning. (And it’s worth pausing to consider what those costs entail in terms of energy and carbon.)

Learning is just the first strand of Hinton’s argument. The second is communicating. “If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy,” he says. “But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.”
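A sketch of the contrast he is drawing, assuming PyTorch and an arbitrary toy architecture: because digital copies share an identical structure, one model's accumulated experience moves to another as a single weight copy, an operation with no biological counterpart.

```python
# Knowledge transfer between identical digital models: copying
# weights moves everything one copy has learned to another.
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

learner = make_model()   # imagine this copy was trained on its own data
replica = make_model()   # a fresh copy that has experienced nothing

# One assignment transfers everything the learner knows.
replica.load_state_dict(learner.state_dict())

x = torch.randn(1, 4)
assert torch.equal(learner(x), replica(x))  # identical behavior
```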

What does all this add up to? Hinton now thinks there are two types of intelligence in the world: animal brains and neural networks. “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.”

That’s a huge claim. But AI is a polarized field: it would be easy to find people who would laugh in his face—and others who would nod in agreement.

People are also divided on whether the consequences of this new form of intelligence, if it exists, would be beneficial or apocalyptic. “Whether you think superintelligence is going to be good or bad depends very much on whether you’re an optimist or a pessimist,” he says. “If you ask people to estimate the risks of bad things happening, like what’s the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it’s guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they’re usually right.”

Which is Hinton? “I’m mildly depressed,” he says. “Which is why I’m scared.”

How it could all go wrong

Hinton fears that these tools are capable of figuring out ways to manipulate or kill humans who aren’t prepared for the new technology.

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”

He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars.

“Look, here’s one way it could all go wrong,” he says. “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”

Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”

There are already a handful of experimental projects, such as BabyAGI and AutoGPT, that hook chatbots up with other programs such as web browsers or word processors so that they can string together simple tasks. Tiny steps, for sure—but they signal the direction that some people want to take this tech. And even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.
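A sketch of the basic loop behind such projects, with a hypothetical llm function and invented toy tools standing in for real integrations (no actual project's API is reproduced here): the model's reply is parsed as an action, the matching program executes it, and the result is appended to the transcript for the next step.

```python
# A minimal agent loop: the chatbot chooses a tool, the tool runs,
# and its output is fed back into the conversation.
def llm(transcript: str) -> str:
    """Hypothetical stand-in: send the transcript to a chat model."""
    raise NotImplementedError

def search_web(query: str) -> str:
    return f"(pretend search results for {query!r})"   # toy tool

def write_note(text: str) -> str:
    return f"(pretend note saved: {text!r})"           # toy tool

TOOLS = {"search": search_web, "write": write_note}

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}\nReply 'tool: argument' or 'done'.\n"
    for _ in range(max_steps):
        reply = llm(transcript).strip()
        if reply == "done":                # the model decides it is finished
            break
        name, _, arg = reply.partition(":")
        result = TOOLS[name.strip()](arg.strip())     # run the chosen program
        transcript += f"{reply}\nResult: {result}\n"  # feed the result back
    return transcript
```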

“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”

Maybe not. But Yann LeCun, Meta’s chief AI scientist, agrees with the premise but does not share Hinton’s fears. “There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future,” says LeCun. “It’s a question of when and how, not a question of if.”

But he takes a totally different view on where things go from there. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” says LeCun. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”

“Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”

Yoshua Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, feels more agnostic. “I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about,” he says. But fear is only useful if it kicks us into action, he says: “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”

Just look up

One of Hinton’s priorities is to try to work with leaders in the technology industry to see if they can come together and agree on what the risks are and what to do about them. He thinks the international ban on chemical weapons might be one model of how to go about curbing the development and use of dangerous AI. “It wasn’t foolproof, but on the whole people don’t use chemical weapons,” he says.

Bengio agrees with Hinton that these issues need to be addressed at a societal level as soon as possible. But he says the development of AI is accelerating faster than societies can keep up. The capabilities of this tech leap forward every few months; legislation, regulation, and international treaties take years.

This makes Bengio wonder whether the way our societies are currently organized—at both national and global levels—is up to the challenge. “I believe that we should be open to the possibility of fairly different models for the social organization of our planet,” he says.

Does Hinton really think he can get enough people in power to share his concerns? He doesn’t know. A few weeks ago, he watched the movie Don’t Look Up, in which an asteroid zips toward Earth, nobody can agree what to do about it, and everyone dies—an allegory for how the world is failing to address climate change.

“I think it’s like that with AI,” he says, and with other big intractable problems as well. “The US can’t even agree to keep assault rifles out of the hands of teenage boys,” he says.

Hinton’s argument is sobering. I share his bleak assessment of people’s collective inability to act when faced with serious threats. It is also true that AI risks causing real harm—upending the job market, entrenching inequality, worsening sexism and racism, and more. We need to focus on those problems. But I still can’t make the jump from large language models to robot overlords. Perhaps I’m an optimist.

When Hinton saw me out, the spring day had turned gray and wet. “Enjoy yourself, because you may not have long left,” he said. He chuckled and shut the door.
Regards.
Results first; only then will I pay any attention to the blah-blah.

I'm still waiting for a robot that folds the laundry at home, a level-5 self-driving car that other drivers don't bully at intersections, and a chatbot that makes logical inferences based on how the real world behaves instead of making up what it doesn't "know".

Maybe something like that will be achieved, but not with what we have now. What there is right now is an enormous amount of hype, and plenty of people riding it to sell an image.
