
Author Topic: AGI  (Read 28901 times)


Marv

Re:AGI
« Reply #45 on: March 20, 2023, 08:55:52 am »
Everyone is rushing to conclusions (how could it be otherwise, yet another sign of the stupid times we live in, in which any hint of patience or reflection went to hell a long time ago) without even giving this a month to see what it is and is not useful for in practice, and to see the first disasters caused by overconfidence (just as Tesla's Autopilot crashes every now and then, this will produce some unforeseeable screw-up).

Because for trivial tasks and exercises anything will do, but the moment someone tackles a genuinely complex project, one where real things are at stake and that isn't plastered with Stack Overflow examples, maybe someone who can think will still be needed.

And we'll see what effect this has on learning. In a few years, when the vast majority of adults have become even more idiotic than most of them already are, perhaps we will finally reach full idiocracy.

what scares me is, as always, people's overconfidence

it's still early, but having tried ChatGPT, for now one thing is crystal clear to me: I cannot trust what it says

it is especially good at acting as if it knows what it is answering, and on many occasions its answers are inventions; but mind you, very plausible inventions that you can hardly believe it made up. I imagine that is because it maximizes the plausibility of its output even when there is no factual confidence behind it (how would the algorithm tell the difference?)
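
(A minimal sketch of that "maximizing plausibility" point, assuming nothing about ChatGPT's internals and using GPT-2 via the Hugging Face transformers library as a stand-in; the prompt is made up. A language model only ranks continuations by probability, and nothing in that ranking tracks whether a continuation is factually true.)

Code:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The menu option to export the report is called"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the very next token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, i in zip(top.values, top.indices):
    # the highest-probability continuations, plausible-sounding whether or not they are true
    print(f"{tok.decode(int(i))!r}: {p:.3f}")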

"miente" muy bien, y consigue impresionar de eso no cabe duda

(del otro día)


todo lo que dice estaría bien de no ser patentemente falso y mezclar datos correctos con datos erróneos que podrían despistar completamente a quien no conociera la respuesta de antemano

(hoy probando a ver si me servía para resolver algo)

 parece plausibilísimo, sin embargo esas opciones de menú simplemente no existen y no han existido

(más adelante buscando una alternativa)

no lo sabe, o se ha inventado el programa, y simplemente se inventa un autor que es un novelista ruso

luego dice otro, que por lo menos es un programador pero no parece que haya hecho nada parecido



chatGPT parece no diferenciar en ocasiones entre parecer saber y saber, que para algunas cosas como política o dar palique puede perféctamente válido - con cuatro cambios te vale casi todo y escribe bastante bien

pero si lo que preguntas tiene una respuesta objetiva, o tiene que funcionar, es "hit and miss" a veces te puede hasta despistar y hacerte cometer un error garrafal

para recordarte algo que ya sabías sí creo que tal como está muchas veces te sirve

Coge tu post y suatituye ChatGPT por “un periodista” en todo el texto. ::)

muyuu

Re:AGI
« Reply #46 on: March 20, 2023, 14:25:27 pm »
Bing Chat also runs a version of GPT; it doesn't do any better:


muyuu

Re:AGI
« Reply #47 on: March 20, 2023, 19:30:08 pm »
cloning Steve Jobs's voice with a TTS model and using ChatGPT to generate his replies:

https://twitter.com/BEASTMODE/status/1637613704312242176
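
(The tweet gives no implementation details, so just to illustrate the general shape of that pipeline: generate the reply text with the ChatGPT API, then hand it to a voice-cloning TTS step. The openai call below uses the v0.x Python client; cloned_voice_tts is a hypothetical stand-in for whatever TTS service was actually used in the demo.)

Code:
import openai

def cloned_voice_tts(text: str, voice: str) -> bytes:
    # hypothetical placeholder for the voice-cloning TTS service used in the demo
    raise NotImplementedError

def jobs_bot_reply(question: str) -> bytes:
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer as if you were Steve Jobs."},
            {"role": "user", "content": question},
        ],
    )
    text = completion.choices[0].message["content"]
    return cloned_voice_tts(text, voice="steve-jobs")  # synthesize audio in the cloned voice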

muyuu

Re:AGI
« Reply #49 on: March 21, 2023, 13:19:42 pm »
https://zero123.cs.columbia.edu/
Quote
Zero-1-to-3: Zero-shot One Image to 3D Object





Cadavre Exquis

Re:AGI
« Reply #50 on: March 21, 2023, 22:25:44 pm »
Quote
Google is Releasing Its Bard AI Chatbot To the Public
Posted by msmash on Tuesday March 21, 2023 @10:15AM from the moving-forward dept.

Google says it's ready to let the public use its generative AI chatbot, Bard. The company will grant tens of thousands of users access to the bot in a gradual rollout starting Tuesday. From a report:
Quote
Google says people will use the chatbot, which will be available online and as a mobile app, for things like generating ideas ("Bard, how do I keep my plants alive?"), researching ideas (in combination with Search), and drafting first drafts of letters, invites, or proposals. Google originally announced Bard February 6, alongside some generative AI search functions and developer tools. On March 14, it announced that it will integrate generative AI features across the apps in its Workspace productivity suite. But today marks the first time that Google has released a generative AI chatbot powered by a large language model to the public. Google says the bot is powered by a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time.
Regards.

muyuu

Re:AGI
« Reply #51 on: March 22, 2023, 09:00:10 am »
 :biggrin:

Google Bard would side with the US Department of Justice in an antitrust case against Google over its advertising monopoly and anti-competitive practices.

« Last edit: March 22, 2023, 11:07:03 am by muyuu »

Cadavre Exquis

Re:AGI
« Reply #52 on: March 22, 2023, 19:44:19 pm »
Yesterday some OpenAI researchers published a paper analysing the impact that LLMs (Large Language Models) could have on the labour market (*)

GPTs are GPTs: An early look at the labor market impact potential of large language models














(*) The paper's title, "GPTs are GPTs: [...]", is a play on words, since GPT can stand for both "Generative Pre-trained Transformers" and "general-purpose technologies".
I have selected the pages of the paper that I think may be most interesting.
Once again (and I know saturno is going to get annoyed) this post should probably (also) go in the El fin del trabajo thread.

Regards.

P.S. You can find the full PDF of the paper here.
« Last edit: March 22, 2023, 21:08:10 pm by Cadavre Exquis »

muyuu

Re:AGI
« Reply #53 on: March 23, 2023, 17:00:29 pm »
https://aisnakeoil.substack.com/p/openais-policies-hinder-reproducible

Quote
OpenAI’s policies hinder reproducible research on language models

LLMs have become privately-controlled research infrastructure

[...]

Quote
The importance of reproducibility
Reproducibility—the ability to independently verify research findings—is a cornerstone of research. Scientific research already suffers from a reproducibility crisis, including in fields that use ML.

Since small changes in a model can result in significant downstream effects, a prerequisite for reproducible research is access to the exact model used in an experiment. If a researcher fails to reproduce a paper’s results when using a newer model, there’s no way to know if it is because of differences between the models or flaws in the original paper. 

OpenAI responded to the criticism by saying they'll allow researchers access to Codex. But the application process is opaque: researchers need to fill out a form, and the company decides who gets approved. It is not clear who counts as a researcher, how long they need to wait, or how many people will be approved. Most importantly, Codex is only available through the researcher program “for a limited period of time” (exactly how long is unknown).

OpenAI regularly updates newer models, such as GPT-3.5 and GPT-4, so the use of those models is automatically a barrier to reproducibility. The company does offer snapshots of specific versions so that the models continue to perform in the same way in downstream applications. But OpenAI only maintains these snapshots for three months. That means the prospects for reproducible research using the newer models are also dim-to-nonexistent.

Researchers aren't the only ones who could want to reproduce scientific results. Developers who want to use OpenAI's models are also left out. If they are building applications using OpenAI's models, they cannot be sure about the model's future behavior when current models are deprecated. OpenAI says developers should switch to the newer GPT 3.5 model, but this model is worse than Codex in some settings.

LLMs are research infrastructure
Concerns with OpenAI's model deprecations are amplified because LLMs are becoming key pieces of infrastructure. Researchers and developers rely on LLMs as a foundation layer, which is then fine-tuned for specific applications or answering research questions. OpenAI isn't responsibly maintaining this infrastructure by providing versioned models.

Researchers had less than a week to shift to using another model before OpenAI deprecated Codex. OpenAI asked researchers to switch to GPT 3.5 models. But these models are not comparable, and researchers' old work becomes irreproducible. The company's hasty deprecation also falls short of standard practices for deprecating software: companies usually offer months or even years of advance notice before deprecating their products.


[...]

Quote
Open-sourcing LLMs aids reproducibility
LLMs hold exciting possibilities for research. Using publicly available LLMs could reduce the resource gap between tech companies and academic research, since researchers don't need to train LLMs from scratch. As research in generative AI shifts from developing LLMs to using them for downstream tasks, it is important to ensure reproducibility.

OpenAI's haphazard deprecation of Codex shows the need for caution when using closed models from tech companies. Using open-source models, such as BLOOM, would circumvent these issues: researchers would have access to the model instead of relying on tech companies. Open-sourcing LLMs is a complex question, and there are many other factors to consider before deciding whether that's the right step. But open-source LLMs could be a key step in ensuring reproducibility.

[...]
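
(On the snapshot point in the article: in practice, "pinning" a model means requesting a dated snapshot name instead of the floating alias. A minimal illustration with the openai v0.x Python client; gpt-3.5-turbo-0301 was a real snapshot at the time, though per the article such snapshots are only maintained for about three months.)

Code:
import openai

messages = [{"role": "user", "content": "Summarise the main claim of this paper in one sentence."}]

# floating alias: silently changes whenever OpenAI updates the model
latest = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)

# dated snapshot: stays fixed, which is what reproducible experiments need,
# but it is only kept available for a limited window
pinned = openai.ChatCompletion.create(model="gpt-3.5-turbo-0301", messages=messages)

print(latest.choices[0].message["content"])
print(pinned.choices[0].message["content"])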

muyuu

Re:AGI
« Reply #54 on: March 23, 2023, 21:37:44 pm »
https://openai.com/blog/chatgpt-plugins

https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/



the good thing about it is that when it gives you a result from Wolfram, it will normally either give you a source or at least it won't have made it up
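
(For context on how these plugins are wired up: each plugin is described to ChatGPT by a small manifest plus an OpenAPI spec for the plugin's own HTTP API. Roughly what that manifest looks like, with field names as in OpenAI's plugin documentation at launch and placeholder values, written here as a Python dict:)

Code:
# Field names follow OpenAI's ai-plugin.json manifest as documented at launch;
# the URLs and descriptions are placeholders, not a real plugin.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Lookup",
    "name_for_model": "example_lookup",
    "description_for_human": "Look up facts from example.com.",
    "description_for_model": "Use this when the answer requires data from example.com.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",  # the spec ChatGPT reads to learn the endpoints
        "is_user_authenticated": False,
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}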

Cadavre Exquis

Re:AGI
« Reply #56 on: March 23, 2023, 22:38:31 pm »
Quote
OpenAI is Massively Expanding ChatGPT's Capabilities To Let It Browse the Web
Posted by msmash on Thursday March 23, 2023 @02:41PM from the aggressive-expansion dept.

OpenAI is adding support for plug-ins to ChatGPT -- an upgrade that massively expands the chatbot's capabilities and gives it access for the first time to live data from the web. From a report:
Quote
Up until now, ChatGPT has been limited by the fact it can only pull information from its training data, which ends in 2021. OpenAI says plug-ins will not only allow the bot to browse the web but also interact with specific websites, potentially turning the system into a wide-ranging interface for all sorts of services and sites. In an announcement post, the company says it's almost like letting other services be ChatGPT's "eyes and ears." In one demo video, someone uses ChatGPT to find a recipe and then order the necessary ingredients from Instacart. ChatGPT automatically loads the ingredient list into the shopping service and redirects the user to the site to complete the order. OpenAI says it's rolling out plug-in access to "a small set of users." Initially, there are 11 plug-ins for external sites, including Expedia, OpenTable, Kayak, Klarna Shopping, and Zapier. OpenAI is also providing some plug-ins of its own, one for interpreting code and one called "Browsing," which lets ChatGPT get information from the internet.
Regards.

muyuu

Re:AGI
« Reply #57 on: March 23, 2023, 23:07:39 pm »
https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane

Quote
Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’


The godfather of virtual reality has worked beside the web’s visionaries and power-brokers – but likes nothing more than to show the flaws of technology. He discusses how we can make AI work for us, how the internet takes away choice – and why he would ban TikTok

Quote
Jaron Lanier, the godfather of virtual reality and the sage of all things web, is nicknamed the Dismal Optimist. And there has never been a time we’ve needed his dismal optimism more. It’s hard to read an article or listen to a podcast these days without doomsayers telling us we’ve pushed our luck with artificial intelligence, our hubris is coming back to haunt us and robots are taking over the world. There are stories of chatbots becoming best friends, declaring their love, trying to disrupt stable marriages, and threatening chaos on a global scale.

Is AI really capable of outsmarting us and taking over the world? “OK! Well, your question makes no sense,” Lanier says in his gentle sing-song voice. “You’ve just used the set of terms that to me are fictions. I’m sorry to respond that way, but it’s ridiculous … it’s unreal.” This is the stuff of sci-fi movies such as The Matrix and Terminator, he says.

Lanier doesn’t even like the term artificial intelligence, objecting to the idea that it is actually intelligent, and that we could be in competition with it. “This idea of surpassing human ability is silly because it’s made of human abilities.” He says comparing ourselves with AI is the equivalent of comparing ourselves with a car. “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”

I flush and smile. Flush because I’m embarrassed, smile because I’m relieved. I’ll take my bollocking happily, I say. He squeals with laughter. “Hehehehe! OK. Hehehehe!” But he doesn’t want us to get complacent. There’s plenty left to worry about: human extinction remains a distinct possibility if we abuse AI, and even if it’s of our own making, the end result is no prettier.

Lanier, 62, has worked alongside many of the web’s visionaries and power-brokers. He is both insider (he works at Microsoft as an interdisciplinary scientist, although he makes it clear that today he is talking on his own behalf) and outsider (he has constantly, and presciently, exposed the dangers the web presents). He is also one of the most distinctive men on the planet – a raggedy prophet with ginger dreads, a startling backstory, an eloquence to match his gargantuan brain and a giggle as alarming as it is life-enhancing.

Although a tech guru in his own right, his mission is to champion the human over the digital – to remind us we created the machines, and artificial intelligence is just what it says on the tin. In books such as You Are Not a Gadget and Ten Reasons For Deleting Your Social Media Accounts, he argues that the internet is deadening personal interaction, stifling inventiveness and perverting politics.

We meet on Microsoft’s videoconference platform Teams so that he can show a recent invention of his that enables us to appear in the same room together even though we are thousands of miles apart. But the technology isn’t working in the most basic sense. He can’t see me. Doubtless he’ll be pleased in a way. There’s nothing Lanier likes more than showing technology can go wrong, especially when operated by an incompetent at the other end. So we switch to the rival Zoom.

Lanier’s backdrop is full of musical instruments, including a row of ouds hanging from the ceiling. In his other life, he is a professional contemporary classical musician – a brilliant player of rare and ancient instruments. Often he has used music to explain the genius and limitations of tech. At its simplest, digital technology works in a on/off way, like the keys on a keyboard, and lacks the endless variety of a saxophone or human voice.

Quote
“From my perspective,” he says, “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”

Now I’m feeling less relieved. Death by insanity doesn’t sound too appealing, and it can come in many forms – from world leaders or terrorists screwing with global security AI to being driven bonkers by misinformation or bile on Twitter. Lanier says the more sophisticated technology becomes, the more damage we can do with it, and the more we have a “responsibility to sanity”. In other words, a responsibility to act morally and humanely.

Lanier was the only child of Jewish parents who knew all about inhumanity. His Viennese mother was blond and managed to talk her way out of a concentration camp by passing as Aryan. She then moved to the US, working as a pianist and stocks trader. His father, whose family had been largely wiped out in Ukrainian pogroms, had a range of jobs from architect to science editor of pulp science-fiction magazines and eventually elementary-school teacher. Lanier was born in New York, but the family soon moved west. When he was nine, his mother was killed after her car flipped over on the freeway on her way back from passing her driving test.

Both father and son were left traumatised and impoverished; his mother had been the main breadwinner. The two of them moved to New Mexico, living in tents before 11-year-old Lanier started to design their new house, a geodesic dome that took seven years to complete. “It wasn’t good structurally, but it was good therapeutically,” he says. In his 2017 memoir, Dawn of the New Everything, Lanier wrote that the house looked “a little like a woman’s body. You could see the big dome as a pregnant belly and the two icosahedrons as breasts.”

He was ludicrously bright. At 14, he enrolled at New Mexico State University, taking graduate-level courses in mathematical notation, which led him to computer programming. He never completed his degree, but went to art school and flunked out. By the age of 17 he was working a number of jobs, including goat-keeper, cheese-maker and assistant to a midwife. Then, by his early 20s, he had become a researcher for Atari in California. When he was made redundant, he focused on virtual reality projects, co-founding VPL Research to commercialise VR technologies. He could have easily been a tech billionaire had he sold his businesses sensibly or at least shown a little interest in money. As it stands, he tells me he has done very nicely financially, and obscene wealth wouldn't have sat with his values. Today, he lives in Santa Cruz in California with his wife and teenage daughter.

Although many of the digital gurus started out as idealists, to Lanier there was an inevitability that the internet would screw us over. We wanted stuff for free (information, friendships, music), but capitalism doesn’t work like that. So we became the product – our data sold to third parties to sell us more things we don’t need. “I wrote something that described how what we now call bots will be turned into these agents of manipulation. I wrote that in the early 90s when the internet had barely been turned on.” He squeals with horror and giggles. “Oh my God, that’s 30 years ago!”

Actually, he believes bots such as OpenAI’s ChatGPT and Google’s Bard could provide hope for the digital world. Lanier was always dismayed that the internet gave the appearance of offering infinite options but in fact diminished choice. Until now, the primary use of AI algorithms has been to choose what videos we would like to see on YouTube, or whose posts we’ll see on social media platforms. Lanier believes it has made us lazy and incurious. Beforehand, we would sift through stacks in a record shop or browse in bookshops. “We were directly connected to a choice base that was actually larger instead of being fed this thing through this funnel that somebody else controls.”

Quote
Take the streaming platforms, he says. “Netflix once had a million-dollar prize contest to improve their algorithm, to help people sort through this gigantic space of streaming options. But it has never had that many choices. The truth is you could put all of Netflix’s streaming content on one scrollable page.” This is another area where we have a responsibility to sanity, he says – not to narrow our options or get trapped in echo chambers, slaves to the algorithm. That’s why he loves playing live music – because every time he jams with a band, he creates something new.


Quote
For Lanier, the classic example of restricted choice is Wikipedia, which has effectively become the world’s encyclopedia. “Wikipedia is run by super-nice people who are my friends. But the thing is it’s like one encyclopedia. Some of us might remember when on paper there was both an Encyclopedia Britannica and Encyclopedia Americana and they provided different perspectives. The notion of having the perfect encyclopedia is just weird.”

So could the new chatbots challenge this? “Right. That’s my point. If you go to a chatbot and say: ‘Please can you summarise the state of the London tube?’ you’ll get different answers each time. And then you have to choose.” This programmed-in randomness, he says, is progress. “All of a sudden this idea of trying to make the computer seem humanlike has gone far enough in this iteration that we might have naturally outgrown this illusion of the monolithic truth of the internet or AI. It means there is a bit more choice and discernment and humanity back with the person who’s interacting with the thing.”

That’s all well and good, but what about AI replacing us in the workplace? We already have the prospect of chatbots writing articles like this one. Again, he says it’s not the technology that replaces us, it’s how we use it. “There are two ways this could go. One is that we pretend the bot is a real thing, a real entity like a person, then in order to keep that fantasy going we’re careful to forget whatever source texts were used to have the bot function. Journalism would be harmed by that. The other way is you do keep track of where the sources came from. And in that case a very different world could unfold where if a bot relied on your reporting, you get payment for it, and there is a shared sense of responsibility and liability where everything works better. The term for that is data dignity.”

It seems too late for data dignity to me; the dismal optimist is in danger of being a utopian optimist here. But Lanier soon returns to Planet Bleak. “You can use AI to make fake news faster, cheaper and on greater scales. That combination is where we might see our extinction.”

In You Are Not a Gadget, he wrote that the point of digital technology was to make the world more “creative, expressive, empathic and interesting”. Has it achieved that? “It has in some cases. There’s a lot of cool stuff on the internet. I think TikTok is dangerous and should be banned yet I love dance culture on TikTok and it should be cherished.” Why should it be banned? “Because it’s controlled by the Chinese, and should there be difficult circumstances there are lots of horrible tactical uses it could be put to. I don’t think it’s an acceptable risk. It’s heartbreaking because a lot of kids love it for perfectly good reasons.”

Quote
As for Twitter, he says it has brought out the worst in us. “It has a way of taking people who start out as distinct individuals and converging them into the same personality, optimised for Twitter engagement. That personality is insecure and nervous, focused on personal slights and affronted by claims of rights by others if they’re different people. The example I use is Trump, Kanye and Elon [Musk, who now owns Twitter]. Ten years ago they had distinct personalities. But they’ve converged to have a remarkable similarity of personality, and I think that’s the personality you get if you spend too much time on Twitter. It turns you into a little kid in a schoolyard who is both desperate for attention and afraid of being the one who gets beat up. You end up being this phoney who’s self-concerned but loses empathy for others.” It’s a brilliant analysis that returns to his original point – our responsibility to sanity. Does Lanier’s responsibility to his own sanity keep him off social media? He smiles. “I always thought social media was bullshit. It was obviously just this dumb thing from the beginning.”

There is much about the internet of which he is still proud. He says that virtual reality headsets now used are little different from those he introduced in the 1980s, and his work on surgical simulation has had huge practical benefits. “I know many people whose lives have been saved by the furtherance of this stuff I was demonstrating 40 years ago. My God! I’m so old now!” He stops to question whether he’s overstating his influence, stressing that he was only involved at the beginning. There is also huge potential, he says, for AI to help us tackle climate change, and save the planet.

But he has also seen the very worst of AI. “I know people whose kids have committed suicide with a very strong online algorithm contribution. So in those cases life was taken. It might not be possible from this one human perspective to say for sure what the giant accounting ledger would tell us now, but whatever that answer would be I’m certain we could have done better, and I’m sure we can and must do better in the future.”

Again, that word, human. The way to ensure that we are sufficiently sane to survive is to remember it’s our humanness that makes us unique, he says. “A lot of modern enlightenment thinkers and technical people feel that there is something old-fashioned about believing that people are special – for instance that consciousness is a thing. They tend to think there is an equivalence between what a computer could be and what a human brain could be.” Lanier has no truck with this. “We have to say consciousness is a real thing and there is a mystical interiority to people that’s different from other stuff because if we don’t say people are special, how can we make a society or make technologies that serve people?”

Lanier looks at his watch, and apologises. “You know what, I actually have to go to a dentist’s appointment.” The real world intervenes and asserts its supremacy over the virtual. Artificial intelligence isn’t going to fix his teeth, and he wouldn’t have it any other way.


muyuu

Re:AGI
« Reply #58 on: March 25, 2023, 10:35:34 am »
https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html

Quote
Hello Dolly: Democratizing the magic of ChatGPT with open models

Quote
Summary

We show that anyone can take a dated off-the-shelf open source large language model (LLM) and give it magical ChatGPT-like instruction following ability by training it in less than three hours on one machine, using high-quality training data. Surprisingly, instruction-following does not seem to require the latest or largest models: our model is only 6 billion parameters, compared to 175 billion for GPT-3. We open source the code for our model (Dolly) and show how it can be re-created on Databricks. We believe models like Dolly will help democratize LLMs, transforming them from something very few companies can afford into a commodity every company can own and customize to improve their products.

Background

ChatGPT, a proprietary instruction-following model, was released in November 2022 and took the world by storm. The model was trained on trillions of words from the web, requiring massive numbers of GPUs to develop. This quickly led to Google and other companies releasing their own proprietary instruction-following models. In February 2023, Meta released the weights for a set of high-quality (but not instruction-following) language models called LLaMA to academic researchers, trained for over 80,000 GPU-hours each. Then, in March, Stanford built the Alpaca model, which was based on LLaMA, but tuned on a small dataset of 50,000 human-like questions and answers that, surprisingly, made it exhibit ChatGPT-like interactivity.

Introducing Dolly

Today we are introducing Dolly, a cheap-to-build LLM that exhibits a surprising degree of the instruction following capabilities exhibited by ChatGPT. Whereas the work from the Alpaca team showed that state-of-the-art models could be coaxed into high quality instruction-following behavior, we find that even years-old open source models with much earlier architectures exhibit striking behaviors when fine tuned on a small corpus of instruction training data. Dolly works by taking an existing open source 6 billion parameter model from EleutherAI and modifying it ever so slightly to elicit instruction following capabilities such as brainstorming and text generation not present in the original model, using data from Alpaca.

The model underlying Dolly only has 6 billion parameters, compared to 175 billion in GPT-3, and is two years old, making it particularly surprising that it works so well. This suggests that much of the qualitative gains in state-of-the-art models like ChatGPT may owe to focused corpuses of instruction-following training data, rather than larger or better-tuned base models. We’re calling the model Dolly — after Dolly the sheep, the first cloned mammal — because it's an open source clone of an Alpaca, inspired by a LLaMA. We’re in the earliest days of the democratization of AI for the enterprise, and much work remains to be done, but we believe the technology underlying Dolly represents an exciting new opportunity for companies that want to cheaply build their own instruction-following models.

We evaluated Dolly on the instruction-following capabilities described in the InstructGPT paper that ChatGPT is based on and found that it exhibits many of the same qualitative capabilities, including text generation, brainstorming and open Q&A. Of particular note in these examples is not the quality of the generated text, but rather the vast improvement in instruction-following capability that results from fine tuning a years-old open source model on a small, high quality dataset.

[...] (generation examples)

Quote
Why Open Models?

There are many reasons a company would prefer to build their own model rather than sending data to a centralized LLM provider that serves a proprietary model behind an API. For many companies, the problems and datasets most likely to benefit from AI represent their most sensitive and proprietary intellectual property, and handing it over to a third party may be unpalatable. Furthermore, organizations may have different tradeoffs in terms of model quality, cost, and desired behavior. We believe that most ML users are best served long term by directly owning their models.

We are open sourcing a simple Databricks notebook that you can use to build Dolly yourself on Databricks. Contact us at hello-dolly@databricks.com if you would like to get access to the trained weights.

What’s Next?

The release of Dolly is the first in a series of announcements Databricks is making that focus on helping every organization harness the power of large language models. We believe in the incredible power of artificial intelligence to transform the productivity of every organization and individual, and welcome you to join us on this journey. Stay tuned for more in this area in the coming weeks!

Acknowledgments

This work owes much to the efforts and insights of many incredible organizations. This would have been impossible without EleutherAI open sourcing and training GPT-J. We are inspired by the incredible ideas and data from the Stanford Center for Research on Foundation Models and specifically the team behind Alpaca. The core idea behind the outsized power of a small dataset is thanks to the original paper on Self-Instruct. We are also thankful to Hugging Face for hosting, open sourcing, and maintaining countless models and libraries; their contribution to the state of the art cannot be overstated.

-----

Disclaimer: Generative AI is an emerging technology and we're in the early stages of research around how to address factual accuracy, bias, offensive responses, general toxicity, and hallucinations in LLMs. Dolly, like other language models, can sometimes exhibit these behaviors and we urge our users to exercise good judgment in designing applications of this technology.
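
(A rough sketch of the recipe the post describes, not Databricks' actual notebook: take an existing open model, EleutherAI's GPT-J-6B, and run plain supervised fine-tuning on a small Alpaca-style instruction dataset with Hugging Face transformers. The dataset id and its field names are assumptions; in practice a 6B-parameter model needs multiple GPUs or DeepSpeed, as the real notebook does.)

Code:
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-j-6B"
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Alpaca-style instruction data; the dataset id and its "instruction"/"output" fields are assumed.
data = load_dataset("tatsu-lab/alpaca", split="train")

def format_and_tokenize(ex):
    prompt = f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['output']}{tok.eos_token}"
    return tok(prompt, truncation=True, max_length=512)

tokenized = data.map(format_and_tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dolly-sketch", num_train_epochs=1,
                           per_device_train_batch_size=1, fp16=True, logging_steps=50),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM: labels are the inputs
)
trainer.train()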

Cadavre Exquis

Re:AGI
« Reply #59 on: March 25, 2023, 21:51:54 pm »
Quote
ChatGPT: The AI bot taking the tech world by storm

2022.12.09


ChartGPT

On Wednesday the Chartr office party was in full swing, but instead of heading for drinks — as originally planned — we found ourselves still in the office, writing increasingly funny prompts into ChatGPT, a chatbot from OpenAI.

Built on the architecture of GPT-3, with some ~175 billion parameters, the key innovation of ChatGPT relative to other AI breakthroughs is that it’s super easy to interact with. You type something, it spits something back to you. “Tell me a joke”, “write a recipe for pecan pie in the style of a pirate”, “explain long division to a ten-year-old”... ChatGPT has a — pretty convincing — response for all.

That functionality has gone viral, with OpenAI reporting that ChatGPT had hit 1 million users in just 5 days. Searches for ChatGPT rocketed, surpassing those for “lensa”, another AI app making waves this week, but ChatGPT is undoubtedly the much, much bigger story.

The possible uses for ChatGPT, and the future versions of it to come, are equal-parts exciting and daunting. If optimized further, it’s not hard to see a way it could threaten the dominance of search giant Google for certain queries, with the potential to revolutionize some manual tasks in nearly every knowledge-based industry. It is, however, currently pretty expensive. CEO Sam Altman has reported that each individual chat currently costs “single-digit cents” to run — a cost that stacks up pretty quickly.

Ethical concerns and worries over lost jobs due to automation are very real — and although there are guard-rails in place, designed to limit the chance the tool spews hateful content, some have exposed some unfortunate responses.


Regards.
« Last edit: March 25, 2023, 22:06:57 pm by Cadavre Exquis »
