
Author Topic: STEM  (Read 99701 times)

0 Members and 3 Guests are viewing this topic.

Cadavre Exquis

Re:STEM
« Reply #240 on: December 06, 2022, 21:15:02 pm »
The article is from late May 2021, but it feels as though an eternity has passed, given the models that have emerged in barely a year and a half.


Quote
AI is learning how to create itself
Humans have struggled to make truly intelligent machines. Maybe we need to let them get on with it themselves.

by Will Douglas Heaven, May 27, 2021


A little stick figure with a wedge-shaped head shuffles across the screen. It moves in a half crouch, dragging one knee along the ground. It’s walking! Er, sort of.

Yet Rui Wang is delighted. “Every day I walk into my office and open my computer, and I don’t know what to expect,” he says.

An artificial-intelligence researcher at Uber, Wang likes to leave the Paired Open-Ended Trailblazer, a piece of software he helped develop, running on his laptop overnight. POET is a kind of training dojo for virtual bots. So far, they aren’t learning to do much at all. These AI agents are not playing Go, spotting signs of cancer, or folding proteins—they’re trying to navigate a crude cartoon landscape of fences and ravines without falling over.


But it’s not what the bots are learning that’s exciting—it’s how they’re learning. POET generates the obstacle courses, assesses the bots’ abilities, and assigns their next challenge, all without human involvement. Step by faltering step, the bots improve via trial and error. “At some point it might jump over a cliff like a kung fu master,” says Wang.

It may seem basic at the moment, but for Wang and a handful of other researchers, POET hints at a revolutionary new way to create supersmart machines: by getting AI to make itself.

Wang’s former colleague Jeff Clune is among the biggest boosters of this idea. Clune has been working on it for years, first at the University of Wyoming and then at Uber AI Labs, where he worked with Wang and others. Now dividing his time between the University of British Columbia and OpenAI, he has the backing of one of the world’s top artificial-intelligence labs.

Clune calls the attempt to build truly intelligent AI the most ambitious scientific quest in human history. Today, seven decades after serious efforts to make AI began, we’re still a long way from creating machines that are anywhere near as smart as humans, let alone smarter. Clune thinks POET might point to a shortcut.

“We need to take the shackles off and get out of our own way,” he says.

If Clune is right, using AI to make AI could be an important step on the road that one day leads to artificial general intelligence (AGI)—machines that can outthink humans. In the nearer term, the technique might also help us discover different kinds of intelligence: non-human smarts that can find solutions in unexpected ways and perhaps complement our own intelligence rather than replace it.

Mimicking evolution
I first spoke to Clune about the idea early last year, just a few weeks after his move to OpenAI. He was happy to discuss past work but remained tight-lipped on what he was doing with his new team. Instead of taking the call inside, he preferred to walk up and down the streets outside the offices as we talked.

All Clune would say was that OpenAI was a good fit. “My idea is very much in line with many of the things that they believe,” he says. “It was kind of a marriage made in heaven. They liked the vision and wanted me to come here and pursue it.” A few months after Clune joined, OpenAI hired most of his old Uber team as well.

Clune’s ambitious vision is grounded by more than OpenAI’s investment. The history of AI is filled with examples in which human-designed solutions gave way to machine-learned ones. Take computer vision: a decade ago, the big breakthrough in image recognition came when existing hand-crafted systems were replaced by ones that taught themselves from scratch. It’s the same for many AI successes.

One of the fascinating things about AI, and machine learning in particular, is its ability to find solutions that humans haven’t found—to surprise us. An oft-cited example is AlphaGo (and its successor AlphaZero), which beat the best humanity has to offer at the ancient, beguiling game of Go by employing seemingly alien strategies. After hundreds of years of study by human masters, AI found solutions no one had ever thought of.

Clune is now working with a team at OpenAI that developed bots that learned to play hide and seek in a virtual environment in 2018. These AIs started off with simple goals and simple tools to achieve them: one pair had to find the other, which could hide behind movable obstacles. Yet when these bots were let loose to learn, they soon found ways to take advantage of their environment in ways the researchers had not foreseen. They exploited glitches in the simulated physics of their virtual world to jump over and even pass through walls.

Those kinds of unexpected emergent behaviors offer tantalizing hints that AI might arrive at technical solutions humans would not think of by themselves, inventing new and more efficient types of algorithms or neural networks—or even ditching neural networks, a cornerstone of modern AI, entirely.

Clune likes to remind people that intelligence has already emerged from simple beginnings. “What’s interesting about this approach is that we know it can work,” he says. “The very simple algorithm of Darwinian evolution produced your brain, and your brain is the most intelligent learning algorithm in the universe that we know so far.” His point is that if intelligence as we know it resulted from the mindless mutation of genes over countless generations, why not seek to replicate the intelligence-producing process—which is arguably simpler—rather than intelligence itself?

But there’s another crucial observation here. Intelligence was never an endpoint for evolution, something to aim for. Instead, it emerged in many different forms from countless tiny solutions to challenges that allowed living things to survive and take on future challenges. Intelligence is the current high point in an ongoing and open-ended process. In this sense, evolution is quite different from algorithms the way people typically think of them—as means to an end.

It’s this open-endedness, glimpsed in the apparently aimless sequence of challenges generated by POET, that Clune and others believe could lead to new kinds of AI. For decades AI researchers have tried to build algorithms to mimic human intelligence, but the real breakthrough may come from building algorithms that try to mimic the open-ended problem-solving of evolution—and sitting back to watch what emerges.

Researchers are already using machine learning on itself, training it to find solutions to some of the field’s hardest problems, such as how to make machines that can learn more than one task at a time or cope with situations they have not encountered before. Some now think that taking this approach and running with it might be the best path to artificial general intelligence. “We could start an algorithm that initially does not have much intelligence inside it, and watch it bootstrap itself all the way up potentially to AGI,” Clune says.

The truth is that for now, AGI remains a fantasy. But that’s largely because nobody knows how to make it. Advances in AI are piecemeal and carried out by humans, with progress typically involving tweaks to existing techniques or algorithms, yielding incremental leaps in performance or accuracy. Clune characterizes these efforts as attempts to discover the building blocks for artificial intelligence without knowing what you’re looking for or how many blocks you’ll need. And that’s just the start. “At some point, we have to take on the Herculean task of putting them all together,” he says.

Asking AI to find and assemble those building blocks for us is a paradigm shift. It’s saying we want to create an intelligent machine, but we don’t care what it might look like—just give us whatever works.

Even if AGI is never achieved, the self-teaching approach may still change what sorts of AI are created. The world needs more than a very good Go player, says Clune. For him, creating a supersmart machine means building a system that invents its own challenges, solves them, and then invents new ones. POET is a tiny glimpse of this in action. Clune imagines a machine that teaches a bot to walk, then to play hopscotch, then maybe to play Go. “Then maybe it learns math puzzles and starts inventing its own challenges,” he says. “The system continuously innovates, and the sky’s the limit in terms of where it might go.”

It’s wild speculation, perhaps, but one hope is that machines like this might be able to evade our conceptual dead ends, helping us unpick vastly complex crises such as climate change or global health.

But first we have to make one.

How to create a brain
There are many different ways to wire up an artificial brain.

Neural networks are made from multiple layers of artificial neurons encoded in software. Each neuron can be connected to others in the layers above. The way a neural network is wired makes a big difference, and new architectures often lead to new breakthroughs.

The neural networks coded by human scientists are often the result of trial and error. There is little theory about what does and doesn’t work, and no guarantee that the best designs have been found. That’s why automating the hunt for better neural-network designs has been one of the hottest topics in AI since at least the 1980s. The most common way to automate the process is to let an AI generate many possible network designs, automatically try each of them, and keep the best ones. This is commonly known as neuro-evolution or neural architecture search (NAS).
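To make that generate-evaluate-select loop concrete, here is a deliberately tiny sketch in Python. The synthetic task, the search over a single width parameter, and every name in it are our own illustrative choices, not the setup of any system mentioned in the article:

# Toy neuro-evolution: evolve the hidden-layer width of a tiny MLP on a
# synthetic task. Illustrative only -- not the actual search used by Google.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "benchmark": regression onto y = sin(3x).
X = rng.uniform(-1, 1, (256, 1))
y = np.sin(3 * X)

def fitness(hidden):
    """Train a one-hidden-layer net briefly; return the negative final loss."""
    W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden, 1)); b2 = np.zeros(1)
    lr = 0.1
    for _ in range(200):                      # a short training budget
        h = np.tanh(X @ W1 + b1)              # forward pass
        err = (h @ W2 + b2) - y               # residual of the MSE loss
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return -float(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2) - y) ** 2))

# The generate-evaluate-select loop the article describes.
population = [int(h) for h in rng.integers(2, 32, 8)]
for generation in range(10):
    parents = sorted(population, key=fitness, reverse=True)[:4]         # selection
    children = [max(2, p + int(rng.integers(-4, 5))) for p in parents]  # mutation
    population = parents + children
print("best width found:", max(population, key=fitness))

Note that fitness is noisy here (each evaluation re-initializes the network), which is also true of real architecture search; production systems average over several trainings or share weights to cope.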

In the last few years, these machine designs have started to outstrip human ones. In 2018, Esteban Real and his colleagues at Google used NAS to generate a neural network for image recognition that beat the best human-designed networks at the time. That was an eye-opener.

The 2018 system is part of an ongoing Google project called AutoML, which has also used NAS to produce EfficientNets, a family of deep-learning models that are more efficient than human-designed ones, achieving high levels of accuracy on image-recognition tasks with smaller, faster models.

Three years on, Real is pushing the boundaries of what can be generated from scratch. The earlier systems just rearranged tried and tested neural-network pieces, such as existing types of layers or components. “We could expect a good answer,” he says.

Last year Real and his team took the training wheels off. The new system, called AutoML Zero, tries to build an AI from the ground up using nothing but the most basic mathematical concepts that govern machine learning.

Amazingly, not only did AutoML Zero spontaneously build a neural network, but it came up with gradient descent, the most common mathematical technique that human designers use to train a network. “I was quite surprised,” says Real. “It’s a very simple algorithm—it takes like six lines of code—but it wrote the exact six lines.”
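For readers wondering what a six-line training algorithm looks like: the core of gradient descent really is that small. A generic sketch in Python (the toy loss is ours; the article does not reproduce Real’s exact six lines):

# Plain gradient descent on a toy loss f(w) = (w - 3)^2, whose gradient
# is 2*(w - 3); any differentiable loss slots in the same way.
def gradient_descent(grad, w, lr=0.1, steps=100):
    for _ in range(steps):
        w = w - lr * grad(w)        # step against the gradient
    return w

print(gradient_descent(lambda w: 2 * (w - 3), w=0.0))   # -> approximately 3.0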

AutoML Zero is not yet generating architectures that rival the performance of human-designed systems—or indeed doing much that a human designer would not have done. But Real believes it could one day.

Time to train a new kind of teacher
First you make a brain; then you have to teach it. But machine brains don’t learn the way ours do. Our brains are fantastic at adapting to new environments and new tasks. Today’s AIs can solve challenges under certain conditions but fail when those conditions change even a little. This inflexibility is hampering the quest to create more generalizable AI that can be useful across a wide range of scenarios, which would be a big step toward making them truly intelligent.

For Jane Wang, a researcher at DeepMind in London, the best way to make AI more flexible is to get it to learn that trait itself. In other words, she wants to build an AI that not only learns specific tasks but learns to learn those tasks in ways that can be adapted to fresh situations.

Researchers have been trying to make AI more adaptable for years. Wang thinks that getting AI to work through this problem for itself avoids some of the trial and error of a hand-designed approach: “We can’t possibly expect to stumble upon the right answer right away.” In the process, she hopes, we will also learn more about how brains work. “There’s still so much we don’t understand about the way that humans and animals learn,” she says.

There are two main approaches to generating learning algorithms automatically, but both start with an existing neural network and use AI to teach it.

The first approach, invented separately by Wang and her colleagues at DeepMind and by a team at OpenAI at around the same time, uses recurrent neural networks. This type of network can be trained in such a way that the activations of its neurons—roughly akin to the firing of neurons in biological brains—encode any type of algorithm. DeepMind and OpenAI took advantage of this to train a recurrent neural network to generate reinforcement-learning algorithms, which tell an AI how to behave to achieve given goals.

The upshot is that the DeepMind and OpenAI systems do not learn an algorithm that solves a specific challenge, such as recognizing images, but learn a learning algorithm that can be applied to multiple tasks and adapt as it goes. It’s like the old adage about teaching someone to fish: whereas a hand-designed algorithm can learn a particular task, these AIs are being made to learn how to learn by themselves. And some of them are performing better than human-designed ones.
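The flavor of this first approach can be sketched, with heavy simplification. Below, a tiny recurrent network is trained so that, within a single two-armed-bandit episode, its hidden state homes in on whichever arm happens to pay; the adapting is done by the network's own dynamics, not by us. The bandit task, the network sizes, and the use of evolution strategies as the outer trainer are all our illustrative substitutions, not the setup of the DeepMind or OpenAI papers:

# Meta-learning sketch: the OUTER loop improves the learner itself; the
# INNER "learning" happens inside the RNN's hidden state each episode.
import numpy as np

rng = np.random.default_rng(1)
H = 8                                        # hidden units
NPARAM = H * H + 3 * H + H * 2 + H + 2       # Wh, Wx, Wo, bh, bo

def unpack(theta):
    i = 0
    Wh = theta[i:i + H * H].reshape(H, H); i += H * H
    Wx = theta[i:i + 3 * H].reshape(3, H); i += 3 * H   # input: prev action (one-hot) + prev reward
    Wo = theta[i:i + H * 2].reshape(H, 2); i += H * 2   # output: logits for the two arms
    bh = theta[i:i + H]; i += H
    return Wh, Wx, Wo, bh, theta[i:i + 2]

def episode_return(theta, steps=20):
    Wh, Wx, Wo, bh, bo = unpack(theta)
    p0 = rng.choice([0.9, 0.1])              # arm 0's pay-off probability this episode
    h, x, total = np.zeros(H), np.zeros(3), 0.0
    for _ in range(steps):
        h = np.tanh(h @ Wh + x @ Wx + bh)
        a = int(np.argmax(h @ Wo + bo))      # the net picks an arm
        r = float(rng.random() < (p0 if a == 0 else 1 - p0))
        total += r
        x = np.array([1.0 - a, float(a), r]) # feed back what just happened
    return total

def fitness(theta, episodes=10):
    return np.mean([episode_return(theta) for _ in range(episodes)])

theta = rng.normal(0, 0.5, NPARAM)
for it in range(150):                        # outer loop: improve the learner
    noise = rng.normal(0, 1, (24, NPARAM))
    scores = np.array([fitness(theta + 0.1 * n) for n in noise])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    theta += 0.02 / 24 * noise.T @ scores    # ES gradient estimate (scale folded into lr)
print("avg reward per 20-step episode:", fitness(theta, 200))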

The second approach comes from Chelsea Finn at the University of California, Berkeley, and her colleagues. Called model-agnostic meta-learning (https://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/), or MAML, it trains a model using two machine-learning processes, one nested inside the other.

Roughly, here’s how it works. The inner process in MAML is trained on data and then tested—as usual. But then the outer model takes the performance of the inner model—how well it identifies images, say—and uses it to learn how to adjust that model’s learning algorithm to boost performance. It’s as if you had a school inspector watching over a bunch of teachers, each offering different learning techniques. The inspector checks which techniques help the students get the best scores and tweaks them accordingly.
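The nested structure is easiest to see in code. Here is a deliberately minimal version with a one-parameter linear model and squared error, so both gradients can be written by hand; it is our sketch of the MAML idea, not Finn's implementation:

# Minimal MAML with one-parameter linear models y = a*x: meta-learn an
# initial weight w0 from which ONE inner gradient step adapts well to any task.
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 0.1, 0.01                     # inner / outer learning rates

def grad(w, x, y):                          # dL/dw for L = mean((w*x - y)^2)
    return 2 * np.mean((w * x - y) * x)

w0 = 0.5                                    # the meta-learned initialisation
for _ in range(2000):
    a = rng.uniform(-2, 2)                  # sample a task: y = a*x
    x_tr, x_te = rng.normal(size=10), rng.normal(size=10)
    y_tr, y_te = a * x_tr, a * x_te

    # inner process: one ordinary training step on this task's data
    w_task = w0 - alpha * grad(w0, x_tr, y_tr)

    # outer process: differentiate the test loss THROUGH the inner step;
    # d(w_task)/d(w0) = 1 - alpha * d2L/dw2 = 1 - alpha * 2 * mean(x_tr^2)
    meta_grad = grad(w_task, x_te, y_te) * (1 - alpha * 2 * np.mean(x_tr ** 2))
    w0 -= beta * meta_grad

print("meta-learned init:", w0)  # drifts toward 0, the init that adapts
                                 # equally well to any slope a

The key line is the meta-gradient: the outer loop does not train the model directly, it trains the starting point so that the inner loop's single step works well on average across tasks.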

Through these approaches, researchers are building AI that is more robust, more generalized, and able to learn faster with less data. For example, Finn wants a robot that has learned to walk on flat ground to be able to transition, with minimal extra training, to walking on a slope or on grass or while carrying a load.

Last year, Clune and his colleagues extended Finn’s technique to design an algorithm that learns using fewer neurons so that it does not overwrite everything it has learned previously, a big unsolved problem in machine learning known as catastrophic forgetting. A trained model that uses fewer neurons, known as a “sparse” model, will have more unused neurons left over to dedicate to new tasks when retrained, which means that fewer of the “used” neurons will get overwritten. Clune found that setting his AI the challenge of learning more than one task led it to come up with its own version of a sparse model that outperformed human-designed ones.
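A hand-made toy shows why sparse updates help, though note that the masks below are fixed by us, whereas the point of Clune's result is that his system learned its own sparsity:

# Why sparse updates limit catastrophic forgetting, in miniature.
# Task A stores its knowledge in the first half of a weight vector;
# task B's data also touches those weights unless we mask the update.
import numpy as np

rng = np.random.default_rng(3)
D, m = 20, 60
half = D // 2

# Task A uses only the first half of the features; task B's targets depend
# only on the second half, but its inputs still carry noise in A's slots.
wa = rng.normal(size=half)
X_a = np.hstack([rng.normal(size=(m, half)), np.zeros((m, half))])
y_a = X_a[:, :half] @ wa
wb = rng.normal(size=half)
X_b = rng.normal(size=(m, D))
y_b = X_b[:, half:] @ wb

def train(w, X, y, mask, lr=0.01, steps=3000):
    for _ in range(steps):
        g = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * g * mask            # masked (sparse) gradient step
    return w

loss = lambda w, X, y: float(np.mean((X @ w - y) ** 2))

w = train(np.zeros(D), X_a, y_a, np.ones(D))              # learn task A densely
print("A after A:", loss(w, X_a, y_a))
w_dense  = train(w.copy(), X_b, y_b, np.ones(D))          # B overwrites A's weights
w_sparse = train(w.copy(), X_b, y_b, np.arange(D) >= half)  # B confined to its half
print("A after dense B :", loss(w_dense, X_a, y_a))       # large: A forgotten
print("A after sparse B:", loss(w_sparse, X_a, y_a))      # tiny: A preserved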

If we’re going all in on letting AI create and teach itself, then AIs should generate their own training environments, too—the schools and textbooks, as well as the lesson plans.

And the past year has seen a raft of projects in which AI has been trained on automatically generated data. Face-recognition systems are being trained with AI-generated faces, for example. AIs are also learning how to train each other. In one recent example, two robot arms worked together, with one arm learning to set tougher and tougher block-stacking challenges that trained the other to grip and grasp objects.

In fact, Clune wonders if human intuition about what kind of data an AI needs in order to learn may be off. For example, he and his colleagues have developed what he calls generative teaching networks, which learn what data they should generate to get the best results when training a model. In one experiment, he used one of these networks to adapt a data set of handwritten numbers that’s often used to train image-recognition algorithms. What it came up with looked very different from the original human-curated data set: hundreds of not-quite digits, such as the top half of the figure seven or what looked like two digits merged together. Some AI-generated examples were hard to decipher at all. Despite this, the AI-generated data still did a great job at training the handwriting recognition system to identify actual digits.

Don’t try to succeed
AI-generated data is still just a part of the puzzle. The long-term vision is to take all these techniques—and others not yet invented—and hand them over to an AI trainer that controls how artificial brains are wired, how they are trained, and what they are trained on. Even Clune is not clear on what such a future system would look like. Sometimes he talks about a kind of hyper-realistic simulated sandbox, where AIs can cut their teeth and skin their virtual knees. Something that complex is still years away. The closest thing yet is POET, the system Clune created with Uber’s Rui Wang and others.

POET was motivated by a paradox, says Wang. If you try to solve a problem you’ll fail; if you don’t try to solve it you’re more likely to succeed. This is one of the insights Clune takes from his analogy with evolution—amazing results that emerge from an apparently random process often cannot be re-created by taking deliberate steps toward the same end. There’s no doubt that butterflies exist, but rewind to their single-celled precursors and try to create them from scratch by choosing each step from bacterium to bug, and you’d likely fail.

POET starts its two-legged agent off in a simple environment, such as a flat path without obstacles. At first the agent doesn’t know what to do with its legs and cannot walk. But through trial and error, the reinforcement-learning algorithm controlling it learns how to move along flat ground. POET then generates a new random environment that’s different, but not necessarily harder to move in. The agent tries walking there. If there are obstacles in this new environment, the agent learns how to get over or across those. Every time an agent succeeds or gets stuck, it is moved to a new environment. Over time, the agents learn a range of walking and jumping actions that let them navigate harder and harder obstacle courses.

The team found that random switching of environments was essential.

For example, agents sometimes learned to walk on flat ground with a weird, half-kneeling shuffle, because that was good enough. “They never learn to stand up because they never need to,” says Wang. But after they had been forced to learn alternative strategies on obstacle-strewn ground, they could return to the early stage with a better way of walking—using both legs instead of dragging one behind, say—and then carry that improved version of themselves forward to harder challenges.
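The choreography of the outer loop can be sketched in a few lines. Everything below is a cartoon of our own: environments are bare difficulty numbers and agents bare skill numbers, where real POET pairs terrain generators with two-legged walkers trained by evolution strategies. The steps, though, are the ones the article describes: optimize, generate, filter, transfer.

# A heavily simplified POET-style open-ended loop.
import random

random.seed(0)

def score(agent, env):
    return agent - env                 # > 0 roughly means "course solved"

def optimize(agent, env, steps=20):
    """Stand-in for the inner reinforcement-learning loop."""
    for _ in range(steps):
        candidate = agent + random.gauss(0, 0.1)
        if score(candidate, env) > score(agent, env):
            agent = candidate          # hill-climb toward solving this env
    return agent

pairs = [(0.0, 0.0)]                   # (agent, environment); start on flat ground
for generation in range(50):
    # 1. every agent keeps learning in its paired environment
    pairs = [(optimize(a, e), e) for a, e in pairs]

    # 2. mutate an existing environment; admit it only if it is neither
    #    trivial nor impossible for at least one current agent
    new_env = random.choice(pairs)[1] + random.gauss(0.2, 0.2)
    able = [a for a, _ in pairs if -0.5 < score(a, new_env) < 0.5]
    if able:
        pairs.append((max(able), new_env))

    # 3. transfer: a migrant agent takes over an environment only if it
    #    beats the incumbent there
    best = max(a for a, _ in pairs)
    pairs = [(best if score(best, e) > score(a, e) else a, e) for a, e in pairs]
    pairs = pairs[-10:]                # cap the active population

print("hardest environment in play:", max(e for _, e in pairs))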

POET trains its bots in a way that no human would—it takes erratic, unintuitive paths to success. At each stage, the bots try to figure out a solution to whatever challenge they are presented with. By coping with a random selection of obstacles thrown their way, they get better overall. But there is no end point to this process, no ultimate test to pass or high score to beat.

Clune, Wang, and a number of their colleagues believe this is a profound insight. They are now exploring what it might mean for the development of supersmart machines. Could trying not to chart a specific path actually be a key breakthrough on the way to artificial general intelligence?

POET is already inspiring other researchers, such as Natasha Jaques and Michael Dennis at the University of California, Berkeley. They’ve developed a system called PAIRED that uses AI to generate a series of mazes to train another AI to navigate them.

Rui Wang thinks human-designed challenges are going to be a bottleneck and that real progress in AI will require AI to come up with its own. “No matter how good algorithms are today, they are always tested on some hand-designed benchmark,” he says. “It’s very hard to imagine artificial general intelligence coming from this, because it is bound by fixed goals.”

A new kind of intelligence
The rapid development of AI that can train itself also raises questions about how well we can control its growth. The idea of AI that builds better AI is a crucial part of the myth-making behind the “Singularity,” the imagined point in the future when AIs start to improve at an exponential rate and move beyond our control. Eventually, certain doomsayers warn, AI might decide it doesn’t need humans at all.

That’s not what any of these researchers have in mind: their work is very much focused on making today’s AI better. Machines that run amok remain a far-off anti-fantasy.

Even so, DeepMind’s Jane Wang has reservations. A big part of the attraction of using AI to make AI is that it can come up with designs and techniques that people hadn’t thought of. Yet Wang notes that not all surprises are good surprises: “Open-endedness is, by definition, something that’s unexpected.” If the whole idea is to get AI to do something you didn’t anticipate, it becomes harder to control. “That’s both exciting and scary,” she says.

Clune also stresses the importance of thinking about the ethics of the new technology from the start. There is a good chance that AI-designed neural networks and algorithms will be even harder to understand than today’s already opaque black-box systems. Are AIs generated by algorithms harder to audit for bias? Is it harder to guarantee that they will not behave in undesirable ways?

Clune hopes such questions will be asked and answered as more people realize the potential of self-generating AIs. “Most people in the machine-learning community don’t ever really talk about our overall path to extremely powerful AI,” he says—instead, they tend to focus on small, incremental improvements. Clune wants to start a conversation about the field’s biggest ambitions again.

His own ambitions tie back into his early interests in human intelligence and how it evolved. His grand vision is to set things up so that machines might one day see their own intelligence—or intelligences—emerge and improve through countless generations of trial and error, guided by algorithms with no ultimate blueprint in mind.

If AI starts to generate intelligence by itself, there’s no guarantee that it will be human-like. Rather than humans teaching machines to think like humans, machines might teach humans new ways of thinking.

“There’s probably a vast number of different ways to be very intelligent,” says Clune. “One of the things that excites me about AI is that we might come to understand intelligence more generally, by seeing what variation is possible.

“I think that’s fascinating. I mean, it’s almost like inventing interstellar travel and being able to go visit alien cultures. There would be no greater moment in the history of humankind than encountering an alien race and learning about its culture, its science, everything. Interstellar travel is exceedingly difficult, but we have the ability to potentially create alien intelligences digitally.”
Regards.

puede ser

Re:STEM
« Reply #241 on: December 06, 2022, 23:12:34 pm »
https://www.zdnet.com/article/stack-overflow-temporarily-bans-answers-from-openais-chatgpt-chatbot/
Quote
Stack Overflow temporarily bans answers from OpenAI's ChatGPT chatbot

The Q&A site has been flooded with ChatGPT coding answers that look correct but often aren't, and moderators are asking for it to stop.

Over this long weekend a lot of people passed the time putting the chatbot on the spot, and apparently it didn't come out looking too good. An AI that tries to learn from us should above all be taught Cipolla's first law of stupidity :roto2:


pollo

Re:STEM
« Reply #242 on: December 07, 2022, 14:03:21 pm »
Quote from: Cadavre Exquis on December 06, 2022, 21:15:02 pm
(the full "AI is learning how to create itself" article, quoted in Reply #240 above)
Personally, I hate how verifiable facts get mixed with a heap of speculation and sensationalist snake-oil projection (starting with the headline, which is false and ignores the work of the people in this field).
It's a pattern common to all journalism in this field. Every so often they come around selling us general AI just around the corner, faster-than-light travel becoming possible, or a nearly finished perpetual-motion machine. It's shameful.
« Last edit: December 07, 2022, 14:06:15 pm by pollo »

Benzino Napaloni

Re:STEM
« Reply #243 on: December 07, 2022, 15:03:42 pm »
Quote from: pollo on December 07, 2022, 14:03:21 pm
Personally, I hate how verifiable facts get mixed with a heap of speculation and sensationalist snake-oil projection (starting with the headline, which is false and ignores the work of the people in this field).
It's a pattern common to all journalism in this field. Every so often they come around selling us general AI just around the corner, faster-than-light travel becoming possible, or a nearly finished perpetual-motion machine. It's shameful.

Not to mention the times the snake-oil salesman of the day has already been caught red-handed: incomplete results, ad hoc results, or even a hidden human operator.

Quote from: puede ser on December 06, 2022, 23:12:34 pm
https://www.zdnet.com/article/stack-overflow-temporarily-bans-answers-from-openais-chatgpt-chatbot/
Quote
Stack Overflow temporarily bans answers from OpenAI's ChatGPT chatbot

The Q&A site has been flooded with ChatGPT coding answers that look correct but often aren't, and moderators are asking for it to stop.

Over this long weekend a lot of people passed the time putting the chatbot on the spot, and apparently it didn't come out looking too good. An AI that tries to learn from us should above all be taught Cipolla's first law of stupidity :roto2:



Précisément, mon ami, précisément. :roto2: They never, I repeat, never give a complete, real demonstration, one where anyone can run their own test and poke at the weak spots if need be. The demos are always trimmed down, always limited. And the awkward questions never get answered; more often they simply aren't allowed to be asked at all.

Rocoso

Re:STEM
« Reply #244 on: December 07, 2022, 19:17:05 pm »
We asked Stable Diffusion, on NightCafé, to draw some of the ideas from this forum:




As you can see, the dragging snake-tail still eludes it…

We can all agree that the ugliness of the allegory's output matches that of the reality it stands for.
« Last edit: December 07, 2022, 19:19:35 pm by Rocoso »

sudden and sharp

Re:STEM
« Reply #245 on: December 07, 2022, 20:00:34 pm »
He should try dragons... they are still "serpents", after all.

saturno

Re:STEM
« Reply #246 on: December 08, 2022, 14:38:02 pm »
Rejoice: the structural transition, entertaining as it is, is revolutionary.

PPCC v/es: http://ppcc-es.blogspot

Rocoso

Re:STEM
« Reply #247 on: December 08, 2022, 15:35:21 pm »
Please forgive us for getting carried away with this business of AI-generated images. We asked Dall E 2, on the Open AI website, to give us its artistic interpretation of the end of the bubble.

This one is titled "hyperealistic image of landlords suffering in despair as the housing prices bubble explodes jn 2023":



This other one is its Spanish version ("hyperealistic image of Spanish landlords suffering in despair as the housing prices bubble explodes in 2023"):



We liked it so much that we asked for variations on the original work:



Notice how, in both cases, the bubble is symbolically reinterpreted as a tangible entity, deformed to the point of the monstrous. Note, too, that the protagonists are presented as the "vertical dead" of the post-bubble; is it reading us?

Finally, we tried to defeat the machine(?) by demanding a pictorial interpretation of the connection between the end of the bubble, the crisis of the financial system, and the social drama they cause ("A photorealistic image of real estate prices bubble exploding in the year 2023 in a context of financial crisis and social drama"):



It was not impressed. With the coldness of one who knows itself superior, it overwhelms us with the simplicity of its reply image and unsettles us with a message encrypted in some newspeak; an arcanum beyond our reach.





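For the record, these requests can also be scripted; a minimal sketch against the OpenAI image API as it stood in late 2022 (openai Python package v0.x; the key placeholder, file name and sizes are our assumptions):

Code:
# Sketch of text-to-image and variations with the `openai` package
# (v0.x, late 2022). Key, file name and sizes are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # your API key

# 1) Generate an image from a text prompt, as in the posts above.
gen = openai.Image.create(
    prompt="hyperrealistic image of landlords suffering in despair "
           "as the housing prices bubble explodes in 2023",
    n=1,
    size="1024x1024",
)
print(gen["data"][0]["url"])

# 2) Ask for variations on a previously saved image.
with open("original.png", "rb") as f:
    var = openai.Image.create_variation(image=f, n=2, size="1024x1024")
for item in var["data"]:
    print(item["url"])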
« Last edit: December 08, 2022, 16:07:41 pm by Rocoso »

breades

Re:STEM
« Reply #248 on: December 09, 2022, 00:21:14 am »

wanderer

Re:STEM
« Reply #249 on: December 11, 2022, 14:47:55 pm »
A clarifying (and demystifying) interview with Gary Marcus, in which he calls the intelligence of today's AIs into question:

Quote
INTERVIEW WITH GARY MARCUS
This AI veteran explains why ChatGPT is "dangerously stupid"
ChatGPT has sparked an intense debate among leading AI researchers. Some consider it a great breakthrough. Others, like Gary Marcus, believe it is like putting "monkeys in front of a keyboard"

https://www.elconfidencial.com/tecnologia/2022-12-11/chatgpt-openai-gary-marcus-ia-ai-inteligencia-artificial_3537495/

What he says strongly reminds me of the Chinese Room thought experiment:

Quote
The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a 1980 article by American philosopher John Searle (1932– ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument.
[...]

https://plato.stanford.edu/entries/chinese-room/
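
Searle's syntactic point is easy to caricature in code: a minimal sketch of a "room" that replies by mechanical lookup, with no grasp of what the symbols mean (the rulebook and phrases are ours, purely illustrative):

Code:
# Toy "Chinese room": purely syntactic symbol shuffling, no semantics.
# The rulebook is an illustrative stand-in for Searle's program.
RULEBOOK = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会说中文吗？": "会一点。",  # "do you speak Chinese?" -> "a little."
}

def room(slip: str) -> str:
    """Follow the rulebook mechanically, like Searle inside the room."""
    return RULEBOOK.get(slip, "请再说一遍。")  # default: "please say it again."

for slip in ["你好", "你会说中文吗？", "天气怎么样？"]:
    print(slip, "->", room(slip))

# Outsiders may judge the replies appropriate, yet nothing in this program
# "understands" Chinese -- which is exactly the narrow conclusion above.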
"De lo que que no se puede hablar, es mejor callar" (L. Wittgenstein; Tractatus Logico-Philosophicus).

dmar

Re:STEM
« Reply #250 on: December 11, 2022, 16:00:01 pm »
I humbly suggest that, given how sprawling the subject is, the misnamed "AI" should get a thread of its own.

pollo

    • Ver Perfil
Re:STEM
« Reply #251 on: December 12, 2022, 01:49:47 am »
Quote
I humbly suggest that, given how sprawling the subject is, the misnamed "AI" should get a thread of its own.
Well, I'm not so sure; it seems to me it belongs in this thread. Splitting conversations makes sense on a forum with many active users. Here it doesn't strike me as excessive.

Cadavre Exquis

Re:STEM
« Reply #252 on: December 14, 2022, 22:35:27 pm »

Quote
On the fusion ignition at NIF, announced with great fanfare
By Francisco R. Villatoro, December 13, 2022. Category(s): Ciencia • Noticias • Science


Today a great fusion-energy milestone was announced with great fanfare: the first ignition with net energy gain at NIF (National Ignition Facility) at LLNL (Lawrence Livermore National Laboratory), California (USA). On December 5 its 192 lasers injected 2.05 megajoules of energy into the hohlraum containing the fuel capsule, which produced 3.15 megajoules, achieving Q = 1.54 for a few nanoseconds. A milestone awaited for sixteen years, but one we must put in context. First, NIF calls Q the ratio between the energy produced by the fuel (a small sphere of deuterium and tritium) and the energy injected by the lasers; however, between 300 and 400 megajoules [322 megajoules according to Nature] of electricity from the grid are needed to generate the laser pulses, so strictly speaking only Q ~ 0.01 has been achieved, very far from the Q = 0.7 of the British tokamak JET. Second, nobody knows how to generate electricity in pulsed-fusion facilities like NIF, whereas in tokamaks it is very clear how to do it. And third, I have serious doubts about how the energy produced was estimated by X-ray tomography; at the press conference it was said that an external team of experts was asked to peer-review the estimate, which suggests they themselves had doubts about it. Even so, the new result will enter the history books of fusion energy.

Right now nobody knows how this new milestone was achieved. Further down I venture two possible conjectures. For now all we have is the press conference on YouTube (https://youtu.be/Eke5PawU7rE), which offers no technical information whatsoever. So I recommend reading my brief comments in Antonio Martínez Ron's piece, «Cuatro motivos para tomarse el anuncio de la energía de fusión con más calma», Next, Voz Pópuli, 13 Dec 2022. And also the pieces by Geoff Brumfiel, «U.S. reaches a fusion power milestone. Will it be enough to save the planet?», NPR, 13 Dec 2022; Jeff Tollefson, Elizabeth Gibney, «Nuclear-fusion lab achieves 'ignition': what does it mean?», News, Nature, 13 Dec 2022; «With historic explosion, a long sought fusion breakthrough», News, Science, 13 Dec 2022; among others. [PS 14 Dec 2022] I also recommend reading Iván Rivera, «¿Está la fusión nuclear comercial a la vuelta de la esquina?», Naukas, 14 Dec 2022. [/PS]


Nobody knows how this success was achieved; no paper or technical report with the details of the measurement has been published. Right now one can only conjecture a few things from NIF's most recent publications. On the one hand, they suggest that a smaller hohlraum was used (source). The usual one was a 6.4 mm diameter cylinder with 3.1 mm laser entrance holes, but this year's tests used a 6.2 mm one with 2.7 mm holes; being smaller, energy losses are reduced, the X-ray energy reaching the fuel sphere increases by ~6%, and a more symmetric, more efficient implosion is achieved.





I also find it reasonable to conjecture that the double-shock technique was used to achieve ignition (source). Instead of injecting a single pulse with all 192 lasers, two pulses are injected: the first one longer, with lower energy and tripled frequency (3ω), using 128 lasers; the second one shorter, with higher energy and doubled frequency (2ω), using the remaining 64 lasers. The energy/power of the "blue" 3ω peak is ~2 MJ/500 TW, while that of the "green" 2ω peak is ~2.5 MJ/850 TW. According to magnetohydrodynamic simulations, this configuration allows an energy yield about four times higher than the conventional single-shock technique (in the simulations, a single-shock yield of 1.3 MJ, NIF's previous record, becomes 5.3 MJ with a double shock). Since 3.15 MJ was achieved, it seems reasonable to me that this technique was used.

Of course, these are just two conjectures of mine that may be completely wrong. But what does seem clear is that the new success stems from major changes with respect to what was being tested at the beginning of this year.
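
The headline numbers in the quote above are easy to sanity-check; a minimal sketch (the 322 MJ wall-plug figure is the one the post attributes to Nature):

Code:
# Back-of-the-envelope check of the figures quoted above.
laser_in_MJ   = 2.05    # laser energy delivered into the hohlraum
fusion_out_MJ = 3.15    # fusion yield of the D-T capsule
wallplug_MJ   = 322.0   # grid energy needed to fire the lasers (Nature)

print(f"Q (NIF's definition) = {fusion_out_MJ / laser_in_MJ:.2f}")   # ~1.54
print(f"Q (wall-plug)        = {fusion_out_MJ / wallplug_MJ:.3f}")   # ~0.010

Which is why the ~0.01 engineering gain sits so far below JET's 0.7.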
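
On the smaller-hohlraum conjecture, a rough geometric comparison of the laser entrance holes (our toy arithmetic only; the ~6% X-ray gain quoted above comes from NIF's own analysis, not from this):

Code:
import math

d_old, d_new = 3.1, 2.7  # laser-entrance-hole diameters, mm
area = lambda d: math.pi * (d / 2) ** 2
print(f"LEH area shrinks by ~{1 - area(d_new) / area(d_old):.0%}")  # ~24%
# Smaller holes let less of the hohlraum's X-ray bath escape,
# consistent with more energy reaching the capsule.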
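
And the double-shock energy/power pairs imply pulse lengths via t = E / P (simple dimensional arithmetic on the quoted figures; real pulse shapes are far more complex than a single duration):

Code:
MJ, TW = 1e6, 1e12  # joules, watts

pulses = {
    "3-omega 'blue'  (128 beams)": (2.0 * MJ, 500 * TW),
    "2-omega 'green' (64 beams)":  (2.5 * MJ, 850 * TW),
}
for name, (energy_J, power_W) in pulses.items():
    print(f"{name}: ~{energy_J / power_W * 1e9:.1f} ns")  # ~4.0 and ~2.9 ns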
Regards.

Cadavre Exquis

Re:STEM
« Reply #253 on: December 17, 2022, 10:32:49 am »
Quote
France's Nuclear Reactor Has Been Delayed Again
Posted by BeauHD on Friday December 16, 2022 @09:02PM from the tougher-than-it-looks dept.

Welding problems will require a further six-month delay for France's next-generation nuclear reactor at Flamanville, the latest setback for the flagship technology the country hopes to sell worldwide, state-owned electricity group EDF said Friday. Barron's reports:
Quote
The delay will also add 500 million euros to a project whose total cost is now estimated at around 13 billion euros ($13.8 billion), blowing past the initial projection of 3.3 billion euros when construction began in 2007. It comes as EDF is already struggling to restart dozens of nuclear reactors taken down for maintenance or safety work that has proved more challenging than originally thought.

EDF also said Friday that one of the two conventional reactors at Flamanville would not be brought back online until February 19 instead of next week as planned, while one at Penly in northwest France would be restarted on March 20 instead of in January. EDF said the latest problems at Flamanville, on the English Channel in Normandy, emerged last summer when engineers discovered that welds in cooling pipes for the new pressurized water reactor, called EPR, were not tolerating extreme heat as expected. As a result, the new reactor will start generating power only in mid-2024.
Regards.

wanderer

Re:STEM
« Reply #254 on: December 23, 2022, 15:08:25 pm »
Tesla's Autopilot is the very best of the best:

Quote
INCIDENT IN CALIFORNIA
Tesla's Autopilot "crashes" again: accused of causing a chain-reaction collision
The driver of the 2021 Model S says the Full Self-Driving feature was engaged when the car changed lanes and abruptly slowed down, causing an eight-car pile-up

https://www.elconfidencial.com/tecnologia/2022-12-23/tesla-autopilot-nuevo-accidente_3547156/
"De lo que que no se puede hablar, es mejor callar" (L. Wittgenstein; Tractatus Logico-Philosophicus).
