
STEM


Maloserá:
It turns out that one technology special leads to another, and so on. I'm picking them in semi-random order, so we can read what The Economist has been publishing in manageable doses. It works great for me, since it forces me to actually read them. I'll post something each week. I hope The Economist pieces complement the other, more purely scientific ones, because they are written with applications and economic impact in mind.

I'll start with AI and its limitations, an article from June 2020.

https://www.economist.com/technology-quarterly/2020/06/11/an-understanding-of-ais-limitations-is-starting-to-sink-in

An understanding of AI’s limitations is starting to sink in
After years of hype, many people feel AI has failed to deliver, says Tim Cross

It will be as if the world had created a second China, made not of billions of people and millions of factories, but of algorithms and humming computers. PwC, a professional-services firm, predicts that artificial intelligence (AI) will add $16trn to the global economy by 2030. The total of all activity—from banks and biotech to shops and construction—in the world’s second-largest economy was just $13trn in 2018.

PwC’s claim is no outlier. Rival prognosticators at McKinsey put the figure at $13trn. Others go for qualitative drama, rather than quantitative. Sundar Pichai, Google’s boss, has described developments in AI as “more profound than fire or electricity”. Other forecasts see similarly large changes, but less happy ones. Clever computers capable of doing the jobs of radiologists, lorry drivers or warehouse workers might cause a wave of unemployment.

Yet lately doubts have been creeping in about whether today’s AI technology is really as world-changing as it seems. It is running up against limits of one kind or another, and has failed to deliver on some of its proponents’ more grandiose promises.

There is no question that AI—or, to be precise, machine learning, one of its sub-fields—has made much progress. Computers have become dramatically better at many things they previously struggled with. The excitement began to build in academia in the early 2010s, when new machine-learning techniques led to rapid improvements in tasks such as recognising pictures and manipulating language. From there it spread to business, starting with the internet giants. With vast computing resources and oceans of data, they were well placed to adopt the technology. Modern AI techniques now power search engines and voice assistants, suggest email replies, power the facial-recognition systems that unlock smartphones and police national borders, and underpin the algorithms that try to identify unwelcome posts on social media.

Perhaps the highest-profile display of the technology’s potential came in 2016, when a system built by DeepMind, a London-based AI firm owned by Alphabet, Google’s corporate parent, beat one of the world’s best players at Go, an ancient Asian board game. The match was watched by tens of millions; the breakthrough came years, even decades, earlier than AI gurus had expected.

As Mr Pichai’s comparison with electricity and fire suggests, machine learning is a general-purpose technology—one capable of affecting entire economies. It excels at recognising patterns in data, and that is useful everywhere. Ornithologists use it to classify birdsong; astronomers to hunt for planets in glimmers of starlight; banks to assess credit risk and prevent fraud. In the Netherlands, the authorities use it to monitor social-welfare payments. In China AI-powered facial recognition lets customers buy groceries—and helps run the repressive mass-surveillance system the country has built in Xinjiang, a Muslim-majority region.



AI’s heralds say further transformations are still to come, for better and for worse. In 2016 Geoffrey Hinton, a computer scientist who has made fundamental contributions to modern AI, remarked that “it’s quite obvious that we should stop training radiologists,” on the grounds that computers will soon be able to do everything they do, only cheaper and faster. Developers of self-driving cars, meanwhile, predict that robotaxis will revolutionise transport. Eric Schmidt, a former chairman of Google (and a former board member of The Economist’s parent company) hopes that AI could accelerate research, helping human scientists keep up with a deluge of papers and data.

In January a group of researchers published a paper in Cell describing an AI system that had predicted antibacterial function from molecular structure. Of 100 candidate molecules selected by the system for further analysis, one proved to be a potent new antibiotic. The covid-19 pandemic has thrust such medical applications firmly into the spotlight. An AI firm called BlueDot claims it spotted signs of a novel virus in reports from Chinese hospitals as early as December. Researchers have been scrambling to try to apply AI to everything from drug discovery to interpreting medical scans and predicting how the virus might evolve.

Dude, where’s my self-driving car?
This is not the first wave of AI-related excitement (see timeline in next article). The field began in the mid-1950s when researchers hoped that building human-level intelligence would take a few years—a couple of decades at most. That early optimism had fizzled by the 1970s. A second wave began in the 1980s. Once again the field’s grandest promises went unmet. As reality replaced the hype, the booms gave way to painful busts known as “AI winters”. Research funding dried up, and the field’s reputation suffered.

Many of the grandest claims made about AI have once again failed to become reality

Modern AI technology has been far more successful. Billions of people use it every day, mostly without noticing, inside their smartphones and internet services. Yet despite this success, the fact remains that many of the grandest claims made about AI have once again failed to become reality, and confidence is wavering as researchers start to wonder whether the technology has hit a wall. Self-driving cars have become more capable, but remain perpetually on the cusp of being safe enough to deploy on everyday streets. Efforts to incorporate AI into medical diagnosis are, similarly, taking longer than expected: despite Dr Hinton’s prediction, there remains a global shortage of human radiologists.

Surveying the field of medical AI in 2019, Eric Topol, a cardiologist and AI enthusiast, wrote that “the state of AI hype has far exceeded the state of AI science, especially when it pertains to validation and readiness for implementation in patient care”. Despite a plethora of ideas, covid-19 is mostly being fought with old weapons that are already to hand. Contact tracing has been done with shoe leather and telephone calls. Clinical trials focus on existing drugs. Plastic screens and paint on the pavement enforce low-tech distancing advice.

The same consultants who predict that AI will have a world-altering impact also report that real managers in real companies are finding AI hard to implement, and that enthusiasm for it is cooling. Svetlana Sicular of Gartner, a research firm, says that 2020 could be the year AI falls onto the downslope of her firm’s well-publicised “hype cycle”. Investors are beginning to wake up to bandwagon-jumping: a survey of European AI startups by MMC, a venture-capital fund, found that 40% did not seem to be using any AI at all. “I think there’s definitely a strong element of ‘investor marketing’,” says one analyst delicately.

This Technology Quarterly will investigate why enthusiasm is stalling. It will argue that although modern AI techniques are powerful, they are also limited, and they can be troublesome and difficult to deploy. Those hoping to make use of AI’s potential must confront two sets of problems.

The first is practical. The machine-learning revolution has been built on three things: improved algorithms, more powerful computers on which to run them, and—thanks to the gradual digitisation of society—more data from which they can learn. Yet data are not always readily available. It is hard to use AI to monitor covid-19 transmission without a comprehensive database of everyone’s movements, for instance. Even when data do exist, they can contain hidden assumptions that can trip the unwary. The newest AI systems’ demand for computing power can be expensive. Large organisations always take time to integrate new technologies: think of electricity in the 20th century or the cloud in the 21st. None of this necessarily reduces AI’s potential, but it has the effect of slowing its adoption.

The second set of problems runs deeper, and concerns the algorithms themselves. Machine learning uses thousands or millions of examples to train a software model (the structure of which is loosely based on the neural architecture of the brain). The resulting systems can do some tasks, such as recognising images or speech, far more reliably than those programmed the traditional way with hand-crafted rules, but they are not “intelligent” in the way that most people understand the term. They are powerful pattern-recognition tools, but lack many cognitive abilities that biological brains take for granted. They struggle with reasoning, generalising from the rules they discover, and with the general-purpose savoir faire that researchers, for want of a more precise description, dub “common sense”. The result is an artificial idiot savant that can excel at well-bounded tasks, but can get things very wrong if faced with unexpected input.

Without another breakthrough, these drawbacks put fundamental limits on what AI can and cannot do. Self-driving cars, which must navigate an ever-changing world, are already delayed, and may never arrive at all. Systems that deal with language, like chatbots and personal assistants, are built on statistical approaches that generate a shallow appearance of understanding, without the reality. That will limit how useful they can become. Existential worries about clever computers making radiologists or lorry drivers obsolete—let alone, as some doom-mongers suggest, posing a threat to humanity’s survival—seem overblown. Predictions of a Chinese-economy-worth of extra GDP look implausible.
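
Aside (not from the article): the "idiot savant" point can be made concrete with a toy example. The sketch below, which assumes scikit-learn and NumPy and uses invented data, trains a small classifier on two well-separated clusters. On that well-bounded task it is near-perfect; handed an input unlike anything it was trained on, it still answers confidently, because extrapolating its learned pattern is all it can do.

--- Code: (Python) ---
# Illustrative sketch only: a pattern-recognition model is reliable inside
# the distribution it was trained on, and brittle outside it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Two invented, well-separated clusters stand in for "thousands of examples".
class_a = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(n, 2))   # label 0
class_b = rng.normal(loc=[6.0, 6.0], scale=0.5, size=(n, 2))   # label 1
X = np.vstack([class_a, class_b])
y = np.array([0] * n + [1] * n)

model = LogisticRegression().fit(X, y)

# In distribution: essentially perfect on the well-bounded task.
print("training accuracy:", model.score(X, y))

# Unexpected input, far from anything seen in training: the model still
# returns a confident answer, because pattern extrapolation is all it has.
weird_input = np.array([[40.0, -3.0]])
print("class probabilities for the unexpected input:",
      model.predict_proba(weird_input))
--- End code ---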

Today’s “AI summer” is different from previous ones. It is brighter and warmer, because the technology has been so widely deployed. Another full-blown winter is unlikely. But an autumnal breeze is picking up.

wanderer:
Many thanks, Maloserá, for bringing us this contribution, which puts AI in its place. Talk about hyperbolic hyperbole...  :roto2: !!


--- Quote ---The second set of problems runs deeper, and concerns the algorithms themselves. Machine learning uses thousands or millions of examples to train a software model (the structure of which is loosely based on the neural architecture of the brain). The resulting systems can do some tasks, such as recognising images or speech, far more reliably than those programmed the traditional way with hand-crafted rules, but they are not “intelligent” in the way that most people understand the term. They are powerful pattern-recognition tools, but lack many cognitive abilities that biological brains take for granted. They struggle with reasoning, generalising from the rules they discover, and with the general-purpose savoir faire that researchers, for want of a more precise description, dub “common sense”. The result is an artificial idiot savant that can excel at well-bounded tasks, but can get things very wrong if faced with unexpected input.

Without another breakthrough, these drawbacks put fundamental limits on what AI can and cannot do. Self-driving cars, which must navigate an ever-changing world, are already delayed, and may never arrive at all. Systems that deal with language, like chatbots and personal assistants, are built on statistical approaches that generate a shallow appearance of understanding, without the reality. That will limit how useful they can become. Existential worries about clever computers making radiologists or lorry drivers obsolete—let alone, as some doom-mongers suggest, posing a threat to humanity’s survival—seem overblown. Predictions of a Chinese-economy-worth of extra GDP look implausible.
--- End quote ---

I share Penrose's view: it's not that AI in the strict sense is impossible, but that the current methods of approaching it don't even come close, and if by chance such a thing were possible, it would necessarily have to have consciousness, which cannot be simulated, both because it is an emergent property and because it exceeds what any Turing machine can do.

Maloserá:

--- Quote from: wanderer on January 06, 2021, 18:58:03 pm ---I share Penrose's view: it's not that AI in the strict sense is impossible, but that the current methods of approaching it don't even come close, and if by chance such a thing were possible, it would necessarily have to have consciousness, which cannot be simulated, both because it is an emergent property and because it exceeds what any Turing machine can do.

--- End quote ---

A few days ago Pollo wrote a very good post in the main thread (ppcc/asustadísimos); I don't know whether you've read it.

pollo:

--- Quote from: wanderer on January 06, 2021, 18:58:03 pm ---Many thanks, Maloserá, for bringing us this contribution, which puts AI in its place. Talk about hyperbolic hyperbole...  :roto2: !!


--- Quote ---The second set of problems runs deeper, and concerns the algorithms themselves. Machine learning uses thousands or millions of examples to train a software model (the structure of which is loosely based on the neural architecture of the brain). The resulting systems can do some tasks, such as recognising images or speech, far more reliably than those programmed the traditional way with hand-crafted rules, but they are not “intelligent” in the way that most people understand the term. They are powerful pattern-recognition tools, but lack many cognitive abilities that biological brains take for granted. They struggle with reasoning, generalising from the rules they discover, and with the general-purpose savoir faire that researchers, for want of a more precise description, dub “common sense”. The result is an artificial idiot savant that can excel at well-bounded tasks, but can get things very wrong if faced with unexpected input.

Without another breakthrough, these drawbacks put fundamental limits on what AI can and cannot do. Self-driving cars, which must navigate an ever-changing world, are already delayed, and may never arrive at all. Systems that deal with language, like chatbots and personal assistants, are built on statistical approaches that generate a shallow appearance of understanding, without the reality. That will limit how useful they can become. Existential worries about clever computers making radiologists or lorry drivers obsolete—let alone, as some doom-mongers suggest, posing a threat to humanity’s survival—seem overblown. Predictions of a Chinese-economy-worth of extra GDP look implausible.
--- End quote ---

I share Penrose's view: it's not that AI in the strict sense is impossible, but that the current methods of approaching it don't even come close, and if by chance such a thing were possible, it would necessarily have to have consciousness, which cannot be simulated, both because it is an emergent property and because it exceeds what any Turing machine can do.

--- End quote ---
Perhaps not necessarily, but in my opinion, logically, such an intelligence must have some notion of what we call "common sense": the ability to understand context and to run heuristic simulations of reality based on general facts about the known world. That is exactly what we do when we face generic problems we have never met before: we imagine what might happen (which is just that, an approximate simulation in our head based on our common sense) and we try it to see what happens. If it doesn't work, we draw conclusions by comparing our simulation with the real-world outcome, we readjust, and we come away knowing reality a little better.

For that, such a model would have to be trained in the real world, interacting with all kinds of information tied to concepts (which is what a child does while growing up: it basically records all the connotative information about each and every one of its interactions with reality).

Once that point is reached, I think researchers will run into the same problem we have: if we reason about, say, mathematical problems from the level of abstraction of our minds, we can never be as fast as a calculator, because what makes a machine so fast is precisely the absence of complex reasoning (which lets the calculations be resolved through simple physical interactions, arranged in advance and deliberately by whoever designs the circuit).

Put another way: I believe the price to pay for intelligence and general reasoning is that it cannot be done at the speed of a machine. For that you need a mechanism specialised in solving specific problems without reasoning about them on the spot.

This matches our experience with complex skills: when someone has been doing something for years, they don't even have to think about it. It has become automatic. Yet when first learning, one is extremely clumsy, even though the "hardware" is the same.

The difference between today's AI and us is that we can reach that automation by ourselves, whereas current AI can never get there unless someone specifically programs or trains it for the purpose. I would venture that without an instinct for curiosity this is not feasible in practice, which is why the young of every complex species have a sense of curiosity shaped by evolution.

Maloserá:
I'm continuing with The Economist's AI special.

The business world: Businesses are finding AI hard to adopt
Not every company is an internet giant

“Facebook: the inside story”, Steven Levy’s recent book about the American social-media giant, paints a vivid picture of the firm’s size, not in terms of revenues or share price but in the sheer amount of human activity that thrums through its servers. 1.73bn people use Facebook every day, writing comments and uploading videos. An operation on that scale is so big, writes Mr Levy, “that it can only be policed by algorithms or armies”.

In fact, Facebook uses both. Human moderators work alongside algorithms trained to spot posts that violate either an individual country’s laws or the site’s own policies. But algorithms have many advantages over their human counterparts. They do not sleep, or take holidays, or complain about their performance reviews. They are quick, scanning thousands of messages a second, and untiring. And, of course, they do not need to be paid.

And it is not just Facebook. Google uses machine learning to refine search results, and target advertisements; Amazon and Netflix use it to recommend products and television shows to watch; Twitter and TikTok to suggest new users to follow. The ability to provide all these services with minimal human intervention is one reason why tech firms’ dizzying valuations have been achieved with comparatively small workforces.

Firms in other industries would love that kind of efficiency. Yet the magic is proving elusive. A survey carried out by Boston Consulting Group and MIT polled almost 2,500 bosses and found that seven out of ten said their AI projects had generated little impact so far. Two-fifths of those with “significant investments” in AI had yet to report any benefits at all.

Perhaps as a result, bosses seem to be cooling on the idea more generally. Another survey, this one by PwC, found that the number of bosses planning to deploy AI across their firms was 4% in 2020, down from 20% the year before. The number saying they had already implemented AI in “multiple areas” fell from 27% to 18%. Euan Cameron at PwC says that rushed trials may have been abandoned or rethought, and that the “irrational exuberance” that has dominated boardrooms for the past few years is fading.

There are several reasons for the reality check. One is prosaic: businesses, particularly big ones, often find change difficult. One parallel from history is with the electrification of factories. Electricity offers big advantages over steam power in terms of both efficiency and convenience. Most of the fundamental technologies had been invented by the end of the 19th century. But electric power nonetheless took more than 30 years to become widely adopted in the rich world.

Reasons specific to AI exist, too. Firms may have been misled by the success of the internet giants, which were perfectly placed to adopt the new technology. They were already staffed by programmers, and were already sitting on huge piles of user-generated data. The uses to which they put AI, at least at first—improving search results, displaying adverts, recommending new products and the like—were straightforward and easy to measure.

Not everyone is so lucky. Finding staff can be tricky for many firms. AI experts are scarce, and command luxuriant salaries. “Only the tech giants and the hedge funds can afford to employ these people,” grumbles one senior manager at an organisation that is neither. Academia has been a fertile recruiting ground.

A more subtle problem is that of deciding what to use AI for. Machine intelligence is very different from the biological sort. That means that gauging how difficult machines will find a task can be counter-intuitive. AI researchers call the problem Moravec’s paradox, after Hans Moravec, a Canadian roboticist, who noted that, though machines find complex arithmetic and formal logic easy, they struggle with tasks like co-ordinated movement and locomotion which humans take completely for granted.

For example, almost any human can staff a customer-support helpline. Very few can play Go at grandmaster level. Yet Paul Henninger, an AI expert at KPMG, an accountancy firm, says that building a customer-service chatbot is in some ways harder than building a superhuman Go machine. Go has only two possible outcomes—win or lose—and both can be easily identified. Individual games can play out in zillions of unique ways, but the underlying rules are few and clearly specified. Such well-defined problems are a good fit for AI. By contrast, says Mr Henninger, “a single customer call after a cancelled flight has…many, many more ways it could go”.

What to do? One piece of advice, says James Gralton, engineering director at Ocado, a British warehouse-automation and food-delivery firm, is to start small, and pick projects that can quickly deliver obvious benefits. Ocado’s warehouses are full of thousands of robots that look like little filing cabinets on wheels. Swarms of them zip around a grid of rails, picking up food to fulfil orders from online shoppers.

Ocado’s engineers used simple data from the robots, like electricity consumption or torque readings from their wheel motors, to train a machine-learning model to predict when a damaged or worn robot was likely to fail. Since broken-down robots get in the way, removing them for pre-emptive maintenance saves time and money. And implementing the system was comparatively easy.

The robots, warehouses and data all existed already. And the outcome is clear, too, which makes it easy to tell how well the AI model is working: either the system reduces breakdowns and saves money, or it does not. That kind of “predictive maintenance”, along with things like back-office automation, is a good example of what PwC approvingly calls “boring AI” (though Mr Gralton would surely object).
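
Aside (not from the article): a minimal sketch of what this kind of predictive-maintenance model can look like. The telemetry features, failure labels and threshold below are invented for illustration and assume scikit-learn and NumPy; this is not Ocado's actual system, just the general recipe of training a classifier on simple sensor readings and pulling robots whose predicted failure risk is high.

--- Code: (Python) ---
# Illustrative sketch of "boring AI" predictive maintenance on made-up data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_robots = 2000

# Invented telemetry per robot: mean motor current (A) and wheel torque (Nm).
current = rng.normal(3.0, 0.5, n_robots)
torque = rng.normal(12.0, 2.0, n_robots)

# Synthetic ground truth: robots drawing unusually high current or torque are
# more likely to break down soon (label 1 = failed within the next week).
risk = 0.05 + 0.4 * (current > 3.6) + 0.4 * (torque > 15.0)
failed = rng.random(n_robots) < risk

X = np.column_stack([current, torque])
X_train, X_test, y_train, y_test = train_test_split(
    X, failed, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The outcome is binary and easy to measure: flag robots whose predicted
# failure probability crosses a maintenance threshold and pull them early.
prob_fail = model.predict_proba(X_test)[:, 1]
flagged = prob_fail > 0.5
print(f"flagged {flagged.sum()} of {len(flagged)} robots for pre-emptive "
      f"maintenance; held-out accuracy {model.score(X_test, y_test):.2f}")
--- End code ---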

There is more to building an AI system than its accuracy in a vacuum. It must also do something that can be integrated into a firm’s work. During the late 1990s Mr Henninger worked on Fair Isaac Corporation’s (FICO) “Falcon”, a credit-card fraud-detection system aimed at banks and credit-card companies that was, he says, one of the first real-world uses for machine learning. As with predictive maintenance, fraud detection was a good fit: the data (in the form of credit-card transaction records) were clean and readily available, and decisions were usefully binary (either a transaction was fraudulent or it wasn’t).

The widening gyre

But although Falcon was much better at spotting dodgy transactions than banks’ existing systems, he says, it did not enjoy success as a product until FICO worked out how to help banks do something with the information the model was generating. “Falcon was limited by the same thing that holds a lot of AI projects back today: going from a working model to a useful system.” In the end, says Mr Henninger, it was the much more mundane task of creating a case-management system—flagging up potential frauds to bank workers, then allowing them to block the transaction, wave it through, or phone clients to double-check—that persuaded banks that the system was worth buying.

Because they are complicated and open-ended, few problems in the real world are likely to be completely solvable by AI, says Mr Gralton. Managers should therefore plan for how their systems will fail. Often that will mean throwing difficult cases to human beings to judge. That can limit the expected cost savings, especially if a model is poorly tuned and makes frequent wrong decisions.
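
Aside (not from the article): "planning for how the system will fail" often comes down to letting the model act only when it is confident and queueing everything else for a person, much like the case-management layer that made Falcon sellable. The sketch below is my own illustration, not FICO's product or API; the thresholds and field names are assumptions.

--- Code: (Python) ---
# Illustration only: route confident model decisions automatically and send
# the ambiguous middle to a human reviewer.
from dataclasses import dataclass

@dataclass
class Decision:
    transaction_id: str
    action: str         # "approve", "block" or "human_review"
    fraud_score: float  # model's estimated probability of fraud

def triage(transaction_id: str, fraud_score: float,
           block_above: float = 0.95, approve_below: float = 0.05) -> Decision:
    """Threshold the model's score: automate the easy cases, escalate the rest."""
    if fraud_score >= block_above:
        return Decision(transaction_id, "block", fraud_score)
    if fraud_score <= approve_below:
        return Decision(transaction_id, "approve", fraud_score)
    # The uncertain cases are where the human in the loop earns their keep.
    return Decision(transaction_id, "human_review", fraud_score)

if __name__ == "__main__":
    for tx, score in [("tx-001", 0.99), ("tx-002", 0.01), ("tx-003", 0.40)]:
        print(triage(tx, score))
--- End code ---

The more traffic that lands in the middle band, the smaller the savings, which is the article's point about poorly tuned models.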

The tech giants’ experience of the covid-19 pandemic, which has been accompanied by a deluge of online conspiracy theories, disinformation and nonsense, demonstrates the benefits of always keeping humans in the loop. Because human moderators see sensitive, private data, they typically work in offices with strict security policies (bringing smartphones to work, for instance, is usually prohibited).

In early March, as the disease spread, tech firms sent their content moderators home, where such security is tough to enforce. That meant an increased reliance on the algorithms. The firms were frank about the impact. More videos would end up being removed, said YouTube, “including some that may not violate [our] policies”. Facebook admitted that less human supervision would likely mean “longer response times and more mistakes”. AI can do a lot. But it works best when humans are there to hold its hand.
