Moshe Vardi: Robots Could Put Humans Out of Work by 2045

Robots began replacing human brawn long ago—now they’re poised to replace human brains. Moshe Vardi, a computer science professor at Rice University, thinks that by 2045 artificially intelligent machines may be capable of “if not any work that humans can do, then, at least, a very significant fraction of the work that humans can do.”

So, he asks, what then will humans do?

In recent writings, Vardi traces the evolution of the idea that artificial intelligence may one day surpass human intelligence, from Turing to Kurzweil, and considers the recent rate of progress. Although early predictions proved too aggressive, in the space of 15 years we’ve gone from Deep Blue beating Kasparov at chess to self-driving cars and Watson beating Jeopardy champs Ken Jennings and Brad Rutter.

Extrapolating into the future, Vardi thinks it’s reasonable to believe intelligent machines may one day replace human workers almost entirely and in the process put millions out of work permanently.

Once rejected out of hand as neo-Luddism, technological unemployment is attracting commentary from an increasingly vocal sect of economists. Highlighted in a recent NYT article and “60 Minutes” segment, Erik Brynjolfsson and Andrew McAfee of MIT also discuss the impact of automation on employment in their book, Race Against the Machine.

The idea is we may be approaching a kind of economic singularity, after which the labor market as we know it will cease to exist.

The theory is tempting for its simplicity but hard to prove. In my opinion, though you can list anecdotes and interpret select statistics showing the negative effects of automation, the qualitative historical record—that the labor market will evolve and adapt—remains the weightier body of evidence.

Relying on modern statistics to prove something fundamental has changed is troublesome because you can’t do rigorous, apples-to-apples comparisons with most of the technological revolutions of the past centuries. The data get dodgier and the statistical methodologies change the farther back you go.

Are machines really replacing humans faster now than, say, in the early 19th or 20th centuries? And are workers really falling behind at a greater rate? We can’t say with certainty.

However, we can say that accelerating technology over the last few centuries has consistently erased some jobs only to replace them with other jobs. In the short and medium term, these transition periods have caused discomfort and vicious battles in the political arena. But the long-term outcome has been largely positive—that is, improving living standards thanks to cheaper, better goods and services.

By dismissing qualitative historical evidence as newly irrelevant, you’re left with a quantitative vacuum into which you can inject any number of competing theories, fascinating but as yet impossible to prove or disprove.

As you may have gathered, I fall into the boring mainstream on the subject. To me, the technological unemployment thesis is too dire and what humans will do too hard to imagine. But just because we can’t imagine something doesn’t mean it won’t exist.

While microchips are just now beginning to replace human brains, machines have been replacing human brawn for years. And yet workers are still paid to perform many physical jobs that were automated long ago, and a number of new ones to boot. Why is that?

Assembly line products are cheaper, but folks still place a premium on and desire “handmade” items. Some people feel good about supporting an artisan; others believe the products are better quality; many value something’s distinctiveness, looking down their noses at assembly line monotony. None of these reasons is perfectly rational, but the economy is seldom rational on the level of the individual.

Further, physical activities that used to be classified as leisure activities now command an income. In the past, sports were at most an amateur activity for those who could afford the time to play them. However, in the 19th and 20th centuries, as countries industrialized, a giant new market in athletics popped into existence.

I imagine a futurist at the beginning of the Industrial Revolution finding the idea preposterous. But today’s best pro athletes collect paychecks that would make an investment banker blush. And it’s not just the top athletes getting paid. There are lower tiers for the less skilled too—utility players, backups, smaller market pro leagues, or feeder leagues all pay modest but livable incomes.

Why shouldn’t the same hold true for activities of the mind?

Perhaps in the future, while some of us work hard to build and program super-intelligent machines, others will work hard to entertain, theorize, philosophize, and make uniquely human creative works, maybe even pair with machines to accomplish these things. These may seem like niche careers for the few and talented. But at the beginning of the Industrial Revolution, jobs of the mind in general were niche careers.

Now, as some jobs of the mind are automated, more people are doing creative work of some kind. In the past, not many writers earned a living just writing. But the Internet’s open infrastructure and voracious appetite for content allows writers of all different levels of skill to earn income. The same holds true for publishing—50 Shades of Grey isn’t exactly literature, but it’s sold millions—and music, film, design, you name it.

How will the economy make the transition? The same way it has for the last several hundred years—with a few (or more than a few) bumps. But maybe these job-stealing exponential technologies are also empowering humans with exponential adaptation.

Online courses from Coursera and edX and Udacity make education more specialized, shorter in duration, and either cheap or free. This model may allow for faster, more affordable acquisition of new skills and smoother economic adaptation. The belief that many people are only capable of unskilled labor is elitist in the extreme. The problem of acquiring new skills is largely one of access, not intelligence.

There are those who think our great-grandchildren simply won’t work. But I can’t imagine such a future. The developed world could have rested on its laurels years ago, having automated the means of production for essentials like food or clothing or cars or televisions (the essentials change as they get cheaper).

But we’re working harder than ever. Why? Work lends meaning to life and leisure. When one kind of work goes away, we tend to create something productive to replace it. And life is richer when we get to trade the fruit of our labors for the vegetables or lines of code or smartphones of other people’s labors.

Vardi says, “The world in 50 years…either will be a utopia or a dystopia.” But history is littered with dystopic and utopian visions, even as the world has consistently muddled along the middle path.

Image Credit: Max Kiesler/Flickr, Thierry Ehrmann/Flickr
http://www.motherjones.com/media/2013/05/robots-artificial-intelligence-jobs-automation

Welcome, Robot Overlords. Please Don't Fire Us?

Smart machines probably won't kill us all—but they'll definitely take our jobs, and sooner than you think.

Illustrations by Roberto Parada

This is a story about the future. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.

The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.

Maybe you think I'm pulling your leg here. Or being archly ironic. After all, this does have a bit of a rose-colored tint to it, doesn't it? Like something from The Jetsons or the cover of Wired. That would hardly be a surprising reaction. Computer scientists have been predicting the imminent rise of machine intelligence since at least 1956, when the Dartmouth Summer Research Project on Artificial Intelligence gave the field its name, and there are only so many times you can cry wolf. Today, a full seven decades after the birth of the computer, all we have are iPhones, Microsoft Word, and in-dash navigation. You could be excused for thinking that computers that truly match the human brain are a ridiculous pipe dream.

But they're not. It's true that we've made far slower progress toward real artificial intelligence than we once thought, but that's for a very simple and very human reason: Early computer scientists grossly underestimated the power of the human brain and the difficulty of emulating one. It turns out that this is a very, very hard problem, sort of like filling up Lake Michigan one drop at a time. In fact, not just sort of like. It's exactly like filling up Lake Michigan one drop at a time. If you want to understand the future of computing, it's essential to understand this.

Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.

By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.

At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all.

So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile?

But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.

IF YOU HAVE ANY KIND OF BACKGROUND in computers, you've already figured out that I didn't pick these numbers out of a hat. I started in 1940 because that's about when the first programmable computer was invented. I chose a doubling time of 18 months because of a cornerstone of computer history called Moore's Law, which famously estimates that computing power doubles approximately every 18 months. And I chose Lake Michigan because its size, in fluid ounces, is roughly the same as the computing power of the human brain measured in calculations per second.

In other words, just as it took us until 2025 to fill up Lake Michigan, the simple exponential curve of Moore's Law suggests it's going to take us until 2025 to build a computer with the processing power of the human brain. And it's going to happen the same way: For the first 70 years, it will seem as if nothing is happening, even though we're doubling our progress every 18 months. Then, in the final 15 years, seemingly out of nowhere, we'll finish the job.

And that's exactly where we are. We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence. That's because even a thousandth of the power of a human brain is—let's be honest—a bit of a joke. Sure, it's a billion times more than the first computer had, but it's still not much more than the computing power of a hamster.

This is why, even with the IT industry barreling forward relentlessly, it has never seemed like we were making any real progress on the AI front. But there's another reason as well: Every time computers break some new barrier, we decide—or maybe just finally get it through our thick skulls—that we set the bar too low. At one point, for example, we thought that playing chess at a high level would be a mark of human-level intelligence. Then, in 1997, IBM's Deep Blue supercomputer beat world champion Garry Kasparov, and suddenly we decided that playing grandmaster-level chess didn't imply high intelligence after all.

So maybe translating human languages would be a fair test? Google Translate does a passable job of that these days. Recognizing human voices and responding appropriately? Siri mostly does that, and better systems are on the near horizon. Understanding the world well enough to win a round of Jeopardy! against human competition? A few years ago IBM's Watson supercomputer beat the two best human Jeopardy! champions of all time. Driving a car? Google has already logged more than 300,000 miles in its driverless cars, and in another decade they may be commercially available.

The truth is that all this represents more progress toward true AI than most of us realize. We've just been limited by the fact that computers still aren't quite muscular enough to finish the job. That's changing rapidly, though. Computing power is measured in calculations per second—a.k.a. floating-point operations per second, or "flops"—and the best estimates of the human brain suggest that our own processing power is about equivalent to 10 petaflops. ("Peta" comes after giga and tera.) That's a lot of flops, but last year an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory was clocked at 16.3 petaflops.

Of course, raw speed isn't everything. Livermore's Blue Gene/Q fills a room, requires eight megawatts of power to run, and costs about $250 million. What's more, it achieves its speed not with a single superfast processor, but with 1.6 million ordinary processor cores running simultaneously. While that kind of massive parallel processing is ideally suited for nuclear-weapons testing, we don't know yet if it will be effective for producing AI.

But plenty of people are trying to figure it out. Earlier this year, the European Commission chose two big research endeavors to receive a half billion euros each, and one of them was the Human Brain Project led by Henry Markram, a neuroscientist at the Swiss Federal Institute of Technology in Lausanne. He uses another IBM supercomputer in a project aimed at modeling the entire human brain. Markram figures he can do this by 2020.

That might be optimistic. At the same time, it also might turn out that we don't need to model a human brain in the first place. After all, when the Wright brothers built the first airplane, they didn't model it after a bird with flapping wings. Just as there's more than one way to fly, there's probably more than one way to think, too.

Google's driverless car, for example, doesn't navigate the road the way humans do. It uses four radars, a 64-beam laser range finder, a camera, GPS, and extremely detailed high-res maps. What's more, Google engineers drive along test routes to record data before they let the self-driving cars loose.

Is this disappointing? In a way, yes: Google has to do all this to make up for the fact that the car can't do what any human can do while also singing along to the radio, chugging a venti, and making a mental note to pick up the laundry. But that's a cramped view. Even when processing power and software get better, there's no reason to think that a driverless car should replicate the way humans drive. They will have access to far more information than we do, and unlike us they'll have the power to make use of it in real time. And they'll never get distracted when the phone rings.

In other words, you should still be impressed. When we think of human cognition, we usually think about things like composing music or writing a novel. But a big part of the human brain is dedicated to more prosaic functions, like taking in a chaotic visual field and recognizing the thousands of separate objects it contains. We do that so automatically we hardly even think of it as intelligence. But it is, and the fact that Google's car can do it at all is a real breakthrough.

The exact pace of future progress remains uncertain. For example, some physicists think that Moore's Law may break down in the near future and constrain the growth of computing power. We also probably have to break lots of barriers in our knowledge of neuroscience before we can write the software that does all the things a human brain can do. We have to figure out how to make petaflop computers smaller and cheaper. And it's possible that the 10-petaflop estimate of human computing power is too low in the first place.

Nonetheless, in Lake Michigan terms, we finally have a few inches of water in the lake bed, and we can see it rising. All those milestones along the way—playing chess, translating web pages, winning at Jeopardy!, driving a car—aren't just stunts. They're precisely the kinds of things you'd expect as we struggle along with platforms that aren't quite powerful enough—yet. True artificial intelligence will very likely be here within a couple of decades. Making it small, cheap, and ubiquitous might take a decade more.

In other words, by about 2040 our robot paradise awaits.
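The doubling arithmetic behind the article's Lake Michigan analogy is easy to check. Below is a minimal Python sketch; the lake volume it uses (roughly 1.6e17 fluid ounces) is an outside ballpark figure, not a number given in the article, so the milestones only approximately reproduce the ones quoted above.

# Sketch of the article's "Lake Michigan" analogy: add 1 fl oz in 1940,
# then double the amount added every 18 months (the Moore's Law period
# the article uses), and see how full the lake is at each milestone year.
# The lake volume below is a rough assumption, not a figure from the article.

OZ_PER_GALLON = 128
LAKE_MICHIGAN_FL_OZ = 1.6e17   # approximate volume of Lake Michigan (assumption)
START_YEAR = 1940
PERIOD_YEARS = 1.5             # 18-month doubling period

def total_ounces(year):
    """Water added by `year`: after n additions the sum 1 + 2 + 4 + ... is 2**n - 1."""
    n = int((year - START_YEAR) // PERIOD_YEARS) + 1
    return 2.0 ** n - 1.0

for y in (1950, 1960, 1970, 2000, 2010, 2020, 2025):
    oz = total_ounces(y)
    print(f"{y}: {oz / OZ_PER_GALLON:,.0f} gallons "
          f"({oz / LAKE_MICHIGAN_FL_OZ:.2%} of the lake)")

Run as written, it prints roughly one gallon by 1950, about 16,000 gallons by 1970, a lake that is still almost 90 percent empty in 2020, and one that is about 90 percent full by 2025, which is the point of the analogy: an exponential only looks like progress in its last few doublings.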
[...]
Intelligence is not a matter of raw computing power or calculation speed; it is something far more complex, which not even neuroscientists understand very well.
What is happening is that we are being subjected to a process of *plunder* that is a CARBON COPY of the neoliberal programs applied to Latin America, under the excuse of the "debt crisis," through the 70s, 80s and 90s.
Quote from: pollo on May 18, 2013, 15:34:55
[...]
Intelligence is not a matter of raw computing power or calculation speed; it is something far more complex, which not even neuroscientists understand very well.

I just want to point out that Antonio Damasio, one of the great popularizers of neuroscience, is not only also skeptical that any amount of computing power brings us meaningfully closer to what true intelligence is; sticking to human intelligence, he also holds that there are strong reasons to doubt that a human brain disconnected from its body (and from its entire sensorium) could sustain any intelligence at all (assuming, of course, that it were kept on life support).
Deep Blue could calculate 200 million positions per second, but it lacked intuition, even though intuition may itself be a product of calculation, according to some lines of scientific research.

The match was very interesting because Kasparov was facing the first machine that did not accept the sacrifice of a major piece as advantageous.

I agree with pollo: artificial intelligence is based on brute force. Nothing more.
The US space agency, NASA, is planning to fund the development of a three-dimensional printer capable of producing food.

Its partner company aims to create printers that take basic powdered nutrients and, by depositing them in layers, build edible products with different flavors, in the same way that 3D printers build plastic products.

Powdered nutrients have a very long shelf life, which makes them suitable for long space missions, but the company says the technology could also solve problems on Earth, where population growth threatens to outpace food production.

The project is still in its early stages.
Great idea; that way there will be even more food to stockpile and waste while people keep going hungry down there.