
Author Topic: El fin del trabajo  (Read 724010 times)


Cadavre Exquis

Re:El fin del trabajo
« Reply #2025 on: June 06, 2022, 22:01:21 »
Quote
Fearing Lawsuits, Factories Rush To Replace Humans With Robots in South Korea
Posted by msmash on Monday June 06, 2022 @11:21AM from the closer-look dept.

An anonymous reader shares a report:
Quote
Kim Yong-rae is the CEO of Speefox, South Korea's biggest manufacturer of capacitors, and he thinks robots are key to the company's survival. On his factory floor, free-standing machines squeal as they spit out gleaming sheets of aluminum that roll into coils. The air is filled with the rhythmic thud of stamping and the buzzing of machinery moving continuously, on the ground and overhead. Capacitors are essential to almost every electronic device, and these will end up in thousands of smartphones, cameras, and home appliances. "Throughout our history, we've always had to find ways to stay ahead," Kim told Rest of World. "Automation is the next step in that process." Speefox's factory is 75% automated, representing South Korea's continued push away from human labor. Part of that drive is labor costs: South Korea's minimum wage has climbed, rising 5% just this year.

But the most recent impetus is legal liability for worker death or injury. In January, a law came into effect called the Serious Disasters Punishment Act, which says, effectively, that if workers die or sustain serious injuries on the job, and courts determine that the company neglected safety standards, the CEO or high-ranking managers could be fined or go to prison. Experts and local media say that the law has shaken the heavy industry and construction sectors. Along with pushing the companies to invest to make workplaces safer, they point out, it's triggered a ramp-up of automation in order to require fewer workers -- or, ideally, none at all.
Regards.

Saturio

Re:El fin del trabajo
« Reply #2026 on: July 14, 2022, 11:27:20 »
The permit they have been granted allows them to operate between 10 p.m. and 6 a.m., and although they have been cleared to offer the service to the public, for now the tests have been limited to Cruise employees.

Be that as it may, as Kyle Vogt notes at the start of the video:

Quote
Just imagining what other people are thinking right now. Like, "we're in San Francisco." So, I don't think people have seen a car going around, pulling up at traffic lights, and when you make eye contact and look over, there's no one there. That's gotta be as crazy for the people on the road tonight as it is gonna be for me riding in one of these things.

Quote
GM's Cruise starts testing fully driverless taxi rides in San Francisco
Cruise co-founder Kyle Vogt took the first ride.
Steve Dent @stevetdent | November 4th, 2021


GM's self-driving Cruise division has launched its fully driverless robo-taxi service in San Francisco, with co-founder and President Kyle Vogt getting the first ride, TechCrunch reported. To start with, the service will be offered only to GM employees, as it's still only licensed for testing.

"Earlier this week, I requested a ride through our Cruise app and took several back-to-back rides in San Francisco — with no one else in the vehicle," Vogt wrote in a YouTube video description. "There are lots of other Cruise employees (not just me) who are testing and refining the full customer experience as we take another major step toward the first commercial AV [ride hailing] product in a dense urban environment."


Vogt said Cruise launched the Bolt vehicles on Monday at 11PM, and they "began to roam around the city, waiting for a ride request." He got his first ride from a Cruise Bolt EV called "Sourdough," saying the experience was "smooth." A separate video showed sections before and after the vehicle picked up passengers while it was in "ghost mode" with no one in it.

Early last month, Cruise received a California DMV permit to operate the service between the hours of 10PM and 6AM at a maximum speed of 30 MPH in mild weather conditions (no worse than light rain and fog). It's allowed to run them without drivers and charge for delivery services, but not ride-hailing. For paid robo-taxi rides, it must apply for a final permit with the California Public Utilities Commission.
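The permit limits described above (an overnight window, a 30 MPH cap, mild weather only) amount to a simple eligibility check. The sketch below is purely illustrative, encoding only the numbers quoted in the article; the function name and weather labels are hypothetical, not anything from Cruise's actual software.

```python
from datetime import time

# Illustrative only: the DMV test-permit limits quoted in the article
# (10PM-6AM window, 30 MPH cap, no worse than light rain and fog).
ALLOWED_WEATHER = {"clear", "light rain", "light fog"}

def within_permit(start: time, speed_mph: float, weather: str) -> bool:
    """Check a proposed trip against the test-permit limits."""
    # The window wraps around midnight, so it is an OR of two ranges.
    in_window = start >= time(22, 0) or start < time(6, 0)
    return in_window and speed_mph <= 30 and weather in ALLOWED_WEATHER

print(within_permit(time(23, 15), 28, "clear"))   # True
print(within_permit(time(14, 0), 28, "clear"))    # False (daytime)
```

Note the overnight window is the one non-obvious part: because it crosses midnight, it cannot be written as a single `a <= t < b` comparison.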

GM recently launched its "Ultra Cruise" system for passenger vehicles, promising that it will "ultimately enable hands-free driving in 95 percent of all driving scenarios." The company has logged 10 million miles testing the system, and its previous Super Cruise has generally garnered positive reviews compared to rival systems like Tesla's Autopilot.

Update 11/4/2021 1:19 PM ET: Cruise told Engadget that it is currently only offering fully driverless rides to employees right now. While it is allowed under its testing permit to offer free rides to the public as well, it's not doing so yet. The headline has been changed to emphasize that the program is still in the testing phase, and a reference to rides being available to certain members of the public has been removed.
Regards.

An update on the Cruise situation. Since they began charging passengers at the end of June, they have had a series of incidents, let's call them minor or anecdotal.

This one is a bit more curious. Apparently the autonomous taxis decided to gather at an intersection and just sit there. In theory they can be controlled remotely when they get stuck, but in this case Cruise employees had to go out and drive the cars away manually.

To the suspicious-minded, this suggests that Cruise's cars are not autonomous at all and are actually driven remotely (the sensors and all the paraphernalia would assist the remote driver). The network lost contact with the cars and the remote drivers couldn't drive them. It's possible.

Several Cruise AVs Stop, Block SF Traffic

https://www.adaptautomotive.com/articles/1931-several-cruise-avs-stop-block-sf-traffic

Cadavre Exquis

Re:El fin del trabajo
« Reply #2027 on: July 14, 2022, 18:19:47 »
This is also very relevant:


For the past five years, Andrej Karpathy has been the head of Tesla's AI effort (i.e., Autopilot); he earned his doctorate at Stanford in 2015 under Dr. Fei-Fei Li, the lead figure behind ImageNet, the project that marked a before-and-after in image recognition back in 2012.

Regards.

Saturio

Re:El fin del trabajo
« Reply #2028 on: July 15, 2022, 10:40:01 »
This is also very relevant:


For the past five years, Andrej Karpathy has been the head of Tesla's AI effort (i.e., Autopilot); he earned his doctorate at Stanford in 2015 under Dr. Fei-Fei Li, the lead figure behind ImageNet, the project that marked a before-and-after in image recognition back in 2012.

Regards.

This has been much discussed. On the one hand, Elon says fully autonomous cars are a year away; on the other, his top person on the subject leaves the company. One year away from the Holy Grail?

Maybe Andrej has simply grown tired of the corporate world, has made more than enough money, and will now devote himself to less lucrative personal projects.

Who knows.


Cadavre Exquis

Re:El fin del trabajo
« Reply #2030 on: July 17, 2022, 19:23:15 »

Created the First Ever AI Cover for Cosmopolitan Magazine!

Quote

The World’s Smartest Artificial Intelligence Just Made Its First Magazine Cover

By Gloria Liu | Jun 21, 2022


On a Monday afternoon in June of the year 2022 AD, six women on Zoom type increasingly bizarre descriptions into a search field.

  • “A young woman’s hand with nail polish holding a cosmopolitan cocktail.”
  • “A fashionable woman close up directed by Wes Anderson.”
  • “A woman wearing an earring that’s a portal to another universe.”

The group, composed of editors from Cosmopolitan, members of artificial-intelligence research lab OpenAI, and a digital artist—Karen X. Cheng, the first “real-world” person granted access to the computer system they’re all using—are working together, with this system, to try to create the world’s first magazine cover designed by artificial intelligence.

DALL-E 2’s vision of a young woman holding a cocktail.

Sure, there have been other stabs. AI has been around since the 1950s, and many publications have experimented with AI-created images as the technology has lurched and leaped forward over the past 70 years. Just last week, The Economist used an AI bot to generate an image for its report on the state of AI technology and featured that image as an inset on its cover.

This Cosmo cover is the first attempt to go the whole nine yards.

But the portal-to-another-universe-earring thing isn’t working. “It looks like Mary Poppins,” says Mallory Roynon, creative director of Cosmopolitan, who appears unruffled by the fact that she’s directing an algorithm to assist with one of the more important functions of her job. (Nor should she be ruffled—more on that later.)

Back to something more basic then. Cheng types a fresh request into the text box: “1960s fashionable woman close up, encyclopedia-style illustration.” The AI thinks for 20 seconds. And then: Six high-quality illustrations of women, each unique, appear on the screen.

Six images that didn’t exist until right now

This technology is a creation of OpenAI called DALL-E 2. It’s an artificial intelligence that takes verbal requests from users and then, through its knowledge of hundreds of millions of images across all of human history, creates its own images—pixel by pixel—that are entirely new. Type “bear playing a violin on a stage” and DALL-E will make it for you, in almost any style you want. You can depict your ursine virtuoso “in watercolor,” “in the style of van Gogh,” or “in synthwave,” a style the Cosmo team favors for perhaps obvious reasons.

See: obvious reasons.

The results are shockingly good, which is why, since its limited release in April, DALL-E 2 has inspired both awe and trepidation from the people who have seen what it can do. The Verge declared that DALL-E “Could Power a Creative Revolution.” The Studio, a YouTube channel by tech reviewer Marques Brownlee, wondered, “Can AI Replace Our Graphic Designer?”

By the end of the Zoom meeting, a cover is close. It’s taken less than an hour. This is wild to witness. And, yes, a little scary. And it raises serious questions far beyond the scope of magazine design: about art, about ethics, about our future.

Watching it work though? It makes your jaw drop.

This is an exclusive first look at DALL-E 2’s yet-to-be-released “outpainting” feature, which allows users to extend DALL-E’s images…and allows DALL-E to imagine the world beyond their borders.

DALL-E’s creators don’t like to anthropomorphize it, and for good reason—contemplating AI as an autonomous entity freaks people out. Just see the recent news about Google engineer Blake Lemoine, who was put on probation for claiming that his conversations with the company’s AI chatbot, LaMDA, proved it had a soul and that it should have to grant engineers permission before being experimented on. Most independent experts, as well as Google itself, were quick to dismiss the idea, pointing out that if AI seems human, it’s only because of the massive amounts of data that humans have fed it.

In fact, this kind of AI is fundamentally designed to imitate us. DALL-E is powered by a neural network, a type of algorithm that mimics the workings of the human brain. It “learns” what objects are and how they relate to each other by analyzing images and their human-written captions. DALL-E product manager Joanne Jang says it’s like showing a kid flash cards: If DALL-E sees a lot of pictures of koalas captioned “koala,” it learns what a koala looks like. And if you type “koala riding motorcycle,” DALL-E draws on what it knows about koalas, motorcycles, and the concept of riding to put together a logical interpretation. This understanding of relationships can be keen and contextual: Type “Darth Vader on a Cosmopolitan magazine cover” and DALL-E doesn’t just cut and paste a photo of Darth; it dresses him in a gown and gives him hot-pink lipstick.
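The flash-card analogy above can be sketched in a few lines: represent each caption and each image as a vector in a shared space, then match them by cosine similarity. The vectors below are hand-picked toy values (a real model learns them from hundreds of millions of captioned images), so this is a minimal illustration of the matching idea, not how DALL-E is actually implemented.

```python
import numpy as np

def embed(vecs):
    """L2-normalize rows so dot products are cosine similarities."""
    v = np.asarray(vecs, dtype=float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

captions = ["koala", "motorcycle", "koala riding motorcycle"]
# Toy 2-D "concept space": the combined caption shares features
# of both individual concepts, as training would encourage.
caption_vecs = embed([[1, 0], [0, 1], [1, 1]])
image_vecs = caption_vecs.copy()   # pretend images embed identically

scores = caption_vecs @ image_vecs.T      # captions x images
for cap, idx in zip(captions, scores.argmax(axis=1)):
    print(f"{cap!r} -> image {idx}")
```

The "koala riding motorcycle" row scores moderately against both the koala and motorcycle images but highest against its own, which is the contextual-relationship behavior the paragraph describes.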

The words are jumbled, by the way, because the current version of DALL-E was trained to prioritize artistry over language comprehension. For the real Cosmo cover, creative director Mallory Roynon placed the logo and coverlines herself.

All this represents a major breakthrough in AI, says Drew Hemment, a researcher and lead for the AI & Arts program at the Alan Turing Institute in London. “It is phenomenal, what they have achieved,” he says. There are many following suit: Last month, Google released a similar AI called Imagen, and a comparable generator called Midjourney, which The Economist used for its aforementioned cover image, was released in beta around the same time as DALL-E 2. There’s even a DALL-E “light,” now called Craiyon, made by the open-source community for public use.

That said, the technology is far from perfect. DALL-E is still in what OpenAI calls a “preview” phase, being released to just a thousand users a week as engineers continue to make tweaks. If you ask for something the model hasn’t seen before, for example, it’ll provide its best guess, which can be wacky. Despite the generally high quality of the images it renders, areas requiring finer details often turn out blurry or abstract. Perhaps most problematically, the majority of the people it renders, due to the biased data sets it’s seen, are white. And perhaps most surprisingly, it has a hard time figuring out how many fingers humans are supposed to have—to the machine, the number of fingers seems as arbitrary as the number of leaves on a tree.

But DALL-E is imperfect also by design. It’s intentionally bad at rendering photorealistic faces, instead generating wonky eyes or twisted lips on purpose in an effort to protect against the tech being used to make deepfakes or pornographic images, which disproportionately harm women.

These land mines are part of the reason OpenAI is releasing DALL-E slowly, so they can observe user behavior and refine its system of safeguards against misuse. For now, those safeguards include removing sexually explicit images from those hundreds of millions of images used to train the model, prohibiting and flagging the use of hate speech, and instituting a human review process. DALL-E also has a content policy that asks users to adhere to ethical guidelines, like not sharing any photorealistic faces DALL-E may accidentally generate and not removing the multicolored signature in the bottom right corner that indicates an image was made by AI. And there’s an ongoing effort to make the data set less biased and more diverse; in the month Cosmo spent poking around, results already started to yield more representative subjects.

Despite DALL-E’s limitations, intentional and otherwise, its small but growing number of users are forging ahead, posting images on social media at a fever pitch lately—playing around with DALL-E and its knockoffs and sharing thousands of their results, like this and this and this. OpenAI does eventually plan to monetize all this interest by charging users for access to its interface and intends to carefully position it as an artist’s tool, not her replacement—a “creative copilot,” as OpenAI’s Jang puts it. Codex, another of OpenAI’s innovations, writes software based on normal-language directives as opposed to coding lingo and has streamlined and democratized parts of the software development process as a result. In the same way, Jang says she sees DALL-E 2 streamlining essential parts of the creative process like mood-boarding and conceptualizing.

Experts I spoke to generally agreed that while fears of AI replacing visual artists are not totally unfounded, the technology will also create new opportunities and possibly entire new art forms. Independent UK-based AI art curator Luba Elliott says she also hopes it can bring more women to the field of AI-generated art, where they’re less represented.

Outtakes from the Cosmo cover, to the tune of “a strong female president astronaut warrior walking on the planet Mars, digital art synthwave.”

Cheng, the digital artist working with Cosmo, used DALL-E to make a music video for Nina Simone’s “Feeling Good” and is now using it to design a dress that bursts into geometric shapes when it’s viewed through an augmented reality filter. A video director by trade, Cheng says that in the past, she’s been limited as a visual artist because she can’t draw. “Now I have the power of all these different kinds of artists,” she says. DALL-E has become part of her day-to-day workflow and has drastically sped up her creative process.

“But I don’t want to sugarcoat it either,” Cheng wrote in an Instagram caption accompanying her music video. “With AI, we are about to enter a period of massive change in all fields, not just art. Many people will lose their jobs. At the same time, there will be an explosion of creativity and possibility and new jobs being created—many that we can’t even imagine right now.”

AI touches almost every part of our lives, from the electronic systems in our cars to the TikTok filters that give us Pamela Anderson eyebrows to our increasingly polarized social feeds and the proliferation of fake news. While AI itself is not new, “it is now a very powerful technology,” says Eduardo Alonso, director of the Artificial Intelligence Research Centre at City, University of London, and “we are starting to consider the ethical and legal impacts of what we are doing.” But technology tends to be a step ahead of the law, he says, so until the law catches up, it’s on the industry itself to set a code of conduct.

OpenAI’s stated mission is to work toward creating an artificial general intelligence (an AGI) that accomplishes two things: first, the ability to perform any task, not just the ones it’s explicitly asked. That’s why tech like DALL-E is such a big step—it’s an attempt at giving AGI the sense of sight. “For an AGI to fully understand the world, it needs vision,” says Jang. “Up to now, we’ve taught it to be good at reasoning, but now it can look at things and we can incorporate visual reasoning.” This could pave the path for other senses, too, so that one day, Jang says, an AGI could process all the things a human can process. The second, and even loftier, goal? To create an AGI that “benefits all of humanity,” and the experts I spoke with seem to believe the company is genuinely committed to deploying AI responsibly. But others say that any AGI could have dangerous or even catastrophic consequences, like becoming a surveillance tool for authoritarian governments or becoming the operating system that enables autonomous weapons systems.

Ultimately, because a true AGI doesn’t yet exist, we still don’t and can’t know—but every day, whether we’re ready or not, we’re closer to finding out.

Back in the virtual conference room, the Cosmo and OpenAI group is tooling around with the cocktail cover idea, trying to put various miniature objects into the glass, like a sailboat or a tiny woman on a pool float. But the vision seems almost too surreal for DALL-E, which appears confused—“a woman taking a bubble bath in a martini glass” just generates a creepy face floating beneath the surface of the liquid.

Then DALL-E suggests a new idea everyone loves: putting a goldfish in the glass. It almost feels like the AI and the humans are riffing off each other.

DALL-E adds a goldfish.

An hour in, the team has stalled. The martini glass images look too clip-art-y to make for a satisfying Cosmo cover, and the deadline is nigh. (My deadline is nigher still, and I find myself wishing an AI would write my story.) When the group signs off, the fate of the cover feels uncertain.

The next morning, though, an email attachment in my inbox: an image of a decidedly feminine, decidedly fearless astronaut in an extraterrestrial landscape, striding toward the reader. It’s DALL-E’s interpretation of Cheng’s prompt from overnight, “wide-angle shot from below of a female astronaut with an athletic feminine body walking with swagger toward camera on Mars in an infinite universe, synthwave digital art,” and it’s stunning.

The image encapsulates the reasons OpenAI wanted to work with Cosmo and Cosmo wanted to work with OpenAI—reasons Natalie Summers, an OpenAI communications rep who also runs its Artist Access program, put best in email after seeing the cover: “I believe there will be women who see this and a door will open for them to consider going into the AI and machine learning fields—or even just to explore how AI tools can enhance their work and their lives. Women will be better equipped to lead in this next chapter of what it means to coexist with, and determine the course of, increasingly powerful technology. That badass woman astronaut is how I feel right now: swaggering on into a future I am excited to be a part of.”

Members of the team futz with the image in DALL-E over the next 24 hours—Cheng uses an impressive experimental feature, not yet available to users, that draws on the context of the image to “extend” it to the correct cover proportions—and by the next day, Cosmo has a cover.

Observing this process, I think, This sure is a lot of human effort for an AI-generated magazine cover. My initial takeaway is that DALL-E truly is an artist’s tool—one that can’t create without the artist. Which might ultimately be the point.

At a family wedding in the midst of all this, I met a renowned backdrop painter, Sarah Oliphant. Over a 45-year career, Oliphant—an artist in the classical sense, one who paints with a brush and draws with a pen—has painted backdrops for some of the world’s most famous photographers, including Annie Leibovitz and Patrick Demarchelier. I told her about DALL-E and asked what she thought. When AI can write poetry and make art, what did that say about…well, art?

“All art is borrowed,” she said. “Every single thing that’s ever done in art—we’re all just mimicking and forging and copying the human experience.” Everything we’ve ever seen that’s been meaningful to us, she said, becomes the inspiration we draw from when we’re creating art.

A data set, if you will.

She showed me a painting she made as a wedding gift, a fantastically bizarre and lavish creation that depicted the groom as a plump baby sitting in a bird’s nest in a mystical forest, surrounded by sumptuous cakes and mischievous fairies, each of which has the face of the bride. To paint it, Oliphant referenced a photo of the man as a baby, a book of Victorian fairy paintings, and a photo of a bird’s nest—as well as, probably, every baby, picture of a fairy, and bird’s nest she’s seen in her life.

But what about the fact that DALL-E can generate art almost instantaneously? Does that make a difference?

Not to Oliphant. “It’s what art evokes in the viewer that makes it valuable, not how long it took,” she said. This painting took her nine months, but “if the computer can generate a piece of art that I look at and I’m overwhelmed by its beauty or what it evokes, or I see it as intrinsically fascinating, then that’s just as valuable.”

So she didn’t feel threatened?

Oliphant laughed. “I’d say to that computer, good luck,” she said. “Okay, computer, you try to paint a backdrop as beautiful as I can. Go for it.”

A few days after our conversation, I type into DALL-E’s text box: “baby wearing flower wreath in a mystical forest at night, surrounded by fairies and cakes.” I wait. I realize: I’m nervous.

When the images populate 20 seconds later, I actually say “whoa” out loud. They’re eerily similar in mood and composition to Oliphant’s, and all the elements are there…the baby’s rosy cheeks, the gossamer fairy wings. But they’re unsettling to see after the original, and I realize why: DALL-E is representing a different data set, different experiences, a different worldview.

To call it a tool understates its capabilities. It’s not merely a paintbrush that an artist can wield to express her whims directly—if it were, she’d be able to paint a woman taking a bubble bath in a martini glass. Instead, it brings something of its own.

I save one of the better versions of the baby with his head tilted skyward, hands outreached, and enlarge it on my screen. I lean in and search the image, trying to distinguish which part was imagined by the human and which part by the machine.


This article has been updated to reflect DALL-E Mini’s name change to Craiyon.
Regards.

el malo

Re:El fin del trabajo
« Reply #2031 on: July 18, 2022, 12:04:56 »

Created the First Ever AI Cover for Cosmopolitan Magazine!

Citar

The World’s Smartest Artificial Intelligence Just Made Its First Magazine Cover

By Gloria Liu | Jun 21, 2022


On a Monday afternoon in June of the year 2022 AD, six women on Zoom type increasingly bizarre descriptions into a search field.

  • “A young woman’s hand with nail polish holding a cosmopolitan cocktail.”
  • “A fashionable woman close up directed by Wes Anderson.”
  • “A woman wearing an earring that’s a portal to another universe.”

The group, composed of editors from Cosmopolitan, members of artificial-intelligence research lab OpenAI, and a digital artist—Karen X. Cheng, the first “real-world” person granted access to the computer system they’re all using—are working together, with this system, to try to create the world’s first magazine cover designed by artificial intelligence.

DALL-E 2’s vision of a young woman holding a cocktail.

Sure, there have been other stabs. AI has been around since the 1950s, and many publications have experimented with AI-created images as the technology has lurched and leaped forward over the past 70 years. Just last week, The Economist used an AI bot to generate an image for its report on the state of AI technology and featured that image as an inset on its cover.

This Cosmo cover is the first attempt to go the whole nine yards.

But the portal-to-another-universe-earring thing isn’t working. “It looks like Mary Poppins,” says Mallory Roynon, creative director of Cosmopolitan, who appears unruffled by the fact that she’s directing an algorithm to assist with one of the more important functions of her job. (Nor should she be ruffled—more on that later.)

Back to something more basic then. Cheng types a fresh request into the text box: “1960s fashionable woman close up, encyclopedia-style illustration.” The AI thinks for 20 seconds. And then: Six high-quality illustrations of women, each unique, appear on the screen.

Six images that didn’t exist until right now

This technology is a creation of OpenAI called DALL-E 2. It’s an artificial intelligence that takes verbal requests from users and then, through its knowledge of hundreds of millions of images across all of human history, creates its own images—pixel by pixel—that are entirely new. Type “bear playing a violin on a stage” and DALL-E will make it for you, in almost any style you want. You can depict your ursine virtuoso “in watercolor,” “in the style of van Gogh,” or “in synthwave,” a style the Cosmo team favors for perhaps obvious reasons.

See: obvious reasons.

The results are shockingly good, which is why, since its limited release in April, DALL-E 2 has inspired both awe and trepidation from the people who have seen what it can do. The Verge declared that DALL-E “Could Power a Creative Revolution.” The Studio, a YouTube channel by tech reviewer Marques Brownlee, wondered, “Can AI Replace Our Graphic Designer?”

By the end of the Zoom meeting, a cover is close. It’s taken less than an hour. This is wild to witness. And, yes, a little scary. And it raises serious questions far beyond the scope of magazine design: about art, about ethics, about our future.

Watching it work though? It makes your jaw drop.

This is an exclusive first look at DALL-E 2’s yet-to-be-released “outpainting” feature, which allows users to extend DALL-E’s images…and allows DALL-E to imagine the world beyond their borders.

DALL-E’s creators don’t like to anthropomorphize it, and for good reason—contemplating AI as an autonomous entity freaks people out. Just see the recent news about Google engineer Blake Lemoine, who was put on probation for claiming that his conversations with the company’s AI chatbot, LaMDA, proved it had a soul and should have to grant engineers permission before being experimented on. Most independent experts, as well as Google itself, were quick to dismiss the idea, pointing out that if AI seems human, it’s only because of the massive amounts of data that humans have fed it.

In fact, this kind of AI is fundamentally designed to imitate us. DALL-E is powered by a neural network, a type of algorithm that mimics the workings of the human brain. It “learns” what objects are and how they relate to each other by analyzing images and their human-written captions. DALL-E product manager Joanne Jang says it’s like showing a kid flash cards: If DALL-E sees a lot of pictures of koalas captioned “koala,” it learns what a koala looks like. And if you type “koala riding motorcycle,” DALL-E draws on what it knows about koalas, motorcycles, and the concept of riding to put together a logical interpretation. This understanding of relationships can be keen and contextual: Type “Darth Vader on a Cosmopolitan magazine cover” and DALL-E doesn’t just cut and paste a photo of Darth; it dresses him in a gown and gives him hot-pink lipstick.
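The flash-card analogy above can be sketched as a toy counting model. This is purely illustrative—the class, data, and structure here are invented, and the real DALL-E learns continuous representations in a neural network, not word counts:

```python
from collections import defaultdict

# Toy sketch of the "flash card" idea: a model sees many (concept, caption)
# pairs and learns which caption words go with which visual concepts. A novel
# prompt like "koala riding motorcycle" is then interpreted via the concepts
# the model already knows.

class FlashCardLearner:
    def __init__(self):
        # concept -> {caption word -> co-occurrence count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, concepts, caption):
        """Record that an image containing `concepts` had this caption."""
        for concept in concepts:
            for word in caption.lower().split():
                self.counts[concept][word] += 1

    def known_concepts(self, prompt):
        """Which learned concepts does a new prompt mention?"""
        words = set(prompt.lower().split())
        return sorted(c for c in self.counts if c in words)

learner = FlashCardLearner()
learner.observe(["koala"], "a koala in a tree")
learner.observe(["koala"], "koala eating eucalyptus")
learner.observe(["motorcycle"], "a red motorcycle on the road")

# A combination never seen before is decomposed into familiar concepts:
print(learner.known_concepts("koala riding motorcycle"))  # ['koala', 'motorcycle']
```

The point of the sketch is only the mechanism: recognition of the parts is learned from captioned examples, and novel combinations are assembled from those parts.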

The words are jumbled, by the way, because the current version of DALL-E was trained to prioritize artistry over language comprehension. For the real Cosmo cover, creative director Mallory Roynon placed the logo and coverlines herself.

All this represents a major breakthrough in AI, says Drew Hemment, a researcher and lead for the AI & Arts program at the Alan Turing Institute in London. “It is phenomenal, what they have achieved,” he says. There are many following suit: Last month, Google released a similar AI called Imagen, and a comparable generator called Midjourney, which The Economist used for its aforementioned cover image, was released in beta around the same time as DALL-E 2. There’s even a DALL-E “light,” now called Craiyon, made by the open-source community for public use.

That said, the technology is far from perfect. DALL-E is still in what OpenAI calls a “preview” phase, being released to just a thousand users a week as engineers continue to make tweaks. If you ask for something the model hasn’t seen before, for example, it’ll provide its best guess, which can be wacky. Despite the generally high quality of the images it renders, areas requiring finer details often turn out blurry or abstract. Perhaps most problematically, the majority of the people it renders, due to the biased data sets it’s seen, are white. And perhaps most surprisingly, it has a hard time figuring out how many fingers humans are supposed to have—to the machine, the number of fingers seems as arbitrary as the number of leaves on a tree.

But DALL-E is imperfect also by design. It’s intentionally bad at rendering photorealistic faces, instead generating wonky eyes or twisted lips on purpose in an effort to protect against the tech being used to make deepfakes or pornographic images, which disproportionately harm women.

These land mines are part of the reason OpenAI is releasing DALL-E slowly, so they can observe user behavior and refine its system of safeguards against misuse. For now, those safeguards include removing sexually explicit images from those hundreds of millions of images used to train the model, prohibiting and flagging the use of hate speech, and instituting a human review process. DALL-E also has a content policy that asks users to adhere to ethical guidelines, like not sharing any photorealistic faces DALL-E may accidentally generate and not removing the multicolored signature in the bottom right corner that indicates an image was made by AI. And there’s an ongoing effort to make the data set less biased and more diverse; in the month Cosmo spent poking around, results already started to yield more representative subjects.
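As a purely hypothetical sketch (none of this is OpenAI's actual code; the function, term lists, and labels are invented for illustration), the layered safeguards described above amount to a small decision pipeline: block disallowed prompts outright, route borderline ones to human review, and let the rest through:

```python
# Illustrative moderation pipeline, NOT OpenAI's implementation.
BLOCKED_TERMS = {"hate-term-placeholder"}      # stand-in for a real blocklist
REVIEW_TERMS = {"photorealistic face"}         # phrases escalated to a human

def moderate_prompt(prompt: str) -> str:
    """Return 'blocked', 'human_review', or 'allowed' for a user prompt."""
    text = prompt.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "blocked"
    if any(term in text for term in REVIEW_TERMS):
        return "human_review"
    return "allowed"

print(moderate_prompt("bear playing a violin on a stage"))    # allowed
print(moderate_prompt("photorealistic face of a celebrity"))  # human_review
```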

Despite DALL-E’s limitations, intentional and otherwise, its small but growing number of users are forging ahead, posting images on social media at a fever pitch lately—playing around with DALL-E and its knockoffs and sharing thousands of their results. OpenAI does eventually plan to monetize all this interest by charging users for access to its interface and intends to carefully position it as an artist’s tool, not her replacement—a “creative copilot,” as OpenAI’s Jang puts it. Codex, another of OpenAI’s innovations, writes software based on normal-language directives as opposed to coding lingo and has streamlined and democratized parts of the software development process as a result. In the same way, Jang says she sees DALL-E 2 streamlining essential parts of the creative process like mood-boarding and conceptualizing.

Experts I spoke to generally agreed that while fears of AI replacing visual artists are not totally unfounded, the technology will also create new opportunities and possibly entire new art forms. Independent UK-based AI art curator Luba Elliott says she also hopes it can bring more women to the field of AI-generated art, where they’re less represented.

Outtakes from the Cosmo cover, to the tune of “a strong female president astronaut warrior walking on the planet Mars, digital art synthwave.”

Cheng, the digital artist working with Cosmo, used DALL-E to make a music video for Nina Simone’s “Feeling Good” and is now using it to design a dress that bursts into geometric shapes when it’s viewed through an augmented reality filter. A video director by trade, Cheng says that in the past, she’s been limited as a visual artist because she can’t draw. “Now I have the power of all these different kinds of artists,” she says. DALL-E has become part of her day-to-day workflow and has drastically sped up her creative process.

“But I don’t want to sugarcoat it either,” Cheng wrote in an Instagram caption accompanying her music video. “With AI, we are about to enter a period of massive change in all fields, not just art. Many people will lose their jobs. At the same time, there will be an explosion of creativity and possibility and new jobs being created—many that we can’t even imagine right now.”

AI touches almost every part of our lives, from the electronic systems in our cars to the TikTok filters that give us Pamela Anderson eyebrows to our increasingly polarized social feeds and the proliferation of fake news. While AI itself is not new, “it is now a very powerful technology,” says Eduardo Alonso, director of the Artificial Intelligence Research Centre at City, University of London, and “we are starting to consider the ethical and legal impacts of what we are doing.” But technology tends to be a step ahead of the law, he says, so until the law catches up, it’s on the industry itself to set a code of conduct.

OpenAI’s stated mission is to work toward creating an artificial general intelligence (an AGI) that accomplishes two things: first, the ability to perform any task, not just the ones it’s explicitly asked to do. That’s why tech like DALL-E is such a big step—it’s an attempt at giving AGI the sense of sight. “For an AGI to fully understand the world, it needs vision,” says Jang. “Up to now, we’ve taught it to be good at reasoning, but now it can look at things and we can incorporate visual reasoning.” This could pave the path for other senses, too, so that one day, Jang says, an AGI could process all the things a human can process. The second, and even loftier, goal? To create an AGI that “benefits all of humanity,” and the experts I spoke with seem to believe the company is genuinely committed to deploying AI responsibly. But others say that any AGI could have dangerous or even catastrophic consequences, like becoming a surveillance tool for authoritarian governments or becoming the operating system that enables autonomous weapons systems.

Ultimately, because a true AGI doesn’t yet exist, we still don’t and can’t know—but every day, whether we’re ready or not, we’re closer to finding out.

Back in the virtual conference room, the Cosmo and OpenAI group is tooling around with the cocktail cover idea, trying to put various miniature objects into the glass, like a sailboat or a tiny woman on a pool float. But the vision seems almost too surreal for DALL-E, which appears confused—“a woman taking a bubble bath in a martini glass” just generates a creepy face floating beneath the surface of the liquid.

Then DALL-E suggests a new idea everyone loves: putting a goldfish in the glass. It almost feels like the AI and the humans are riffing off each other.

DALL-E adds a goldfish.

An hour in, the team has stalled. The martini glass images look too clip-art-y to make for a satisfying Cosmo cover, and the deadline is nigh. (My deadline is nigher still, and I find myself wishing an AI would write my story.) When the group signs off, the fate of the cover feels uncertain.

The next morning, though, an email attachment in my inbox: an image of a decidedly feminine, decidedly fearless astronaut in an extraterrestrial landscape, striding toward the reader. It’s DALL-E’s interpretation of Cheng’s prompt from overnight, “wide-angle shot from below of a female astronaut with an athletic feminine body walking with swagger toward camera on Mars in an infinite universe, synthwave digital art,” and it’s stunning.

The image encapsulates the reasons OpenAI wanted to work with Cosmo and Cosmo wanted to work with OpenAI—reasons Natalie Summers, an OpenAI communications rep who also runs its Artist Access program, put best in email after seeing the cover: “I believe there will be women who see this and a door will open for them to consider going into the AI and machine learning fields—or even just to explore how AI tools can enhance their work and their lives. Women will be better equipped to lead in this next chapter of what it means to coexist with, and determine the course of, increasingly powerful technology. That badass woman astronaut is how I feel right now: swaggering on into a future I am excited to be a part of.”

Members of the team futz with the image in DALL-E over the next 24 hours—Cheng uses an impressive experimental feature, not yet available to users, that draws on the context of the image to “extend” it to the correct cover proportions—and by the next day, Cosmo has a cover.

Observing this process, I think, This sure is a lot of human effort for an AI-generated magazine cover. My initial takeaway is that DALL-E truly is an artist’s tool—one that can’t create without the artist. Which might ultimately be the point.

At a family wedding in the midst of all this, I met a renowned backdrop painter, Sarah Oliphant. Over a 45-year career, Oliphant—an artist in the classical sense, one who paints with a brush and draws with a pen—has painted backdrops for some of the world’s most famous photographers, including Annie Leibovitz and Patrick Demarchelier. I told her about DALL-E and asked what she thought: When AI can write poetry and make art, what does that say about…well, art?

“All art is borrowed,” she said. “Every single thing that’s ever done in art—we’re all just mimicking and forging and copying the human experience.” Everything we’ve ever seen that’s been meaningful to us, she said, becomes the inspiration we draw from when we’re creating art.

A data set, if you will.

She showed me a painting she made as a wedding gift, a fantastically bizarre and lavish creation that depicted the groom as a plump baby sitting in a bird’s nest in a mystical forest, surrounded by sumptuous cakes and mischievous fairies, each of which has the face of the bride. To paint it, Oliphant referenced a photo of the man as a baby, a book of Victorian fairy paintings, and a photo of a bird’s nest—as well as, probably, every baby, picture of a fairy, and bird’s nest she’s seen in her life.

But what about the fact that DALL-E can generate art almost instantaneously? Does that make a difference?

Not to Oliphant. “It’s what art evokes in the viewer that makes it valuable, not how long it took,” she said. This painting took her nine months, but “if the computer can generate a piece of art that I look at and I’m overwhelmed by its beauty or what it evokes, or I see it as intrinsically fascinating, then that’s just as valuable.”

So she didn’t feel threatened?

Oliphant laughed. “I’d say to that computer, good luck,” she said. “Okay, computer, you try to paint a backdrop as beautiful as I can. Go for it.”

A few days after our conversation, I type into DALL-E’s text box: “baby wearing flower wreath in a mystical forest at night, surrounded by fairies and cakes.” I wait. I realize: I’m nervous.

When the images populate 20 seconds later, I actually say “whoa” out loud. They’re eerily similar in mood and composition to Oliphant’s, and all the elements are there…the baby’s rosy cheeks, the gossamer fairy wings. But they’re unsettling to see after the original, and I realize why: DALL-E is representing a different data set, different experiences, a different worldview.

To call it a tool understates its capabilities. It’s not merely a paintbrush that an artist can wield to express her whims directly—if it were, she’d be able to paint a woman taking a bubble bath in a martini glass. Instead, it brings something of its own.

I save one of the better versions of the baby with his head tilted skyward, hands outreached, and enlarge it on my screen. I lean in and search the image, trying to distinguish which part was imagined by the human and which part by the machine.


This article has been updated to reflect DALL-E Mini’s name change to Craiyon.
Regards.

Classic AI algorithms are based on pruning a decision tree. At the start, the machine has every possible option and eliminates the wrong ones through learning, i.e. trial and error.

Once the AI is optimized, the system goes for what is efficient (what is already known), at least in the first branches of the tree. This matters because, after a certain number of iterations, for certain inputs, parts of the tree become invisible.
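A minimal sketch of that dynamic, with a toy three-branch tree and invented rewards (no real system is this small): after a trial-and-error scoring phase, only the best-scoring first branch survives, and the pruned parts of the tree are never visited again:

```python
import random

# Toy pruning demo: explore every first-level branch, score them by
# trial and error, then keep only the best one. The rest of the tree
# becomes effectively invisible to the optimized system.
random.seed(0)

tree = {"root": ["A", "B", "C"]}          # three first-level branches
scores = {"A": [], "B": [], "C": []}

def reward(branch):
    # Hypothetical environment: branch B is best on average.
    return {"A": 0.2, "B": 0.9, "C": 0.4}[branch] + random.uniform(-0.1, 0.1)

# Trial-and-error phase: sample every branch many times.
for _ in range(50):
    for b in tree["root"]:
        scores[b].append(reward(b))

avg = {b: sum(v) / len(v) for b, v in scores.items()}
kept = max(avg, key=avg.get)
pruned = [b for b in tree["root"] if b != kept]

print("kept:", kept)        # kept: B
print("pruned:", pruned)    # pruned: ['A', 'C'] -- never explored again
```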

If it is used massively, the algorithm's cognitive biases will end up rubbing off on human beings. Given that this program will be used massively by content creators (publishers, marketing people, etc.), we will all end up seeing whatever the program decides is most representative.

An example: say we have three unrelated groups working with the program simultaneously:
- Group 1: the International Olympic Committee's marketing agency, working on the next Olympics
- Group 2: the marketing department of Speedo (the swimwear brand)
- Group 3: a digital newspaper

When they ask the program for a representative image of a female swimmer and the program gives us:



Then the next day, out of wokeness, laziness, or lack of the guts to contradict the almighty algorithm, we will have the IOC, Speedo, and the newspapers covering women's swimming while showing us that the image above is what best represents a swimmer.

And that is without even counting deliberate manipulation to make the algorithm non-neutral... if you see what I mean.

We are going to see some very cool things.

Benzino Napaloni

Re:El fin del trabajo
« Reply #2032 on: July 18, 2022, 23:46:21 »
Classic AI algorithms are based on pruning a decision tree. At the start, the machine has every possible option and eliminates the wrong ones through learning, i.e. trial and error.

Once the AI is optimized, the system goes for what is efficient (what is already known), at least in the first branches of the tree. This matters because, after a certain number of iterations, for certain inputs, parts of the tree become invisible.

If it is used massively, the algorithm's cognitive biases will end up rubbing off on human beings. Given that this program will be used massively by content creators (publishers, marketing people, etc.), we will all end up seeing whatever the program decides is most representative.

An example: say we have three unrelated groups working with the program simultaneously:
- Group 1: the International Olympic Committee's marketing agency, working on the next Olympics
- Group 2: the marketing department of Speedo (the swimwear brand)
- Group 3: a digital newspaper

When they ask the program for a representative image of a female swimmer and the program gives us:



Then the next day, out of wokeness, laziness, or lack of the guts to contradict the almighty algorithm, we will have the IOC, Speedo, and the newspapers covering women's swimming while showing us that the image above is what best represents a swimmer.

And that is without even counting deliberate manipulation to make the algorithm non-neutral... if you see what I mean.

We are going to see some very cool things.

Well, we didn't need AI to see people hiding behind whatever the computer says :roto2:. I once argued with a Correos clerk because the shipment didn't show up as received in his system, even though it was a huge box I could see right there from the counter.

Back to AI: I don't know whether you studied computer science too. If you did, you'll remember that in the AI course, almost the first thing they drilled into us when we started on decision trees was the concept of overfitting.

Imagine we build a decision tree to catch tax evaders, or to automate whatever else comes to mind. Decision trees are trained on data for which the outcome being evaluated is already known. Caught evading in the past? So many percentage points more risk. Or flag abnormal variations in declared amounts, or whatever. All the tree's training algorithm does is go over the training data looking for patterns.

Of course, the validity of that "training data" is a major weak point of decision trees. If there isn't enough of it, a wrong pattern comes out—never mind if the data is also bad. That's why Tesla's Autopilot never gets off the ground: the amount of information that would have to be processed is so vast that the results it produces are, at best, incomplete.

But the worst flaw of this approach is that it cannot recalibrate what it has learned. It simply can't. If reality changes, the decision tree won't notice that it is now evaluating things wrongly. The whole training process has to be repeated.
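A toy illustration of that retraining problem, using a single threshold split (the simplest possible "tree") and synthetic numbers invented for the example: the rule is fit once, reality shifts, and the stale model keeps applying the old cutoff until it is explicitly retrained:

```python
# Stale-model demo: a one-split "decision tree" fit on old data fails
# silently after the underlying distribution changes. All data synthetic.

def fit_threshold(samples):
    """Learn a cutoff separating fraudsters (label 1) from non-fraudsters (0)."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (min(pos) + max(neg)) / 2

def predict(threshold, x):
    return 1 if x > threshold else 0

# Training data: a declared-income anomaly score; fraudsters score above 5.
train = [(2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (8, 1)]
threshold = fit_threshold(train)          # -> 5.0

# Reality shifts: fraudsters adapt and now score around 3.
shifted = [(3, 1), (3.5, 1), (2.5, 1)]
missed = sum(1 for x, y in shifted if predict(threshold, x) != y)
print(f"missed {missed} of {len(shifted)} new fraud cases")  # missed 3 of 3
```

The model has no notion that its error rate has changed; only retraining on fresh labeled data would fix it, which is exactly the limitation described above.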


And usually, by the time the human operator discovers the error, the damage is already done:

Artificial intelligence rejects qualified candidates in selection processes, according to Harvard Business School.

Quote
If a person has been out of work for more than 6 months, the algorithm tends to discard them without considering factors such as maternity leave, illness, or even the pandemic context we are immersed in. Many of those causes would be understandable to human employers, but apparently AI is not so considerate.

To illustrate, take an example mentioned by Joseph Fuller, one of the study's authors. Several nurses were not selected by hospitals because they had no programming knowledge, which is fairly absurd. The AI inferred that this skill was essential because the role involved entering patient data into a computer, but obviously you don't need to know how to code for that, and many valid candidates were shut out.
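The rigid rule described in the quote can be illustrated with a toy screen; all fields, names, and candidates here are hypothetical:

```python
# Toy screening demo: a blunt "more than 6 months without work -> discard"
# rule rejects candidates a human reviewer would accept.

candidates = [
    {"name": "A", "months_gap": 2,  "gap_reason": None},
    {"name": "B", "months_gap": 9,  "gap_reason": "maternity leave"},
    {"name": "C", "months_gap": 14, "gap_reason": "pandemic layoffs"},
]

def rigid_screen(c):
    # The algorithm's rule: gap length only, no context.
    return c["months_gap"] <= 6

def human_screen(c):
    # A human also weighs the reason behind the gap.
    return c["months_gap"] <= 6 or c["gap_reason"] is not None

rejected_by_ai = [c["name"] for c in candidates if not rigid_screen(c)]
accepted_by_human = [c["name"] for c in candidates if human_screen(c)]
print(rejected_by_ai)      # ['B', 'C'] -- both fine by a human reviewer
print(accepted_by_human)   # ['A', 'B', 'C']
```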


Within all the bad news, one point of hope. People are somewhat more aware of gross errors like this one. Anyone who has used a sat-nav in the car knows not to obey everything it says. That said, it will be a long while yet before people get it into their heads that an algorithm is only an assistant, and that human supervision at the final step is still necessary.

Saturio

Re:El fin del trabajo
« Reply #2033 on: July 19, 2022, 10:59:27 »
Hynkel.

The Charo speaking.

You cannot imagine the constant bombardment of advertising we get about automated screening systems, people analytics, and other such garbage.
This advertising takes the traditional forms: emails sent by bots that harvested your address by crawling, people (real or not, I can't tell) trying to reach you on LinkedIn, seminar invitations, and the generic ideological bombardment through social networks, mainly LinkedIn.

The sales pitch is always the same: fear. Fear of being left out or turning into a dinosaur. Fear that at some gathering someone will rave about the wonders of the PA system deployed at their company while you stand there speechless, wondering what the hell that even is.

Do you know the best way to recruit people for a business?
Build relationships with training institutions, run serious, attractive internship programs, and bring those people into the organization from the ground up. I won't enumerate the advantages of that system because it would fill most of a book.
Obviously that doesn't solve occasional needs for senior people. It also has limitations depending on what kind of company you are and on your capacity to build ties with those institutions.

And yet they try to sell you the idea that somewhere out there are brilliant, supremely excellent people who are hard to reach and identify, so you need a cannon with global range and, of course, to cut through all that noise, an AI (which God knows who trained, and with what criteria).

Sometimes I think we are trying to fix, with a technology, the noise that same technology created. As someone rightly said here the other day, nowadays you can "apply" to a job posting with one click while executing a number two on the toilet, without reading the posting, uploading a CV last updated two years ago. (And now, with material like that, go run an AI to find candidates. Right.)

You see something similar in marketing. Apparently the only way to sell is through a "funnel" whose mouth is the size of planet Earth, with social-media tricks to push people toward its narrow end. If everyone does that, we all end up being targets of everyone else's top-of-funnel actions, and the saturation is brutal.



 

pollo

Re:El fin del trabajo
« Reply #2034 on: July 19, 2022, 16:09:58 »
https://www.expansion.com/empresas/distribucion/2022/07/16/62d1c74d468aeb4e4d8b462b.html

Regards.
I just don't see it. It has a mountain of practical problems, probably the biggest of which is how enormously expensive it will be compared with simply carrying everything in a van. It also allows no economies of scale: it's one drone per package. Air transport is VERY inefficient compared with wheels, drones can carry only a very limited weight with an equally limited range, and in the middle of rising energy prices it makes no sense at all.
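A back-of-envelope sketch of the one-drone-per-package objection; every number here is a rough assumption chosen for illustration, not measured data:

```python
# How many drones would it take to replace one loaded van route?
# All parameters are illustrative assumptions.
packages_per_van_route = 150   # assumed: one driver, one full daily route
packages_per_drone_flight = 1  # drones carry a single small parcel
drone_round_trips_per_day = 20 # assumed: limited battery and range

drones_needed = packages_per_van_route / (packages_per_drone_flight
                                          * drone_round_trips_per_day)
print(f"drones needed to match one van's daily route: {drones_needed:.1f}")
```

Under these assumptions a fleet of drones (plus charging, airspace, and maintenance overhead) replaces a single van, and only for parcels light enough to fly at all, which is the scaling problem described above.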

That's without mentioning how misleading the article itself is: in one place it says Amazon is "launching a large-scale service," only to say right afterwards that it will begin with a pilot in a town of 4,000 people, which is not the same thing.
No kidding—the thing won't be able to scale the way they're selling it, nor carry most parcels, which will mostly be too heavy for it. This isn't like building server farms.

Of course, the financial press swallows it whole and once again sells "the future" (whatever that is). In the '70s the future was everything having lots of little lights and the number 2000 in its name; in 2022 the future is having things flown to you by drones (remember: drones = future), even when it makes no sense.

The reality is that this will be used in a limited way, only for the few cases where it makes sense. The claim that it will run "at large scale" is another piece of nonsense on a par with air taxis (which, by the way, already exist—they're called helicopters, and they don't fly over cities for obvious reasons).

That said, as bait for hype marketing it's excellent. A fine Amazon advertorial.

Things will stay more or less as they are until someone builds an electric battery much lighter than today's and with far more capacity. For now, there's no sign the weight is coming down.
« Last edited: July 19, 2022, 16:23:18 by pollo »

pollo

Re:El fin del trabajo
« Reply #2035 on: July 19, 2022, 16:41:50 »
The permit they have been granted lets them operate between 10 p.m. and 6 a.m., and although they have been cleared to offer the service to the public, for now the tests have been limited to Cruise employees.

Be that as it may, as Kyle Vogt says at the start of the video:

Quote
Just imagining what other people are thinking right now. Like, “we’re in San Francisco.” I don’t think people have seen a car going around, pulling up at traffic lights, and when you make eye contact and look over, there’s no one there. That’s gotta be as crazy for the people on the road tonight as it is gonna be for me, riding in one of these things.

Quote
GM's Cruise starts testing fully driverless taxi rides in San Francisco
Cruise co-founder Kyle Vogt took the first ride.
Steve Dent @stevetdent | November 4th, 2021


GM's self-driving Cruise division has launched its fully driverless robo-taxi service in San Francisco, with co-founder and President Kyle Vogt getting the first ride, TechCrunch reported. To start with, the service will be offered only to GM employees, as it's still only licensed for testing.

"Earlier this week, I requested a ride through our Cruise app and took several back-to-back rides in San Francisco — with no one else in the vehicle," Vogt wrote in a YouTube video description. "There are lots of other Cruise employees (not just me) who are testing and refining the full customer experience as we take another major step toward the first commercial AV [ride hailing] product in a dense urban environment."



Vogt said Cruise launched the Bolt vehicles on Monday at 11PM, and it "began to roam around the city, waiting for a ride request." He got his first ride from a Cruise Bolt EV called "Sourdough," saying the experience was "smooth." A separate video showed sections before and after the vehicle picked up passengers while it was in "ghost mode" with no one in it.

Early last month, Cruise received a California DMV permit to operate the service between the hours of 10PM and 6AM at a maximum speed of 30 MPH in mild weather conditions (no worse than light rain and fog). It's allowed to run them without drivers and charge for delivery services, but not ride-hailing. For paid robo-taxi rides, it must apply for a final permit with the California Public Utilities Commission.

GM recently launched its "Ultra Cruise" system for passenger vehicles, promising that it will "ultimately enable hands-free driving in 95 percent of all driving scenarios." The company has logged 10 million miles testing the system, and its previous Super Cruise has generally garnered positive reviews compared to rival systems like Tesla's Autopilot.

Update 11/4/2021 1:19 PM ET: Cruise told Engadget that it is currently only offering fully driverless rides to employees right now. While it is allowed under its testing permit to offer free rides to the public as well, it's not doing so yet. The headline has been changed to emphasize that the program is still in the testing phase, and a reference to rides being available to certain members of the public has been removed.
Regards.

Seguimiento de la situación de Cruise. Desde que a finales de junio se pusiesen a funcionar cobrando a los pasajeros, han tenido una serie de incidentes, digamos menores o anecdóticos.

Este es algo más curioso. Según parece los taxis autónomos decidieron juntarse en una esquina y quedarse parados. Teóricamente se pueden controlar remotamente cuando se empanan, pero en este caso los empleados de cruise tuvieron que ir y sacarlos conduciendo manualmente los coches.

A los mal pensados esto les hace sospechar que los coches de cruise no son en absoluto autónomos y que son conducidos en remoto (los sensores y toda la parafernalia ayudarían al conductor remoto). La red perdió el contacto con los coches y los conductores remotos no podían conducirlos. Podría ser.

Several Cruise AVs Stop, Block SF Traffic

https://www.adaptautomotive.com/articles/1931-several-cruise-avs-stop-block-sf-traffic
It certainly looks that way (human drivers covering the cases the software doesn't know how to handle), although, well, it's their own problem if they're peddling something they don't actually have.

I think it was on this very forum that someone posted an article about the reality behind many AIs: lots of people behind keyboards putting out fires.
Which, hey, is a legitimate way of working and surely saves a ton of routine work, but it's not what they're selling.
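The "humans behind the keyboard" setup described above usually takes the shape of a confidence-threshold fallback: the system acts on its own when it is sure and hands the case to a person when it is not. A minimal Python sketch of the pattern (all names and numbers are hypothetical, not Cruise's actual architecture):

```python
# Illustrative human-in-the-loop fallback; not any vendor's real API.

def route(prediction: str, confidence: float, threshold: float = 0.9) -> str:
    """Ship the model's answer when confident; otherwise queue for a human."""
    if confidence >= threshold:
        return f"auto:{prediction}"
    return "human_queue"

# A stream of decisions: most are routine, one stumps the model.
cases = [("proceed", 0.98), ("stop", 0.95), ("unknown_obstacle", 0.40)]
handled = [route(p, c) for p, c in cases]
print(handled)  # the hard case lands on a human operator's desk
```

The economics of the whole pitch hinge on how often the second branch fires: if most cases fall below the threshold, the "automation" is mostly people.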

Benzino Napaloni

Re: The end of work
« Reply #2036 on: July 19, 2022, 17:50:18 pm »
Hynkel.

La Charo here (the HR lady speaking).

You can't imagine the constant bombardment of advertising we get about automated screening systems, people analytics, and other such garbage.
This advertising takes the traditional forms: emails sent by bots that scraped your address through crawling, people (real or not, I can't tell) trying to contact you on LinkedIn, seminar invitations, and the generic ideological bombardment across social networks, LinkedIn above all.

The sales pitch is always the same: fear. Fear of being left out or of becoming a dinosaur. Fear that at some gathering somebody will rave about the wonders of the people-analytics system deployed at their company and you'll stand there mute, wondering what the hell that even is.

Do you know the best way to recruit people for a business?
Build relationships with educational institutions, run serious, attractive internship programs, and bring those people into the organization from the ground up. I won't enumerate the advantages of that system because they would almost fill a book.
Obviously that doesn't cover one-off needs for senior people. It also has limits depending on what kind of company you are and your capacity to build ties with those institutions.

Instead, they try to sell you the idea that somewhere out there are brilliant, supremely excellent people who are hard to reach and identify, so you need a cannon with worldwide range and, of course, to cut through all that noise, an AI (trained by God knows whom, with God knows what criteria).

Sometimes I think we're trying to use a technology to fix the noise that very same technology created. As someone put it very well the other day, today you can "apply" to a job posting with one click while taking care of a number two in the bathroom, without reading the posting, uploading a CV you haven't updated in two years. (And now, with raw material like that, go run an AI to find candidates. Sure.)
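The garbage-in problem is easy to see in the naive keyword-overlap scoring that many screening tools boil down to. A toy Python sketch (all data invented): a stale CV scores zero against today's keywords regardless of the person behind it, so the ranking measures CV hygiene, not talent.

```python
def keyword_score(cv_text: str, required: set) -> float:
    """Fraction of required keywords present in the CV text (naive matching)."""
    words = set(cv_text.lower().split())
    return len(words & required) / len(required)

required = {"python", "sql", "docker"}
fresh_cv = "senior engineer python sql docker kubernetes"
stale_cv = "junior developer visual basic"  # not updated in two years

print(keyword_score(fresh_cv, required))  # 1.0
print(keyword_score(stale_cv, required))  # 0.0
```

Feed it a pile of two-year-old, unread, one-click applications and the "AI" is ranking noise with great precision.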

You see something similar in marketing. It seems the only way to sell is through a "funnel" whose mouth is the size of planet Earth, using social-media tricks to drive people down to the bottom of it. If everyone does that, we all end up as targets of everyone's top-of-funnel actions, and the saturation is brutal.

Well, for some that bombardment is arriving too late.

It's a very simple conceptual problem. Companies got used to burning through and churning their staff. For years, the idea of going to the farm team to pick the best and nurture them no longer seemed necessary. There were so many candidates that you could practically run the selection by rolling a die.

Now that workers, at least the experienced ones, are scarce, the selection problem has become completely different. The AI con keeps selling because many companies still haven't grasped the sad reality: they are the ones who now have to go find the good people, or train their own farm team, and the "supermarket" mentality of shopping for workers the way you shop for potatoes is over.


At this point it's not something that worries me much; it will collapse under its own weight because it doesn't deliver the results it promises. As the article about the US showed, some companies are discovering that they have been discarding perfectly valid candidates, and they can afford that kind of mistake less and less.


It certainly looks that way (human drivers covering the cases the software doesn't know how to handle), although, well, it's their own problem if they're peddling something they don't actually have.

I think it was on this very forum that someone posted an article about the reality behind many AIs: lots of people behind keyboards putting out fires.
Which, hey, is a legitimate way of working and surely saves a ton of routine work, but it's not what they're selling.

It's also the market that "demands" those solutions. AI, and computerization in general, do what you describe: they save routine work and handle the majority of cases, the open-and-shut ones. But that has a downside: it takes away from the ordinary worker the work he knows how to do, and leaves him the biggest problems, which few know how to solve.

It will take a lot of hard knocks before people see that digitalization is not a cure-all for every problem; it's going to demand a whole lot of work and a lot of organization. That said, there is more digitalization than it seems: had the lockdown of two years ago hit us in the '80s, the country would have sunk.

Cadavre Exquis

Re: The end of work
« Reply #2037 on: August 14, 2022, 09:40:49 am »
Quote
Baidu's robotaxis can now operate without a safety driver in the car
The company says it's running the first fully driverless service in China.

K. Holt @krisholt | August 7, 2022

Baidu

Baidu has obtained permits to run a fully driverless robotaxi service in China. It says it's the first company in the country to obtain such permissions. Back in April, Baidu received approval to run an autonomous taxi service in Beijing, as long as there was a human operator in the driver or front passenger seat. Now, it will be able to offer a service where the car's only occupants are passengers.

There are some limits to the permits. Driverless Apollo Go vehicles will ferry paying passengers around designated zones in Wuhan and Chongqing during daytime hours only. The service areas cover 13 square kilometers in Wuhan's Economic & Technological Development Zone (WHDZ) and 30 square kilometers in Chongqing’s Yongchuan District. The WHDZ has been overhauled over the last year to support AV testing and operations.

Baidu says its robotaxis have multiple safety measures to back up the core autonomous driving functions. Those include monitoring redundancy, remote driving capability and a safety operation system.

This is a notable step forward for Baidu as it looks to offer robotaxi services at a large scale. The company has also been testing its vehicles in the US for several years and it could ultimately prove a competitor to the likes of Waymo and Cruise.
Regards.

Cadavre Exquis

Re: The end of work
« Reply #2038 on: August 28, 2022, 19:45:15 pm »

Deckard

Re: The end of work
« Reply #2039 on: August 29, 2022, 09:53:33 am »
There hasn't been nearly as much progress in automating business tasks as people claim.

I don't know the landscape in Spain, but in the Anglo world many multinationals that boast about their technology and about being at the vanguard of AI (I'm talking about large companies in IT, finance, etc.) actually keep analysts in India (in-house or subcontracted) doing what their marvelous business-intelligence systems and super-intelligent machine-learning algorithms are supposedly doing.

And the issue isn't just that decisions, conclusions, exception handling, and so on must ultimately pass through a human, nor that they lack the technology (although it's certainly not as advanced as they want to sell us).

The main problem is cost. Which is cheaper: a team abroad with labor costs that are tiny compared to Western ones, a good education, and English skills, or a much smaller team in the US or UK whose employees will have to be paid far more, not just compared to the Indian team but even compared to the previous Western hires?
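The back-of-the-envelope comparison looks something like this (every figure below is an invented placeholder, purely to show the shape of the trade-off, not data from the post):

```python
def annual_cost(headcount: int, salary: float, overhead: float = 1.3) -> float:
    """Fully loaded yearly team cost: salaries times an overhead multiplier."""
    return headcount * salary * overhead

offshore = annual_cost(headcount=20, salary=15_000)   # large offshore analyst team
onshore = annual_cost(headcount=4, salary=120_000)    # small, well-paid US/UK team

print(offshore, onshore)  # 390000.0 624000.0
```

As long as the left number stays well below the right one, the "AI" keeps quietly running on people.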

I think we're heading toward greater automation, but many are selling us that it's "already here" when in reality adapting the technologies to business reality, and even the technology itself, still has quite a few years of progress ahead.

And personally, I don't see "the end of work." What I see is a change in the kind of work that will be in demand.
