https://edition.cnn.com/2023/03/29/tech/ai-letter-elon-musk-tech-leaders

Quote:
Elon Musk and other tech leaders call for pause in ‘out of control’ AI race

Some of the biggest names in tech are calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

Elon Musk was among the dozens of tech leaders, professors and researchers who signed the letter, which was published by the Future of Life Institute, a nonprofit backed by Musk.

The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that underpins the viral AI chatbot tool, ChatGPT. In early tests and a company demo, the technology was shown drafting lawsuits, passing standardized exams and building a working website from a hand-drawn sketch.

The letter said the pause should apply to AI systems “more powerful than GPT-4.” It also said independent experts should use the proposed pause to jointly develop and implement a set of shared protocols for AI tools that are safe “beyond a reasonable doubt.”

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter said. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

If a pause is not put in place soon, the letter said, governments should step in and create a moratorium.

The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, the ability to spread misinformation and the impact on consumer privacy. These tools have also sparked questions around how AI can upend professions, enable students to cheat, and shift our relationship with technology.

The letter hints at the broader discomfort inside and outside the industry with the rapid pace of advancement in AI. Some governing agencies in China, the EU and Singapore have previously introduced early versions of AI governance frameworks.
'Pausing AI Developments Isn't Enough. We Need To Shut It All Down'
Posted by BeauHD on Wednesday March 29, 2023 @10:02PM from the time-to-get-serious dept.

Earlier today, more than 1,100 artificial intelligence experts, industry leaders and researchers signed a petition calling on AI developers to stop training models more powerful than OpenAI's ChatGPT-4 for at least six months. Among those who refrained from signing it was Eliezer Yudkowsky, a decision theorist from the U.S. and lead researcher at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

"This 6-month moratorium would be better than no moratorium," writes Yudkowsky in an opinion piece for Time Magazine. "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it." Yudkowsky cranks up the rhetoric to 100, writing: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Here's an excerpt from his piece:

Quote:
The key issue is not "human-competitive" intelligence (as the open letter puts it); it's what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can't calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. [...] It's not that you can't, in principle, survive creating something much smarter than you; it's that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. [...]

It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today's capabilities. Solving safety of superhuman intelligence -- not perfect safety, safety in the sense of "not killing literally everyone" -- could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we've overcome in our history, because we are all gone.

Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.

You can read the full letter signed by AI leaders here.
I'm bringing over the post Derby published yesterday in the pp.cc. thread, to add the extra twist the story has taken today...

Quote from: Derby on March 29, 2023, 22:26:03 pm
https://edition.cnn.com/2023/03/29/tech/ai-letter-elon-musk-tech-leaders
(the CNN article and the Slashdot piece on Yudkowsky quoted above)

The way I see it, for some reason a lot of people are getting very nervous about this subject, and I can only see two options:

1. They are genuinely scared of the possibility of a fast AI takeoff that could pose an existential risk to humanity.
2. There is a lot of fear about the consequences that AI, even in the embryonic state it is in right now, may have on society (e.g. unemployment rising at a pace never seen before, savage destruction of competitors through technological disruption, etc.).

Back in 2014 Brian Tomasik published this post on his blog Reducing Suffering, from which I take a chart of the predictions made by more or less well-known people about what the takeoff of AGI would look like, as a function of their years of experience building or managing commercial software development.

This other chart, taken from an April 2017 post on the website of a software development company called Altexsoft, shows predictions of when AGI will arrive, both from laypeople and from experts, by the year in which the prediction was made.

What is happening these days reminded me of the two superb posts Tim Urban published in early 2015 on the well-known site Wait But Why about this subject; if you did not read them back then, I don't think there is a better moment to do so than now.

The AI Revolution: The Road to Superintelligence
The AI Revolution: Our Immortality or Extinction

Regards.
Italy blocks the use of ChatGPT

Italy ordered this Friday the blocking, "with immediate effect", of the artificial intelligence technology ChatGPT, from the US tech company OpenAI, accusing it of failing to comply with consumer data protection law.

The Italian Data Protection Authority said in a statement that it has opened an investigation and that, in the meantime, the block will remain in place until ChatGPT "complies with privacy rules".

The popular ChatGPT was developed by the US company OpenAI; in the United States, several organizations have also called for its suspension out of wariness about these artificial intelligence experiments.
Pandora's box has been opened, even if for now it's only the Windows 95 version.
Thanks WolfgangK

https://twitter.com/mrgreen/status/1625521497065369606

Quote:
Ready to have your mind blown?
You can now flawlessly translate dialogue on any video.
This AI tool will seamlessly translate and synchronize your videos into any language.
Think video ads or any video content!
Now, you have a global reach.
4:45 PM · Feb 14, 2023 · 2.5M Views
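As an aside, the kind of tool the tweet describes is usually a pipeline of well-known building blocks: extract the audio track, transcribe it, machine-translate the transcript, synthesize speech in the target language, and put the new track back over the video. Below is a minimal sketch of that pipeline in Python, assuming ffmpeg and the open-source openai-whisper package are available; translate_text and synthesize_speech are hypothetical placeholders for whatever translation and TTS backends one might plug in, since the tweet does not say which components the tool actually uses, and real products also handle timing and lip-sync, which this sketch ignores.

Code:
# Minimal sketch of a video-dubbing pipeline. Assumes ffmpeg is installed and
# the open-source "openai-whisper" package is available. translate_text() and
# synthesize_speech() are hypothetical placeholders, not real APIs.
import subprocess
import whisper

def translate_text(text: str, target_lang: str) -> str:
    """Placeholder: call your preferred machine-translation service here."""
    raise NotImplementedError

def synthesize_speech(text: str, target_lang: str, out_wav: str) -> None:
    """Placeholder: call your preferred text-to-speech service here."""
    raise NotImplementedError

def dub_video(video_in: str, video_out: str, target_lang: str) -> None:
    # 1. Extract the original audio track as a WAV file.
    subprocess.run(["ffmpeg", "-y", "-i", video_in, "-vn", "audio.wav"], check=True)

    # 2. Transcribe the speech (Whisper also detects the source language).
    model = whisper.load_model("base")
    transcript = model.transcribe("audio.wav")["text"]

    # 3. Translate the transcript and synthesize a new voice track.
    translated = translate_text(transcript, target_lang)
    synthesize_speech(translated, target_lang, "dubbed.wav")

    # 4. Replace the original audio with the dubbed track, keeping the video stream.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_in, "-i", "dubbed.wav",
         "-map", "0:v", "-map", "1:a", "-c:v", "copy", video_out],
        check=True,
    )

# Example usage: dub_video("ad.mp4", "ad_es.mp4", "es")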
Have you ever heard of "neutral Spanish"? It's what is normally used in dubbing and the like.
Quote from: sudden and sharp on April 04, 2023, 10:26:56 am
Have you ever heard of "neutral Spanish"? It's what is normally used in dubbing and the like.

I prefer Latin American dubbing to neutral Spanish (though dubbing into Castilian Spanish is better still). I would also dub most of our home-grown actors, from Spanish into Spanish, but with proper diction (I suppose you will have noticed that in a Hollywood film with a Castilian dub our language sounds much better than in an original Spanish film, which takes some courage to watch in any case).

And another thing: so much fuss about artificial intelligence, but artificial sanity, when is that coming? (I'll answer myself: it's neither here nor expected.)