They use Visual Studio Code. Curious.
Google CEO Issues Rallying Cry in Internal Memo: All Hands on Deck To Test ChatGPT Competitor Bard
Posted by msmash on Tuesday February 07, 2023 @11:40AM from the all-hands-on-deck dept.
Google CEO Sundar Pichai told employees Monday the company is going to need all hands on deck to test Bard, its new ChatGPT rival. From a report:
Quote:
He also said Google will soon be enlisting help from partners to test an application programming interface, or API, that would let others access the same underlying technology. The internal memo came shortly after Pichai publicly announced Google's new conversation technology, powered by artificial intelligence, which it will begin rolling out in the coming weeks. Google has faced pressure from investors and employees to compete with ChatGPT, a chatbot from Microsoft-backed OpenAI, which took the public by storm when it launched late last year. "Next week, we'll be enlisting every Googler to help shape Bard and contribute through a special company-wide dogfood," Pichai wrote in the email to employees that was viewed by CNBC. "We're looking forward to getting all of your feedback -- in the spirit of an internal hackathon -- more details coming soon," he wrote. Microsoft is reportedly planning to launch a version of its own search engine, Bing, that will use ChatGPT to answer users' search queries. Microsoft is holding its own event Tuesday with participation from OpenAI CEO Sam Altman. "It's early days, we need to ship and iterate and we have a lot of hard and exciting work ahead to build these technologies into our products and continue bringing the best of Google AI to improve people's lives," Pichai wrote in his note to employees Monday. "We've been approaching this effort with an intensity and focus that reminds me of early Google -- so thanks to everyone who has contributed."
Microsoft Announces New Bing and Edge Browser Powered by Upgraded ChatGPT AI
Posted by msmash on Tuesday February 07, 2023 @01:19PM from the search-war-2 dept.
Microsoft has announced a new version of its search engine Bing, powered by an upgraded version of the same AI technology that underpins chatbot ChatGPT. The company is launching the product alongside an upgraded version of its Edge browser, promising that the two will provide a new experience for browsing the web and finding information online. The Verge:
Quote:
"It's a new day in search," said Microsoft CEO Satya Nadella at an event announcing the product. We're currently following the event live, and adding more information to this story as we go. Microsoft argued that the search paradigm hasn't changed in 20 years and that roughly half of all searches don't answer users' questions. The arrival of conversational AI can change this, says the company, delivering information more fluidly and quickly. The "new Bing," as Microsoft is calling it, offers a "chat" function, where users can ask questions and receive answers from the latest version of the AI language model built by OpenAI.
TechCrunch adds:
Quote:
As expected, the new Bing now features the option to start a chat in its toolbar, which then brings you to a ChatGPT-like conversational experience. One major point to note here is that while OpenAI's ChatGPT bot was trained on data that only covers up to 2021, Bing's version is far more up-to-date and can handle queries related to far more recent events. Another important feature here -- and one that I think we'll see in most of these tools -- is that Bing cites its sources and links to them in a "learn more" section at the end of its answers. Every result will also include a feedback option. It's also worth stressing that the old, link-centric version of Bing isn't going away. You can still use it just like before, but now enhanced with AI. Microsoft stressed that it is using a new version of GPT that is able to provide more relevant answers, annotate these and provide up-to-date results, all while providing a safer user experience. It calls this the Prometheus model.
Further reading: Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web (Microsoft blog).
"It's a new day in search," said Microsoft CEO Satya Nadella at an event announcing the product. We're currently following the event live, and adding more information to this story as we go. Microsoft argued that the search paradigm hasn't changed in 20 years and that roughly half of all searches don't answer users' questions. The arrival of conversational AI can change this, says the company, delivering information more fluidly and quickly. The "new Bing," as Microsoft is calling it, offers a "chat" function, where users can ask questions and receive answers from the latest version AI language model built by OpenAI.
As expected, the new Bing now features the option to start a chat in its toolbar, which then brings you to a ChatGPT-like conversational experience. One major point to note here is that while OpenAI's ChatGPT bot was trained on data that only covers to 2021, Bing's version is far more up-to-date and can handle queries related to far more recent events.Another important feature here -- and one that I think we'll see in most of these tools -- is that Bing cites its sources and links to them in a "learn more" section at the end of its answers. Every result will also include a feedback option. It's also worth stressing that the old, link-centric version of Bing isn't going away. You can still use it just like before, but now enhanced with AI. Microsoft stressed that it is using a new version of GPT that is able to provide more relevant answers, annotate these and provide up-to-date results, all while providing a safer user experience. It calls this the Prometheus model.
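The citation mechanic TechCrunch highlights (answers grounded in retrieved pages, followed by a "learn more" list of sources) is easy to picture as a small pipeline. A toy Python sketch, purely illustrative and not Bing's or the Prometheus model's actual code; the function and data here are invented for the example:

```python
# Toy sketch of the "answers with cited sources" pattern described above:
# ground a generated answer in retrieved documents and append a
# "learn more" section linking to them. Purely illustrative.

def answer_with_citations(question, retrieved):
    """retrieved: list of (snippet, url) pairs from a search backend."""
    context = " ".join(snippet for snippet, _ in retrieved)
    # A real system would pass `question` and `context` to a language
    # model here; we fake the generation step.
    answer = f"Based on {len(retrieved)} sources: {context[:120]}..."
    links = "\n".join(f"  [{i + 1}] {url}" for i, (_, url) in enumerate(retrieved))
    return f"{answer}\nLearn more:\n{links}"

docs = [
    ("Bing's chat mode is grounded in current web results.", "https://example.com/a"),
    ("Answers include citations and a feedback option.", "https://example.com/b"),
]
print(answer_with_citations("What is new in Bing?", docs))
```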
ChatGPT, ready for the Fields Medals:
Quote:
Could an artificial intelligence win the Fields Medal in mathematics?
"We must be critical of the results these engines return: they are not true just because they are well explained"
https://theobjective.com/tecnologia/2023-02-17/inteligencia-artificial-medalla-fields-matematicas/
Anyway, not even the weakest of my students in those dreadful exam-review sessions for a university mathematics exam would have the nerve to give the answers this program does.
We ask it to compute the value of the polynomial at that value and, since the result is nonzero, we point out that it cannot be a root. It understands and apologizes, as we can see below:
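The check being described is elementary root verification: evaluate the polynomial at the candidate value and compare with zero. A minimal Python sketch, with a hypothetical polynomial and candidates standing in for the ones in the original exchange:

```python
# Minimal sketch: verify whether a candidate value is a root of a polynomial.
# The polynomial and candidates below are hypothetical stand-ins for the
# ones in the original exchange with ChatGPT.

def poly_eval(coeffs, x):
    """Evaluate a polynomial via Horner's rule.
    coeffs are ordered from highest degree down to the constant term."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coeffs = [1, -6, 11, -6]

for candidate in (2, 4):
    value = poly_eval(coeffs, candidate)
    status = "is a root" if value == 0 else "is NOT a root"
    print(f"p({candidate}) = {value} -> {candidate} {status}")
```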
Electric Vehicles Can Now Power Your Home for Three Days
Posted by EditorDavid on Saturday February 18, 2023 @03:34PM from the charging-ahead dept.
There may soon come a time when your car "also serves as the hub of your personal power plant," writes the Washington Post's climate columnist. And then they tell the story of a New Mexico man named Nate Graham who connected a power strip and a $150 inverter to his Chevy Bolt EV during a power outage:
Quote:
The Bolt's battery powered his refrigerator, lights and other crucial devices with ease. As the rest of his neighborhood outside Albuquerque languished in darkness, Graham's family life continued virtually unchanged. "It was a complete game changer making power outages a nonissue," says Graham, 35, a manager at a software company. "It lasted a day-and-a-half, but it could have gone much longer." Today, Graham primarily powers his home appliances with rooftop solar panels and, when the power goes out, his Chevy Bolt. He has cut his monthly energy bill from about $220 to $8 per month. "I'm not a rich person, but it was relatively easy," says Graham. "You wind up in a magical position with no [natural] gas, no oil and no gasoline bill."
Graham is a preview of what some automakers are now promising anyone with an EV: an enormous home battery on wheels that can reverse the flow of electricity to power the entire home through the main electric panel. Beyond serving as an emissions-free backup generator, the EV has the potential of revolutionizing the car's role in American society, transforming it from an enabler of a carbon-intensive existence into a key step in the nation's transition to renewable energy.
Some crucial context from the article:
- Since 2000, the number of major outages in America's power grid "has risen from less than two dozen to more than 180 per year," based on federal data, the Wall Street Journal reports. "Residential electricity prices, which have risen 21 percent since 2008, are predicted to keep climbing as utilities spend more than $1 trillion upgrading infrastructure, erecting transmission lines for renewable energy and protecting against extreme weather."
- About 8% of U.S. homeowners have installed solar panels, and "an increasing number are adding home batteries from companies such as LG, Tesla and Panasonic... capable of storing energy and discharging electricity."
- Ford's "Lightning" electrified F-150 "doubles as a generator... Instead of plugging appliances into the truck, the truck plugs into the house, replacing the grid."
- "The idea is companies like Sunrun, along with utilities, will recruit vehicles like the F-150 Lightning to form virtual power plants. These networks of thousands or millions of devices can supply electricity during critical times."
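As a back-of-the-envelope check on the headline's "three days" (the figures below are assumptions for illustration, not numbers from the article):

```python
# Back-of-the-envelope check of the "three days" claim.
# All figures are assumptions for illustration, not from the article.

battery_kwh = 65.0             # approximate usable capacity of a Chevy Bolt pack
full_home_kwh_per_day = 30.0   # typical average U.S. household consumption
essentials_kwh_per_day = 18.0  # trimmed outage load: fridge, lights, electronics

print(f"Full home:  {battery_kwh / full_home_kwh_per_day:.1f} days")
print(f"Essentials: {battery_kwh / essentials_kwh_per_day:.1f} days")
# -> roughly 2 days at normal usage and ~3.6 days on essentials only,
#    consistent with the headline's "three days".
```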
IBM Says It's Been Running a Cloud-Native, AI-Optimized Supercomputer Since May
Posted by EditorDavid on Sunday February 19, 2023 @01:34PM from the head-in-the-clouds dept.
"IBM is the latest tech giant to unveil its own 'AI supercomputer,' this one composed of a bunch of virtual machines running within IBM Cloud," reports the Register:
Quote:
The system known as Vela, which the company claims has been online since May last year, is touted as IBM's first AI-optimized, cloud-native supercomputer, created with the aim of developing and training large-scale AI models. Before anyone rushes off to sign up for access, IBM stated that the platform is currently reserved for use by the IBM Research community. In fact, Vela has become the company's "go-to environment" for researchers creating advanced AI capabilities since May 2022, including work on foundation models, it said. IBM states that it chose this architecture because it gives the company greater flexibility to scale up as required, and also the ability to deploy similar infrastructure into any IBM Cloud datacenter around the globe. But Vela is not running on any old standard IBM Cloud node hardware; each node is a twin-socket system with 2nd Gen Xeon Scalable processors configured with 1.5TB of DRAM and four 3.2TB NVMe flash drives, plus eight 80GB Nvidia A100 GPUs, the latter connected by NVLink and NVSwitch. This makes the Vela infrastructure closer to that of a high-performance computing site than typical cloud infrastructure, despite IBM's insistence that it was taking a different path because "traditional supercomputers weren't designed for AI." It is also notable that IBM chose to use x86 processors rather than its own Power10 chips, especially as these were touted by Big Blue as being ideally suited for memory-intensive workloads such as large-model AI inferencing.
Thanks to Slashdot reader guest reader for sharing the story.
Why we built an AI supercomputer in the cloud
Introducing Vela, IBM's first AI-optimized, cloud-native supercomputer.

AI models are increasingly pervading every aspect of our lives and work. With each passing year, more complex models, new techniques, and new use cases require more compute power to meet the growing demand for AI.

One of the most pertinent recent examples has been the advent of foundation models, AI models trained on a broad set of unlabeled data that can be used for many different tasks, with minimal fine-tuning. But these sorts of models are massive, in some cases exceeding billions of parameters. To train models at this scale, you need supercomputers, systems composed of many powerful compute elements working together to solve big problems with high performance.

Traditionally, building a supercomputer has meant bare metal nodes, high-performance networking hardware (like InfiniBand, Omnipath, and Slingshot), parallel file systems, and other items usually associated with high-performance computing (HPC). But traditional supercomputers weren't designed for AI; they were designed to perform well on modeling or simulation tasks, like those defined by the US national laboratories, or other customers looking to fulfill a certain need.

While these systems do perform well for AI, and many "AI supercomputers" (such as the one built for OpenAI) continue to follow this pattern, the traditional design point has historically driven technology choices that increase cost and limit deployment flexibility. We've recently been asking ourselves: what system would we design if we were exclusively focused on large-scale AI?

This led us to build IBM's first AI-optimized, cloud-native supercomputer, Vela. It has been online since May of 2022, housed within IBM Cloud, and is currently just for use by the IBM Research community. The choices we've made with this design give us the flexibility to scale up at will and readily deploy similar infrastructure into any IBM Cloud data center across the globe. Vela is now our go-to environment for IBM researchers creating our most advanced AI capabilities, including our work on foundation models, and is where we collaborate with partners to train models of many kinds.

Why build an AI supercomputer in the cloud?

IBM has deep roots in the world of supercomputing, having designed generations of top-performing systems ranked in the world's top 500 lists. This includes Summit and Sierra, some of the most powerful supercomputers in the world today. With each system we design, we discover new ways to improve performance, resiliency, and cost for workloads of interest, increase researcher productivity, and better align with the needs of our customers and partners.

Last year, we set out with the goal of compressing the time to build and deploy world-class AI models to the greatest extent possible. This seemingly simple goal kicked off a healthy internal debate: do we build our system on-premises, using the traditional supercomputing model, or do we build this system into the cloud, in essence building a supercomputer that is also a cloud? In the latter model, we might compromise a bit on performance, but we would gain considerably in productivity. In the cloud, we configure all the resources we need through software, use a robust and established API interface, and gain access to a broader ecosystem of services to integrate with. We can leverage data sets residing on IBM's Cloud Object Store instead of building our own storage back end.
We can leverage IBM Cloud's VPC capability to collaborate with partners using advanced security practices. The list of potential advantages for our productivity went on and on. As the debate unfolded, it became clear that we needed to build a cloud-native AI supercomputer. Here's how we did it.

Key design choices and opportunities for innovation

When it comes to AI-centric infrastructure, one intransigent requirement is the need for nodes with many GPUs, or AI accelerators. To configure those nodes, we had two choices: either make each node provisionable as bare metal, or enable configuration of the node as a virtual machine (VM). It's generally accepted that bare metal is the path to maximizing AI performance, but VMs provide more flexibility. Going the VM route would enable our service teams to provision and re-provision the infrastructure with different software stacks required by different AI users. We knew, for example, that when this system came online some of our researchers were using HPC software and schedulers, like Spectrum LSF. We also knew that many researchers had migrated to our cloud-native software stack, based on OpenShift. VMs would make it easy for our support team to flexibly scale AI clusters dynamically and shift resources between workloads of various kinds in a matter of minutes. A comparison of the traditional HPC software stack and the cloud-native AI stack is shown in Figure 1. But the downside of a cloud-native stack with virtualization, historically, is that it reduces node performance.

So, we asked ourselves: how do we deliver bare-metal performance inside of a VM? Following a significant amount of research and discovery, we devised a way to expose all of the capabilities on the node (GPUs, CPUs, networking, and storage) into the VM so that the virtualization overhead is less than 5%, which is the lowest overhead in the industry that we're aware of. This work includes configuring the bare-metal host for virtualization with support for Virtual Machine Extensions (VMX), single-root IO virtualization (SR-IOV), and huge pages. We also needed to faithfully represent all devices and their connectivity inside the VM, such as which network cards are connected to which CPUs and GPUs, how GPUs are connected to the CPU sockets, and how GPUs are connected to each other. These, along with other hardware and software configurations, enabled our system to achieve close to bare-metal performance.

[Figure 1: A comparison of the HPC AI system stack and the cloud-native AI system stack.]

A second important choice was the design of the AI node. Given the desire to use Vela to train large models, we opted for large GPU memory (80GB) and a significant amount of memory and local storage on the node (1.5TB of DRAM and four 3.2TB NVMe drives). We anticipated that large memory and storage configurations would be important for caching AI training data, models, and other related artifacts, and for feeding the GPUs with data to keep them busy.

A third important dimension affecting the system's performance is its network design. Given our desire to operate Vela as part of a cloud, building a separate InfiniBand-like network just for this system would defeat the purpose of the exercise. We needed to stick to the standard Ethernet-based networking that typically gets deployed in a cloud. But traditional supercomputing wisdom states that you need a highly specialized network.
The question therefore became: what do we need to do to prevent our standard, Ethernet-based network from becoming a significant bottleneck?

We got started by simply enabling SR-IOV for the network interface cards on each node, thereby exposing each 100G link directly into the VMs via virtual functions. In doing so, we were also able to use all of IBM Cloud's VPC network capabilities, such as security groups, network access control lists, custom routes, private access to IBM Cloud PaaS services, and access to the Direct Link and Transit Gateway services.

The results we recently published with PyTorch showed that by optimizing the workload communication patterns, controllable at the PyTorch level, we can hide the communication time over the network behind compute time occurring on the GPUs. This approach is aided by our choice of GPUs with 80GB of memory (discussed above), which allows us to use bigger batch sizes (compared to the 40GB model) and leverage the Fully Sharded Data Parallel (FSDP) training strategy more efficiently. In this way, we can efficiently use our GPUs in distributed training runs, with efficiencies of up to 90% and beyond for models with 10+ billion parameters. Next we'll be rolling out an implementation of remote direct memory access (RDMA) over converged Ethernet (RoCE) at scale, plus GPU Direct RDMA (GDR), to deliver the performance benefits of RDMA and GDR while minimizing adverse impact on other traffic. Our lab measurements indicate that this will cut latency in half.

Vela's architecture details

Each of Vela's nodes has eight 80GB A100 GPUs, which are connected to each other by NVLink and NVSwitch. In addition, each node has two 2nd Generation Intel Xeon Scalable processors (Cascade Lake), 1.5TB of DRAM, and four 3.2TB NVMe drives. To support distributed training, the compute nodes are connected via multiple 100G network interfaces arranged in a two-level Clos structure with no oversubscription. To support high availability, we built redundancy into the system: each port of the network interface card (NIC) is connected to a different top-of-rack (TOR) switch, and each TOR switch is connected via two 100G links to four spine switches, providing 1.6TB of cross-rack bandwidth and ensuring that the system can continue to operate despite failures of any given NIC, TOR, or spine switch. Multiple microbenchmarks, including iperf and the NVIDIA Collective Communication Library (NCCL), show that applications can drive close to the line rate for node-to-node TCP communication.

While this work was done with an eye towards delivering performance and flexibility for large-scale AI workloads, the infrastructure was designed to be deployable in any of our worldwide data centers at any scale. It is also natively integrated into IBM Cloud's VPC environment, meaning that AI workloads can use any of the more than 200 IBM Cloud services currently available. And while the work was done in the context of a public cloud, the architecture could also be adopted for on-premises AI system design.

Why Vela matters and what's next

Having the right tools and infrastructure is a critical ingredient for R&D productivity. Many teams choose to follow the "tried and true" path of building traditional supercomputers for AI. While there is clearly nothing wrong with this approach, we've been working on a better solution that provides the dual benefits of high-performance computing and high end-user productivity, enabled by a hybrid cloud development experience.
Vela has been online since May 2022 and is in productive use by dozens of AI researchers at IBM Research, who are training models with tens of billions of parameters. We're looking forward to sharing more about upcoming improvements to both end-user productivity and performance, enabled by emerging systems and software innovations. We are also excited about the opportunities that will be enabled by our AI-optimized processor, the IBM AIU, and will be sharing more about this in future communications. The era of cloud-native AI supercomputing has only just begun. If you are considering building an AI system or want to know more, please contact us.
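The Fully Sharded Data Parallel (FSDP) strategy the post credits for those 90%-plus efficiencies shards parameters, gradients, and optimizer state across GPUs, so each rank holds only a slice of the model. A minimal PyTorch sketch of the wrapping pattern, with a stand-in model; this is illustrative, not IBM's training code:

```python
# Minimal sketch of Fully Sharded Data Parallel (FSDP) training in PyTorch.
# Illustrative only: the model, data, and settings are not IBM's code.
# Launch with: torchrun --nproc_per_node=<num_gpus> fsdp_sketch.py

import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group(backend="nccl")  # NCCL, as benchmarked on Vela
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    # Stand-in model; real runs shard multi-billion-parameter transformers.
    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 4096),
    ).cuda(rank)

    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # so each GPU holds only a 1/world_size slice plus activations.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # toy training loop with random data
        batch = torch.randn(32, 4096, device=rank)
        loss = model(batch).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```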
Google Claims Breakthrough in Quantum Computer Error Correction
Posted by msmash on Wednesday February 22, 2023 @12:20PM from the moving-forward dept.
Google has claimed a breakthrough in correcting for the errors that are inherent in today's quantum computers, marking an early but potentially significant step in overcoming the biggest technical barrier to a revolutionary new form of computing. From a report:
Quote:
The internet company's findings, which have been published in the journal Nature, mark a "milestone on our journey to build a useful quantum computer," said Hartmut Neven, head of Google's quantum efforts. He called error correction "a necessary rite of passage that any quantum computing technology has to go through." Quantum computers struggle to produce useful results because the quantum bits, or qubits, they are based on only hold their quantum states for a tiny fraction of a second. That means information encoded in a quantum system is lost before the machine can complete its calculations. Finding a way to correct for the errors this causes is the hardest technical challenge the industry faces. [...] Google's researchers said they had found a way to spread the information being processed in a quantum computer across a number of qubits in a way that meant the system as a whole could retain enough to complete a calculation, even as individual qubits fell out of their quantum states. The research published in Nature pointed to a reduction of only 4 per cent in the error rate as Google scaled up its technique to run on a larger quantum system. However, the researchers said this was the first time that increasing the size of the computer had not also led to a rise in the error rate.
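The principle described, spreading one logical bit across several unreliable physical carriers so the whole can survive individual failures, is easiest to see in its classical ancestor, the repetition code. A toy Python sketch of that idea; Google's actual scheme is the far more involved surface code, which also handles quantum phase errors:

```python
# Toy illustration of error correction by redundancy: a 3-bit repetition
# code with majority voting. This classical sketch only shows the
# principle of spreading information across several unreliable carriers;
# it is not the surface code used in Google's experiment.

import random

def encode(bit):
    return [bit, bit, bit]          # spread one logical bit over 3 copies

def noisy_channel(bits, p_flip):
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    return int(sum(bits) >= 2)      # majority vote

p = 0.05          # per-bit error probability (assumed)
trials = 100_000
raw_errors = sum(noisy_channel([0], p)[0] != 0 for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))

print(f"uncoded error rate: {raw_errors / trials:.4f}")    # ~= p = 0.05
print(f"coded error rate:   {coded_errors / trials:.4f}")  # ~= 3p^2 - 2p^3 = 0.007
```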
Google Researchers Unveil ChatGPT-Style AI Model To Guide a Robot Without Special Training
Posted by BeauHD on Tuesday March 07, 2023 @10:30PM from the largest-visual-language-model-ever dept.
An anonymous reader quotes a report from Ars Technica:
Quote:
On Monday, a group of AI researchers from Google and the Technical University of Berlin unveiled PaLM-E, a multimodal embodied visual-language model (VLM) with 562 billion parameters that integrates vision and language for robotic control. They claim it is the largest VLM ever developed and that it can perform a variety of tasks without the need for retraining. According to Google, when given a high-level command, such as "bring me the rice chips from the drawer," PaLM-E can generate a plan of action for a mobile robot platform with an arm (developed by Google Robotics) and execute the actions by itself.
PaLM-E does this by analyzing data from the robot's camera without needing a pre-processed scene representation. This eliminates the need for a human to pre-process or annotate the data and allows for more autonomous robotic control. It's also resilient and can react to its environment. For example, the PaLM-E model can guide a robot to get a chip bag from a kitchen -- and with PaLM-E integrated into the control loop, it becomes resistant to interruptions that might occur during the task. In a video example, a researcher grabs the chips from the robot and moves them, but the robot locates the chips and grabs them again. In another example, the same PaLM-E model autonomously controls a robot through tasks with complex sequences that previously required human guidance. Google's research paper explains (PDF) how PaLM-E turns instructions into actions.
PaLM-E is a next-token predictor, and it's called "PaLM-E" because it's based on Google's existing large language model (LLM) called "PaLM" (which is similar to the technology behind ChatGPT). Google has made PaLM "embodied" by adding sensory information and robotic control. Since it's based on a language model, PaLM-E takes continuous observations, like images or sensor data, and encodes them into a sequence of vectors that are the same size as language tokens. This allows the model to "understand" the sensory information in the same way it processes language. In addition to the RT-1 robotics transformer, PaLM-E draws from Google's previous work on ViT-22B, a vision transformer model revealed in February. ViT-22B has been trained on various visual tasks, such as image classification, object detection, semantic segmentation, and image captioning.
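The multimodal trick described in the last paragraph, encoding continuous observations as vectors of the same width as language-token embeddings, amounts to a learned projection from the vision encoder's feature space into the token-embedding space. A hedged PyTorch sketch; the module names and dimensions are assumptions for illustration, not Google's code:

```python
# Sketch of the multimodal mechanism described above: project continuous
# observations (e.g. ViT image features) to the same width as language
# token embeddings, then let the language model attend over the mixed
# sequence. Dimensions and module names are illustrative assumptions.

import torch
import torch.nn as nn

class ObservationProjector(nn.Module):
    def __init__(self, vision_dim=1024, token_dim=4096):
        super().__init__()
        # Maps each vision feature vector into the LLM's embedding space.
        self.proj = nn.Linear(vision_dim, token_dim)

    def forward(self, vision_feats):       # (batch, n_patches, vision_dim)
        return self.proj(vision_feats)     # (batch, n_patches, token_dim)

batch, n_patches, n_text = 1, 16, 8
vision_feats = torch.randn(batch, n_patches, 1024)  # stand-in ViT output
text_embeds = torch.randn(batch, n_text, 4096)      # stand-in token embeddings

obs_tokens = ObservationProjector()(vision_feats)

# Interleave "observation tokens" with text tokens; the LLM treats both
# identically, which is what lets it consume sensor input as language.
sequence = torch.cat([obs_tokens, text_embeds], dim=1)
print(sequence.shape)  # torch.Size([1, 24, 4096])
```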
Scientists Managed To Completely Map a Baby Fruit Fly's Brain
Posted by BeauHD on Friday March 10, 2023 @10:30PM from the best-to-start-small dept.
An anonymous reader quotes a report from Popular Mechanics:
Quote:
Scientists from the University of Cambridge and Johns Hopkins University announced that they'd finally mapped every single neuron and all the connections between them housed inside the brain of a fruit fly larva. The team's research was published this week in the journal Science. "If we want to understand who we are and how we think, part of that is understanding the mechanism of thought," says Johns Hopkins biomedical engineer Joshua T. Vogelstein in a press release. "And the key to that is knowing how neurons connect with each other."
And there are a lot of neurons and connections to sort through. To complete this neurological map, scientists had to identify 3,016 neurons. But that pales in comparison to the number of connections between these neurons, which comes to a grand total of 548,000. They also identified 93 distinct types of neurons that differed in shape, function, and neurological connections. If this all sounds difficult, that's because it is. For 12 years, scientists had to painstakingly slice a brain into thousands of tissue samples, image them with a high-resolution electron microscope, and then piece them back together -- neuron by neuron.
Understanding the inner workings of a fruit fly's brain may seem unrelated to the human mind, but scientists didn't choose this particular species based on its size or perceived simplicity -- rather, fruit flies actually share fundamental biology and a comparable genetic foundation with humans. This makes the map a perfect cornerstone upon which to explore some of the many mysteries of the human mind. "All brains are similar -- they are all networks of interconnected neurons," Marta Zlatic, a co-author on the study, told the BBC. "All brains of all species have to perform many complex behaviors: they all need to process sensory information, learn, select actions, navigate their environments, choose food, etc."
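Those two headline counts already pin down the wiring density. A quick check of the average connectivity they imply (counts from the article, the rest is arithmetic):

```python
# The reported connectome as a directed graph: 3,016 neurons (nodes) and
# 548,000 synaptic connections (edges), figures from the article.
neurons = 3_016
connections = 548_000

print(f"average connections per neuron: {connections / neurons:.0f}")  # ~182
# Roughly the "metro with 3,000 stations, each connected to 200 others"
# analogy Cardona uses in the article translated below.
```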
The first map of a brain, that of a fly larva, brings the dream of understanding the human mind closer
A team led by the Spaniard Albert Cardona achieves the scientific feat of completing the brain architecture of an animal, neuron by neuron
MANUEL ANSEDE | 09 MAR 2023
[Image: Map of the brain of the fruit fly larva, with 3,016 neurons and more than half a million connections. UNIVERSITIES OF CAMBRIDGE AND JOHNS HOPKINS]

Humanity had only ever managed to map, cell by cell, three minuscule nervous systems of a few hundred neurons each: that of the laboratory worm Caenorhabditis elegans, that of the larva of the marine invertebrate Platynereis dumerilii, and that of a tiny animal that lives attached to ocean rocks, Ciona intestinalis. A team headed by the Spanish biologist Albert Cardona and his Croatian colleague Marta Zlatic has now pulled off a scientific feat: the map of the complete brain of the fruit fly larva, a structure with 3,016 neurons and 548,000 connections between them. "We have multiplied by ten what had been achieved so far," celebrates Cardona, of the legendary Laboratory of Molecular Biology in Cambridge (United Kingdom), whose scientists have won a dozen Nobel prizes.

The biologist, born in Tarragona 44 years ago, explains the magnitude of the advance: "Imagine that a city's metro had 3,000 stations and each of them were connected to 200 others." The complexity of the fruit fly larva's brain nevertheless pales before the most sophisticated structure on the face of the Earth, the human brain, an organ of a kilo and a half with 86 billion neurons. "Three thousand neurons may seem very few, but this larva can navigate gradients of light or smell, it can find food on its own, and it has short- and long-term memory. It is a very self-sufficient animal," notes Cardona. The feat is published this Thursday in the journal Science, the showcase of the world's best research.

The person who began making a map of the human brain was the Spaniard Santiago Ramón y Cajal, in 1888. With a rudimentary microscope in his laboratory in Barcelona, he demonstrated that the organ of the mind was not a diffuse mass, as had been thought until then, but was organized into individual cells: neurons. Cajal then began a titanic task, masterfully drawing each brain structure by hand, cell by cell, with its connections, which he poetically called "kisses."

[Image: The Spanish biologist Albert Cardona in his office at the Laboratory of Molecular Biology in Cambridge.]

Cardona's team used more sophisticated methods. A dozen years ago, the scientists extracted the nervous system of a fruit fly larva with tweezers, cut it into some 5,000 ultra-thin slices, and examined them with an electron microscope. The biologist devised software that stitches those images together precisely (the way a mobile phone joins several photos into a single panorama) and lets researchers navigate the resulting three-dimensional volume as if it were Google Maps.

The applications of a brain map are hard to overstate. Cardona cites the work of a colleague in his laboratory, the neurobiologist Pedro Gómez Gálvez, one of the Spanish scientists who in 2018 announced the discovery of new geometric shapes: scutoids, a kind of twisted prism first observed in the salivary glands of fruit flies.
Gómez Gálvez is comparing the complete brains of normal larvae with those of larvae genetically modified to mimic the symptoms of Parkinson's disease. Other poorly understood disorders, such as autism, schizophrenia, and epilepsy, arise from deviations from typical brain development.

The American physicist Emerson Pugh left a phrase for history before dying in 1981: "If the human brain were so simple that we could understand it, we would be so simple that we couldn't." That is the paradox of the brain, a structure so sophisticated that it is incapable of imagining itself. Cardona, however, is optimistic. He believes that obtaining a map of the human brain with its neuron-by-neuron connections (the so-called connectome) is only a matter of time. "The mouse brain will be done in the next 10 or 15 years. The question is how much it will cost. There are several projects proposing it, but we are talking about between 500 and 1,000 million dollars just for the preliminary work," the biologist estimates. "And the human brain will require an absurd amount of resources," he predicts.

[Image: An adult fruit fly, a pupa, and a larva of the same insect. WEIGMANN ET AL.]

Cardona explains that his colleague Gregory Jefferis is already mapping the adult fly's brain in Cambridge. Results are expected starting next year. Another obvious target would be the honeybee, with a million neurons. "We have to get inside the bee's brain, because it has the capacity for language and for remembering specific places across kilometers of landscape. How does it do it? How does it explain to another bee how to get somewhere? All of that can be studied once its neural wiring is known," says Cardona.

The neuroscientist Rafael Yuste, a professor at Columbia University (USA), considers the larva's brain map "spectacular." In 1985 this researcher was in the laboratory of the South African biologist Sydney Brenner, also in Cambridge, when that team made a first attempt to map the 302 neurons of Caenorhabditis elegans. That study carried a provocative title: The Mind of a Worm. Yuste recalls that that pioneering work was very artisanal, "almost heroic," whereas now it is a practically industrial process. In his view, progress toward more complex brains is "inexorable."

Yuste is one of the driving forces behind the future National Neurotechnology Center, Spain Neurotech, in Madrid. "These studies are very hard to carry out; they require enormous teams and a great investment of time and work. That is why it is important to coordinate funding and efforts in neurotechnology at the national and international level. Mapping the mouse connectome through a worldwide collaboration is under consideration," the professor argues. A mouse brain is a million times larger than that of a fly larva.

Albert Cardona says they ran into a "surprise" in the larva's brain. Its architecture closely resembles that of modern artificial neural networks, such as ResNet, DenseNet, and U-Net, used in sophisticated machine-learning software. "In traditional neural networks, each layer of neurons connects only to the next one. The crux of the matter is the connections that skip layers. That is the root of their exceptional capabilities," explains Cardona.

The Spanish biologist finds it "mind-blowing" what these creatures can do with just 3,000 neurons.
Cardona points out that fruit fly larvae, like many other insects, often carry a parasitic wasp inside them, like the monster in the film Alien. "The fly larva detects it and goes to eat food enriched in alcohol, fermented fruit, to medicate itself, because that alcohol kills the parasite it carries inside," the researcher recounts.

Cardona stresses the organizational complexity of those 3,000 neurons. Besides the layer-skipping, there are loop connections, similar to the artificial LSTM neural networks used in billions of computers every day. The biologist is confident that the fly larva's brain will give rise to new artificial intelligence systems, with machine learning more powerful than today's. "There are already computer scientists drawing inspiration from our larva's brain circuits," he applauds. In the long run, the goal is far more ambitious, as another of the map's co-authors, the biomedical engineer Joshua Vogelstein of Johns Hopkins University (USA), has proclaimed: "To understand who we are and how we think."
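The layer-skipping connections Cardona describes are exactly what a residual block implements in artificial networks: the input bypasses a layer and is added back to its output. A minimal PyTorch sketch of the pattern, illustrative only and not tied to the paper's data:

```python
# Minimal sketch of a "skip connection" (residual block), the pattern
# Cardona compares the larval wiring to: the input bypasses a layer and
# is added back to its output, so information can skip layers entirely.
# Illustrative only; dimensions are arbitrary.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        # The "+ x" is the skip: downstream layers see both the
        # transformed signal and the untouched original.
        return self.body(x) + x

x = torch.randn(8, 64)
print(ResidualBlock()(x).shape)  # torch.Size([8, 64])
```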
Statute of the Spanish Space Agency approved, setting the date for it to start operating
BY @WICHO | MARCH 9, 2023
[Image: Intasat, the first Spanish artificial satellite, launched in November 1974.]

The Council of Ministers of March 7 approved the Statute of the Spanish Space Agency. Published in the BOE of March 8, it entered into force on the 9th of that same month. According to that day's Council of Ministers briefing, "This public body, attached to the Ministry of Science and Innovation and the Ministry of Defence, will serve to coordinate activities in the space domain, both from the point of view of technological development and of the use of space in areas such as security, Earth observation, geolocation, and communications."

Its 43 articles (a pity it wasn't 42) establish its objectives and purposes, powers, structure, operating procedures, staff regime, funding, accountability, and so on.

Above all, though, the Royal Decree publishing the statute establishes that the Spanish Space Agency will begin operating with the constitutive session of its Governing Council, which must be held within three months of the decree's entry into force.

So if the deadlines are met, the AEE has to be up and running by early June at the latest. And it would look bad to start out by missing deadlines. It is true, though, that Law 17/2022 of September 5, which provided the legal basis for creating the Agency, gave the Government until September 7, 2023 to approve its statute. So for now we are on schedule. Then again, we had already been promised the agency back in 2015.

In its first year of operation the AEE will have a budget of 700 million euros and will hire 75 people. Its headquarters will be in Seville.