You Can Now Run a GPT-3 Level AI Model On Your Laptop, Phone, and Raspberry Pi
Posted by BeauHD on Tuesday March 14, 2023 @09:00AM from the what-will-they-think-of-next dept.

An anonymous reader quotes a report from Ars Technica:

Quote:
On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly). If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it. [...]

Typically, running GPT-3 requires several datacenter-class A100 GPUs (also, the weights for GPT-3 are not public), but LLaMA made waves because it could run on a single beefy consumer GPU. And now, with optimizations that reduce the model size using a technique called quantization, LLaMA can run on an M1 Mac or a lesser Nvidia consumer GPU. After obtaining the LLaMA weights ourselves, we followed [independent AI researcher Simon Willison's] instructions and got the 7B parameter version running on an M1 MacBook Air, and it runs at a reasonable rate of speed. You call it as a script on the command line with a prompt, and LLaMA does its best to complete it in a reasonable way.

There's still the question of how much the quantization affects the quality of the output. In our tests, LLaMA 7B trimmed down to 4-bit quantization was very impressive for running on a MacBook Air -- but still not on par with what you might expect from ChatGPT. It's entirely possible that better prompting techniques might generate better results. Also, optimizations and fine-tunings come quickly when everyone has their hands on the code and the weights -- even though LLaMA is still saddled with some fairly restrictive terms of use. The release of Alpaca today by Stanford proves that fine-tuning (additional training with a specific goal in mind) can improve performance, and it's still early days after LLaMA's release.

A step-by-step instruction guide for running LLaMA on a Mac can be found here (Warning: it's fairly technical).
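The 4-bit quantization mentioned above works by storing small blocks of weights as low-precision integers plus one floating-point scale per block, trading a little accuracy for a roughly 4–8x reduction in memory. llama.cpp uses its own block formats, so the following is only a minimal NumPy sketch of the general idea (all names here are illustrative, not llama.cpp's actual API):

```python
import numpy as np

def quantize_4bit(weights: np.ndarray, block_size: int = 64):
    """Symmetric 4-bit quantization of a weight vector, block by block.

    Each block keeps one float32 scale plus integer codes in [-7, 7],
    so storage drops from 32 bits to roughly 4.5 bits per weight.
    """
    w = weights.reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    codes = np.clip(np.round(w / scales), -7, 7).astype(np.int8)
    return codes, scales

def dequantize_4bit(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from codes and per-block scales."""
    return (codes.astype(np.float32) * scales).reshape(-1)

# Round-trip a toy weight vector and measure the worst-case error.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=4096).astype(np.float32)
codes, scales = quantize_4bit(w)
w_hat = dequantize_4bit(codes, scales)
err = np.abs(w - w_hat).max()
```

The per-block maximum error is bounded by half the block's scale, which is why 4-bit models remain usable: most weights land close to their quantized value, and only the occasional outlier weight stretches a block's scale.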
Meta AI Unlocks Hundreds of Millions of Proteins To Aid Drug Discovery
Posted by msmash on Thursday March 16, 2023 @05:25PM from the moving-forward dept.

Facebook parent company Meta Platforms has created a tool to predict the structure of hundreds of millions of proteins using artificial intelligence. Researchers say it promises to deepen scientists' understanding of biology, and perhaps speed the discovery of new drugs. From a report:

Quote:
Meta's research arm, Meta AI, used the new AI-based computer program known as ESMFold to create a public database of 617 million predicted proteins. Proteins are the building blocks of life and of many medicines, required for the function of tissues, organs and cells. Drugs based on proteins are used to treat heart disease, certain cancers and HIV, among other illnesses, and many pharmaceutical companies have begun to pursue new drugs with artificial intelligence. Using AI to predict protein structures is expected to not only boost the effectiveness of existing drugs and drug candidates but also help discover molecules that could treat diseases whose cures have remained elusive.

With ESMFold, Meta is squaring off against another protein-prediction computer model known as AlphaFold from DeepMind Technologies, a subsidiary of Google parent Alphabet. AlphaFold said last year that its database has 214 million predicted proteins that could help accelerate drug discovery. Meta says ESMFold is 60 times faster than AlphaFold, but less accurate. The ESMFold database is larger because it made predictions from genetic sequences that hadn't been studied previously. Predicting a protein's structure can help scientists understand its biological function, according to Alexander Rives, co-author of a study published Thursday in the journal Science and a research scientist at Meta AI. Meta had previously released the paper describing ESMFold in November 2022 on a preprint server.

Further reading: What metaverse? Meta says its single largest investment is now in 'advancing AI.'
A Trillionth-of-a-Second Shutter Speed Camera Catches Chaos in Action
Posted by EditorDavid on Sunday March 19, 2023 @01:34PM from the speedy-shutter dept.

Long-time Slashdot reader turp182 shares two stories about the new state-of-the-art in very-high-speed imaging. "The techniques don't image captured photons, but instead 'touch' the target to perform imaging/read structures using either lasers or neutrons."

First, Science Daily reports that physicists from the University of Gothenburg (with colleagues from the U.S. and Germany) have developed an ultrafast laser camera that can create videos at 12.5 billion images per second, "which is at least a thousand times faster than today's best laser equipment."

Quote:
[R]esearchers use a laser camera that photographs the material in [an ultrathin, one-atom-thick] two-dimensional layer.... By observing the sample from the side, it is possible to see what reactions and emissions occur over time and space. Researchers have used single-shot laser sheet compressed ultrafast photography to study the combustion of various hydrocarbons.... This has enabled researchers to illustrate combustion with a time resolution that has never been achieved before. "The more pictures taken, the more precisely we can follow the course of events...." says Yogeshwar Nath Mishra, who was one of the researchers at the University of Gothenburg and who is now presenting the results in a scientific article in the journal Light: Science & Applications.... The new laser camera takes a unique picture with a single laser pulse.

Meanwhile, ScienceAlert reports on a camera with a trillionth-of-a-second shutter speed — that is, 250 million times faster than digital cameras — that's actually able to photograph atomic activity, including "dynamic disorder."

Quote:
Simply put, dynamic disorder is when clusters of atoms move and dance around in a material in specific ways over a certain period — triggered by a vibration or a temperature change, for example. It's not a phenomenon that we fully understand yet, but it's crucial to the properties and reactions of materials. The new super-speedy shutter speed system gives us much more insight into what's happening....

The researchers are referring to their invention as variable shutter atomic pair distribution function, or vsPDF for short.... To achieve its astonishingly quick snap, vsPDF uses neutrons to measure the position of atoms, rather than conventional photography techniques. The way that neutrons hit and pass through a material can be tracked to measure the surrounding atoms, with changes in energy levels the equivalent of shutter speed adjustments.
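The headline numbers in the two stories are easy to sanity-check: 12.5 billion frames per second corresponds to an 80-picosecond interval between frames, and a one-picosecond "shutter" that is 250 million times faster than a digital camera implies a conventional shutter of about 1/4000 of a second. A short back-of-the-envelope check (the digital-camera baseline is inferred from the article's ratio, not stated in it):

```python
# Orders of magnitude quoted in the two stories above.
FRAMES_PER_SECOND = 12.5e9                    # Gothenburg laser camera
frame_interval_s = 1.0 / FRAMES_PER_SECOND    # time between consecutive frames
frame_interval_ps = frame_interval_s * 1e12   # ... in picoseconds (80 ps)

VSPDF_SHUTTER_S = 1e-12                       # one-picosecond effective shutter
# "250 million times faster than digital cameras" implies this baseline:
digital_shutter_s = VSPDF_SHUTTER_S * 250e6   # about 1/4000 of a second
```

The implied ~1/4000 s baseline is in fact a typical fast shutter speed on a consumer digital camera, so the quoted ratio is internally consistent.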
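The "pair distribution function" at the heart of vsPDF is, in its static form, just a histogram of all interatomic distances in a sample; the vsPDF technique adds a tunable time window on top so that fast atomic motion can be separated from the average structure. This toy sketch computes the static version for a synthetic cubic lattice (illustrative only; the real measurement is neutron-based and time-resolved):

```python
import numpy as np

def pair_distribution(positions: np.ndarray, r_max: float, n_bins: int = 100):
    """Unnormalized histogram of all pairwise interatomic distances up to r_max.

    Peaks in this histogram mark characteristic neighbor distances in the
    material, which is what a pair distribution function reveals.
    """
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(positions), k=1)  # each pair once, no self-pairs
    hist, edges = np.histogram(dists[iu], bins=n_bins, range=(0.0, r_max))
    return hist, edges

# Toy sample: a 4x4x4 cubic lattice with 1 angstrom spacing (64 atoms).
grid = np.arange(4, dtype=float)
positions = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
hist, edges = pair_distribution(positions, r_max=3.0)
```

For this lattice the first peak sits at exactly 1 angstrom (nearest neighbors); in a real time-resolved measurement, watching such peaks smear or shift between shutter windows is what exposes dynamic disorder.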