Peter Shor Says:
Comment #3, February 24th, 2007 at 6:36 am

Great article, Scott! That’s the best job of explaining quantum computing to the man on the street that I’ve seen.
Attention. Silly question of the day:

Couldn't one, by the same token, encrypt things quantum-mechanically?

Don't laugh, you b*******!
If you've never heard of the quantum Fourier transform, I'm sure you'll like this. Regards.
Ease of maintenance is built into its wings. The O2 keeps its main body at the surface, submerging two wings carrying two large turbines at a 45-degree angle. This places the turbines at the right depth to harness the movement of the tides. If there is a mechanical problem, the wings swing back up to the horizontal, where mechanics can service them easily. And if the problem cannot be fixed at sea, the whole structure can be towed to port and repaired there.

We will soon find out whether they are on the right track: Orbital finished the first O2 at the beginning of the month and is now preparing to place it at sea and begin operations.
What I'm still missing is that quantum "superposition" thing, which I still don't understand. (Thanks to anyone who can shed some light.)
Quote from: saturno on April 26, 2021, 22:52:57
What I'm still missing is that quantum "superposition" thing, which I still don't understand. (Thanks to anyone who can shed some light.)

Todo lo que un Qubit puede Enseñarte sobre Física Cuántica ["Everything a Qubit Can Teach You about Quantum Physics"]

Regards.
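In case the video doesn't stick, the one-line textbook picture may help (standard notation, not taken from the video): a qubit's state is a weighted combination of both classical values at once, and measuring it picks one outcome with probabilities given by the weights.

% A qubit in superposition: both basis states at once, with complex
% amplitudes alpha and beta whose squared magnitudes sum to 1.
\lvert\psi\rangle = \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle,
\qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1
% Measuring returns 0 with probability |alpha|^2 and 1 with probability
% |beta|^2; with alpha = beta = 1/sqrt(2), each outcome is a 50/50 coin flip.

The point is that until the measurement, the qubit is not secretly in one state or the other; the two amplitudes can interfere, which is what quantum algorithms exploit.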
No, what I was trying to get an intuition for is here. (YouTube suggested it to me; I guess it has me figured out.)

Así cambiará el mundo la computación cuántica: Ignacio Cirac (2018) ["How quantum computing will change the world"]
https://www.youtube.com/watch?v=WJ3r6btgzBM
High-Bandwidth Wireless BCI Demonstrated In Humans For First Time
Posted by BeauHD on Wednesday April 28, 2021 @11:30PM from the huge-step-forward dept.

An anonymous reader quotes a report from Ars Technica:

Quote
Coming on the heels of the Neuralink announcement earlier this month -- complete with video showing a monkey playing Pong with its mind, thanks to a wireless brain implant -- researchers with the BrainGate Consortium have successfully demonstrated a high-bandwidth wireless brain-computer interface (BCI) in two tetraplegic human subjects. The researchers described their work in a recent paper published in the journal IEEE Transactions on Biomedical Engineering. As for the latest Neuralink breakthrough, Ars Science Editor John Timmer wrote last week that most of the individual pieces of Neuralink's feat have been done before -- in some cases, a decade before (BrainGate is among those earlier pioneers). But the company has taken two important steps toward its realization of a commercial BCI: miniaturizing the device and getting it to communicate wirelessly, which is harder than it sounds.

According to [John Simeral of Brown University, a member of the BrainGate consortium and lead author of the new paper], the BrainGate wireless system makes the opposite tradeoff -- higher bandwidth and fidelity -- because it wants all the finer details of the data for its ongoing research. In that regard, it complements the Utrecht and Neuralink systems in the BCI space. The new BrainGate system is based on the so-called Brown Wireless Device (BWD), designed by Arto Nurmikko, and it replaces the cables with a small transmitter that weighs about 1.5 ounces. The transmitter sits atop the user's head and connects wirelessly to an implanted electrode array inside the motor cortex.

There were two participants in the clinical trial -- a 35-year-old man and a 65-year-old man -- both of whom were paralyzed by spinal cord injuries. They were able to continuously use the BCI for a full 24 hours, even as they slept, yielding continuous data over that time period. (The medical-grade battery lasts for 36 hours.) "We can learn more about the neural signals that way because we can record over long periods of time," said Simeral. "And we can also begin to learn a little bit about how people actually will use the system, given the freedom to do so." His team was encouraged by the fact that one of its study participants often asked if they could leave the wireless transmitters on a little longer. He has a head tracker he can use as a fallback, but several nights a week, he would choose to use the wireless BrainGate system because he liked it.

"Right now, we typically decode or interpret the spiking activity from networks of neurons," said Simeral. "There are other encoding mechanisms that have been studied in the brain that have to do with how the oscillations in the brain are related to these spiking signals. There's information in the different oscillation frequencies that might relate to, for example, sleep state, attention state, other phenomena that we care about. Without a continuous recording, you've surrendered the ability to learn about any of those. Learning how this all happens in the human brain in the home as people are behaving and having different thoughts requires having a broadband system recording from the human brain."

"The ability to potentially have individuals with disability using these systems at home on demand, I think is a great step forward," said Simeral.
"More broadly, going forward, having more players in the field, having more funding, is important. I see nothing but great things from all of these interactions. For our own work, we see things on the horizon that were impossible five years ago, when there was essentially nobody in the corporate world interested in this space. So I think it's a very promising time."
Growing Food With Air and Solar Power Is More Efficient Than Planting Crops
Posted by BeauHD on Tuesday June 22, 2021 @11:30PM from the food-from-air dept.

An anonymous reader quotes a report from Phys.Org:

Quote
A team of researchers from the Max Planck Institute of Molecular Plant Physiology, the University of Naples Federico II, the Weizmann Institute of Science and the Porter School of the Environment and Earth Sciences has found that making food from air would be far more efficient than growing crops. In their paper published in Proceedings of the National Academy of Sciences, the group describes their analysis and comparison of the efficiency of growing crops (soybeans) and using a food-from-air technique. [...] To make their comparisons, the researchers used a food-from-air system that uses solar panels to make electricity, which is combined with carbon dioxide from the air to produce food for microbes grown in a bioreactor. The protein the microbes produce is then treated to remove nucleic acids and then dried to produce a powder suitable for consumption by humans and animals.

They compared the efficiency of the system with a 10-square-kilometer soybean field. Their analysis showed that growing food from air was 10 times as efficient as growing soybeans in the ground. Put another way, they suggested that a 10-square-kilometer piece of land in the Amazon used to grow soybeans could be converted to a one-square-kilometer piece of land for growing food from the air, with the other nine square kilometers turned back to wild forest growth. They also note that the protein produced using the food-from-air approach had twice the caloric value of most other crops such as corn, wheat and rice.
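A quick back-of-the-envelope check of the land-use arithmetic (the 10x efficiency factor and the 10 km² field are from the article; the little script itself is just an illustration):

# Land-use arithmetic from the article: food-from-air yields ~10x more
# protein per unit of land than soybeans, so a 10 km^2 field shrinks to 1 km^2.
soy_field_km2 = 10.0        # soybean field considered in the study
efficiency_factor = 10.0    # food-from-air yield relative to soy, per km^2

air_food_km2 = soy_field_km2 / efficiency_factor
reforested_km2 = soy_field_km2 - air_food_km2
print(f"{air_food_km2:.0f} km^2 of solar-plus-bioreactor land replaces the field;")
print(f"{reforested_km2:.0f} km^2 could return to wild forest")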
Mathematicians welcome computer-assisted proof in ‘grand unification’ theory
Proof-assistant software handles an abstract concept at the cutting edge of research, revealing a bigger role for software in mathematics.
Davide Castelvecchi | 18 June 2021

[Image: Efforts to verify a complex mathematical proof using computers have been successful. Credit: Fadel Senna/AFP via Getty]

Peter Scholze wants to rebuild much of modern mathematics, starting from one of its cornerstones. Now, he has received validation for a proof at the heart of his quest from an unlikely source: a computer.

Although most mathematicians doubt that machines will replace the creative aspects of their profession anytime soon, some acknowledge that technology will have an increasingly important role in their research — and this particular feat could be a turning point towards its acceptance.

Scholze, a number theorist, set forth the ambitious plan — which he co-created with his collaborator Dustin Clausen from the University of Copenhagen — in a series of lectures in 2019 at the University of Bonn, Germany, where he is based. The two researchers dubbed it ‘condensed mathematics’, and they say it promises to bring new insights and connections between fields ranging from geometry to number theory.

Other researchers are paying attention: Scholze is considered one of mathematics’ brightest stars and has a track record of introducing revolutionary concepts. Emily Riehl, a mathematician at Johns Hopkins University in Baltimore, Maryland, says that if Scholze and Clausen’s vision is realized, the way mathematics is taught to graduate students in 50 years’ time could be very different than it is today. “There are a lot of areas of mathematics that I think in the future will be affected by his ideas,” she says.

Until now, much of that vision rested on a technical proof so involved that even Scholze and Clausen couldn’t be sure it was correct. But earlier this month, Scholze announced that a project to check the heart of the proof using specialized computer software had been successful.

Computer assistance

Mathematicians have long used computers to do numerical calculations or manipulate complex formulas. In some cases, they have proved major results by making computers do massive amounts of repetitive work — the most famous being a proof in the 1970s that any map can be coloured with just four different colours, and without filling any two adjacent countries with the same colour.

But systems known as proof assistants go deeper. The user enters statements into the system to teach it the definition of a mathematical concept — an object — based on simpler objects that the machine already knows about. A statement can also just refer to known objects, and the proof assistant will answer whether the fact is ‘obviously’ true or false based on its current knowledge. If the answer is not obvious, the user has to enter more details. Proof assistants thus force the user to lay out the logic of their arguments in a rigorous way, and they fill in simpler steps that human mathematicians had consciously or unconsciously skipped.

Once researchers have done the hard work of translating a set of mathematical concepts into a proof assistant, the program generates a library of computer code that can be built on by other researchers and used to define higher-level mathematical objects.
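To make that workflow concrete, here is a toy interaction (a minimal sketch in Lean 3 syntax, the system that features later in this article; the example is ours, not from the Nature piece): a new concept is defined from objects the system already knows, and every claim about it must be justified before Lean accepts it.

-- Teach the system a new concept, built from objects it already knows:
def double (n : ℕ) : ℕ := n + n

-- Lean confirms a concrete fact by direct computation:
example : double 2 = 4 := rfl

-- A general statement must cite a justification the system can check;
-- here, commutativity of addition from Lean's core library:
theorem double_comm (a b : ℕ) : double a + double b = double b + double a :=
nat.add_comm _ _

If the cited justification were wrong or incomplete, Lean would reject the proof outright; there is no way to wave past a gap.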
In this way, proof assistants can help to verify mathematical proofs that would otherwise be time-consuming and difficult, perhaps even practically impossible, for a human to check.

Proof assistants have long had their fans, but this is the first time they have played a major role at the cutting edge of a field, says Kevin Buzzard, a mathematician at Imperial College London who was part of a collaboration that checked Scholze and Clausen’s result. “The big remaining question was: can they handle complex mathematics?” says Buzzard. “We showed that they can.”

And it all happened much faster than anyone had imagined. Scholze laid out his challenge to proof-assistant experts in December 2020, and it was taken up by a group of volunteers led by Johan Commelin, a mathematician at the University of Freiburg in Germany. On 5 June — less than six months later — Scholze posted on Buzzard’s blog that the main part of the experiment had succeeded. “I find it absolutely insane that interactive proof assistants are now at the level that, within a very reasonable time span, they can formally verify difficult original research,” Scholze wrote.

The crucial point of condensed mathematics, according to Scholze and Clausen, is to redefine the concept of topology, one of the cornerstones of modern maths. A lot of the objects that mathematicians study have a topology — a type of structure that determines which of the object’s parts are close together and which aren’t. Topology provides a notion of shape, but one that is more malleable than those of familiar, school-level geometry: in topology, any transformation that does not tear an object apart is admissible. For example, any triangle is topologically equivalent to any other triangle — or even to a circle — but not to a straight line.

Topology plays a crucial part not only in geometry, but also in functional analysis, the study of functions. Functions typically ‘live’ in spaces with an infinite number of dimensions (such as wavefunctions, which are foundational to quantum mechanics). It is also important for number systems called p-adic numbers, which have an exotic, ‘fractal’ topology.

A grand unification

Around 2018, Scholze and Clausen began to realize that the conventional approach to the concept of topology led to incompatibilities between these three mathematical universes — geometry, functional analysis and p-adic numbers — but that alternative foundations could bridge those gaps. Many results in each of those fields seem to have analogues in the others, even though they apparently deal with completely different concepts. But once topology is defined in the ‘correct’ way, the analogies between the theories are revealed to be instances of the same ‘condensed mathematics’, the two researchers proposed. “It is some kind of grand unification” of the three fields, Clausen says.

Scholze and Clausen say they have already found simpler, ‘condensed’ proofs of a number of profound geometry facts, and that they can now prove theorems that were previously unknown. They have not yet made these public.

There was one catch, however: to show that geometry fits into this picture, Scholze and Clausen had to prove one highly technical theorem about the set of ordinary real numbers, which has the topology of a straight line.
“It’s like the foundational theorem that allows the real numbers to enter this new framework,” Commelin explains.

[Figure: In the proof-assistant package Lean, users enter mathematical statements based on simpler statements and concepts that are already in the Lean library. The output, shown here for Scholze and Clausen’s key result, is a complex network, with statements colour-coded and grouped by subfield of maths. Credit: Patrick Massot]

Clausen recalls how Scholze worked relentlessly on the proof until it was completed ‘through force of will’, producing many original ideas in the process. “It was the most amazing mathematical feat I’ve ever witnessed,” Clausen recalls. But the argument was so complex that Scholze himself worried there could be some subtle gap that invalidated the whole enterprise. “It looked convincing, but it was simply too novel,” says Clausen.

For help checking that work, Scholze turned to Buzzard, a fellow number theorist who is an expert in Lean, a proof-assistant software package. Lean was originally created by a computer scientist at Microsoft Research in Redmond, Washington, for the purpose of rigorously checking computer code for bugs.

Buzzard had been running a multi-year programme to encode the entire undergraduate maths curriculum at Imperial into Lean. He had also experimented with entering more-advanced mathematics into the system, including the concept of perfectoid spaces, which helped to earn Scholze a Fields Medal in 2018.

Commelin, who is also a number theorist, took the lead in the effort to verify Scholze and Clausen’s proof. Commelin and Scholze decided to call their Lean project the Liquid Tensor Experiment, in homage to the progressive-rock band Liquid Tension Experiment, of which both mathematicians are fans.

A febrile online collaboration ensued. A dozen or so mathematicians with experience in Lean joined in, and the researchers got help from computer scientists along the way. By early June, the team had fully translated the heart of Scholze’s proof — the part that worried him the most — into Lean. And it all checked out — the software was able to verify this part of the proof.

Better understanding

The Lean version of Scholze’s proof comprises tens of thousands of lines of code, 100 times longer than the original version, Commelin says. “If you just look at the Lean code, you will have a very hard time understanding the proof, especially the way it is now.” But the researchers say that the effort of getting the proof to work in the computer has helped them to understand it better, too.

Riehl is among the mathematicians who have experimented with proof assistants, and even teaches them in some of her undergraduate classes. She says that, although she doesn’t systematically use them in her research, they have begun to change the very way she thinks of the practices of constructing mathematical concepts and stating and proving theorems about them. “Previously, I thought of proving and constructing as two different things, and now I think of them as the same.”

Many researchers say that mathematicians are unlikely to be replaced by machines any time soon. Proof assistants can’t read a maths textbook, they need continuous input from humans, and they can’t decide whether a mathematical statement is interesting or profound — only whether it is correct, Buzzard says.
Still, computers might soon be able to point out consequences of the known facts that mathematicians had failed to notice, he adds.Scholze says he was surprised by how far proof assistants could go, but that he is unsure whether they will continue to have a major role in his research. “For now, I can’t really see how they would help me in my creative work as a mathematician.”
Google reveals HOList, a platform for doing theorem-proving research with deep learning-based methods:
…In the future, perhaps more math theorems will be proved by AI systems than by humans…

Researchers with Google want to develop and test AI systems that can learn to solve mathematical theorems, so they have made tweaks to theorem-proving software to make it easier for AI systems to interface with. In addition, they've created a new theorem-proving benchmark to spur development in this part of AI.

HOList: The software they base their system on is called HOL Light. For this project, they develop "an instrumented, pre-packaged version of HOL Light that can be used as a large scale distributed environment of reinforcement learning for practical theorem proving using our new, well-defined, stable Python API". This software ships with 41 "tactics", which are essentially algorithms used to help prove math theorems.

Benchmarks: The researchers have also released a new benchmark on HOL Light, and they hope this will "enable research and measuring progress of AI driven theorem proving in large theories". The benchmarks are initially designed to measure performance on a few tasks, including: predicting the same methodologies used by humans to create a proof; and trying to prove certain subgoals or aspects of proofs without access to full information.

DeepHOL: They design a neural network-based theorem prover called DeepHOL which tries to concurrently encode the goals and premises while generating a proof. "In essence, we propose a hybrid architecture that both predicts the correct tactic to be applied, as well as rank the premise parameters required for meaningful application of tactics". They test a variety of neural network-based approaches within this overall architecture and train them via reinforcement learning, with the best system able to prove 58% of the proofs in the training set - no slam-dunk, but very encouraging considering these are learning-based methods.

Why this matters: Theorem proving feels like a very promising way to test the capabilities of increasingly advanced machines, especially if we're able to develop systems that start to generate new proofs. This would be a clear validation of the ability of AI systems to create novel scientific insights in a specific domain, and I suspect it would give us better intuitions about AI's ability to transform science more generally as well. "We hope that our initial effort fosters collaboration and paves the way for strong and practical AI systems that can learn to reason efficiently in large formal theories," they write.
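Reading between the lines, the training setup described here is a fairly standard reinforcement-learning loop: a policy proposes a (tactic, premises) pair for the current goal and the episode is rewarded when a proof closes. The schematic sketch below shows only the shape of such a loop; the ProverEnv class, its methods, and the simulated success rate are invented stand-ins for illustration, not the real HOList Python API.

# Schematic RL loop for learned theorem proving (toy stand-ins throughout;
# NOT the actual HOList API).
import random

class ProverEnv:
    """Toy stand-in for an instrumented prover environment."""
    TACTICS = ["REWRITE", "SIMP", "MESON"]  # HOL Light ships ~41 tactics

    def __init__(self, goal):
        self.goals = [goal]          # stack of open subgoals

    def step(self, tactic, premises):
        # A real prover would apply the tactic with the given premises;
        # here we merely simulate that some applications close a subgoal.
        if random.random() < 0.3:
            self.goals.pop()         # tactic closed the current subgoal
        return len(self.goals) == 0  # True once the theorem is proved

def attempt_proof(env, policy, max_steps=50):
    """Run one proof attempt, asking the policy at each open subgoal."""
    for _ in range(max_steps):
        tactic, premises = policy(env.goals[-1])
        if env.step(tactic, premises):
            return True              # proof found: reward this episode
    return False                     # budget exhausted: no reward

# A trivial random policy; DeepHOL replaces this with a neural network
# that jointly predicts the tactic and ranks candidate premises.
random_policy = lambda goal: (random.choice(ProverEnv.TACTICS), [])
print(attempt_proof(ProverEnv("example goal"), random_policy))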
OpenAI releases a formal mathematics benchmark:
…MiniF2F: one benchmark for comparing multiple systems…

OpenAI has built MiniF2F, a formal mathematics benchmark for evaluating and comparing automated theorem-proving systems that target different formal systems (e.g., Lean, Metamath). The benchmark is still in development; OpenAI is looking for feedback and plans to release version 1 of the benchmark in the summer.

Why this matters: Formal mathematics is an area where we've recently seen deep learning-based methods cover surprising ground (e.g., Google has a system it uses called HOList for running AI-math experiments, Import AI #142). Benchmarks like MiniF2F will make it easier to understand what kind of progress is being made here.
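For a sense of what such a benchmark contains: each entry is a formal statement (typically a formalized competition problem), and the system under evaluation must produce a machine-checked proof of it. The entry below is a made-up illustration in Lean 3 with mathlib, not an actual MiniF2F item.

import tactic  -- mathlib's tactic library

-- Hypothetical MiniF2F-style entry (invented for illustration): the
-- benchmark supplies the statement; the prover must find the proof.
theorem mathd_algebra_example (a b : ℕ) (h : a = 2 * b) :
  a + b = 3 * b :=
by { rw h, ring }  -- substitute the hypothesis, then close by ring arithmetic

Scoring is then simply the fraction of benchmark statements for which the system produces a proof that the proof assistant accepts, which is what makes cross-system comparison possible.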