Fusion Research Facility's Final Tritium Experiments Yield New Energy Record
Posted by BeauHD on Saturday February 10, 2024 @02:00AM from the historic-milestones dept.

schwit1 quotes a report from Phys.Org:
The Joint European Torus (JET), one of the world's largest and most powerful fusion machines, has demonstrated the ability to reliably generate fusion energy, while simultaneously setting a world record in energy output. These notable accomplishments represent a significant milestone in the field of fusion science and engineering. In JET's final deuterium-tritium experiments (DTE3), high fusion power was consistently produced for five seconds, resulting in a ground-breaking record of 69 megajoules using a mere 0.2 milligrams of fuel.

JET is a tokamak, a design which uses powerful magnetic fields to confine a plasma in the shape of a doughnut. Most approaches to creating commercial fusion favor the use of two hydrogen variants -- deuterium and tritium. When deuterium and tritium fuse together they produce helium and vast amounts of energy, a reaction that will form the basis of future fusion powerplants.

Dr. Fernanda Rimini, JET Senior Exploitation Manager, said, "We can reliably create fusion plasmas using the same fuel mixture to be used by commercial fusion energy powerplants, showcasing the advanced expertise developed over time."

Professor Ambrogio Fasoli, Program Manager (CEO) at EUROfusion, said, "Our successful demonstration of operational scenarios for future fusion machines like ITER and DEMO, validated by the new energy record, instills greater confidence in the development of fusion energy. Beyond setting a new record, we achieved things we've never done before and deepened our understanding of fusion physics."

Dr. Emmanuel Joffrin, EUROfusion Tokamak Exploitation Task Force Leader from CEA, said, "Not only did we demonstrate how to soften the intense heat flowing from the plasma to the exhaust, we also showed in JET how we can get the plasma edge into a stable state, thus preventing bursts of energy reaching the wall. Both techniques are intended to protect the integrity of the walls of future machines. This is the first time that we've ever been able to test those scenarios in a deuterium-tritium environment."
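As a quick plausibility check (our arithmetic, not from the article): each D-T fusion releases about 17.6 MeV and consumes one deuteron plus one triton, roughly five atomic mass units of fuel, so 0.2 milligrams of fuel fully burned yields close to the quoted 69 megajoules:

```python
# Back-of-envelope check that 0.2 mg of D-T fuel corresponds to ~69 MJ.
# Assumes complete burn of an equimolar D-T mix, 17.6 MeV per reaction.

U = 1.66054e-27                  # atomic mass unit in kg
MEV = 1.60218e-13                # 1 MeV in joules

fuel_mass = 0.2e-6               # 0.2 mg in kg
reactions = fuel_mass / (5.0 * U)        # one D (2 u) + one T (3 u) per event
energy_mj = reactions * 17.6 * MEV / 1e6

print(f"{energy_mj:.0f} MJ")     # ~68 MJ, consistent with the 69 MJ record
```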
Computer Simulations of Atlantic Ocean Currents Finds Collapse Could Happen in Our Lifetime
Posted by EditorDavid on Sunday February 11, 2024 @05:28PM from the current-catastrophes dept.

An anonymous reader shared this report from the Associated Press:
An abrupt shutdown of Atlantic Ocean currents that could put large parts of Europe in a deep freeze is looking a bit more likely and closer than before, as a new complex computer simulation finds a "cliff-like" tipping point looming in the future. A long-worried nightmare scenario, triggered by Greenland's ice sheet melting from global warming, is still at least decades away if not longer, but maybe not the centuries that it once seemed, a new study in Friday's Science Advances finds.

The study, the first to use complex simulations and include multiple factors, uses a key measurement to track the strength of vital overall ocean circulation, which is slowing. A collapse of the current — called the Atlantic Meridional Overturning Circulation or AMOC — would change weather worldwide because it means a shutdown of one of the key climate and ocean forces of the planet. It would plunge northwestern European temperatures by 9 to 27 degrees Fahrenheit (5 to 15 degrees Celsius) over the decades, extend Arctic ice much farther south, turn up the heat even more in the Southern Hemisphere, change global rainfall patterns and disrupt the Amazon, the study said. Other scientists said it would be a catastrophe that could cause worldwide food and water shortages.

"We are moving closer (to the collapse), but we're not sure how much closer," said study lead author Rene van Westen, a climate scientist and oceanographer at Utrecht University in the Netherlands. "We are heading towards a tipping point." When this global weather calamity — grossly fictionalized in the movie "The Day After Tomorrow" — may happen is "the million-dollar question, which we unfortunately can't answer at the moment," van Westen said. He said it's likely a century away but still could happen in his lifetime. He just turned 30. "It also depends on the rate of climate change we are inducing as humanity," van Westen said.
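The "cliff-like" wording has a standard mathematical picture behind it: a saddle-node (fold) bifurcation, where slowly increasing forcing makes a stable equilibrium vanish and the system drops abruptly to a different state. The sketch below uses the textbook normal form, with an illustrative variable x standing in for circulation strength; it is a conceptual cartoon under those assumptions, not the study's simulation:

```python
# Conceptual cartoon of a tipping point (textbook fold bifurcation), not the
# study's model: dx/dt = mu + x - x**3. The "strong circulation" equilibrium
# exists only for mu above about -0.385; ramping mu past that makes x collapse.

import numpy as np

def settle(mu, x, dt=0.01, steps=2000):
    for _ in range(steps):
        x += dt * (mu + x - x**3)     # relax toward the nearest equilibrium
    return x

x = 1.0                               # start on the strong-circulation branch
for mu in np.linspace(0.0, -0.6, 13): # slowly ramp the forcing downward
    x = settle(mu, x)
    print(f"mu={mu:+.2f}  x={x:+.2f}")
# x declines gently, then jumps to the collapsed branch just past mu = -0.38
```

Crossing the fold is also what makes such a collapse effectively irreversible: easing the forcing back slightly does not restore the old equilibrium.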
Scientists Create DVD-Sized Disk Storing 1 Petabit (125,000 Gigabytes) of Data
Posted by EditorDavid on Sunday February 25, 2024 @04:34PM from the disk-jockeying dept.

Popular Science points out that for encoding data, "optical disks almost always offer just a single, 2D layer — that reflective, silver underside." "If you could boost a disk's number of available, encodable layers, however, you could hypothetically gain a massive amount of extra space..."
Researchers at the University of Shanghai for Science and Technology recently set out to do just that, and published the results earlier this week in the journal Nature. Using a 54-nanometer laser, the team managed to record 100 layers of data onto an optical disk, with each tier separated by just 1 micrometer. The final result is an optical disk with a three-dimensional stack of data layers capable of holding a whopping 1 petabit (Pb) of information — that's equivalent to 125,000 gigabytes of data...

As Gizmodo offers for reference, that same petabit of information would require roughly a six-and-a-half-foot-tall stack of HDD drives — if you tried to encode the same amount of data onto Blu-rays, you'd need around 10,000 blank ones to complete your (extremely inefficient) challenge.

To pull off their accomplishment, engineers needed to create an entirely new material for their optical disk's film... AIE-DDPR film utilizes a combination of specialized, photosensitive molecules capable of absorbing photonic data at a nanoscale level, which is then encoded using a high-tech dual-laser array. Because AIE-DDPR is so incredibly transparent, designers could apply layer upon layer to an optical disk without worrying about degrading the overall data. This basically generated a 3D "box" for digitized information, thus exponentially raising the normal-sized disk's capacity.

Thanks to long-time Slashdot reader hackingbear for sharing the news.
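A quick unit check (our arithmetic, in the decimal units storage vendors use): one petabit is indeed 125,000 GB, which spread over the 100 recorded layers comes to 1.25 TB of data per layer:

```python
# Unit check: 1 petabit in decimal gigabytes, and the implied per-layer share.

PETABIT_BITS = 1e15
gigabytes = PETABIT_BITS / 8 / 1e9       # bits -> bytes -> GB (decimal)
tb_per_layer = gigabytes / 100 / 1e3     # split across the 100 layers

print(f"{gigabytes:,.0f} GB total, {tb_per_layer:.2f} TB per layer")
# -> 125,000 GB total, 1.25 TB per layer
```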
Toward intelligent control of nuclear fusion reactors
By Francisco R. Villatoro, 26 February 2024. Category(s): Ciencia • Nature • Noticias • Physics • Science

Construction of the experimental fusion reactor ITER in Cadarache (France) should be completed in 2027. In the meantime, smaller reactors, such as the DIII-D reactor at the National Fusion Facility in San Diego, USA, are being used to study how to optimize the plasmas in ITER. Nature has published an intelligent control algorithm based on deep reinforcement learning for avoiding the plasma instability caused by the neoclassical tearing mode (NTM). The instability associated with the 2/1 poloidal/toroidal resonance was controlled in a DIII-D state analogous to the ITER baseline scenario (IBS); in this state ITER will produce a fusion power of 500 MW with a gain of Q = 10 for 300 seconds. Studies of this kind could accelerate ITER operation so that it reaches its goals successfully as soon as possible.

The advantage of control by an artificial intelligence is that it acts before the instability occurs, instead of trying to mitigate its effects once it has appeared. This requires predicting in real time when it will occur, which verges on the impossible using theoretical models (plasma simulations require supercomputers). Reinforcement learning discovers patterns in the experimental diagnostic data (obtained by magnetic spectroscopy, Thomson scattering, and charge-exchange recombination, as shown in the figure on the left) that allow the 2/1 tearing instability to be detected about 300 milliseconds before it occurs. Through a feedback loop (figure on the right) with a delay of 25 milliseconds, the controller drives the actuators that heat the plasma with neutral-atom beams and with radio-frequency waves at the electron-cyclotron resonance, preventing the instability from arising (figure in the center).

The NTM instability is one of the most relevant in the operation of a tokamak; however, there are many others, whose intelligent control will also need to be studied in DIII-D. Moreover, the learning was based on historical experimental data from DIII-D, so it will have to be repeated on other fusion reactors and with the future data from ITER. The new controller is a proof of concept and much work remains to be done, but everything suggests that intelligent control will be used in all future commercial fusion reactors. The paper is Jaemin Seo, SangKyeun Kim, …, Egemen Kolemen, "Avoiding fusion plasma tearing instability with deep reinforcement learning," Nature 626: 746-751 (21 Feb 2024), doi: https://doi.org/10.1038/s41586-024-07024-9. At the popular level I recommend the press release "Engineers use AI to wrangle fusion power for the grid," Princeton Plasma Physics Laboratory (PPPL), 21 Feb 2024.

[PS 01 Mar 2024] Incidentally, this is not the first time intelligent control has been used on fusion reactors. Thanks, Masgüel, for the comment. A 2022 Nature paper by DeepMind (Google) already used intelligent control with reinforcement learning to control a fusion reactor (the small Swiss reactor called TCV (Tokamak à configuration variable), which has a major radius of 0.88 m, a minor radius of 0.25 m, a maximum magnetic field of 1.43 T, and a heating power of 4.3 MW).
The training was carried out with a 2D simulator of that reactor, but its good performance was demonstrated in real experiments (Jonas Degrave, …, Demis Hassabis, Martin Riedmiller, "Magnetic control of tokamak plasmas through deep reinforcement learning," Nature 602: 414-419 (16 Feb 2022), doi: https://doi.org/10.1038/s41586-021-04301-9). A similar strategy (learning with a simulator) has also been used on the Korean reactor KSTAR (Korea Superconducting Tokamak Advanced Research), whose size is similar to DIII-D (1.8 m, 0.5 m, 3.5 T and 14 MW), as published in J. Seo, Y.-S. Na, …, Y.H. Lee, "Development of an operation trajectory design algorithm for control of multiple 0D parameters using deep reinforcement learning in KSTAR," Nuclear Fusion 62: 086049 (07 Jul 2022), doi: https://doi.org/10.1088/1741-4326/ac79be.

And since we are citing precedents (which, as always, reach far back into the past), Nature also published the use of an artificial intelligence with reinforcement learning to predict plasma instabilities (although without control to prevent their subsequent appearance). That learning was based on experimental data from the DIII-D (USA) and JET (UK) reactors: Julian Kates-Harbeck, Alexey Svyatkovskiy, William Tang, "Predicting disruptive instabilities in controlled fusion plasmas through deep learning," Nature 568: 526-531 (17 Apr 2019), doi: https://doi.org/10.1038/s41586-019-1116-4. As is well known, every advance has a great number of precedents. A deeper review of the literature is beyond the scope of this popular-science piece. [/PS]

The deuterium plasma in fusion reactors is subject to a great number of magnetohydrodynamic instabilities. In the case of ITER, studying them in full detail will take about ten years (between 2027 and 2037) before fusion studies with tritium injection can begin. Many of these instabilities lead to plasma disruption, which causes a rapid loss of energy and the abrupt termination of the discharge. The energy lost by the plasma is dissipated in the walls, coils, and other components of the tokamak, which can cause damage. To minimize it, the maximum density, pressure, and current of the plasma must be limited. Intelligent control will make it possible to relax these limits, guarantee an optimal plasma pressure, and maximize the production of useful energy.

The new paper studies intelligent control experimentally using the small DIII-D fusion reactor, the largest in the USA; this tokamak has a major radius of 1.67 m, a minor radius of 0.67 m, a maximum toroidal magnetic field of 2.2 T, and a heating power of 23 MW. These values can be compared with the British JET (2.96 m, 1.25 m, 3.45 T and 38 MW) or the future ITER (6.20 m, 2.00 m, 12 T and 320 MW). These small experimental reactors are very useful for many plasma studies while ITER is not yet operating. Once ITER injects its first plasma, many of these small reactors will begin winding down their operating periods and will end up being decommissioned. For example, the British JET (Joint European Torus) ended operations in December 2023; there is a scientific campaign for it to continue running, but the British government is turning a deaf ear.

The reinforcement learning algorithm is reminiscent of those used in mobile robotics for obstacle avoidance (the tearing instability being the obstacle to avoid).
The m/n tearing instabilities are associated with the resonance between poloidal and toroidal modes with numbers m and n, respectively; the most relevant are the 1/1 resistive tearing mode (RTM) and the 2/1 neoclassical tearing mode (NTM). The latter limits the plasma pressure because it originates in the so-called bootstrap current, which arises spontaneously whenever a pressure gradient exists.

The architecture of the deep neural network (DNN) used is shown in the figure. The input comprises the experimental diagnostic data (one-dimensional profiles from magnetic spectroscopy, Thomson scattering, and charge-exchange recombination) and the state of the actuators (which heat the plasma). The output is the normalized plasma pressure (βN) and the tearability T. Training used a reward function with a tearability threshold k, set to 0.2, 0.5 or 0.7 in this work. The reward is R(βN, T; k) = βN > 0 when T < k, to maximize the plasma pressure (essential for efficient generation of fusion energy); to discourage tearing, a negative reward R(βN, T; k) = k − T < 0 is given when T ≥ k. Future studies that consider other plasma instabilities will need a more complicated reward function that takes additional factors into account.

The basic operation of the intelligent control algorithm is illustrated in this figure (obtained from computer simulations). Without intelligent control (black curve on the left), when the pressure exceeds a certain limit the NTM instability appears, followed by plasma disruption. With intelligent control (blue curve), the pressure gradient and the tearability are regulated, oscillating within certain margins and avoiding any instability (which can also appear when the pressure drops too low, due to an excessively negative pressure gradient).

This figure shows real data from three experimental discharges: 193266 (yellow) is a stable reference discharge under a standard feedback control algorithm, 193273 (black) is an unstable reference discharge also under standard control, and 193280 (blue) is a discharge controlled by the artificial intelligence. Shown are the plasma current (in MA, megaamperes), the plasma power (in MW, megawatts), the poloidal triangularity (a dimensionless geometric factor measuring the triangular shape of the plasma's poloidal cross-section), the magnetic fluctuations (in G, gauss), and the normalized plasma pressure. In the stable discharge 193266 the conventional control algorithm targeted βN = 1.7, while in the unstable discharge 193273 the target was βN = 2.3; too much, since at 2.6 seconds the tearing instability appeared, leading to disruption at 3.1 seconds. Intelligent control (with k = 0.5) allowed discharge 193280 to avoid the instability while reaching pressures higher than those obtained in the stable discharge.

The tearability threshold k must be sufficient to avoid the instability. This figure shows that for k = 0.2 (black curve) the control algorithm maintained low tearability up to 5 seconds, but the discharge became unstable and ended in a disruption at 5.5 s.
Post-hoc analysis showed that the artificial intelligence correctly predicted the instability, but because the beam power was already too low, the actuation mechanism could not reduce it further and was unable to prevent it. For k = 0.5 (blue curve) and k = 0.7 (red curve) the intelligent control algorithm successfully avoided the instability. As the normalized-pressure curve shows, the best result was obtained for k = 0.5. The authors note that they did not attempt to determine the optimal value of this threshold, which will be the subject of future studies.

In short, a very interesting paper showing that the state of the art in plasma control in (experimental) fusion reactors now involves artificial intelligence and deep learning based on artificial neural networks. Only a single instability has been controlled, but there seems to be no obstruction to extending this control to many others. The use of intelligent control in ITER will show whether this technology ends up becoming the standard in future experimental fusion reactors (which will arrive in the last quarter of this century). But with the 21st century being the century of artificial intelligence, I have no doubt that it will.
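The reward described above is simple enough to write down directly. Here is a minimal sketch, assuming scalar βN and T inputs (the actual DIII-D controller maps full diagnostic profiles to actuator commands through a deep network trained by reinforcement learning):

```python
# Minimal sketch of the tearing-avoidance reward described in the post.
# Assumes scalar inputs; the real controller works on diagnostic profiles.

def reward(beta_n: float, tearability: float, k: float = 0.5) -> float:
    """Reward for one control step.

    beta_n      -- normalized plasma pressure betaN (higher is better)
    tearability -- predicted tearability T of the 2/1 mode
    k           -- tearability threshold (0.2, 0.5 or 0.7 in the paper)
    """
    if tearability < k:
        return beta_n          # safe region: reward high pressure
    return k - tearability     # unsafe region: penalty grows with T

print(reward(2.0, 0.3))        # 2.0  -> high pressure is rewarded
print(reward(2.0, 0.8))        # -0.3 -> tearing risk is penalized
```

An agent maximizing this reward is pushed toward the highest βN it can sustain while keeping the predicted tearability below the threshold, which matches the behavior of discharge 193280.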
New 'Water Batteries' Are Cheaper, Recyclable, And Won't Explode
Posted by BeauHD on Thursday March 07, 2024 @02:00AM from the new-and-shiny dept.

Clare Watson reports via ScienceAlert:
By replacing the hazardous chemical electrolytes used in commercial batteries with water, scientists have developed a recyclable 'water battery' -- and solved key issues with the emerging technology, which could be a safer and greener alternative. 'Water batteries' are formally known as aqueous metal-ion batteries. These devices use metals such as magnesium or zinc, which are cheaper to assemble and less toxic than the materials currently used in other kinds of batteries.

Batteries store energy by creating a flow of electrons that move from the positive end of the battery (the cathode) to the negative end (the anode). They expend energy when electrons flow the opposite way. The fluid in the battery is there to shuttle ions back and forth between both ends, balancing the electron flow through the external circuit. In a water battery, the electrolytic fluid is water with a few added salts, instead of something like sulfuric acid or lithium salt.

Crucially, the team behind this latest advancement came up with a way to prevent these water batteries from short-circuiting. This happens when tiny spiky metallic growths called dendrites form on the metal anode inside a battery, busting through battery compartments. [...] To inhibit this, the researchers coated the zinc anode of the battery with bismuth metal, which oxidizes to form rust. This creates a protective layer that stops dendrites from forming. The feature also helps the prototype water batteries last longer, retaining more than 85 percent of their capacity after 500 cycles, the researchers' experiments showed.

According to Royce Kurmelovs at The Guardian, the team has so far developed water-based prototypes of coin-sized batteries used in clocks, as well as cylindrical batteries similar to AA or AAA batteries. The team is working to improve the energy density of their water batteries, to make them comparable to the compact lithium-ion batteries found inside pocket-sized devices. Magnesium is their preferred material, lighter than zinc with a greater potential energy density. [I]f magnesium-ion batteries can be commercialized, the technology could replace bulky lead-acid batteries within a few years.

The study has been published in the journal Advanced Materials.
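For scale (our arithmetic, assuming uniform per-cycle fade, which real cells only approximate), 85 percent retention after 500 cycles corresponds to roughly 0.03 percent capacity loss per cycle:

```python
# Back-of-envelope cycle-fade arithmetic (assumes uniform geometric fade,
# which real cells only approximate).

retention_500 = 0.85
per_cycle = retention_500 ** (1 / 500)        # per-cycle retention factor
print(f"per-cycle retention: {per_cycle:.5f}")              # ~0.99968
print(f"projected at 1,000 cycles: {per_cycle**1000:.1%}")  # ~72.2%
```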
Neuralink Shows First Brain-Chip Patient Playing Online Chess Using His Mind
Posted by BeauHD on Thursday March 21, 2024 @03:00AM from the mind-blowing dept.

Neuralink, the brain-chip startup founded by Elon Musk, showed its first patient using his mind to play online chess. Reuters reports:
Noland Arbaugh, the 29-year-old patient who was paralyzed below the shoulder after a diving accident, played chess on his laptop and moved the cursor using the Neuralink device. The implant seeks to enable people to control a computer cursor or keyboard using only their thoughts. Arbaugh had received an implant from the company in January and could control a computer mouse using his thoughts, Musk said last month.

"The surgery was super easy," Arbaugh said in the video streamed on Musk's social media platform X, referring to the implant procedure. "I literally was released from the hospital a day later. I have no cognitive impairments." "I had basically given up playing that game," Arbaugh said, referring to the game Civilization VI. "You all (Neuralink) gave me the ability to do that again and played for 8 hours straight."

Elaborating on his experience with the new technology, Arbaugh said that it is "not perfect" and they "have run into some issues." "I don't want people to think that this is the end of the journey, there's still a lot of work to be done, but it has already changed my life," he added.
Researchers Develop New Material That Converts CO2 into Methanol Using Sunlight
Posted by EditorDavid on Saturday March 30, 2024 @01:34PM from the fun-with-photocatalysis dept.

"Researchers have successfully transformed CO2 into methanol," reports SciTechDaily, "by shining sunlight on single atoms of copper deposited on a light-activated material, a discovery that paves the way for creating new green fuels."
Tara LeMercier, a PhD student who carried out the experimental work at the University of Nottingham, School of Chemistry, said: "We measured the current generated by light and used it as a criterion to judge the quality of the catalyst. Even without copper, the new form of carbon nitride is 44 times more active than traditional carbon nitride. However, to our surprise, the addition of only 1 mg of copper per 1 g of carbon nitride quadrupled this efficiency. Most importantly, the selectivity changed from methane, another greenhouse gas, to methanol, a valuable green fuel."

Professor Andrei Khlobystov, School of Chemistry, University of Nottingham, said: "Carbon dioxide valorization holds the key for achieving the net-zero ambition of the UK. It is vitally important to ensure the sustainability of our catalyst materials for this important reaction. A big advantage of the new catalyst is that it consists of sustainable elements — carbon, nitrogen, and copper — all highly abundant on our planet."

This invention represents a significant step towards a deep understanding of photocatalytic materials in CO2 conversion. It opens a pathway for creating highly selective and tuneable catalysts where the desired product could be dialed up by controlling the catalyst at the nanoscale.

The research has been published in the Sustainable Energy & Fuels journal of the Royal Society of Chemistry. Thanks to long-time Slashdot reader Baron_Yam for sharing the article.
Method identified to double computer processing speeds
David Danelski · 2024.03.21

Imagine doubling the processing power of your smartphone, tablet, personal computer, or server using the existing hardware already in these devices. Hung-Wei Tseng, a UC Riverside associate professor of electrical and computer engineering, has laid out a paradigm shift in computer architecture to do just that in a recent paper titled "Simultaneous and Heterogeneous Multithreading".

Tseng explained that today's computer devices increasingly have graphics processing units (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), or digital signal processing units as essential components. These components process information separately, moving information from one processing unit to the next, which in effect creates a bottleneck.

In their paper, Tseng and UCR computer science graduate student Kuan-Chieh Hsu introduce what they call "simultaneous and heterogeneous multithreading" or SHMT. They describe their development of a proposed SHMT framework on an embedded system platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU, and a Tensor Processing Unit hardware accelerator. The system achieved a 1.96-times speedup and a 51% reduction in energy consumption.

"You don't have to add new processors because you already have them," Tseng said.

The implications are huge. Simultaneous use of existing processing components could reduce computer hardware costs while also reducing carbon emissions from the energy produced to keep servers running in warehouse-size data processing centers. It also could reduce the need for scarce freshwater used to keep servers cool.

Tseng's paper, however, cautions that further investigation is needed to answer several questions about system implementation, hardware support, code optimization, and what kind of applications stand to benefit the most, among other issues.

The paper was presented at the 56th Annual IEEE/ACM International Symposium on Microarchitecture held in October in Toronto, Canada. It garnered recognition from Tseng's professional peers in the Institute of Electrical and Electronics Engineers, or IEEE, who selected it as one of 12 papers included in the group's "Top Picks from the Computer Architecture Conferences" issue to be published this coming summer.
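The scheduling idea, running one computation's subtasks on several heterogeneous processors at the same time instead of shuttling data from one unit to the next, can be sketched in a few lines. The sketch below illustrates only that dispatch concept, not UCR's SHMT framework; the two worker functions are hypothetical stand-ins for a CPU and an accelerator, and Python threads will not actually speed up CPU-bound work:

```python
# Conceptual illustration of simultaneous heterogeneous dispatch (not the
# SHMT framework itself): two "devices" each process a slice of the same
# workload concurrently instead of one unit waiting on the other.
# Note: Python's GIL means this shows the dispatch pattern, not a speedup.

from concurrent.futures import ThreadPoolExecutor

def cpu_worker(chunk):            # hypothetical stand-in for CPU-side work
    return [x * x for x in chunk]

def accel_worker(chunk):          # hypothetical stand-in for GPU/TPU work
    return [x * x for x in chunk]

data = list(range(1_000_000))
split = len(data) // 4            # give the faster "accelerator" more work

with ThreadPoolExecutor(max_workers=2) as pool:
    cpu_future = pool.submit(cpu_worker, data[:split])
    accel_future = pool.submit(accel_worker, data[split:])
    result = cpu_future.result() + accel_future.result()

assert result[10] == 100          # both slices contribute to one result
```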
Microsoft and Quantinuum Say They've Ushered in the Next Era of Quantum Computing
Posted by msmash on Wednesday April 03, 2024 @11:20AM from the moving-forward dept.

Microsoft and Quantinuum today announced a major breakthrough in quantum error correction. Using Quantinuum's ion-trap hardware and Microsoft's new qubit-virtualization system, the team was able to run more than 14,000 experiments without a single error. From a report:
This new system also allowed the team to check the logical qubits and correct any errors it encountered without destroying the logical qubits. This, the two companies say, has now moved the state of the art of quantum computing out of what has typically been dubbed the era of Noisy Intermediate Scale Quantum (NISQ) computers.

"Noisy" because even the smallest changes in the environment can lead a quantum system to essentially become random (or "decohere"), and "intermediate scale" because the current generation of quantum computers is still limited to just over a thousand qubits at best. A qubit is the fundamental unit of computing in quantum systems, analogous to a bit in a classic computer, but each qubit can be in multiple states at the same time and doesn't fall into a specific position until measured, which underlies the potential of quantum to deliver a huge leap in computing power.

It doesn't matter how many qubits you have, though, if you barely have time to run a basic algorithm before the system becomes too noisy to get a useful result -- or any result at all. Combining several different techniques, the team was able to run thousands of experiments with virtually no errors. That involved quite a bit of preparation and pre-selecting systems that already looked to be in good shape for a successful run, but still, that's a massive improvement from where the industry was just a short while ago.

Further reading: Microsoft blog.
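The intuition behind error-corrected logical qubits has a classical toy analogue (our illustration, far simpler than the companies' qubit-virtualization scheme, which must correct quantum errors without directly measuring the stored state): encode one logical bit redundantly across several physical bits and use a majority vote to fix any single flip without losing the stored value:

```python
# Toy classical analogue of a logical qubit: a 3-bit repetition code.
# Illustration only; real quantum codes must also handle phase errors and
# cannot copy quantum states, so the actual schemes are far more subtle.

import random

def encode(bit):
    return [bit, bit, bit]                 # 1 logical bit -> 3 physical bits

def noisy(bits, p=0.05):
    return [b ^ (random.random() < p) for b in bits]   # flip each bit w.p. p

def decode(bits):
    return int(sum(bits) >= 2)             # majority vote corrects one flip

trials = 10_000
failures = sum(decode(noisy(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {failures / trials:.4f}")  # ~0.007 vs 0.05 raw
```

Redundancy turns a 5% physical error rate into well under 1% at the logical level, the same kind of leverage, vastly refined, that error correction provides on noisy quantum hardware.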
Groundbreaking Trial To Grow 'Mini Liver' From Patient's Own Lymph Node
Posted by BeauHD on Wednesday April 03, 2024 @11:30PM from the one-of-a-kind-trials dept.

An anonymous reader quotes a report from InterestingEngineering:
A Pittsburgh-based biotech company has started a one-of-a-kind trial in a patient with a failing liver. Their goal is to grow a functional second liver within the patient's body -- something never achieved before. If effective, it might be a life-saving therapy for those who require liver transplants but have to wait months for a compatible donor organ. LyGenesis is currently carrying out a trial in only one patient with end-stage liver disease (ESLD) to test the efficacy of their allogeneic regenerative cell therapy. As per Nature, the experimental procedure was conducted in Houston on March 25. The report also states that the patient is "recovering well" after receiving the treatment. However, the formation of the new liver-like organ in the lymph node may take several months. Moreover, the individual will be kept on immunosuppressive drugs to prevent any initial rejection of the donor cells. The physicians will continue to monitor the patient's health closely.

In this trial, scientists prepared donated hepatocyte cells for transplantation by suspending them in a solution. These cells were then transplanted into the patient's upper abdominal lymph nodes, which are tiny bean-shaped structures. These structures are an essential immune system component and filter waste from the body. Apart from the abdomen, lymph nodes are also found in the neck and chest. The team opted for a minimally invasive approach to inject the cells into the patient's lymph node via a catheter in the neck. "The lymph nodes then act as in vivo bioreactors, helping the hepatocytes to engraft, proliferate, and generate functional ectopic liver tissue," the press release noted. In simplest terms, these cells have the ability to multiply over the next several months. In a person with a failing liver, lymph nodes might operate as a second liver-like organ.