In past studies, researchers found that C. elegans gonads generate more germ cells than needed and that only half of them grow to become oocytes, while the rest shrink and die by physiological apoptosis, a programmed cell death that occurs in multicellular organisms. Now, scientists from the Biotechnology Center of the TU Dresden (BIOTEC), the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), the Cluster of Excellence Physics of Life (PoL) at the TU Dresden, the Max Planck Institute for the Physics of Complex Systems (MPI-PKS), the Flatiron Institute, NY, and the University of California, Berkeley, have found evidence answering the question of what triggers this decision between life and death in the germline.
At this point it should be clear that new information must relate to preexisting common knowledge in order to be meaningful.
Due to a traffic jam, it is recommended to take an alternative route.
In information theory, one bit of information halves the uncertainty: it rules out half of the remaining equally likely possibilities. To eliminate uncertainty entirely over a continuous range of possibilities, we would need infinitely many bits of information.
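A quick way to see this is the classic guessing game, sketched below in Python (the 1-to-1024 range is an illustrative assumption, not from the original text):

```python
import math

# Uncertainty about a value drawn uniformly from N possibilities is log2(N) bits.
# Each bit of information (one yes/no answer) rules out half of the candidates.
n = 1024                    # e.g., guessing a number between 1 and 1024
bits_needed = math.log2(n)  # 10.0 bits pin the value down exactly
print(f"bits needed: {bits_needed}")

remaining = n
for bit in range(1, 4):
    remaining //= 2         # one bit halves the set of possibilities
    print(f"after {bit} bit(s): {remaining} possibilities left")

# A continuous quantity has infinitely many possible values, so no finite
# number of halvings pins it down exactly -- hence "infinite bits" to
# eliminate the uncertainty entirely.
```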
Data and Reasoning Fabric (DRF) could one day "assemble and provide useful information to autonomous vehicles in real time." The information system is being developed by NASA. (Credit: NASA)
In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals.

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but from sticking to a simple but powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, which is still in pre-proof as of this writing, draws inspiration from studying the evolution of natural intelligence as well as from lessons learned from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. From this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence....
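To make the core loop concrete, here is a minimal, hypothetical sketch of reward maximization through trial and error: an epsilon-greedy agent facing a three-armed bandit. The reward probabilities and parameters are illustrative assumptions, not from the paper:

```python
import random

true_means = [0.2, 0.5, 0.8]   # hidden expected reward of each action
estimates = [0.0, 0.0, 0.0]    # agent's running value estimates
counts = [0, 0, 0]
epsilon = 0.1                  # fraction of the time spent exploring

for step in range(5000):
    if random.random() < epsilon:                       # explore: try something new
        action = random.randrange(3)
    else:                                               # exploit: maximize estimated reward
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_means[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(v, 2) for v in estimates])
```

With enough experience, the agent's behavior concentrates on action 2, the reward-maximizing choice. The paper's claim, roughly, is that this same pressure, scaled up in rich environments, is enough to drive the emergence of more general abilities.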
Quantum computers, you might have heard, are magical uber-machines that will soon cure cancer and global warming by trying all possible answers in different parallel universes. For 15 years, on my blog and elsewhere, I’ve railed against this cartoonish vision, trying to explain what I see as the subtler but ironically even more fascinating truth. I approach this as a public service and almost my moral duty as a quantum computing researcher. Alas, the work feels Sisyphean: The cringeworthy hype about quantum computers has only increased over the years, as corporations and governments have invested billions, and as the technology has progressed to programmable 50-qubit devices that (on certain contrived benchmarks) really can give the world’s biggest supercomputers a run for their money. And just as in cryptocurrency, machine learning and other trendy fields, with money have come hucksters.

In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.

Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.
Once someone understands these concepts, I’d say they’re ready to start reading — or possibly even writing — an article on the latest claimed advance in quantum computing. They’ll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible — after all, it isn’t rocket science; it’s just quantum computing!
People talk about the death of semiconductor scaling. IBM is laughing in your face - there's plenty of room, and plenty of density, and they've developed a proof of concept to showcase where the technology can go. Here's a look at IBM's new 2nm silicon.

Chapters:
0:00 Intro
0:26 The Future in 2024
3:05 What Nanometers Really Mean
4:02 Transistor Density
5:38 IBM on 2nm
7:00 Comparing against current nodes
7:40 What's on the chip
8:45 Gate-All-Around Nanosheets
9:16 Albany, NY
9:42 Performance of 2nm
11:06 Coming to Market and Pathfinding
14:12 EUV and Future of EUV (Jim Keller)
14:39 Minimum Specification: Bite a Wafer
Cat Tax
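On the density point: IBM's announcement cited roughly 50 billion transistors on a fingernail-sized test chip, commonly reported as about 150 mm². Taking those publicly reported figures as given, a back-of-the-envelope sketch of the density:

```python
# Rough transistor density for IBM's 2nm test chip.
# Figures are the publicly reported ones (~50 billion transistors on a
# die commonly quoted as ~150 mm^2) -- treat them as estimates.
transistors = 50e9
die_area_mm2 = 150.0

density_mtr_per_mm2 = transistors / die_area_mm2 / 1e6
print(f"~{density_mtr_per_mm2:.0f} MTr/mm^2")  # ~333 million transistors per mm^2

# Note: "2nm" names the process node, not a physical feature;
# no dimension on the chip actually measures 2 nanometers.
```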
We may be so familiar with the concept of numbers, especially decimal-based ones, from an early age that we often take them for granted.
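As a concrete reminder that decimal is just one convention, here is the same quantity rendered in a few different bases (the value 2024 is arbitrary, chosen only for illustration):

```python
# One quantity, several notations: only the base changes, not the number.
n = 2024
print(bin(n))  # 0b11111101000  (base 2)
print(oct(n))  # 0o3750         (base 8)
print(n)       # 2024           (base 10)
print(hex(n))  # 0x7e8          (base 16)
```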