In past studies, researchers have found that C. elegans gonads generate more germ cells than needed and that only half of them grow to become oocytes, while the rest shrink and die by physiological apoptosis, a programmed cell death that occurs in multicellular organisms. Now, scientists from the Biotechnology Center of the TU Dresden (BIOTEC), the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), the Cluster of Excellence Physics of Life (PoL) at the TU Dresden, the Max Planck Institute for the Physics of Complex Systems (MPI-PKS), the Flatiron Institute, NY, and the University of California, Berkeley, found evidence to answer the question of what triggers this cell fate decision between life and death in the germline.
At this point it should be clear that any new information must be related to preexisting common knowledge for it to be meaningful.
Due to a traffic jam, it is recommended to take an alternative route.
In information theory, one bit of information reduces the uncertainty by half. To eliminate uncertainty entirely, we would need infinitely many bits of information.
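This "one bit halves the uncertainty" idea can be made concrete with a small sketch. The function name `bits_needed` is my own illustration, not from any source: for n equally likely outcomes, the uncertainty is log2(n) bits, and each yes/no answer (one bit) cuts the candidate set in half.

```python
import math

def bits_needed(n_outcomes: int) -> float:
    """Bits of uncertainty when identifying one of n equally likely outcomes."""
    return math.log2(n_outcomes)

# Start with 16 equally likely outcomes: 4 bits of uncertainty.
print(bits_needed(16))       # 4.0

# One bit of information (one yes/no answer) halves the candidate set:
print(bits_needed(16 // 2))  # 3.0 -- uncertainty reduced by exactly one bit
```

This is also why pinning down an arbitrary real number exactly would take infinitely many bits: the candidate set never shrinks to a single point after finitely many halvings.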
Data and Reasoning Fabric (DRF) could one day "assemble and provide useful information to autonomous vehicles in real time." The information system is being developed by NASA. (Credit: NASA)
In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals.

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, which is still in pre-proof as of this writing, draws inspiration from studying the evolution of natural intelligence as well as drawing lessons from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. And from this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence....
Quantum computers, you might have heard, are magical uber-machines that will soon cure cancer and global warming by trying all possible answers in different parallel universes. For 15 years, on my blog and elsewhere, I’ve railed against this cartoonish vision, trying to explain what I see as the subtler but ironically even more fascinating truth. I approach this as a public service and almost my moral duty as a quantum computing researcher. Alas, the work feels Sisyphean: The cringeworthy hype about quantum computers has only increased over the years, as corporations and governments have invested billions, and as the technology has progressed to programmable 50-qubit devices that (on certain contrived benchmarks) really can give the world’s biggest supercomputers a run for their money. And just as in cryptocurrency, machine learning and other trendy fields, with money have come hucksters.

In reflective moments, though, I get it. The reality is that even if you removed all the bad incentives and the greed, quantum computing would still be hard to explain briefly and honestly without math. As the quantum computing pioneer Richard Feynman once said about the quantum electrodynamics work that won him the Nobel Prize, if it were possible to describe it in a few sentences, it wouldn’t have been worth a Nobel Prize.

Not that that’s stopped people from trying. Ever since Peter Shor discovered in 1994 that a quantum computer could break most of the encryption that protects transactions on the internet, excitement about the technology has been driven by more than just intellectual curiosity. Indeed, developments in the field typically get covered as business or technology stories rather than as science ones.
Once someone understands these concepts, I’d say they’re ready to start reading — or possibly even writing — an article on the latest claimed advance in quantum computing. They’ll know which questions to ask in the constant struggle to distinguish reality from hype. Understanding this stuff really is possible — after all, it isn’t rocket science; it’s just quantum computing!
People talk about the death of semiconductor scaling. IBM is laughing in your face - there's plenty of room, and plenty of density, and they've developed a proof of concept to showcase where the technology can go. Here's a look at IBM's new 2nm silicon.

Intro 0:00
The Future in 2024 0:26
What Nanometers Really Mean 3:05
Transistor Density 4:02
IBM on 2nm 5:38
Comparing against current nodes 7:00
What's on the chip 7:40
Gate-All-Around Nanosheets 8:45
Albany, NY 9:16
Performance of 2nm 9:42
Coming to Market and Pathfinding 11:06
EUV and Future of EUV (Jim Keller) 14:12
Minimum Specification: Bite a Wafer 14:39
Cat Tax
We have been so familiar with the concept of numbers, especially decimal-based numbers, since an early age that we often take it for granted.
If you use social media websites such as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.
General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.
This result emphasizes the urgency of reliable sources of information that accurately and precisely represent objective reality as the ground truth.
precision (noun)
1. The quality, condition, or fact of being exact and accurate. "The deal was planned and executed with military precision."
2. TECHNICAL: Refinement in a measurement, calculation, or specification, especially as represented by the number of digits given. "A precision of six decimal figures."
accuracy (noun)
1. The quality or state of being correct or precise. "We have confidence in the accuracy of the statistics."
2. TECHNICAL: The degree to which the result of a measurement, calculation, or specification conforms to the correct value or a standard. "The accuracy of radiocarbon dating."
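The technical distinction between the two definitions can be sketched numerically. This is my own illustration with made-up measurement data, assuming the usual convention that accuracy is closeness of the mean to the true value (bias) and precision is repeatability (spread):

```python
import statistics

true_value = 100.0

# Set A: precise but inaccurate -- tightly clustered, far from the true value.
a = [90.1, 90.2, 89.9, 90.0, 90.1]
# Set B: accurate but imprecise -- centered on the true value, widely scattered.
b = [95.0, 104.0, 99.0, 106.0, 96.0]

for name, data in [("A", a), ("B", b)]:
    bias = abs(statistics.mean(data) - true_value)  # accuracy: distance from truth
    spread = statistics.stdev(data)                 # precision: repeatability
    print(f"Set {name}: bias = {bias:.2f}, spread = {spread:.2f}")
```

Set A has a small spread (high precision) but a large bias (low accuracy); Set B is the reverse.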
According to Bas van Fraassen's voluntarist epistemology, the only constraint on rational belief is consistency. Beyond this, our beliefs must be guided not by rules of reason, but by the passions: emotions, values, and intuitions. This video examines the grounds for voluntarism in the failure of traditional epistemology, and in the need for an epistemology that can properly accommodate conceptual revolutions. Then I turn to the objections to voluntarism.

Outline of voluntarism:
0:00 - Introduction
4:02 - Why consistency?
8:13 - Failure of traditional epistemology
18:37 - Voluntarism against skepticism
31:26 - Conceptual revolution and objectifying epistemology

Objections to voluntarism:
48:38 - Arbitrariness
53:00 - Too permissive?
1:01:34 - Too conservative?
Expressing the same numeric value in different number bases can give us different precisions.
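A concrete sketch of this, using only the Python standard library: the value 1/10 terminates in base 10 ("0.1") but repeats forever in base 2, so any fixed number of binary digits can only store it approximately.

```python
from decimal import Decimal

# 1/10 terminates in base 10 but repeats forever in base 2,
# so a 64-bit binary float only approximates it.
# Decimal(0.1) shows the exact value the float actually stores:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The hex (base-16, i.e. grouped base-2) form exposes the repeating pattern:
print(float.hex(0.1))  # 0x1.999999999999ap-4
```

The same value is exact in one base and only approximate in another, so the precision you get depends on the base you write it in.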
Quote from: hamdani yusuf on 14/06/2021 23:26:49
"Expressing the same numeric value in different number bases can give us different precisions."

Since binary is the smallest usable base, it is the natural choice for expressing precision. The precision of a piece of information thus depends on how many bits its content takes. In some programming languages, we can declare a floating-point variable using a single- or double-precision data type. So my assertion that the precision of a piece of information represents its data quantity is not an entirely new concept, although many forum members here didn't seem to agree.

https://en.wikipedia.org/wiki/Single-precision_floating-point_format
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
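The single- vs double-precision difference can be demonstrated with the standard `struct` module, which packs a Python float into the IEEE 754 32-bit ('f') and 64-bit ('d') formats. This is a minimal sketch of the idea that more bits means more precision:

```python
import struct

x = 1 / 3

# Round-trip through IEEE 754 single precision (32 bits, ~7 decimal digits):
single = struct.unpack('f', struct.pack('f', x))[0]
# Double precision (64 bits, ~16 decimal digits) preserves the value exactly:
double = struct.unpack('d', struct.pack('d', x))[0]

print(single)  # 0.3333333432674408 -- error appears after ~7 digits
print(double)  # 0.3333333333333333
```

Doubling the storage from 32 to 64 bits roughly doubles the number of correct decimal digits, which is the sense in which precision tracks data quantity.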