Consciousness is one of the brain’s most enigmatic mysteries. A new theory, inspired by thermodynamics, takes a high-level perspective on how neural networks in the brain transiently organize to give rise to memories, thought and consciousness. The key to awareness is the ebb and flow of energy: when neurons functionally link together to support information processing, their activity patterns synchronize like ocean waves. This process is guided by thermodynamic principles, which, like an invisible hand, promote neural connections that favor conscious awareness. Disruptions in this process break down communication between neural networks, giving rise to neurological disorders such as epilepsy, autism or schizophrenia.
Brain’s ‘Background Noise’ May Hold Clues to Persistent Mysteries
By Elizabeth Landau, February 8, 2021

By digging out signals hidden within the brain’s electrical chatter, scientists are getting new insights into sleep, aging and more.

[Image: an illustration of a human brain against “pink noise” static. Olena Shmahalo/Quanta Magazine; noise generated by Thomas Donoghue]

At a sleep research symposium in January 2020, Janna Lendner presented findings that hint at a way to look at people’s brain activity for signs of the boundary between wakefulness and unconsciousness. For patients who are comatose or under anesthesia, it can be all-important that physicians make that distinction correctly. Doing so is trickier than it might sound, however, because when someone is in the dreaming state of rapid eye movement (REM) sleep, their brain produces the same familiar, smoothly oscillating brain waves as when they are awake.

Lendner argued, though, that the answer isn’t in the regular brain waves, but rather in an aspect of neural activity that scientists might normally ignore: the erratic background noise.

Some researchers seemed incredulous. “They said, ‘So, you’re telling me that there’s, like, information in the noise?’” said Lendner, an anesthesiology resident at the University Medical Center in Tübingen, Germany, who recently completed a postdoc at the University of California, Berkeley. “I said, ‘Yes. Someone’s noise is another one’s signal.’”
The learning algorithm that enables the runaway success of deep neural networks doesn’t work in biological brains, but researchers are finding alternatives that could.
Today, deep nets rule AI in part because of an algorithm called backpropagation, or backprop. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.

But real brains are highly unlikely to be relying on the same algorithm. It’s not just that “brains are able to generalize and learn better and faster than the state-of-the-art AI systems,” said Yoshua Bengio, a computer scientist at the University of Montreal, the scientific director of the Quebec Artificial Intelligence Institute and one of the organizers of the 2007 workshop. For a variety of reasons, backpropagation isn’t compatible with the brain’s anatomy and physiology, particularly in the cortex.
However, it was obvious even in the 1960s that solving more complicated problems required one or more “hidden” layers of neurons sandwiched between the input and output layers. No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.

The algorithm works in two phases. In the “forward” phase, when the network is given an input, it infers an output, which may be erroneous. The second “backward” phase updates the synaptic weights, bringing the output more in line with a target value.

To understand this process, think of a “loss function” that describes the difference between the inferred and desired outputs as a landscape of hills and valleys. When a network makes an inference with a given set of synaptic weights, it ends up at some location on the loss landscape. To learn, it needs to move down the slope, or gradient, toward some valley, where the loss is minimized to the extent possible. Backpropagation is a method for updating the synaptic weights to descend that gradient.

In essence, the algorithm’s backward phase calculates how much each neuron’s synaptic weights contribute to the error and then updates those weights to improve the network’s performance. This calculation proceeds sequentially backward from the output layer to the input layer, hence the name backpropagation. Do this over and over for sets of inputs and desired outputs, and you’ll eventually arrive at an acceptable set of weights for the entire neural network.
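The two phases described above can be sketched in a few lines of code. This is a minimal illustrative example of my own (not from the article): a tiny network with one hidden layer learns XOR by computing the loss gradient in the backward phase and stepping the weights downhill. The network size, learning rate and step count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a problem that needs a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# synaptic weights for one hidden layer and one output layer
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

losses = []
lr = 1.0
for step in range(5000):
    # forward phase: infer an output (possibly erroneous)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # backward phase: how much does each weight contribute to the error?
    d_out = 2 * (out - y) / len(X) * out * (1 - out)   # error at output layer
    d_W2 = h.T @ d_out
    d_h = d_out @ W2.T * h * (1 - h)                   # error propagated back
    d_W1 = X.T @ d_h

    # step down the loss gradient
    W2 -= lr * d_W2; b2 -= lr * d_out.sum(0)
    W1 -= lr * d_W1; b1 -= lr * d_h.sum(0)

print(losses[0], losses[-1])  # loss shrinks as the weights descend the landscape
```

Repeating the forward/backward cycle over the same inputs and targets is exactly the “do this over and over” loop the article describes.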
Impossible for the Brain

The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains. The most notable naysayer was Francis Crick, the Nobel Prize-winning co-discoverer of the structure of DNA who later became a neuroscientist. In 1989 Crick wrote, “As far as the learning process is concerned, it is unlikely that the brain actually uses back propagation.”

Backprop is considered biologically implausible for several major reasons. The first is that while computers can easily implement the algorithm in two phases, doing so for biological neural networks is not trivial. The second is what computational neuroscientists call the weight transport problem: the backprop algorithm copies or “transports” information about all the synaptic weights involved in an inference and updates those weights for more accuracy. But in a biological network, neurons see only the outputs of other neurons, not the synaptic weights or internal processes that shape that output. From a neuron’s point of view, “it’s OK to know your own synaptic weights,” said Yamins. “What’s not okay is for you to know some other neuron’s set of synaptic weights.”
Artificial Neural Nets Finally Yield Clues to How Brains Learnhttps://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Predicting Perceptions

The constraint that neurons can learn only by reacting to their local environment also finds expression in new theories of how the brain perceives. Beren Millidge, a doctoral student at the University of Edinburgh and a visiting fellow at the University of Sussex, and his colleagues have been reconciling this new view of perception — called predictive coding — with the requirements of backpropagation. “Predictive coding, if it’s set up in a certain way, will give you a biologically plausible learning rule,” said Millidge.

Predictive coding posits that the brain is constantly making predictions about the causes of sensory inputs. The process involves hierarchical layers of neural processing. To produce a certain output, each layer has to predict the neural activity of the layer below. If the highest layer expects to see a face, it predicts the activity of the layer below that can justify this perception. The layer below makes similar predictions about what to expect from the one beneath it, and so on. The lowest layer makes predictions about actual sensory input — say, the photons falling on the retina. In this way, predictions flow from the higher layers down to the lower layers.
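The top-down prediction loop described above can be illustrated with a toy sketch (my own construction, not Millidge's model): a higher layer holds a latent belief, predicts the activity of the layer below through fixed generative weights, and iteratively adjusts the belief to shrink the bottom-up prediction error. The weights, the "sensory" data and the step size are all assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# fixed generative weights: a 2-dim belief predicts 4 "sensory" units
W = rng.normal(0, 1, (4, 2))
true_cause = np.array([1.5, -0.5])
sensory = W @ true_cause          # the actual input, e.g. photons on the retina

mu = np.zeros(2)                  # the higher layer's initial belief
errors = []
for step in range(200):
    prediction = W @ mu           # top-down prediction of the layer below
    err = sensory - prediction    # prediction-error units compare the two
    mu += 0.05 * W.T @ err        # nudge the belief to better explain the input
    errors.append(np.sum(err ** 2))

print(mu)  # the belief moves toward the cause that generated the input
```

Only locally available quantities drive each update here (the error at the layer below and the weights leaving the belief units), which is the sense in which predictive coding can yield a biologically plausible learning rule.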
Pyramidal Neurons

Some scientists have taken on the nitty-gritty task of building backprop-like models based on the known properties of individual neurons. Standard neurons have dendrites that collect information from the axons of other neurons. The dendrites transmit signals to the neuron’s cell body, where the signals are integrated. That may or may not result in a spike, or action potential, going out on the neuron’s axon to the dendrites of post-synaptic neurons.

But not all neurons have exactly this structure. In particular, pyramidal neurons — the most abundant type of neuron in the cortex — are distinctly different. Pyramidal neurons have a treelike structure with two distinct sets of dendrites. The trunk reaches up and branches into what are called apical dendrites. The root reaches down and branches into basal dendrites.
Models developed independently by Kording in 2001, and more recently by Blake Richards of McGill University and the Quebec Artificial Intelligence Institute and his colleagues, have shown that pyramidal neurons could form the basic units of a deep learning network by doing both forward and backward computations simultaneously. The key is in the separation of the signals entering the neuron for forward-going inference and for backward-flowing errors, which could be handled in the model by the basal and apical dendrites, respectively. Information for both signals can be encoded in the spikes of electrical activity that the neuron sends down its axon as an output.

In the latest work from Richards’ team, “we’ve gotten to the point where we can show that, using fairly realistic simulations of neurons, you can train networks of pyramidal neurons to do various tasks,” said Richards. “And then using slightly more abstract versions of these models, we can get networks of pyramidal neurons to learn the sort of difficult tasks that people do in machine learning.”
The Role of Attention

An implicit requirement for a deep net that uses backprop is the presence of a “teacher”: something that can calculate the error made by a network of neurons. But “there is no teacher in the brain that tells every neuron in the motor cortex, ‘You should be switched on and you should be switched off,’” said Pieter Roelfsema of the Netherlands Institute for Neuroscience in Amsterdam.
Roelfsema thinks the brain’s solution to the problem is in the process of attention. In the late 1990s, he and his colleagues showed that when monkeys fix their gaze on an object, neurons that represent that object in the cortex become more active. The monkey’s act of focusing its attention produces a feedback signal for the responsible neurons. “It is a highly selective feedback signal,” said Roelfsema. “It’s not an error signal. It is just saying to all those neurons: You’re going to be held responsible [for an action].”

Roelfsema’s insight was that this feedback signal could enable backprop-like learning when combined with processes revealed in certain other neuroscientific findings. For example, Wolfram Schultz of the University of Cambridge and others have shown that when animals perform an action that yields better results than expected, the brain’s dopamine system is activated. “It floods the whole brain with neural modulators,” said Roelfsema. The dopamine levels act like a global reinforcement signal.

In theory, the attentional feedback signal could prime only those neurons responsible for an action to respond to the global reinforcement signal by updating their synaptic weights, said Roelfsema. He and his colleagues have used this idea to build a deep neural network and study its mathematical properties. “It turns out you get error backpropagation. You get basically the same equation,” he said. “But now it became biologically plausible.”
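The mechanism described above combines three factors: local activity, an attentional feedback tag, and a global dopamine-like signal. The toy sketch below (a hypothetical illustration, not Roelfsema's actual model) updates weights only for the unit "tagged" by attention, scaled by how much the outcome beat a naive expectation; the network, inputs and baseline are all made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 5, 3
W = rng.normal(0, 0.1, (n_out, n_in))   # weights from inputs to action units

def step(x, target_action, W, lr=0.5):
    logits = W @ x
    action = int(np.argmax(logits))      # the network commits to an action
    reward = 1.0 if action == target_action else 0.0
    expected = 1.0 / n_out               # naive baseline expectation of reward
    dopamine = reward - expected         # global "better/worse than expected" signal
    gate = np.zeros(n_out)
    gate[action] = 1.0                   # attention tags only the responsible unit
    # three-factor update: presynaptic activity x attentional gate x dopamine
    W += lr * dopamine * np.outer(gate, x)
    return reward, W

x = np.array([1.0, 0, 0, 0.5, 0])        # a fixed stimulus
rewards = []
for _ in range(100):
    r, W = step(x, target_action=2, W=W)
    rewards.append(r)
```

Wrong choices get weakened (negative dopamine) and the correct choice, once found, gets strengthened, so the network settles on the rewarded action without any unit ever seeing another unit's weights.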
In this video I react to a discussion from the Lex Fridman podcast with legendary chip designer Jim Keller (ex-Tesla), who shares his thoughts on computer vision, neural networks, Tesla's Autopilot and Full Self-Driving software (and hardware), autonomous vehicles, deep learning and Tesla Dojo (Tesla's neural-network training system).
A small European country is leading the world in establishing an “e-government” for its citizens.

Estonia's fully online e-government system has been revolutionary for the country's citizens, making tasks like voting, filing taxes, and renewing a driver’s license quick and convenient. In operation since 2001, “e-Estonia” is now a well-oiled digital machine. Estonia was the first country to hold a nationwide election online, and ministers dictate decisions via an e-Cabinet. Estonia was also the first country to declare internet access a human right. 99% of public services are available digitally 24/7, excluding only marriage, divorce, and real-estate transactions.
The concepts underpinning vector databases are decades old, but only relatively recently have they become the underlying “secret weapon” of the largest webscale companies that provide services like search and near real-time recommendations.

Like all good clandestine competitive tools, the vector databases that support these large companies are all purpose-built in-house, optimized for the types of similarity search operations native to their business (content, physical products, etc.).

These custom-tailored vector databases are the “unsung hero of big machine learning,” says Edo Liberty, who built tools like this at Yahoo Research during its scalable machine learning platform journey. He carried some of this over to AWS, where he ran Amazon AI labs and helped cobble together standards like AWS Sagemaker, all the while learning how vector databases could integrate with other platforms and connect with the cloud.

“Vector databases are a core piece of infrastructure that fuels every big machine learning deployment in industry. There was never a way to do this directly; everyone just had to build their own in-house,” he tells The Next Platform. The funny thing is, he was working on high-dimensional geometry during his PhD days; the AI/ML renaissance just happened to intersect perfectly with exactly that type of work.

“In ML, suddenly everything was being represented as these high-dimensional vectors, which quickly became a huge source of data. So if you want to search, rank or give recommendations, the object in your actual database wasn’t a document or an image; it was this mathematical representation from the machine learning model.” In short, this quickly became important for a lot of companies.
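The core operation such a database serves can be sketched in a few lines: nearest-neighbor search over embedding vectors by similarity. This brute-force cosine-similarity version is only an illustration with random made-up data; production systems use approximate indexes (e.g., graph- or quantization-based) to scale, but the query pattern is the same.

```python
import numpy as np

rng = np.random.default_rng(3)

# a toy "database" of 1,000 unit-length embedding vectors
n_items, dim = 1000, 64
embeddings = rng.normal(size=(n_items, dim))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def top_k(query, k=5):
    """Return indices of the k items most similar to the query vector."""
    q = query / np.linalg.norm(query)
    sims = embeddings @ q                 # cosine similarity to every item
    idx = np.argpartition(-sims, k)[:k]   # pick the k best without a full sort
    return idx[np.argsort(-sims[idx])]    # then order those k by similarity

# a query built from a noisy copy of item 42 should rank item 42 first
query = embeddings[42] + 0.05 * rng.normal(size=dim)
result = top_k(query)
```

In a real deployment the rows would be outputs of an ML model (document, image or product embeddings) rather than random vectors, which is exactly the "mathematical representation" Liberty describes searching and ranking over.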
The Senate filibuster is one of the biggest obstacles to laws against voter suppression, a higher minimum wage and immigration reform. What is this loophole, and how does it affect governing today?
Maths nerds, get ready: an AI is about to write its own proofs
We'll see the first truly creative proof of a mathematical theorem written by an artificial intelligence – and soon

It might come as a surprise to some people that this prediction hasn’t already come to pass. Given that mathematics is a subject of logic and precision, it would seem to be perfect territory for a computer.

However, in 2021, we will see the first truly creative proof of a mathematical theorem by an artificial intelligence (AI). As a mathematician, I feel excitement and anxiety about this in equal measure: excitement for the new insights that AI might give the mathematical community; anxiety that we human mathematicians might soon become obsolete. But part of that anxiety is based on a misconception about what a mathematician does.

More recently, techniques of machine learning have been used to learn from a database of successful proofs and generate more proofs. But although the proofs are new, they do not pass the test of exciting the mathematical mind. It’s the same for powerful algorithms that can generate convincing short-form text but are a long way from writing a novel.

But in 2021 I think we will see – or at least be close to – an algorithm with the ability to write its first mathematical story. Storytelling through the written word is based on millions of years of human evolution, and it takes a human many years to reach the maturity to write a novel. But mathematics is a much younger evolutionary development. A person immersed in the mathematical world can reach maturity quite quickly, which is why one sees mathematical breakthroughs made by young minds.

This is why I think it won’t take long for an AI to understand the quality of the proofs we love and celebrate, before it too is writing proofs.
Perhaps, given its internal architecture, these may be mathematical theorems about networks – a subject that deserves its place on the shelves of the mathematical libraries we humans have been filling for centuries.
What is love and what defines art? Humans have theorized, debated, and argued over these questions for centuries. As researchers come closer and closer to boiling these concepts down to a science, A.I. projects come closer to serving as alternatives for romantic companions and as artists in their own right. The Age of A.I. is an 8-part documentary series hosted by Robert Downey Jr. covering the ways artificial intelligence, machine learning and neural networks will change the world.

0:00 Introduction
0:50 The Model Companion
11:02 Can A.I. Make Real Art?
23:05 The Autonomous Supercar
36:41 The Hard Problem