Pages: 1 ... 3 4 [5] 6 7 ... 65

How close are we from building a virtual universe?

  • 1292 Replies
  • 343306 Views
  • 5 Tags


Offline hamdani yusuf (OP)

  • Naked Science Forum GOD!
  • *******
  • 11790
  • Activity:
    89%
  • Thanked: 285 times
Re: How close are we from building a virtual universe?
« Reply #80 on: 05/02/2021 06:40:03 »
This is why we need the ability to distinguish objective reality from alternative realities.
Logged
Unexpected results come from false assumptions.
 



Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #81 on: 09/02/2021 12:33:13 »
To simulate the universe, it is necessary to simulate consciousness as well, and we need to understand it first.

A new theory of brain organization takes aim at the mystery of consciousness

https://neurosciencenews.com/brain-organization-consciousness-15132/
Quote
Consciousness is one of the brain’s most enigmatic mysteries. A new theory, inspired by thermodynamics, takes a high-level perspective of how neural networks in the brain transiently organize to give rise to memories, thought and consciousness.

The key to awareness is the ebb and flow of energy: when neurons functionally tag together to support information processing, their activity patterns synchronize like ocean waves. This process is inherently guided by thermodynamic principles, which — like an invisible hand — promote neural connections that favor conscious awareness. Disruptions in this process break down communication between neural networks, giving rise to neurological disorders such as epilepsy, autism or schizophrenia.
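The "synchronize like ocean waves" picture can be played with in code. Here is a toy Kuramoto model of coupled oscillators, my own analogy for illustration rather than the model from the article; the population count, coupling strength, and frequencies are all made-up values.

```python
# Toy Kuramoto model: coupled oscillators locking into synchrony.
# Illustrative analogy only, not the model described in the article.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                         # oscillators ("neural populations")
K = 6.0                                        # coupling strength (rad/s), made up
omega = 2 * np.pi * rng.normal(10.0, 0.1, N)   # natural frequencies near 10 Hz
theta = rng.uniform(0, 2 * np.pi, N)           # random initial phases
dt = 0.001

def synchrony(theta):
    """Kuramoto order parameter: 0 = incoherent, 1 = fully phase-locked."""
    return abs(np.exp(1j * theta).mean())

r_start = synchrony(theta)
for _ in range(5000):                          # 5 s of simulated time
    # Each oscillator is pulled toward the phases of all the others.
    coupling = (K / N) * np.sin(theta[:, None] - theta[None, :]).sum(axis=0)
    theta = theta + dt * (omega + coupling)
r_end = synchrony(theta)
print(f"synchrony: {r_start:.2f} -> {r_end:.2f}")
```

With coupling above the critical strength, the order parameter climbs from near zero toward one, which is the "wave-like" collective pattern in miniature.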
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #82 on: 11/02/2021 10:59:28 »
https://www.quantamagazine.org/brains-background-noise-may-hold-clues-to-persistent-mysteries-20210208/

Quote

Brain’s ‘Background Noise’ May Hold Clues to Persistent Mysteries
By Elizabeth Landau, February 8, 2021

By digging out signals hidden within the brain’s electrical chatter, scientists are getting new insights into sleep, aging and more.
At a sleep research symposium in January 2020, Janna Lendner presented findings that hint at a way to look at people’s brain activity for signs of the boundary between wakefulness and unconsciousness. For patients who are comatose or under anesthesia, it can be all-important that physicians make that distinction correctly. Doing so is trickier than it might sound, however, because when someone is in the dreaming state of rapid-eye movement (REM) sleep, their brain produces the same familiar, smoothly oscillating brain waves as when they are awake.

Lendner argued, though, that the answer isn’t in the regular brain waves, but rather in an aspect of neural activity that scientists might normally ignore: the erratic background noise.

Some researchers seemed incredulous. “They said, ‘So, you’re telling me that there’s, like, information in the noise?’” said Lendner, an anesthesiology resident at the University Medical Center in Tübingen, Germany, who recently completed a postdoc at the University of California, Berkeley. “I said, ‘Yes. Someone’s noise is another one’s signal.’”
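The "noise" here refers to the aperiodic, roughly 1/f part of the power spectrum. A minimal sketch of how such a slope can be measured (my simplification on synthetic data, not Lendner's actual analysis pipeline):

```python
# Estimate the 1/f exponent of "background noise" by fitting a straight
# line to the power spectrum in log-log coordinates. Synthetic example,
# not the published method.
import numpy as np

rng = np.random.default_rng(1)
fs, n = 500, 10 * 500                     # 10 s of a synthetic signal at 500 Hz

# Build a toy 1/f signal by shaping white noise in the frequency domain.
freqs = np.fft.rfftfreq(n, 1 / fs)
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[1:] /= freqs[1:] ** 0.5          # amplitude ~ f^-0.5, so power ~ f^-1
signal = np.fft.irfft(spectrum, n)

# Periodogram, then a linear fit of log-power vs log-frequency over 1-45 Hz.
psd = np.abs(np.fft.rfft(signal)) ** 2
band = (freqs >= 1) & (freqs <= 45)
slope, intercept = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
print(f"estimated 1/f exponent: {slope:.2f}")
```

The fitted slope recovers the exponent we built in (about -1); in the sleep studies it is this kind of aperiodic exponent, not the oscillations, that tracks arousal.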
« Last Edit: 11/02/2021 11:01:57 by hamdani yusuf »
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #83 on: 15/02/2021 12:53:10 »
Mind Reading For Brain-To-Text Communication!
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #84 on: 25/02/2021 05:08:57 »
Artificial Neural Nets Finally Yield Clues to How Brains Learn
https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Quote
The learning algorithm that enables the runaway success of deep neural networks doesn’t work in biological brains, but researchers are finding alternatives that could.

Quote
Today, deep nets rule AI in part because of an algorithm called backpropagation, or backprop. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.

But real brains are highly unlikely to be relying on the same algorithm. It’s not just that “brains are able to generalize and learn better and faster than the state-of-the-art AI systems,” said Yoshua Bengio, a computer scientist at the University of Montreal, the scientific director of the Quebec Artificial Intelligence Institute and one of the organizers of the 2007 workshop. For a variety of reasons, backpropagation isn’t compatible with the brain’s anatomy and physiology, particularly in the cortex.

Quote
However, it was obvious even in the 1960s that solving more complicated problems required one or more “hidden” layers of neurons sandwiched between the input and output layers. No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.

The algorithm works in two phases. In the “forward” phase, when the network is given an input, it infers an output, which may be erroneous. The second “backward” phase updates the synaptic weights, bringing the output more in line with a target value.
To understand this process, think of a “loss function” that describes the difference between the inferred and desired outputs as a landscape of hills and valleys. When a network makes an inference with a given set of synaptic weights, it ends up at some location on the loss landscape. To learn, it needs to move down the slope, or gradient, toward some valley, where the loss is minimized to the extent possible. Backpropagation is a method for updating the synaptic weights to descend that gradient.

In essence, the algorithm’s backward phase calculates how much each neuron’s synaptic weights contribute to the error and then updates those weights to improve the network’s performance. This calculation proceeds sequentially backward from the output layer to the input layer, hence the name backpropagation. Do this over and over for sets of inputs and desired outputs, and you’ll eventually arrive at an acceptable set of weights for the entire neural network.
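The two-phase procedure described above fits in a few lines. Here is a from-scratch toy network trained on XOR, the classic problem that needs a hidden layer; this is my own sketch, and the layer sizes, learning rate, and iteration count are arbitrary choices.

```python
# Minimal two-phase backpropagation on XOR. Toy sketch of the algorithm
# described in the article, not code from it.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])                 # XOR targets

W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)    # hidden layer
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)    # output layer
lr = 0.5
losses = []

for _ in range(5000):
    # Forward phase: the network infers an output, which may be erroneous.
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))             # sigmoid output
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward phase: errors flow output -> input, and each weight moves
    # down the loss gradient (the "descend the slope" step in the text).
    d_out = out - y                                    # sigmoid + cross-entropy gradient
    d_h = (d_out @ W2.T) * (1 - h ** 2)                # chain rule through tanh
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

accuracy = float(np.mean((out > 0.5) == y))
print(f"final loss {losses[-1]:.3f}, accuracy {accuracy:.2f}")
```

Note how the backward phase needs every synaptic weight (`W2.T` appears in the update for `W1`): that is exactly the "weight transport" that the neuroscientists quoted below consider biologically implausible.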
Quote
Impossible for the Brain
The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains. The most notable naysayer was Francis Crick, the Nobel Prize-winning co-discoverer of the structure of DNA who later became a neuroscientist. In 1989 Crick wrote, “As far as the learning process is concerned, it is unlikely that the brain actually uses back propagation.”

Backprop is considered biologically implausible for several major reasons. The first is that while computers can easily implement the algorithm in two phases, doing so for biological neural networks is not trivial. The second is what computational neuroscientists call the weight transport problem: The backprop algorithm copies or “transports” information about all the synaptic weights involved in an inference and updates those weights for more accuracy. But in a biological network, neurons see only the outputs of other neurons, not the synaptic weights or internal processes that shape that output. From a neuron’s point of view, “it’s OK to know your own synaptic weights,” said Yamins. “What’s not okay is for you to know some other neuron’s set of synaptic weights.”

 



Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #85 on: 25/02/2021 05:28:05 »
Quote from: hamdani yusuf on 25/02/2021 05:08:57
Artificial Neural Nets Finally Yield Clues to How Brains Learn
https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/
Quote
Predicting Perceptions
The constraint that neurons can learn only by reacting to their local environment also finds expression in new theories of how the brain perceives. Beren Millidge, a doctoral student at the University of Edinburgh and a visiting fellow at the University of Sussex, and his colleagues have been reconciling this new view of perception — called predictive coding — with the requirements of backpropagation. “Predictive coding, if it’s set up in a certain way, will give you a biologically plausible learning rule,” said Millidge.

Predictive coding posits that the brain is constantly making predictions about the causes of sensory inputs. The process involves hierarchical layers of neural processing. To produce a certain output, each layer has to predict the neural activity of the layer below. If the highest layer expects to see a face, it predicts the activity of the layer below that can justify this perception. The layer below makes similar predictions about what to expect from the one beneath it, and so on. The lowest layer makes predictions about actual sensory input — say, the photons falling on the retina. In this way, predictions flow from the higher layers down to the lower layers.
« Last Edit: 25/02/2021 09:05:05 by hamdani yusuf »
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #86 on: 25/02/2021 09:05:16 »
Quote
Pyramidal Neurons
Some scientists have taken on the nitty-gritty task of building backprop-like models based on the known properties of individual neurons. Standard neurons have dendrites that collect information from the axons of other neurons. The dendrites transmit signals to the neuron’s cell body, where the signals are integrated. That may or may not result in a spike, or action potential, going out on the neuron’s axon to the dendrites of post-synaptic neurons.

But not all neurons have exactly this structure. In particular, pyramidal neurons — the most abundant type of neuron in the cortex — are distinctly different. Pyramidal neurons have a treelike structure with two distinct sets of dendrites. The trunk reaches up and branches into what are called apical dendrites. The root reaches down and branches into basal dendrites.

Quote
Models developed independently by Kording in 2001, and more recently by Blake Richards of McGill University and the Quebec Artificial Intelligence Institute and his colleagues, have shown that pyramidal neurons could form the basic units of a deep learning network by doing both forward and backward computations simultaneously. The key is in the separation of the signals entering the neuron for forward-going inference and for backward-flowing errors, which could be handled in the model by the basal and apical dendrites, respectively. Information for both signals can be encoded in the spikes of electrical activity that the neuron sends down its axon as an output.

In the latest work from Richards’ team, “we’ve gotten to the point where we can show that, using fairly realistic simulations of neurons, you can train networks of pyramidal neurons to do various tasks,” said Richards. “And then using slightly more abstract versions of these models, we can get networks of pyramidal neurons to learn the sort of difficult tasks that people do in machine learning.”
There is so much information densely packed into a single article. I found it hard to compress it any further.
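The two-compartment idea is easy to cartoon. In this hypothetical sketch (a drastic simplification of the Kording/Richards models) a single unit keeps the forward-going basal signal separate from a backward-flowing apical teaching signal:

```python
# Cartoon of a pyramidal neuron with segregated dendrites: basal dendrites
# carry the forward signal, apical dendrites carry the feedback signal, and
# learning uses only locally available quantities. Hypothetical sketch.
import numpy as np

class PyramidalUnit:
    def __init__(self, n_in, n_top, rng):
        self.w_basal = rng.normal(size=n_in) * 0.1    # feedforward weights
        self.w_apical = rng.normal(size=n_top) * 0.1  # feedback weights

    def forward(self, x):
        """Basal compartment: integrate bottom-up input into a firing rate."""
        self.x = x
        self.rate = np.tanh(self.w_basal @ x)
        return self.rate

    def feedback(self, top_down, lr=0.1):
        """Apical compartment: a top-down signal gates a local update of the
        basal weights (apical drive * local gain * local input)."""
        apical = self.w_apical @ top_down
        self.w_basal += lr * apical * (1 - self.rate ** 2) * self.x

rng = np.random.default_rng(0)
unit = PyramidalUnit(n_in=3, n_top=2, rng=rng)
rate = unit.forward(np.array([1.0, -0.5, 0.2]))   # forward-going inference
w_before = unit.w_basal.copy()
unit.feedback(np.array([0.3, -0.1]))              # backward-flowing signal
```

The point of the separation is that both computations can happen in the same cell without the backward signal contaminating the forward one.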
« Last Edit: 02/03/2021 08:06:37 by hamdani yusuf »
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #87 on: 25/02/2021 09:18:48 »
Quote
The Role of Attention
An implicit requirement for a deep net that uses backprop is the presence of a “teacher”: something that can calculate the error made by a network of neurons. But “there is no teacher in the brain that tells every neuron in the motor cortex, ‘You should be switched on and you should be switched off,’” said Pieter Roelfsema of the Netherlands Institute for Neuroscience in Amsterdam.
Quote
Roelfsema thinks the brain’s solution to the problem is in the process of attention. In the late 1990s, he and his colleagues showed that when monkeys fix their gaze on an object, neurons that represent that object in the cortex become more active. The monkey’s act of focusing its attention produces a feedback signal for the responsible neurons. “It is a highly selective feedback signal,” said Roelfsema. “It’s not an error signal. It is just saying to all those neurons: You’re going to be held responsible [for an action].”

Roelfsema’s insight was that this feedback signal could enable backprop-like learning when combined with processes revealed in certain other neuroscientific findings. For example, Wolfram Schultz of the University of Cambridge and others have shown that when animals perform an action that yields better results than expected, the brain’s dopamine system is activated. “It floods the whole brain with neural modulators,” said Roelfsema. The dopamine levels act like a global reinforcement signal.

In theory, the attentional feedback signal could prime only those neurons responsible for an action to respond to the global reinforcement signal by updating their synaptic weights, said Roelfsema. He and his colleagues have used this idea to build a deep neural network and study its mathematical properties. “It turns out you get error backpropagation. You get basically the same equation,” he said. “But now it became biologically plausible.”
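Roelfsema's ingredients amount to a three-factor learning rule. A toy version (my reading of the idea, not his actual model): the update combines presynaptic activity, an attentional gate that tags the "responsible" neurons, and a single globally broadcast reward-prediction error.

```python
# Toy three-factor rule: attention selects responsible neurons, a global
# dopamine-like reward-prediction error scales the update. Illustrative
# sketch, not Roelfsema's published model.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6)) * 0.1            # synapses: 6 pre -> 4 post neurons

pre = rng.uniform(size=6)                    # presynaptic activity
post = np.tanh(W @ pre)                      # postsynaptic activity
attention = np.array([1.0, 0.0, 0.0, 1.0])   # feedback tags neurons 0 and 3
rpe = 0.8                                    # global reward-prediction error

W_before = W.copy()
lr = 0.1
# Delta-W = lr * global RPE * attention gate * post * pre: every factor is
# either local to the synapse or broadcast brain-wide, so no weight transport.
W += lr * rpe * (attention * post)[:, None] * pre[None, :]
```

Only the attended rows change; the untagged neurons ignore the global reinforcement signal, which is how the rule avoids needing a per-neuron "teacher".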
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #88 on: 02/03/2021 08:05:07 »
Imagine how much you could gain just from the stock market if you had clear insight into what will happen in the future.
This video is from 2010.
 



Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #89 on: 04/03/2021 11:49:00 »
Taming Transformers for High-Resolution Image Synthesis

It seems like we are getting better at building information processors comparable to human brains.
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #90 on: 04/03/2021 12:25:39 »
In the not-so-distant future, most information available online will be generated by AI.

That prediction will force us to build a virtual universe intended to accurately represent objective reality. Otherwise, there will be no way to distinguish fact from fiction, especially for things that are not already widely known.
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #91 on: 07/03/2021 07:31:14 »
Has Google Search changed much since 1998?
This video shows how Google has evolved, getting closer to building a virtual universe.
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #92 on: 12/03/2021 08:58:40 »
Tesla's Autopilot, Full Self Driving, Neural Networks & Dojo
Quote
In this video I react to a discussion from the Lex Fridman podcast with legendary chip designer Jim Keller (ex-Tesla) sharing their thoughts on computer vision, neural networks, Tesla's autopilot and full self driving software (and hardware), autonomous vehicles, deep learning and Tesla Dojo (Tesla's dojo is a training system).
 



Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #93 on: 15/03/2021 12:08:20 »
More reason to replace lawmakers with AI.
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #94 on: 16/03/2021 02:44:55 »
The Most Advanced Digital Government in the World
Quote
A small European country is leading the world in establishing an “e-government” for its citizens.

Estonia's fully online, e-government system has been revolutionary for the country's citizens, making tasks like voting, filing taxes, and renewing a driver’s license quick and convenient.

In operation since 2001, “e-Estonia” is now a well-oiled, digital machine. Estonia was the first country to hold a nationwide election online, and ministers dictate decisions via an e-Cabinet.

Estonia was also the first country to declare internet access a human right. 99% of public services are available digitally 24/7, excluding only marriage, divorce, and real-estate transactions.
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #95 on: 18/03/2021 22:43:41 »
https://www.nextplatform.com/2021/03/11/its-time-to-start-paying-attention-to-vector-databases/amp/

Quote
The concepts underpinning vector databases are decades old, but it is only relatively recently that these have become the underlying “secret weapon” of the largest webscale companies that provide services like search and near real-time recommendations.

Like all good clandestine competitive tools, the vector databases that support these large companies are all purpose-built in-house, optimized for the types of similarity search operations native to their business (content, physical products, etc.).

These custom-tailored vector databases are the “unsung hero of big machine learning,” says Edo Liberty, who built tools like this at Yahoo Research during its scalable machine learning platform journey. He carried some of this over to AWS, where he ran Amazon AI labs and helped cobble together standards like AWS Sagemaker, all the while learning how vector databases could integrate with other platforms and connect with the cloud.

“Vector databases are a core piece of infrastructure that fuels every big machine learning deployment in industry. There was never a way to do this directly, everyone just had to build their own in-house,” he tells The Next Platform. The funny thing is, he was working on high dimensional geometry during his PhD days; the AI/ML renaissance just happened to perfectly intersect with exactly that type of work.

“In ML, suddenly everything was being represented as these high-dimensional vectors, that quickly became a huge source of data, so if you want to search, rank or give recommendations, the object in your actual database wasn’t a document or an image—it was this mathematical representation of the machine learning model.” In short, this quickly became important for a lot of companies.
I think the virtual universe would be built on a vector database at its core. This assessment is based on my experience in system migration projects, which pushed me to reverse engineer a system database and build a tool that accelerated the process by automating some tasks.
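The core operation such a database serves is similarity search over embedding vectors. A brute-force sketch of the idea (illustrative only; the sizes are made up, and real systems use approximate indexes such as HNSW or IVF rather than a full scan):

```python
# Minimal vector similarity search: store embeddings, answer a query by
# cosine-similarity nearest neighbours. Brute-force illustration only.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))   # e.g. document or product vectors
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def search(query, k=5):
    """Return indices of the k stored vectors most similar to the query."""
    q = query / np.linalg.norm(query)
    scores = embeddings @ q                # cosine similarity (unit vectors)
    return np.argsort(-scores)[:k]

# A query that is a slightly perturbed copy of item 42 should retrieve it.
query = embeddings[42] + 0.05 * rng.normal(size=64)
top = search(query)
print(top)
```

The "object in your database" here really is the vector, as the article says: ranking and recommendation reduce to this one nearest-neighbour primitive.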
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #96 on: 19/03/2021 09:58:17 »
Quote
The Senate filibuster is one of the biggest things standing in the way of anti-voter suppression laws, raising the minimum wage and immigration reform. What is this loophole, and how does it affect governing today?
The lawmaking process obviously needs to become more efficient.
 



Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #97 on: 19/03/2021 21:36:28 »
ISO standards basically say that you've got to document everything and track it: write what you do, do what you write.
What you write is a virtual version of what you do. In the past it was on paper; now it is in computer data storage.
This virtual version of the real world is supposed to be easier to process, aggregate, simulate, and extract from, so it can produce the information required for decision making. To be useful, it must have adequate accuracy and precision.
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #98 on: 23/03/2021 02:58:52 »
https://www.wired.co.uk/article/marcus-du-sautoy-maths-proofs
Quote
Maths nerds, get ready: an AI is about to write its own proofs
We'll see the first truly creative proof of a mathematical theorem written by an artificial intelligence – and soon

It might come as a surprise to some people that this prediction hasn’t already come to pass. Given that mathematics is a subject of logic and precision, it would seem to be perfect territory for a computer.

However, in 2021, we will see the first truly creative proof of a mathematical theorem by an artificial intelligence (AI). As a mathematician, this fills me with excitement and anxiety in equal measure. Excitement for the new insights that AI might give the mathematical community; anxiety that we human mathematicians might soon become obsolete. But part of this belief is based on a misconception about what a mathematician does.

More recently, techniques of machine learning have been used to gain an understanding from a database of successful proofs to generate more proofs. But although the proofs are new, they do not pass the test of exciting the mathematical mind. It’s the same for powerful algorithms, which can generate convincing short-form text, but are a long way from writing a novel.

But in 2021 I think we will see – or at least be close to – an algorithm with the ability to write its first mathematical story. Storytelling through the written word is based on millions of years of human evolution, and it takes a human many years to reach the maturity to write a novel. But mathematics is a much younger evolutionary development. A person immersed in the mathematical world can reach maturity quite quickly, which is why one sees mathematical breakthroughs made by young minds.


This is why I think that it won’t take long for an AI to understand the quality of the proofs we love and celebrate, before it too will be writing proofs. Perhaps, given its internal architecture, these may be mathematical theorems about networks – a subject that deserves its place on the shelves of the mathematical libraries we humans have been filling for centuries.
 

Offline hamdani yusuf (OP)

Re: How close are we from building a virtual universe?
« Reply #99 on: 24/03/2021 09:13:53 »
Quote
What is love and what defines art? Humans have theorized, debated, and argued over these questions for centuries. As researchers come closer and closer to boiling these concepts down to a science, A.I. projects come closer to becoming alternatives for romantic companions and artists in their own right.

The Age of A.I. is an 8-part documentary series hosted by Robert Downey Jr. covering the ways Artificial Intelligence, Machine Learning and Neural Networks will change the world.

0:00​ Introduction
0:50​ The Model Companion
11:02​ Can A.I. Make Real Art?
23:05​ The Autonomous Supercar
36:41​ The Hard Problem
 



Tags: virtual universe / amazing technologies / singularity / future science / connection
 
