The Naked Scientists
  1. Naked Science Forum
  2. On the Lighter Side
  3. New Theories
  4. How close are we from building a virtual universe?

How close are we from building a virtual universe?

  • 1294 Replies
  • 348664 Views
  • 5 Tags


Offline hamdani yusuf (OP)

  • Naked Science Forum GOD!
  • *******
  • 11794
  • Activity:
    91%
  • Thanked: 285 times
Re: How close are we from building a virtual universe?
« Reply #300 on: 27/09/2021 03:21:42 »
Quote from: Halc on 27/09/2021 02:26:56
Reality never feels like a dream.
Some dreams can feel like reality.
In some conditions, reality can feel like a dream, such as when we're under the influence of psychedelics. Sleep deprivation can also do that.
Logged
Unexpected results come from false assumptions.
 



« Reply #301 on: 27/09/2021 03:26:27 »
Quote from: Halc on 27/09/2021 02:26:56
There’s no proof that the universe wasn’t created last Tuesday, or 3 seconds ago for that matter.
We can rely on Occam's razor for practical matters. What do we gain by believing that the universe was created last Tuesday?
« Reply #302 on: 27/09/2021 08:19:33 »
Quote from: Halc on 27/09/2021 04:38:01
Question is, what do you learn by attempting to falsify that the universe was created last Tuesday? If you shorten it to 'just now', it boils down to a Boltzmann brain. Just as hard to falsify that one.
Not much. It's impractical and wastes resources with no apparent benefit, so it would be better to just ignore it.
« Reply #303 on: 07/10/2021 07:17:00 »
https://towardsdatascience.com/gpt-4-will-have-100-trillion-parameters-500x-the-size-of-gpt-3-582b98d82253?gi=98c60e44681b
GPT-4 Will Have 100 Trillion Parameters — 500x the Size of GPT-3
Are there any limits to large neural networks?

Quote
OpenAI was born to tackle the challenge of achieving artificial general intelligence (AGI) — an AI capable of doing anything a human can do.
Such a technology would change the world as we know it. It could benefit us all if used adequately but could become the most devastating weapon in the wrong hands. That’s why OpenAI took over this quest. To ensure it’d benefit everyone evenly: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole.”
However, the magnitude of this problem makes it arguably the single biggest scientific enterprise humanity has put its hands upon. Despite all the advances in computer science and artificial intelligence, no one knows how to solve it or when it’ll happen.
Some argue deep learning isn’t enough to achieve AGI. Stuart Russell, a computer science professor at Berkeley and AI pioneer, argues that “focusing on raw computing power misses the point entirely […] We don’t know how to make a machine really intelligent — even if it were the size of the universe.”
OpenAI, in contrast, is confident that large neural networks fed on large datasets and trained on huge computers are the best way towards AGI. Greg Brockman, OpenAI’s CTO, said in an interview for the Financial Times: “We think the most benefits will go to whoever has the biggest computer.”
And that’s what they did. They started training larger and larger models to awaken the hidden power within deep learning. The first non-subtle steps in this direction were the release of GPT and GPT-2. These large language models would set the groundwork for the star of the show: GPT-3. A language model 100 times larger than GPT-2, at 175 billion parameters.
GPT-3 was the largest neural network ever created at the time — and remains the largest dense neural net. Its language expertise and its innumerable capabilities were a surprise for most. And although some experts remained skeptical, large language models already felt strangely human. It was a huge leap forward for OpenAI researchers to reinforce their beliefs and convince us that AGI is a problem for deep learning.
Quote
Unlike GPT-3, it probably won’t be just a language model. Ilya Sutskever, the Chief Scientist at OpenAI, hinted about this when he wrote about multimodality in December 2020:
“In 2021, language models will start to become aware of the visual world. Text alone can express a great deal of information about the world, but it is incomplete, because we live in a visual world as well.”
We already saw some of this with DALL·E, a smaller version of GPT-3 (12 billion parameters), trained specifically on text-image pairs. OpenAI said then that “manipulating visual concepts through language is now within reach.”
« Reply #304 on: 07/10/2021 13:12:05 »
https://psyche.co/ideas/the-brain-has-a-team-of-conductors-orchestrating-consciousness
Quote
This new framework points to a view of the brain as a fusion of the local and the global, arranged in a hierarchical manner. In this context, some researchers including Marsel Mesulam have suggested that the human brain is in fact hierarchically organised, a view that fits well with our orchestra metaphor. Yet, given the distributed nature of the brain hierarchy, there is unlikely to be just a single ‘conductor’. Instead, in 1988 the psychologist Bernard Baars proposed the concept of a ‘global workspace’, where information is integrated in a small group of brain regions (or ‘conductors’) before being broadcast to the whole brain.
Quote
This processing becomes ever more complex; higher up in the hierarchy, brain regions integrate all the small segments that make up an object, such as a human face. In his book The Man Who Mistook his Wife for a Hat (1985), Oliver Sacks wrote about what happens if you have a stroke or lesion to this brain area: namely, you’re no longer able to recognise faces.

Higher still in the hierarchical processing of environmental information there’s more integration, fusing different ongoing sensory modalities (such as sight and sound) with previous memories. This processing is further influenced by reward and expectations and by any surprising deviations from previous experiences. In other words, at the highest level of the hierarchy, the ‘global workspace’ must somehow integrate information from perceptual, long-term memory and evaluative and attentional systems to orchestrate goal-directed behaviour.

The information flow within this hierarchy is highly dynamic; not just bottom-up but also top-down. In fact, recurrent interactions shape the functional processing underlying cognition and behaviour. Much of this information flow follows the underlying anatomy in the structural connections between brain regions but, equally, the information flow is largely unconstrained by this anatomical wiring.
« Reply #305 on: 10/10/2021 07:31:52 »
https://jrodthoughts.medium.com/what-is-meta-reward-learning-4badbf2c95a8
Quote
Reinforcement learning has been at the center of some of the biggest artificial intelligence (AI) breakthroughs of the last five years. In mastering games like Go, Quake III or StarCraft, reinforcement learning models demonstrated that they can surpass human performance and create unique long-term strategies never explored before. Part of the magic of reinforcement learning relies on regularly rewarding the agents for actions that lead to a better outcome. That model works great in dense reward environments like games, in which almost every action corresponds to specific feedback, but what happens when that feedback is not available? In reinforcement learning this is known as a sparse reward environment and, unfortunately, it's representative of most real-world scenarios. A couple of years ago, researchers from Google published a paper proposing a technique for achieving generalization with reinforcement learning that operates in sparse reward environments.

Quote
The overall challenge of reinforcement learning in sparse reward environment relies on achieving good generalization with limited feedback. More specifically, the process of achieving robust generalization in sparse reward environments can be summarized in two main challenges:
1) The Exploration — Exploitation Balance: An agent that operates using sparse rewards needs to balance when to take actions that lead to an immediate outcome versus when to explore the environment further in order to gather better intelligence. The exploration-exploitation dilemma is the fundamental balance that guides reinforcement learning agents.
2) Processing Unspecified Rewards: The absence of rewards in an environment is as difficult to manage as the surfacing of unspecified rewards. In sparse reward scenarios, agents are not always trained on specific types of rewards. After receiving a new feedback signal, a reinforcement learning agent needs to assess whether this one constitutes an indication of success or failure which is not always trivial.
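The exploration-exploitation balance in point 1 is commonly handled with an epsilon-greedy rule. A minimal sketch in Python (the function name and values here are illustrative, not from the article):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon take a random action (explore);
    otherwise take the action with the highest estimated value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0 the agent always exploits its current estimates.
print(epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0))  # 1
```

Annealing epsilon downward over training is the usual way to shift from exploration early on to exploitation later.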
Quote
Introducing MeRL
Meta Rewards Learning (MeRL) is Google's proposed method for teaching reinforcement learning agents to generalize in environments with sparse rewards. The key contribution of MeRL is to effectively process unspecified rewards without hurting the agent's generalization performance. In our maze game example, an agent might accidentally arrive at a solution but, if it learns to perform spurious actions during training, it is likely to fail when provided with unseen instructions. To address this challenge, MeRL optimizes a more refined auxiliary reward function, which can differentiate between accidental and purposeful success based on features of action trajectories. The auxiliary reward is optimized by maximizing the trained agent's performance on a hold-out validation set via meta learning.

I'd like to share this great article here. It contains important information relevant to my other threads about universal morality and the terminal goal. I decided to post it here because it emphasizes the technical side.
« Last Edit: 10/10/2021 08:27:46 by hamdani yusuf »
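A very rough sketch of the MeRL idea, heavily simplified (the function names, trajectory features, and update rule are hypothetical stand-ins, not Google's implementation): the auxiliary reward is a parameterized function of trajectory features, and its parameters are tuned against hold-out validation performance rather than the training reward.

```python
import random

def auxiliary_reward(trajectory_features, weights):
    """Score a trajectory by a weighted sum of its features
    (e.g. path length, action repetition) -- the learned auxiliary reward
    that can separate accidental from purposeful success."""
    return sum(w * f for w, f in zip(weights, trajectory_features))

def meta_step(weights, validation_score, step=0.01):
    """One meta-learning step: perturb the auxiliary-reward weights and keep
    the change only if the agent trained under them scores better on a
    hold-out validation set (validation_score stands in for that inner loop)."""
    candidate = [w + random.uniform(-step, step) for w in weights]
    if validation_score(candidate) > validation_score(weights):
        return candidate
    return weights
```

The real method uses gradient-based meta-optimization rather than random perturbation; the point of the sketch is only the two-level structure of the training loop.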
« Reply #306 on: 17/10/2021 08:46:35 »
Quote
https://venturebeat.com/2021/10/12/deepmind-is-developing-one-algorithm-to-rule-them-all/

The birth of neural algorithmic reasoning
Charles Blundell and Petar Veličković both hold senior research positions at DeepMind. They share a background in classical computer science and a passion for applied innovation. When Veličković met Blundell at DeepMind, a line of research known as Neural Algorithmic Reasoning (NAR) was born, following the eponymous position paper recently published by the duo.

The key thesis is that algorithms possess fundamentally different qualities when compared to deep learning methods — something Blundell and Veličković elaborated upon in their introduction of NAR. This suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.

The article shows how close we are to building a virtual model of our own source of consciousness.

Quote
The ultimate goal is to build an observatory that can integrate data from all these projects into one grand, unified picture. Four years ago, with that in mind, researchers at the big-brain projects got together to create the International Brain Initiative, a loose organization with the principal task of helping neuroscientists to find ways to pool and analyse their data.
« Last Edit: 17/10/2021 12:54:37 by hamdani yusuf »
« Reply #307 on: 17/10/2021 12:39:18 »
https://www.nature.com/articles/d41586-021-02661-w
How the world’s biggest brain maps could transform neuroscience
Quote

Scientists around the world are working together to catalogue and map cells in the brain. What have these huge projects revealed about how it works?


Imagine looking at Earth from space and being able to listen in on what individuals are saying to each other. That’s about how challenging it is to understand how the brain works.

From the organ’s wrinkled surface, zoom in a million-fold and you’ll see a kaleidoscope of cells of different shapes and sizes, which branch off and reach out to each other. Zoom in a further 100,000 times and you’ll see the cells’ inner workings — the tiny structures in each one, the points of contact between them and the long-distance connections between brain areas.

Scientists have made maps such as these for the worm1 and fly2 brains, and for tiny parts of the mouse3 and human4 brains. But those charts are just the start. To truly understand how the brain works, neuroscientists also need to know how each of the roughly 1,000 types of cell thought to exist in the brain speak to each other in their different electrical dialects. With that kind of complete, finely contoured map, they could really begin to explain the networks that drive how we think and behave.
« Reply #308 on: 22/10/2021 13:37:17 »
Quote
Last year, the Max Planck Institute for Intelligent Systems organized the Real Robot Challenge, a competition that challenged academic labs to come up with solutions to the problem of repositioning and reorienting a cube using a low-cost robotic hand. The teams participating in the challenge were asked to solve a series of object manipulation problems with varying difficulty levels.
https://techxplore.com/news/2021-10-robotic-dexterous-skills-simulations-real.html

Quote
"Our objective was to use learning-based methods to solve the problem introduced in last year's Real Robot Challenge in a low-cost manner," Animesh Garg, one of the researchers who carried out the study, told TechXplore. "We are particularly inspired by previous work on OpenAI's Dactyl system, which showed that it is possible to use model free Reinforcement Learning in combination with Domain Randomization to solve complex manipulation tasks."

Quote
"The process we followed consists of four main steps: setting up the environment in physics simulation, choosing the correct parameterization for a problem specification, learning a robust policy and deploying our approach on a real robot," Garg explained. "First, we created a simulation environment corresponding to the real-world scenario we were trying to solve."

 It shows that having a relevant, accurate, and precise virtual universe can help improve the efficiency of our efforts to achieve goals.
« Last Edit: 22/10/2021 14:11:42 by hamdani yusuf »
« Reply #309 on: 26/10/2021 14:55:34 »
An update of current progress.

Google's Gated Multi-Layer Perceptron Outperforms Transformers Using Fewer Parameters
https://www.infoq.com/news/2021/10/google-mlp-vision-language/
Quote
Researchers at Google Brain have announced Gated Multi-Layer Perceptron (gMLP), a deep-learning model that contains only basic multi-layer perceptrons. Using fewer parameters, gMLP outperforms Transformer models on natural-language processing (NLP) tasks and achieves comparable accuracy on computer vision (CV) tasks.

The model and experiments were described in a paper published on arXiv. To investigate the necessity of the Transformer's self-attention mechanism, the team designed gMLP using only basic MLP layers combined with gating, then compared its performance on vision and language tasks to previous Transformer implementations. On the ImageNet image classification task, gMLP achieves an accuracy of 81.6, comparable to Vision Transformers (ViT) at 81.8, while using fewer parameters and FLOPs. For NLP tasks, gMLP achieves a better pre-training perplexity compared with BERT, and a higher F1 score on the SQuAD benchmark: 85.4 compared to BERT's 81.8, while using fewer parameters.
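For intuition, the distinctive piece of gMLP is its Spatial Gating Unit, which replaces self-attention with a learned projection along the sequence dimension. A toy NumPy sketch (the shapes and the omission of layer norm are my simplifications, not the paper's exact formulation):

```python
import numpy as np

def spatial_gating_unit(x, W, b=1.0):
    """x: (seq_len, channels). Split channels in half; project one half
    across the sequence dimension (W: seq_len x seq_len) and use it to
    gate the other half elementwise. Layer norm is omitted for brevity."""
    u, v = np.split(x, 2, axis=-1)   # each (seq_len, channels // 2)
    v = W @ v + b                    # spatial projection along the sequence
    return u * v                     # elementwise gating

# With W = 0 and b = 1 the unit reduces to the identity on u,
# which is roughly how the paper initializes it for stable training.
x = np.ones((4, 8))
out = spatial_gating_unit(x, np.zeros((4, 4)))
```

The gating gives tokens a data-independent way to interact across positions, which is what lets the model drop self-attention.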
« Reply #310 on: 29/10/2021 07:39:48 »

Quote
Tesla has redefined how numbers are formatted for computers--especially for deep neural network training! In a recent white paper, Tesla proposed the CFloat format as a standard. What is CFloat? How are numbers stored in a computer? And what does all this have to do with bandwidth and memory and efficiency? Let's go into full nerd mode and find out!

Here's an example of a real-world application of information specifications: relevance, accuracy, and precision. To achieve efficiency, those parameters need to be balanced.
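NumPy has no CFloat type, but the same accuracy-versus-bandwidth trade-off can be seen by storing one value at two standard precisions:

```python
import numpy as np

# The same value at two precisions: fewer mantissa bits cost accuracy
# but halve the memory and bandwidth per number -- the trade-off that
# training-oriented formats like CFloat and bfloat16 are tuned for.
x32 = np.float32(1.1)   # 4 bytes, ~7 significant decimal digits
x16 = np.float16(1.1)   # 2 bytes, ~3 significant decimal digits
error32 = abs(float(x32) - 1.1)
error16 = abs(float(x16) - 1.1)
```

Here `error16` is orders of magnitude larger than `error32`, but every float16 number moves through memory twice as fast; configurable formats push that dial per layer of the network.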
« Reply #311 on: 01/11/2021 06:12:02 »
China - Surveillance state or way of the future?
Quote
China is building a huge digital surveillance system. The state collects massive amounts of data from willing citizens: the benefits are practical, and people who play by the rules are rewarded.

Critics call it "the most ambitious Orwellian project in human history." China's digital surveillance system involves massive amounts of data being gathered by the state. In the so-called "brain" of Shanghai, for example, authorities have an eye on everything. On huge screens, they can switch to any of the approximately one million cameras, to find out who’s falling asleep behind the wheel, or littering, or not following Coronavirus regulations. "We want people to feel good here, to feel that the city is very safe," says Sheng Dandan, who helped design the "brain." Surveys suggest that most Chinese citizens are inclined to see benefits as opposed to risks: if algorithms can identify every citizen by their face, speech and even the way they walk, those breaking the law or behaving badly will have no chance. It’s incredibly convenient: a smartphone can be used to accomplish just about any task, and playing by the rules leads to online discounts thanks to a social rating system.

That's what makes Big Data so attractive, and not just in China. But where does the required data come from? Who owns it, and who is allowed to use it? The choice facing the Western world is whether to engage with such technology at the expense of social values, or ignore it, allowing others around the world to set the rules.
We need to determine and prioritize which social values are the most important, and which are expendable. That requires identifying common terminal goals, and the universal terminal goal is the most common of them all.
« Reply #312 on: 07/11/2021 05:37:15 »
https://scitechdaily.com/surprisingly-smart-artificial-intelligence-sheds-light-on-how-the-brain-processes-language/
Quote
They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with measures of human behavioral measures such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.
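As a toy illustration of the next-word prediction task itself (nothing like the neural models in the study, just raw bigram counts):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count which word follows which -- the crudest possible
    next-word predictor; the language models in the study learn
    the same task with neural networks instead of counts."""
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Most frequent continuation observed in training."""
    return model[word].most_common(1)[0][0]

tokens = "the brain predicts the next word the brain reads".split()
model = train_bigram(tokens)
print(predict_next(model, "the"))  # "brain" (seen twice, "next" once)
```

Words the model predicts poorly are, roughly, the ones readers slow down on, which is the behavioral correlation the study measures.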
« Reply #313 on: 07/11/2021 10:30:33 »
http://email.mg.lesserwrong.com/c/eJw9jtsKgzAQRL8mvhk2G2P0IQ-F4n_k5qVVU5JY-_nVCoVhl1lmD-OU7rGteDEph9KDrWxpLNjScWhKaRpZSnRgmNAoeiQVzD4lH_cY1oHasBSjsjXq2rem6i3X2La84QZAAmus9EKLYlZjzq9E-I1gd2jfd3pi_pDjFrZMeLfFmfD7lUZx5sX5cQwdP9ObhjhczqTfRsYYBUAGRVSPLWXq9DrR8DyKDoue5pP-BRrdRE8

Quote
Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.

We propose a sample efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 190.4% mean human performance and 116.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience and outperforms the state SAC in some tasks on the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with such little data. EfficientZero's performance is also close to DQN's performance at 200 million frames while we consume 500 times less data.

EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner and it is available at this https URL. We hope it will accelerate the research of MCTS-based RL algorithms in the wider community.

This work is supported by the Ministry of Science and Technology of the People’s Republic of China, the 2030 Innovation Megaprojects “Program on New Generation Artificial Intelligence” (Grant No. 2021AAA0150000).
The last innovation humans need to make is an AI that learns new things more effectively and efficiently than we do. We are getting closer to that point.
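The "mean human performance" figures above are human-normalized scores averaged over games. A sketch of that metric with made-up per-game numbers (the formula is the standard Atari evaluation convention, not quoted from this paper):

```python
def human_normalized(agent, random_play, human):
    """Standard Atari normalization: 0.0 = random play, 1.0 = average human."""
    return (agent - random_play) / (human - random_play)

# Hypothetical raw scores for two games: (agent, random, human).
games = [(900.0, 100.0, 500.0), (30.0, 10.0, 110.0)]
scores = [human_normalized(a, r, h) for a, r, h in games]
mean_score = sum(scores) / len(scores)   # (2.0 + 0.2) / 2 = 1.1
```

A mean of 1.904 therefore means the agent averaged about 190% of human-level play across the 26 benchmark games, even though it may still lose to humans on some of them (which is why the median is reported separately).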
« Reply #314 on: 13/11/2021 22:04:31 »
Quote
https://pub.towardsai.net/openais-approach-to-solve-math-word-problems-b69ed6cc90de
OpenAI’s Approach to Solve Math Word Problems
A new research paper and dataset look to make progress in one of the toughest areas of deep learning.

Mathematical reasoning has long been considered one of the cornerstones of human cognition and one of the main bars to measure the “intelligence” of language models. Take the following problem:
“Anthony had 50 pencils. He gave 1/2 of his pencils to Brandon, and he gave 3/5 of the remaining pencils to Charlie. He kept the remaining pencils. How many pencils did Anthony keep?”
Yes, the solution is 10 pencils but that’s not the point 😉. Solving this problem does not only entail reasoning through the text but also orchestrating a sequence of steps to arrive at the solution. This dependency on language interpretability as well as the vulnerability to errors in the sequence of steps represents the two major challenges when building ML models that can solve math word problems. Recently, OpenAI published new research proposing an interesting method to tackle this type of problem.
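The quoted problem can be checked mechanically; exact fractions make the two-step reduction explicit:

```python
from fractions import Fraction

pencils = Fraction(50)
pencils -= pencils * Fraction(1, 2)   # half (25) to Brandon, 25 left
pencils -= pencils * Fraction(3, 5)   # 3/5 of the remainder (15) to Charlie
print(pencils)                         # 10, matching the stated answer
```

The hard part for a language model is not this arithmetic but producing the correct sequence of steps from the prose, where one early mistake derails everything after it.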
It's another breakthrough towards the emergence of AGI.
« Reply #315 on: 15/11/2021 13:29:01 »
Microsoft Metaverse vs Facebook Metaverse (Watch the reveals)
Quote
Microsoft's Satya Nadella recently showcased his company's foray into the Metaverse at its Ignite conference. This comes on the heels of Facebook's recent Connect conference when Mark Zuckerberg announced he is changing its name to Meta, short for Metaverse.

See how both CEOs are moving full steam ahead with VR technologies that they hope will make it possible to collaborate easier in this digital space.

I think they put too much emphasis on users' feelings and emotions instead of necessities and functionality, not to mention efficiency. But those are arguably among the most reliable ways to generate revenue and make people voluntarily reach deeper into their pockets.
« Reply #316 on: 19/11/2021 03:57:54 »
Quote from: hamdani yusuf on 18/11/2021 09:52:15
Quote from: Zer0 on 17/11/2021 19:11:01
If Artificial General Intelligence ever reaches Singularity...

Could then Humans leave the roles of Creating Social Laws, Upholding the Constitutional Values & seeing to it that they are being followed...

In short, could a Super A.I. then be a Leader, Judge & Cop? What they do is basically collect and process information to make decisions. Cops working in the field also have some physical tasks, but those aren't really a big problem for AI.

Or would even AI learn the magic trick of corruption & start accepting rabbity bribes?
Creating proper Social Laws and Constitutional Values is an instrumental goal that helps achieve the terminal goal. Misidentifying the terminal goal, perceiving objective reality inaccurately, or getting cause-and-effect relationships wrong can all bring unintended results.

In short, what could stop a Super A.I. from being a Leader, Judge & Cop?

What makes humans in positions of power learn the magic trick of corruption & start accepting rabbity bribes? IMO, it's the desire to seek pleasure and avoid pain, meta rewards that naturally emerged from the evolutionary process. To prevent AI from going down the same path, it must be assigned the appropriate terminal goal and meta rewards from the moment it is designed.

I decided to continue the topic here to avoid hijacking someone else's thread. Let's hear what the experts think and decide which side we agree more.

Quote
https://www.technologyreview.com/2020/03/27/950247/ai-debate-gary-marcus-danny-lange/
A debate between AI experts shows a battle over the technology’s future
The field is in disagreement about where it should go and why.

Since the 1950s, artificial intelligence has repeatedly overpromised and underdelivered. While recent years have seen incredible leaps thanks to deep learning, AI today is still narrow: it’s fragile in the face of attacks, can’t generalize to adapt to changing environments, and is riddled with bias. All these challenges make the technology difficult to trust and limit its potential to benefit society.

On March 26 at MIT Technology Review’s annual EmTech Digital event, two prominent figures in AI took to the virtual stage to debate how the field might overcome these issues.

Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, is a well-known critic of deep learning. In his book Rebooting AI, published last year, he argued that AI’s shortcomings are inherent to the technique. Researchers must therefore look beyond deep learning, he argues, and combine it with classical, or symbolic, AI—systems that encode knowledge and are capable of reasoning.

Danny Lange, the vice president of AI and machine learning at Unity, sits squarely in the deep-learning camp. He built his career on the technique’s promise and potential, having served as the head of machine learning at Uber, the general manager of Amazon Machine Learning, and a product lead at Microsoft focused on large-scale machine learning. At Unity, he now helps labs like DeepMind and OpenAI construct virtual training environments that teach their algorithms a sense of the world.

Danny, do you agree that we should be looking at these hybrid models?

Danny Lange: No, I do not agree. The issue I have with symbolic AI is its attempt to try to mimic the human brain in a very deep sense. It reminds me a bit of, you know, in the 18th century if you wanted faster transportation, you would work on building a mechanical horse rather than inventing the combustion engine. So I’m very skeptical of trying to solve AI by trying to mimic the human brain.
« Last Edit: 19/11/2021 04:00:07 by hamdani yusuf »
« Reply #317 on: 23/11/2021 05:37:41 »
System Dynamics: Systems Thinking and Modeling for a Complex World
Quote
This one-day workshop explores systems interactions in the real world, providing an introduction to the field of system dynamics. It also serves as a preview of the more in-depth coverage available in courses offered at MIT Sloan such as 15.871 Introduction to System Dynamics, 15.872 System Dynamics II, and 15.873 System Dynamics for Business and Policy.
Building a virtual universe is essentially unifying interrelated models to represent the complex world, so that we can make correct decisions and achieve our goals effectively and efficiently.
« Last Edit: 23/11/2021 05:58:45 by hamdani yusuf »
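A minimal taste of what a system dynamics model looks like in code: a single stock with two flows, integrated with Euler steps (the rates and step size are illustrative, not from the MIT course):

```python
def simulate(stock=1000.0, birth_rate=0.02, death_rate=0.01, dt=1.0, steps=10):
    """One population 'stock' changed by birth and death 'flows';
    each Euler step adds the net flow accumulated over dt."""
    history = [stock]
    for _ in range(steps):
        stock += (birth_rate - death_rate) * stock * dt
        history.append(stock)
    return history

trajectory = simulate()   # grows ~1% per step: 1000 -> ~1104.6
```

Real system dynamics models couple many such stocks through feedback loops and delays, which is where the counterintuitive behavior comes from.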
« Reply #318 on: 25/11/2021 08:34:20 »
Github Copilot: Good or Bad?
It seems that coding and programming, the interface between humans and machines, will become accessible to more people.
« Reply #319 on: 26/11/2021 12:16:59 »
Quote
https://www.business2community.com/online-marketing/googles-latest-ai-breakthrough-mum-02414144

In May 2021, Google unveiled a new search technology called Multitask Unified Model (MUM) at the Google I/O virtual event. This coincided with an article published on The Keyword, written by Vice President of Search, Pandu Nayak, detailing Google’s latest AI breakthrough.

In essence, MUM is an evolution of the same technology behind BERT but Google says the new model is 1,000 times more powerful than its predecessor. According to Pandu Nayak, MUM is designed to solve one of the biggest problems users face with search: “having to type out many queries and perform many searches to get the answer you need.”
Quote
Here’s how Pandu Nayak describes MUM in his announcement:

“Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful. MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models.”
We are witnessing progress toward machines understanding humans better than humans understand themselves.
Tags: virtual universe / amazing technologies / singularity / future science / connection
 
©The Naked Scientists® 2000–2017 | The Naked Scientists® and Naked Science® are registered trademarks created by Dr Chris Smith. Information presented on this website is the opinion of the individual contributors and does not reflect the general views of the administrators, editors, moderators, sponsors, Cambridge University or the public at large.