Any of the quantum eraser experiments which, given an interpretation where, say, photons have definite position and state while 'in flight', demonstrate that effects now are a function not only of the immediately prior local state, but of distant state, and of events that don't take place until long into the future.
You're proposing that our universe is a simulation that is running backwards?
Musk argues for a virtual reality, not a simulation, despite whatever word he might choose for it.
I've not read most of this thread. It's quite long, but, as is typical of such assertions, no eye is ever given to looking for problems with the proposal. Only positive evidence is presented. This is known as the selection bias fallacy. Address the problems. Actively seek them out, or the idea will be shot down effortlessly when others do.
This thread is another spinoff from my earlier thread called Universal Utopia. This time I try to attack the problem from another angle: the information theory point of view.

I have started another thread related to this subject asking about the quantification of accuracy and precision. It is necessary for us to be able to compare the available methods for describing some aspect of objective reality and to choose the best option based on cost and benefit considerations. I thought this was already common knowledge, but the course of the discussion showed that it wasn't. I guess I'll have to build a new theory for that. It's unfortunate that the thread has been removed, so new forum members can't explore how the discussion developed.

In my professional job, I deal with process control and automation, and with the engineering and maintenance of electrical and instrumentation systems. It's important for us to explore leading technologies and use them to our advantage to survive the fierce industrial competition of Industry 4.0. One technology closely related to this thread is the digital twin.
https://en.m.wikipedia.org/wiki/Digital_twin

Just like my other spinoff discussing universal morality, which can be reached by expanding the groups that develop their own subjective morality to the maximum extent permitted by logic, here I also try to expand the scope of the virtualization of real-world objects, like the digital twin in the industrial sector, to cover other fields as well. Hopefully it will lead us to global governance, because all conscious beings known today share the same planet. In the future the scope will need to expand even further, because the exploration of other planets and solar systems is already under way.
The progress toward better AI and eventually AGI will bring us closer to the realization of Laplace's demon, which has already been predicted as the technological singularity.

Quote:
"The better we can predict, the better we can prevent and pre-empt. As you can see, with neural networks, we’re moving towards a world of fewer surprises. Not zero surprises, just marginally fewer. We’re also moving toward a world of smarter agents that combine neural networks with other algorithms like reinforcement learning to attain goals."
https://pathmind.com/wiki/neural-network

Quote:
"In some circles, neural networks are thought of as “brute force” AI, because they start with a blank slate and hammer their way through to an accurate model. They are effective, but to some eyes inefficient in their approach to modeling, which can’t make assumptions about functional dependencies between output and input. That said, gradient descent is not recombining every weight with every other to find the best match – its method of pathfinding shrinks the relevant weight space, and therefore the number of updates and required computation, by many orders of magnitude. Moreover, algorithms such as Hinton’s capsule networks require far fewer instances of data to converge on an accurate model; that is, present research has the potential to resolve the brute force nature of deep learning."
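The quoted contrast between brute-force search over weight combinations and gradient descent following the local slope of the loss surface can be illustrated with a minimal sketch. Everything here (the quadratic loss, the learning rate, the step count) is an arbitrary toy choice for illustration, not something from the quoted article:

```python
# Minimal sketch of gradient descent on a toy quadratic loss,
# illustrating why it needs far fewer steps than brute-force search:
# instead of trying every weight combination, it repeatedly steps
# downhill along the local slope of the loss surface.

def loss(w):
    # Toy loss with its minimum at w = 3.0
    return (w - 3.0) ** 2

def grad(w):
    # Analytic derivative of the toy loss
    return 2.0 * (w - 3.0)

def gradient_descent(w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # step downhill along the slope
    return w

w_final = gradient_descent(w0=0.0)
print(round(w_final, 4))  # converges near 3.0
```

A brute-force approach would have to evaluate the loss over a dense grid of candidate weights, which becomes infeasible as the number of weights grows; gradient descent instead uses only local slope information at the current point.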
Quote from: Halc on 18/09/2021 01:42:23
"Musk argues for a virtual reality, not a simulation, despite whatever word he might choose for it."

He literally used the word "simulation" in interviews and tweets. He's likely influenced by Nick Bostrom's idea.
The background of opening this thread can be found in my opening statement.
You seem to be proposing what I call a VR,
You can say that I have selection bias.
In training mode, AlphaGo runs as a simulation, with Go pieces moved around without representing any particular pieces in reality. But in the tournament against Lee Sedol, it became a VR, where some of the pieces had to represent Lee's pieces in the real world.
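The distinction drawn here, where the same engine counts as a simulation while its pieces represent nothing external, and as a VR once some pieces must track a real opponent's moves, can be sketched as a toy game loop. All names, the board size, and the random policy are hypothetical illustrations, not AlphaGo's actual design:

```python
import random

# Toy sketch of the simulation-vs-VR distinction described above:
# the same engine runs in "simulation" mode (both sides are internal
# policies, the stones represent nothing outside the program) or in
# "VR" mode (one side's moves mirror an external opponent).

def random_policy(board):
    # Stand-in for a learned policy: pick any empty point.
    empty = [i for i, s in enumerate(board) if s == "."]
    return random.choice(empty)

def play(external_move_source=None, size=9, moves=10):
    board = ["."] * (size * size)
    for turn in range(moves):
        if external_move_source and turn % 2 == 1:
            # VR mode: this stone represents a real opponent's move.
            move = external_move_source(board)
        else:
            # Simulation mode: internally generated, represents nothing.
            move = random_policy(board)
        board[move] = "B" if turn % 2 == 0 else "W"
    return board

# Self-play: pure simulation.
sim_board = play()
# Against an external move source (faked here by another policy):
# a VR, because half the stones now stand for events outside the program.
vr_board = play(external_move_source=random_policy)
```

The engine's internal mechanics are identical in both runs; what changes is only whether some of its state is required to correspond to something in the external world.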
https://pub.towardsai.net/facebooks-parlai-is-a-framework-for-building-human-like-conversational-agents-99711c351fc9

Conversational interfaces powered by natural language processing (NLP) have been at the center of the artificial intelligence (AI) revolution of the last few years. When we see the advancements in digital assistants such as Siri or Alexa, we might be tempted to think that conversational applications are a solved problem. That couldn’t be further from the truth. The current generation of conversational interfaces is far from simulating human-like dialogues. Building advanced NLP systems remains an incredibly challenging task. To address that challenge, Facebook open-sourced ParlAI, a platform for advancing the evaluation of NLP systems. Recently, ParlAI got an update with new models, datasets, and a fun bot to play with, which I would like to cover in this two-part article. The first part of the article will introduce the core concepts behind ParlAI, while the second will focus on some of the newest capabilities targeted at advancing dialogue research.
...
The ultimate goal of NLP is to enable interactions with chatbots that mimic the dynamics of human conversations. For that to happen, we need systems that can go beyond understanding a single sentence or taking discrete actions. Advanced conversational applications require understanding long-form sentences in specific contexts while balancing human-like aspects such as specificity and empathy.
And then you select another video in support instead of one identifying the issues.
Sure, you can jam in a catheter, but how did you get into this virtual reality in the first place without knowing it?
A go-playing computer (AI or not) is not a VR. I suppose it could have a VR interface to let you experience playing the game with a physical-looking character, but to play an external entity, all it needs is a USB cord.
Quote from: Halc on 17/09/2021 02:23:59
"Most simulations work by remembering the state of everything and then computing some future state at some small increment of time. This means choosing a quantum interpretation that has actual state, but such interpretations only work with reverse causality, meaning that you might have simulated the last billion years of physics, but some decision made just now has changed what happened a billion years ago, invalidating everything that has happened since (and yes, they've done experiments that apparently reach at least that far back)."

Simulations can usually also work backward.
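The stepping scheme described in the quote (store the complete state, then advance it by a small time increment) and the reply's point that such simulations can usually also run backward can both be sketched with a toy one-particle "universe". All the numbers here are arbitrary illustrative choices:

```python
# Minimal sketch of forward time-stepping: keep the complete state
# and advance it by a small, fixed increment. Here the "universe" is
# one particle falling under constant acceleration.

DT = 0.01   # time increment per step
G = -9.8    # constant acceleration

def step(state, dt=DT):
    # Next state computed purely from the current one: forward,
    # local causality, the property the quoted argument says a
    # simulated quantum world would violate.
    pos, vel = state
    return (pos + vel * dt, vel + G * dt)

def step_back(state, dt=DT):
    # Exact inverse of step(): a deterministic simulation like this
    # can be run backward as well, as the reply notes.
    pos, vel = state
    prev_vel = vel - G * dt
    return (pos - prev_vel * dt, prev_vel)

state = (100.0, 0.0)    # initial position and velocity
for _ in range(100):    # advance one simulated second
    state = step(state)
print(state)
```

The backward pass works here only because each step is deterministic and invertible; the quoted objection is that a simulated quantum world would additionally need future events to rewrite already-computed past state, which this simple stepping scheme cannot accommodate.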
Perhaps kidnapped while asleep, with some anesthesia used.
The biological agent could be just a brain organoid, never having had a complete body in the first place.
But virtual realities cannot. So such reverse-causality experiments seem to be a decent falsification of the VR hypothesis.
If I was suddenly drugged and wake up in a game, I think I'd notice. To the people already in the game, a new person suddenly appears out of nowhere. So to avoid that, you'd have to do them all at once.
Are all the people you meet virtually controlled avatars like yourself, or are most of them NPC's or what? What about dogs or birds or gnats? What if I want to be one of those?
All these articles that you reference (digital twin, Musk's assertions, etc.) are claims about a world like ours, humans doing it to other humans, not disembodied minds put into non-native virtual bodies.
The transition from simulation to VR is not a single step function; it's more of a gradual greyscale.

Let's start with a system you can confidently call a VR. Then reduce its visualization resolution, such as the pixel count of the viewing window, or the box size as in Minecraft. How low can we go before it stops being a VR?

Another route to the minimum requirement for VR is to reduce the degrees of freedom that the external agent has to change the virtual objects. In a 4D theater, the external agents have no control over the virtual objects. Other systems have various levels of control.
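The "greyscale" idea above could be made concrete with a toy scoring function that combines rendering resolution with the external agent's degrees of freedom. The formula, the reference values, and the thresholds below are invented purely for illustration; this is not an established metric:

```python
# Purely illustrative sketch of the simulation-to-VR greyscale:
# rate a system's "VR-ness" from (a) its rendering resolution and
# (b) how many degrees of freedom external agents have over the
# virtual objects. All constants are arbitrary illustrative choices.

def vr_score(pixels, external_dofs, max_pixels=8_000_000, max_dofs=6):
    resolution = min(pixels / max_pixels, 1.0)
    control = min(external_dofs / max_dofs, 1.0)
    # A system with zero external control (e.g. a 4D theater) scores
    # zero no matter how detailed its rendering is.
    return resolution * control

print(vr_score(pixels=8_000_000, external_dofs=6))  # full control, full resolution
print(vr_score(pixels=8_000_000, external_dofs=0))  # 4D theater: no control
print(vr_score(pixels=64 * 64, external_dofs=6))    # coarse, Minecraft-like blocks
```

The point of the sketch is only that both axes degrade continuously, so any cutoff between "simulation" and "VR" on such a scale would be a convention rather than a natural boundary.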
It looks like you've forgotten that I'm not suggesting we are currently living in a simulation or a VR.
If the VR is good enough, we can't distinguish between NPCs and avatars unless we can go outside of the VR and meet them in person.
Quote from: Halc on 24/09/2021 17:51:27
"If I was suddenly drugged and wake up in a game, I think I'd notice. To the people already in the game, a new person suddenly appears out of nowhere. So to avoid that, you'd have to do them all at once."

Do you always realize when you're dreaming?