I don't think that you are safe thinking that way.
Quote from: hamdani yusuf on 05/07/2021 05:40:06
I don't think that you are safe thinking that way.
I think that if an asteroid were to collide with the earth, that would be proof of a very evil computer programmer in our virtual universe. This would be like the devil in a real universe.
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
Quote from: hamdani yusuf on 05/07/2021 07:46:29
Whatever is done in a virtual universe can't be said to be evil or good until it has some effect in the real universe.
So what you're saying is that an incoming asteroid can leave the virtual universe and collide with the real universe, or at least with a house in the real universe. This is the "some effect" that you say may happen. That would be a very dangerous computer simulator; we'd better warn the pilots that are using flight simulators, as it could turn out to be a real crash as they train in their simulators.
If your flight simulator contains bugs that train pilots to react differently from how they should react in real life, then those bugs in the virtual universe are indeed dangerous.
You can kill thousands of people in GTA or Total War without being evil in real life.
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
Quote from: hamdani yusuf on 22/09/2019 04:23:45
The level of detail can vary, depending on the significance of the object. In Google Earth, big cities might be zoomed to less than 1 meter per pixel, while deserts or oceans have much coarser detail.
We need better detail in the virtual world, let's say 20 megapixels for each and every atom.
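The idea of varying detail by significance is how tiled map services actually work: each zoom level halves the metres-per-pixel of the one above it, and busy areas are simply served at deeper zoom levels than open ocean. A minimal sketch of that logic, using an illustrative web-mercator-style tile pyramid (the tile size and zoom scheme here are assumptions, not Google Earth's actual implementation):

```python
import math

EARTH_CIRCUMFERENCE_M = 40_075_017  # metres at the equator

def meters_per_pixel(zoom_level: int, tile_size: int = 256) -> float:
    """Ground resolution at the equator: each zoom level doubles the pixel count."""
    return EARTH_CIRCUMFERENCE_M / (tile_size * 2 ** zoom_level)

def pick_zoom(required_m_per_px: float) -> int:
    """Smallest zoom level whose resolution is at least as fine as required."""
    zoom = 0
    while meters_per_pixel(zoom) > required_m_per_px:
        zoom += 1
    return zoom

# A dense city might demand sub-metre detail; open ocean can be far coarser.
city_zoom = pick_zoom(1.0)     # sub-metre per pixel
ocean_zoom = pick_zoom(500.0)  # hundreds of metres per pixel
```

Under these assumptions the city needs roughly twice as many zoom levels as the ocean, and since each extra level quadruples the tile count, storing uniform atom-scale detail everywhere would be astronomically expensive; variable detail is what makes the scheme feasible at all.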
Is it possible to build a virtual universe?
We know there are already some efforts in progress in that direction. But they are all still partial and mostly independent of one another.
Quote from: hamdani yusuf on 05/07/2021 15:05:06
We know there are already some efforts in progress in that direction. But they are all still partial and mostly independent of one another.
I hope it's not too expensive to jump in once they get it up and running. They used to charge 20 cents for a go on Space Invaders at the arcade centre.
Elon Musk’s brain tech company, Neuralink, is subject to rampant speculation and misunderstanding. Just start a Google search with the phrase “can Neuralink...” and you’ll see the questions that are commonly asked, which include “can Neuralink cure depression?” and “can Neuralink control you?” Musk hasn’t helped ground the company’s reputation in reality with his public statements, including his claim that the Neuralink device will one day enable “AI symbiosis” in which human brains will merge with artificial intelligence.

It’s all somewhat absurd, because the Neuralink brain implant is still an experimental device that hasn’t yet gotten approval for even the most basic clinical safety trial. But behind the showmanship and hyperbole, the fact remains that Neuralink is staffed by serious scientists and engineers doing interesting research. The fully implantable brain-machine interface (BMI) they’ve been developing is advancing the field with its super-thin neural “threads” that can snake through brain tissue to pick up signals, and its custom chips and electronics that can process data from more than 1000 electrodes.
IEEE Spectrum: Elon Musk often talks about the far-future possibilities of Neuralink; a future in which everyday people could get voluntary brain surgery and have Links implanted to augment their capabilities. But whom is the product for in the near term?

Joseph O’Doherty: We’re working on a communication prosthesis that would give back keyboard and mouse control to individuals with paralysis. We’re pushing towards an able-bodied typing rate, which is obviously a tall order. But that’s the goal.

We have a very capable device and we’re aware of the various algorithmic techniques that have been used by others. So we can apply best-practices engineering to tighten up all the aspects. What it takes to make the BMI is a good recording device, but also real attention to detail in the decoder, because it’s a closed-loop system. You need to have attention to that closed-loop aspect of it for it to be really high performance.

We have an internal goal of trying to beat the world record in terms of information rate from the BMI. We’re extremely close to exceeding what, as far as we know, is the best performance. And then there’s an open question: how much further beyond that can we go? My team and I are trying to meet that goal and beat the world record. We’ll either nail down what we can, or, if we can’t, figure out why not, and how to make the device better.
In their paper, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, recipients of the 2018 Turing Award, explain the current challenges of deep learning and how it differs from learning in humans and animals. They also explore recent advances in the field that might provide blueprints for future directions of research in deep learning.

Titled “Deep Learning for AI,” the paper envisions a future in which deep learning models can learn with little or no help from humans, are flexible to changes in their environment, and can solve a wide range of reflexive and cognitive problems.
In their paper, Bengio, Hinton, and LeCun acknowledge these shortcomings. “Supervised learning, while successful in a wide variety of tasks, typically requires a large amount of human-labeled data. Similarly, when reinforcement learning is based only on rewards, it requires a very large number of interactions,” they write.
If a virtual universe is ever up and running, how will people be able to interact with this technology? Will it be through an electrically operated head-worn attachment and eyewear that allow us to navigate and communicate throughout the virtual universe?
At first, the interface would likely be similar to currently existing human-machine interfaces.