I have wasted my life trying to explain all those paradoxes away, classically, only to realise in my old age that it can't be done.
Physicists have long suspected that quantum mechanics allows two observers to experience different, conflicting realities. Now they’ve performed the first experiment that proves it.
The idea that observers can ultimately reconcile their measurements of some kind of fundamental reality is based on several assumptions. The first is that universal facts actually exist and that observers can agree on them. But there are other assumptions too. One is that observers have the freedom to make whatever observations they want. And another is that the choices one observer makes do not influence the choices other observers make—an assumption that physicists call locality. If there is an objective reality that everyone can agree on, then these assumptions all hold.

But Proietti and co's result suggests that objective reality does not exist. In other words, the experiment suggests that one or more of the assumptions—the idea that there is a reality we can agree on, the idea that we have freedom of choice, or the idea of locality—must be wrong.

Of course, there is another way out for those hanging on to the conventional view of reality. This is that there is some other loophole that the experimenters have overlooked. Indeed, physicists have tried to close loopholes in similar experiments for years, although they concede that it may never be possible to close them all.

Nevertheless, the work has important implications for the work of scientists. "The scientific method relies on facts, established through repeated measurements and agreed upon universally, independently of who observed them," say Proietti and co. And yet in the same paper, they undermine this idea, perhaps fatally.
U may care to see (The Incredible) Halc's outstanding Best Answer to my Mach-Zehnder interferometer question, and our no-holds-barred wrestling match afterwards. TIH vs TCC! I think I gave just as good as I got.
During Tesla's AI day, Andrej Karpathy, director of AI and autopilot vision at Tesla, went into a great deal of detail about how and why Tesla engineers have expended massive effort to transform video images from Tesla cameras into abstracted vector spaces. The way they achieved this, and the results, are astounding. From HydraNets to Transformers to conversion to vector space, Karpathy explained how Tesla Vision Full Self-Driving takes images from the cameras and converts them to a depth-sorted 2D top-down map of the surroundings--all in real time!
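To make the idea of a camera-to-top-down conversion concrete, here is a minimal, hypothetical sketch, not Tesla's actual pipeline (which learns the mapping end to end with HydraNets and Transformers): it takes per-camera detections with estimated depth, transforms them through assumed camera mounting poses into the ego vehicle frame, and rasterizes them into a 2D top-down occupancy grid. All names, poses, and numbers below are placeholders.

```python
import numpy as np

# Toy illustration only: project per-camera detections (with estimated depth)
# into a single top-down ("bird's-eye view") grid around the ego vehicle.
# Camera poses, detections, and grid sizes below are made-up assumptions.

GRID_SIZE = 200          # 200 x 200 cells
CELL_METERS = 0.5        # each cell is 0.5 m, so the grid spans 100 m x 100 m

def make_extrinsic(yaw_deg, tx, ty):
    """Homogeneous 2D transform from a camera's frame to the ego frame."""
    yaw = np.radians(yaw_deg)
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

# Assumed camera mounting poses (yaw in degrees, offset in meters).
cameras = {
    "front": make_extrinsic(0.0,   2.0,  0.0),
    "left":  make_extrinsic(90.0,  0.0,  1.0),
    "right": make_extrinsic(-90.0, 0.0, -1.0),
}

# Hypothetical detections per camera: (lateral offset m, depth m).
detections = {
    "front": [(1.5, 20.0), (-2.0, 35.0)],
    "left":  [(0.5, 8.0)],
    "right": [(0.0, 12.0)],
}

grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)

for cam, dets in detections.items():
    T = cameras[cam]
    for lateral, depth in dets:
        # Point in the camera frame (x forward, y left), then into the ego frame.
        ego = T @ np.array([depth, lateral, 1.0])
        col = int(GRID_SIZE / 2 + ego[0] / CELL_METERS)
        row = int(GRID_SIZE / 2 - ego[1] / CELL_METERS)
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row, col] = 1   # mark the cell as occupied

print("occupied cells:", int(grid.sum()))
```

The real system learns this mapping rather than using hand-written geometry, but the input/output relationship is the same: multiple camera views in, one depth-sorted top-down representation out.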
During Tesla AI day on August 19th, Ashok Elluswamy, Tesla's director of Autopilot software, demonstrated that Teslas running the FSD (Full Self-Driving) beta 9 have an almost eerie ability to plan ahead for issues that might arise while driving. Some of this comes down to basic physics--knowing how heavy and how big your "ego" car is--but a lot of your Tesla's ability to plan comes down to the car route planning... for all the other agents in the scene (other cars, pedestrians, bikes, etc.). This is crazy--and it got me thinking about a book by Christopher McDougall, Born to Run, which posits that human consciousness arose on the plains of Africa as early humanoids had to place an agent model (a version of their own brains) into that of their hunting companions and the target prey. But wait, you say, this is just what a Tesla is doing when it route plans. Might your Tesla actually be conscious?!
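As a rough illustration of what "planning for every agent in the scene" can mean, here is a toy sketch (my own simplification, not Tesla's planner): every other agent gets a predicted trajectory, and the ego car scores its own candidate paths by how close they come to any of those predictions. All agents, speeds, and offsets are invented for the example.

```python
import numpy as np

# Toy multi-agent-aware planner (illustrative assumptions only).
# Each agent is predicted with a constant-velocity model; the ego car picks
# the candidate lateral offset whose future positions stay farthest from
# every predicted agent position.

DT, HORIZON = 0.5, 10          # 0.5 s steps, 5 s lookahead
EGO_SPEED = 10.0               # m/s, assumed constant

# (x, y, vx, vy) for each other agent in the scene -- made-up values.
agents = np.array([
    [30.0,  1.5, -8.0, 0.0],   # oncoming car
    [15.0, -3.0,  0.0, 1.0],   # pedestrian drifting toward the road
])

def predict(agent):
    """Constant-velocity rollout of one agent over the horizon."""
    x, y, vx, vy = agent
    t = np.arange(1, HORIZON + 1) * DT
    return np.stack([x + vx * t, y + vy * t], axis=1)

agent_paths = np.stack([predict(a) for a in agents])   # (n_agents, HORIZON, 2)

def ego_path(lateral_offset):
    """Ego candidate: drive straight ahead at a fixed lateral offset."""
    t = np.arange(1, HORIZON + 1) * DT
    return np.stack([EGO_SPEED * t, np.full_like(t, lateral_offset)], axis=1)

candidates = [-1.5, 0.0, 1.5]   # candidate lateral offsets in meters
scores = []
for offset in candidates:
    ego = ego_path(offset)                               # (HORIZON, 2)
    # Minimum distance to any agent at any future time step.
    dists = np.linalg.norm(agent_paths - ego[None, :, :], axis=2)
    scores.append(dists.min())

best = candidates[int(np.argmax(scores))]
print(f"chosen lateral offset: {best} m (clearance {max(scores):.1f} m)")
```

The point of the sketch is only the structure: the planner carries a model of every other agent forward in time, then chooses its own future against those rollouts.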
Forget about online games that promise you a "whole world" to explore. An international team of researchers has generated an entire virtual universe, and made it freely available on the cloud to everyone. Uchuu (meaning "outer space" in Japanese) is the largest and most realistic simulation of the universe to date. The Uchuu simulation consists of 2.1 trillion particles in a computational cube an unprecedented 9.63 billion light-years to a side. For comparison, that's about three-quarters the distance between Earth and the most distant observed galaxies. Uchuu reveals the evolution of the universe on a level of both size and detail inconceivable until now.
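A quick back-of-the-envelope check of those two numbers (my own arithmetic, not from the Uchuu team):

```python
# Back-of-the-envelope scale check for the Uchuu figures quoted above
# (my own arithmetic, not part of the simulation's documentation).

N_PARTICLES = 2.1e12          # particles in the box
BOX_SIDE_LY = 9.63e9          # box side length in light-years

particles_per_side = N_PARTICLES ** (1.0 / 3.0)
mean_spacing_ly = BOX_SIDE_LY / particles_per_side

print(f"particles per side: {particles_per_side:,.0f}")
print(f"mean inter-particle spacing: {mean_spacing_ly:,.0f} light-years")
# -> roughly 750,000 light-years between particles, so each particle
#    stands in for matter on roughly the scale of a large galaxy's halo.
```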
On August 19th, during Tesla AI day, Andrej Karpathy, director of artificial intelligence and autopilot vision, dove into a topic that is distinctly not sexy but absolutely necessary for modern machine learning: collecting and, especially, labeling data for training. After covering how Tesla Vision converts 2D images into 3D vector space, and discussing how the cars can plan ahead not just for themselves but for all other agents in the scene (you can watch my previous videos, linked above, for much more on this), Dr. Karpathy broached the topic of how Tesla deals with the mountains of data its 2-million-car-strong fleet now produces. And while I thought I'd be bored by this section of the talk, I was, frankly, blown away by how brilliant Tesla's data labeling strategy is, and also by how much time, person power, and money Tesla has put and is putting into labeling the best, most targeted data possible. Along with the incredible neural network architecture, this data labeling is what is enabling Tesla to achieve what seemed impossible just a short time ago: fully autonomous driving using only cameras!
OpenAI was born to tackle the challenge of achieving artificial general intelligence (AGI) — an AI capable of doing anything a human can do. Such a technology would change the world as we know it. It could benefit us all if used adequately but could become the most devastating weapon in the wrong hands. That's why OpenAI took over this quest. To ensure it'd benefit everyone evenly: "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole."
The holy trinity — Algorithms, data, and computers

OpenAI believes in the scaling hypothesis. Given a scalable algorithm, the transformer in this case — the basic architecture behind the GPT family — there could be a straightforward path to AGI that consists of training increasingly larger models based on this algorithm. But large models are just one piece of the AGI puzzle. Training them requires large datasets and large amounts of computing power.

Data stopped being a bottleneck when the machine learning community started to unveil the potential of unsupervised learning. That, together with generative language models and few-shot task transfer, solved the "large datasets" problem for OpenAI.

They only needed huge computational resources to train and deploy their models and they'd be good to go. That's why they partnered with Microsoft in 2019: OpenAI licensed some of its models to the big tech company for commercial use, in exchange for access to Microsoft's cloud computing infrastructure and the powerful GPUs they needed.
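Since the "scalable algorithm" named here is the transformer, a minimal sketch of its core operation, scaled dot-product self-attention, may help make the claim concrete. This is a bare-bones NumPy version for illustration, not OpenAI's implementation; the sequence length and dimensions are arbitrary placeholders.

```python
import numpy as np

# Minimal scaled dot-product self-attention, the core op of the transformer.
# Shapes and weights are random placeholders; real models stack many such
# layers and scale them up, which is the essence of the scaling hypothesis.

rng = np.random.default_rng(0)
seq_len, d_model = 8, 64            # 8 tokens, 64-dimensional embeddings

x = rng.normal(size=(seq_len, d_model))          # token embeddings
W_q = rng.normal(size=(d_model, d_model))        # query projection
W_k = rng.normal(size=(d_model, d_model))        # key projection
W_v = rng.normal(size=(d_model, d_model))        # value projection

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_model)              # pairwise token affinities
attn = softmax(scores, axis=-1)                  # each row sums to 1
out = attn @ V                                   # mixed token representations

print(out.shape)    # (8, 64): same shape in, same shape out, so layers stack
```

Because the same block can be stacked and widened almost arbitrarily, "just make it bigger" becomes a meaningful strategy, which is exactly what the scaling hypothesis bets on.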
What can we expect from GPT-4?

100 trillion parameters is a lot. To understand just how big that number is, let's compare it with our brain. The brain has around 80–100 billion neurons (GPT-3's order of magnitude) and around 100 trillion synapses. GPT-4 will have as many parameters as the brain has synapses.
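Putting the quoted figures side by side (GPT-3's published size of 175 billion parameters is the only number added here):

```python
# Scale comparison using the figures quoted above, plus GPT-3's published
# parameter count (175 billion) for reference.

gpt3_params = 175e9         # parameters in GPT-3
gpt4_params = 100e12        # parameter count claimed above for GPT-4
brain_neurons = 100e9       # approximate neurons in a human brain
brain_synapses = 100e12     # approximate synapses in a human brain

print(f"GPT-4 vs GPT-3:          {gpt4_params / gpt3_params:,.0f}x more parameters")
print(f"GPT-3 vs brain neurons:  same order of magnitude "
      f"({gpt3_params / brain_neurons:.1f}x)")
print(f"GPT-4 vs brain synapses: {gpt4_params / brain_synapses:.0f}x (i.e. equal)")
```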
OpenAI has been working nonstop on exploiting GPT-3's hidden abilities. DALL·E was a special case of GPT-3, very much like Codex. But they aren't absolute improvements, more like particular cases. GPT-4 promises more. It promises the depth of specialist systems like DALL·E (text-to-image) and Codex (coding) combined with the breadth of generalist systems like GPT-3 (general language). And what about other human-like features, like reasoning or common sense? In that regard, Sam Altman says they're not sure, but he remains "optimistic."
G'day, neighbour. I'm from Australia, the fabled land of Oz. Thank U for Ur MZI replies. U look to me like an electrical engineer type. Both Ur names are Islamic; I assume U're a believer. Didn't U ever wonder how something like that can be implemented? Only in SW.
"There is no classical explanation, so the universe is a simulation".
NO! There is no classical explanation for quantum paradoxes and phenomena.
But there is an explanation, and it's a SW-based universe.
Our mushy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: "We are not interested in the fact that the brain has the consistency of cold porridge." In other words, the medium doesn't matter, only the computational ability.

Today, the most powerful artificial intelligence systems employ a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, referred to as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with the nodes modeled after real neurons — or, at least, after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has dramatically expanded, so biological neurons are known to be more complex than artificial ones. But by how much?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of one single biological neuron.
“We tried many, many architectures with many depths and many things, and mostly failed,” said London. The authors have shared their code to encourage other researchers to find a clever solution with fewer layers. But, given how difficult it was to find a deep neural network that could imitate the neuron with 99% accuracy, the authors are confident that their result does provide a meaningful comparison for further research. Lillicrap suggested it might offer a new way to relate image classification networks, which often require upward of 50 layers, to the brain. If each biological neuron is like a five-layer artificial neural network, then perhaps an image classification network with 50 layers is equivalent to 10 real neurons in a biological network.
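For a sense of what "a five-to-eight-layer network standing in for one neuron" looks like structurally, here is a bare-bones sketch. The layer widths, input sizes, and output heads are my own placeholders, not the architecture from the paper (which used temporal convolutions over detailed simulated synaptic inputs); the sketch only shows the shape of the comparison.

```python
import torch
from torch import nn

# Toy stand-in for "one biological neuron approximated by a small deep net".
# Input: a window of synaptic activity; outputs: predicted somatic voltage
# and spike probability. All sizes below are illustrative placeholders.

N_SYNAPSES = 128        # channels of synaptic input (placeholder)
WINDOW = 100            # time steps of history fed to the network (placeholder)

class NeuronSurrogate(nn.Module):
    def __init__(self, hidden=64, depth=7):
        super().__init__()
        layers, in_ch = [], N_SYNAPSES
        for _ in range(depth):                       # five to eight hidden layers
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=5, padding=2),
                       nn.ReLU()]
            in_ch = hidden
        self.backbone = nn.Sequential(*layers)
        self.voltage = nn.Linear(hidden, 1)          # somatic voltage estimate
        self.spike = nn.Linear(hidden, 1)            # spike probability logit

    def forward(self, x):                            # x: (batch, N_SYNAPSES, WINDOW)
        h = self.backbone(x)[:, :, -1]               # features at the last time step
        return self.voltage(h), torch.sigmoid(self.spike(h))

model = NeuronSurrogate()
synaptic_input = torch.randn(4, N_SYNAPSES, WINDOW)  # fake batch of input windows
v, p_spike = model(synaptic_input)
print(v.shape, p_spike.shape)                        # torch.Size([4, 1]) twice
```

Training such a surrogate to 99% accuracy against a detailed biophysical simulation is the hard part the authors describe; the takeaway here is only the structural one: one cell, several stacked layers.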
People often say that Newtonian mechanics is classical physics. So is Maxwellian electromagnetic theory. But they are incompatible with each other: Newtonian mechanics obeys Galilean relativity, while Maxwell's equations single out a fixed speed of light, a tension that was only resolved by special relativity.
Have U heard about the Quantum Eraser? Either the photons (can) travel back in time, or the universe is implemented in SW.
Most simulations work by remembering the state of everything and then computing some future state at some small increment of time. This means choosing a quantum interpretation that has actual state, but such interpretations only work with reverse causality: you might have simulated the last billion years of physics, and then some decision made just now changes what happened a billion years ago, invalidating everything that has happened since (and yes, they've done experiments that apparently reach at least that far back).
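The "remember the state, advance it by a small time increment" pattern described above is the standard shape of a forward simulation. A minimal sketch (illustrative only, nothing quantum about it) looks like this; it is exactly this forward-only structure that retrocausal results would break, because a change "now" would force every step since the altered past to be recomputed.

```python
# Minimal forward simulation loop: keep the full state, advance it in small
# time increments. Purely illustrative (a ball thrown upward under gravity),
# but the shape is the same for any state-stepping simulator.

DT = 0.01                                      # time increment in seconds
state = {"t": 0.0, "pos": 0.0, "vel": 20.0}    # the "remembered" state

def step(s, dt):
    """Compute the next state purely from the current one."""
    return {
        "t":   s["t"] + dt,
        "pos": s["pos"] + s["vel"] * dt,
        "vel": s["vel"] - 9.81 * dt,           # constant gravity
    }

history = [state]
while state["t"] < 2.0:
    state = step(state, DT)
    history.append(state)

print(f"after {state['t']:.2f} s: height {state['pos']:.2f} m")
# Note: every state depends only on the one before it. If something "now"
# retroactively changed an earlier state, every later entry in `history`
# would have to be recomputed -- the forward-only assumption would fail.
```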