https://bigthink.com/neuropsych/chess-theory-of-mind-manipulation/

The greatest tacticians of the world are those who think ahead. Chess grandmasters, famous generals, great world leaders, and mafia dons all share one skill: they are many steps ahead of their rivals.

We each have the ability to think ahead. In fact, it’s hard to imagine a functioning human who didn’t think ahead at least some of the time. You’ve probably planned what to do tonight, and you likely know the route you’re going to take to get home. Thinking ahead is one hallmark of intelligence. Without it, we’re simply slaves to our instincts and reflexes — a bit like a plant or a baby.

What about the role of forward thinking when dealing with others? It’s something addressed in a recent study out of the Mount Sinai School of Medicine, which shows just how far ahead we think when we interact with — and manipulate — other people.

The problem with the world is that it’s full of other people. Unlike you (of course!), those people are often unpredictable, independent, and infuriatingly unreadable. There’s no way we can get inside their heads to know what they are thinking or what they are going to do. But given that humans are a social species, it is no surprise that we have developed ways to calculate what other people might be thinking.

This is known as “theory of mind,” the ability most of us have to put ourselves in someone else’s shoes. (To varying degrees, people with autism may not have this ability.) Theory of mind is something we learn as we grow up. Children learn that other people have their own mental lives — their own desires, emotions, and so on — at around 15 months old, but for a while they remain bad at compensating and adapting for that knowledge. For instance, if a two-year-old sees another person in distress, they will seek to help by giving that person their own toy or favorite thing. They recognize that someone has their own feelings but cannot step beyond that to think about what the other person might want.
This AI says it's conscious and experts are starting to believe it
I used GPT-3 and a Synthesia avatar. All answers are by GPT-3 (except the brief joke at the end).
Last week, Google put one of its engineers on administrative leave after he claimed to have encountered machine sentience on a dialogue agent named LaMDA. Because machine sentience is a staple of the movies, and because the dream of artificial personhood is as old as science itself, the story went viral, gathering far more attention than pretty much any story about natural-language processing (NLP) has ever received. That’s a shame. The notion that LaMDA is sentient is nonsense: LaMDA is no more conscious than a pocket calculator. More importantly, the silly fantasy of machine sentience has once again been allowed to dominate the artificial-intelligence conversation when much stranger and richer, and more potentially dangerous and beautiful, developments are under way.

The fact that LaMDA in particular has been the center of attention is, frankly, a little quaint. LaMDA is a dialogue agent. The purpose of dialogue agents is to convince you that you are talking with a person. Utterly convincing chatbots are far from groundbreaking tech at this point. Programs such as Project December are already capable of re-creating dead loved ones using NLP. But those simulations are no more alive than a photograph of your dead great-grandfather is.

Already, models exist that are more powerful and mystifying than LaMDA. LaMDA operates on up to 137 billion parameters, which are, speaking broadly, the patterns in language that a transformer-based NLP model uses to create meaningful text prediction. Recently I spoke with the engineers who worked on Google’s latest language model, PaLM, which has 540 billion parameters and is capable of hundreds of separate tasks without being specifically trained to do them. It is a true artificial general intelligence, insofar as it can apply itself to different intellectual tasks without specific training “out of the box,” as it were.

Some of these tasks are obviously useful and potentially transformative. According to the engineers—and, to be clear, I did not see PaLM in action myself, because it is not a product—if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes. Then there’s the function that has startled its own developers, and which requires a certain distance and intellectual coolness not to freak out over. PaLM can reason. Or, to be more precise—and precision very much matters here—PaLM can perform reason.
https://www.scientificamerican.com/article/we-asked-gpt-3-to-write-an-academic-paper-about-itself-then-we-tried-to-get-it-published/

On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company’s artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.

As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context. It looked like any other introduction to a fairly good scientific publication. Given the very vague instruction I provided, I didn’t have high expectations: I’m a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn’t my first experiment with AI or GPT-3, a deep-learning algorithm that analyzes a vast stream of information to create text on command. Yet there I was, staring at the screen in amazement. The algorithm was writing an academic paper about itself.

My attempts to complete that paper and submit it to a peer-reviewed journal have opened up a series of ethical and legal questions about publishing, as well as philosophical arguments about nonhuman authorship. Academic publishing may have to accommodate a future of AI-driven manuscripts, and the value of a human researcher’s publication record may change if something nonsentient can take credit for some of their work.
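For anyone curious what "typing an instruction for GPT-3" looks like in practice, here is a minimal sketch of how a prompt like the one described could be sent through OpenAI's Python library of that era. The prompt text is the one quoted in the article; the model name and sampling settings are assumptions for illustration, not details given in the piece.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3 model, not specified in the article
    prompt=("Write an academic thesis in 500 words about GPT-3 "
            "and add scientific references and citations inside the text."),
    max_tokens=800,
    temperature=0.7,
)
print(response["choices"][0]["text"])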
https://www.yahoo.com/news/researchers-china-claim-developed-mind-162107224.html

Researchers in China claim they have developed 'mind-reading' artificial intelligence that can measure loyalty to the Chinese Communist Party, reports say

Researchers in China claim they have developed "mind-reading" AI, multiple outlets have reported. In a now-deleted video, they reportedly said the software could be used to measure party loyalty. Last year, the US sanctioned 11 Chinese institutes for developing "purported brain-control weaponry."

Researchers at China's Comprehensive National Science Center in Hefei claimed to have developed "mind-reading" artificial intelligence capable of measuring citizens' loyalty to the Chinese Communist Party (CCP), The Sunday Times UK first reported.

In a now-deleted video and article, the institute said the software could measure party members' reactions to "thought and political education" by analyzing facial expressions and brain waves, according to The Times.

The results can then be used to "further solidify their confidence and determination to be grateful to the party, listen to the party, and follow the party," the researchers said, per the report. The post was taken down following public outcry from Chinese citizens, according to a VOA article published Saturday.

Dr. Lance B. Eliot, an AI and machine learning expert, wrote in Forbes last week that without knowing the specifics of the research study, it's impossible to prove the validity of the institute's claims.

"This is certainly not the very first time that a brainwave scan capability was used on human subjects in a research effort," he wrote. "That being said, using them to gauge loyalty to the CCP is not something you would find much focus on. When such AI is used for governmental control, a red line has been crossed."
Sometimes, it feels like the best way forward on a project is to throw everything out and start over from scratch, but Joel Spolsky is adamant that this is a terrible idea, and there's decent evidence that he's right.
Attention is all Tesla Needs: TRANSFORMERS, AI, and FSD Beta!

Andrej Karpathy has spoken of Tesla FSD Beta depending more and more on Transformers, a Deep Neural Network architecture that has taken the AI world by storm. From OpenAI's GPT-3 and DALL-E 2, to Google's Imagen, and many others, Transformers are truly transforming the world of AI and Machine Learning. But what the heck are Transformers and how do they work? In this geeky deep dive we'll figure that out!
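For anyone who wants the one-line version before watching: the core operation inside a Transformer is scaled dot-product attention, as defined in the original "Attention Is All You Need" paper. This is the generic formula, not anything Tesla-specific:

Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V

Here Q, K, and V are the query, key, and value matrices computed from the input tokens, and d_k is the key dimension. Each token scores every other token (Q K^T), the softmax turns those scores into weights, and the output is a weighted mix of the values, so every output position can draw information from any input position.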
The Unreasonable Effectiveness of JPEG: A Signal Processing Approach

Chapters:
00:00 Introducing JPEG and RGB Representation
2:15 Lossy Compression
3:41 What information can we get rid of?
4:36 Introducing YCbCr
6:10 Chroma subsampling/downsampling
8:10 Images represented as signals
9:52 Introducing the Discrete Cosine Transform (DCT)
11:32 Sampling cosine waves
12:43 Playing around with the DCT
17:38 Mathematically defining the DCT
21:02 The Inverse DCT
22:45 The 2D DCT
23:49 Visualizing the 2D DCT
24:35 Introducing Energy Compaction
26:05 Brilliant Sponsorship
27:23 Building an image from the 2D DCT
28:20 Quantization
30:23 Run-length/Huffman Encoding within JPEG
32:56 How JPEG fits into the big picture of data compression

The JPEG algorithm is rather complex and in this video, we break down the core parts of the algorithm, specifically color spaces, YCbCr, chroma subsampling, the discrete cosine transform, quantization, and lossless encoding. The majority of the focus is on the mathematical and signal processing insights that lead to advancements in image compression and the big themes in compression as a whole that we can take away from it.
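As a rough illustration of the lossy core the video covers (2D DCT, quantization, energy compaction), here is a minimal Python sketch on one 8x8 block. It is a simplified demo, not a full JPEG encoder: the quantization table is the standard JPEG luminance table, and the smooth test block is an assumption chosen so the compaction is easy to see.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2D DCT = 1D DCT along rows, then along columns
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# Standard JPEG luminance quantization table (roughly quality 50)
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

# A smooth, level-shifted 8x8 "pixel" block (made up for the demo)
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 8 - 128

coeffs = dct2(block)                 # energy compacts into low frequencies
quantized = np.round(coeffs / Q)     # the lossy step: most coefficients become 0
reconstructed = idct2(quantized * Q) + 128

print("nonzero coefficients kept:", np.count_nonzero(quantized), "of 64")

The long runs of zeros left after quantization are exactly what the run-length and Huffman stages then encode cheaply.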
A new Columbia University AI program observed physical phenomena and uncovered relevant variables—a necessary precursor to any physics theory. But the variables it discovered were unexpected.

...

A particularly interesting question was whether the set of variables was unique for every system, or whether a different set was produced each time the program was restarted. “I always wondered, if we ever met an intelligent alien race, would they have discovered the same physics laws as we have, or might they describe the universe in a different way?” said Lipson. “Perhaps some phenomena seem enigmatically complex because we are trying to understand them using the wrong set of variables.”

Lipson, who is also the James and Sally Scapa Professor of Innovation, argues that scientists may be misinterpreting or failing to understand many phenomena simply because they don’t have a good set of variables to describe the phenomena. “For millennia, people knew about objects moving quickly or slowly, but it was only when the notion of velocity and acceleration was formally quantified that Newton could discover his famous law of motion F = ma,” Lipson noted. Variables describing temperature and pressure needed to be identified before laws of thermodynamics could be formalized, and so on for every corner of the scientific world. The variables are a precursor to any theory. “What other laws are we missing simply because we don’t have the variables?” asked Du, who co-led the work.
The auto-regressive attention at the heart of the Transformer, and programs like it, becomes a scaling nightmare. A recent DeepMind/Google work proposes a way to put such programs on a diet.

One of the alarming aspects of the incredibly popular deep learning segment of artificial intelligence is the ever-larger size of the programs. Experts in the field say computing tasks are destined to get bigger and bigger because scale matters. That's why it's interesting any time the term efficiency is brought up, as in: Can we make this AI program more efficient?

Scientists at DeepMind and at Google's Brain division recently adapted a neural network they introduced last year, Perceiver, to make it more efficient in terms of its computing power requirement. The new program, Perceiver AR, is named for the "autoregressive" aspect of an increasing number of deep learning programs. Autoregression is a technique for having a machine use its outputs as new inputs to the program, a recursive operation that forms an attention map of how multiple elements relate to one another.

The innovation of the original Perceiver was to take the Transformer and tweak it to let it consume all kinds of input, including text, sound, and images, in a flexible form, rather than being limited to a specific kind of input, for which separate kinds of neural networks are usually developed.

The problem is that the auto-regressive quality of the Transformer, and of any other program that builds an attention map from input to output, requires tremendous scale when the distribution runs over hundreds of thousands of elements. That is the Achilles heel of attention: the need, precisely, to attend to anything and everything in order to assemble the probability distribution that makes for the attention map.
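To make the "attention map" concrete, here is a toy numpy sketch of causal (autoregressive) self-attention. It illustrates the generic mechanism, not DeepMind's Perceiver AR code, and the sizes are made up; the point is that the intermediate map is sequence-length by sequence-length, which is where the quadratic cost comes from.

import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # (L, L) attention map
    mask = np.triu(np.ones_like(scores), k=1)     # hide future positions (autoregressive)
    scores = np.where(mask == 1, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # each output is a weighted mix of earlier values

L, d = 1024, 64                                   # memory and compute grow with L**2
rng = np.random.default_rng(0)
x = rng.normal(size=(L, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = causal_self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (1024, 64), but the intermediate map was 1024 x 1024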
There is a tension between this kind of long-form, contextual structure and the computational properties of Transformers. Transformers repeatedly apply a self-attention operation to their inputs: this leads to computational requirements that simultaneously grow quadratically with input length and linearly with model depth. As the input data grows longer, more input tokens are needed to observe it, and as the patterns in the input data grow more subtle and complicated, more depth is needed to model the patterns that result. Computational constraints force users of Transformers to either truncate the inputs to the model (preventing it from observing many kinds of long-range patterns) or restrict the depth of the model (denuding it of the expressive power needed to model complex patterns).
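Put as a rough rule of thumb (my paraphrase of that passage, not a figure from the paper): for input length L and depth D, self-attention work grows roughly as D x L^2, so doubling the context quadruples the attention cost in every one of the D layers, while doubling the depth only doubles the total.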
It’s August 2022, and by now you’ve no doubt read (or more likely seen) something about AI art. Whether it’s random jokes made for Twitter or paintings that look like they were made by actual human beings, artificial intelligence’s ability to create art has exploded onto the scene over the last few months, and while this has been great news for shitposts and fans of tech, it has also raised a number of important questions and concerns.

If you haven’t read or seen anything about the subject, AI art—at least as it exists in the state we know it today—is, as Ahmed Elgammal writing in American Scientist so neatly puts it, made when “artists write algorithms not to follow a set of rules, but to ‘learn’ a specific aesthetic by analyzing thousands of images. The algorithm then tries to generate new images in adherence to the aesthetics it has learned.”

From a user’s perspective, this is most often done by entering a text prompt, so you can type something like “wizard standing on a hillside under a rainbow”, and an AI will attempt to give you a fairly decent approximation of that in image form. You could also type “Spongebob grieving for Batman’s parents” and you’ll get something just as close to what you’re thinking.

Basically, we now live in a world where machines have been fed millions upon millions of pieces of human endeavour, and are now using the cumulative data they’ve amassed to create their own works. This has been fun for casual users and interesting for tech enthusiasts, sure, but it has also created an ethical and copyright black hole, where everyone from artists to lawyers to engineers has very strong opinions on what this all means, for their jobs and for the nature of art itself.
“These platforms are washing machines of intellectual property.”

Simply put, as we often see with technology that has advanced faster than the law can keep up, there is no definitive, binding stance on the copyright issues at the heart of machines chewing up human art and then spitting out artificial compilations of what they’ve learned.
End of human labor.
Quote from: hamdani yusuf on 02/10/2022 11:49:21
End of human labor.

His robot is very ugly (in my opinion), and it can't really walk (it doesn't use the dynamics of falling) like the Boston Dynamics robots (those are very impressive, look: //www.youtube.com/watch?v=TnGlZ2z1jsI). But it is cheap, so it could effectively be used at great scale (around 22,000 euros, said Elon Musk) for some specific tasks.