Artificial intelligence powers protein-folding predictions

Deep-learning algorithms such as AlphaFold2 and RoseTTAFold can now predict a protein’s 3D shape from its linear sequence — a huge boon to structural biologists.

https://www.nature.com/articles/d41586-021-03499-y

Protein designers could also see benefits. Starting from scratch — called de novo protein design — involves models that are generated computationally but tested in the lab. “Now you can just immediately use AlphaFold2 to fold it,” says Zhang. These results can even be used to retrain the design algorithms to produce more-accurate results in future experiments.
(CNN) The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce -- and in a way not seen in plants and animals.

Formed from the stem cells of the African clawed frog (Xenopus laevis), from which they take their name, xenobots are less than a millimeter (0.04 inches) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal.

"Most people think of robots as made of metals and ceramics, but it's not so much what a robot is made from but what it does, which is act on its own on behalf of people," said Josh Bongard, a computer science professor and robotics expert at the University of Vermont and lead author of the study.

"In that way it's a robot, but it's also clearly an organism made from genetically unmodified frog cells."

https://www.cnn.com/2021/11/29/americas/xenobots-self-replicating-robots-scn/index.html
Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense “meaning space.” They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. “The space of possible thoughts that people can think is big—but it’s not infinitely big,” Norman said. A detailed map of the concepts in our minds might soon be within reach.

In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail.

https://www.newyorker.com/magazine/2021/12/06/the-science-of-mind-reading
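The "king − man + woman ≈ queen" arithmetic can be sketched with toy vectors. These 3-dimensional numbers are invented for illustration (real word2vec embeddings have hundreds of dimensions and are learned from text), but the nearest-neighbor mechanics are the same:

```python
import numpy as np

# Toy 3-D "word vectors" -- illustrative values only, not real word2vec output.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.2, 0.8]),
}

def nearest(target, exclude):
    """Return the vocabulary word closest to `target` by cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cos(vectors[w], target))

# king - man + woman: subtract "maleness", add "femaleness"
result = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(result, exclude={"king", "man", "woman"}))  # queen
```

The query words themselves are excluded from the search, as word2vec tools conventionally do, since the input vectors are trivially their own nearest neighbors.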
Biotechnology/Nanotechnology | Andrew Hessel | SingularityU Germany Summit 2017

Andrew Hessel is a futurist and catalyst in biological technologies, helping industry, academics, and authorities better understand the changes ahead in life science. He is a Distinguished Researcher with Autodesk Inc.'s Bio/Nano Programmable Matter group, based out of San Francisco. He is also the co-founder of the Pink Army Cooperative, the world's first cooperative biotechnology company, which aims to make open-source viral therapies for cancer.
Everyone thinks of China's social credit system as some sort of Black Mirror episode, while others compare it to the FICO score in the USA. It's much, much more than that. In fact, I found the documents that explain how it works and how it affects the people of China. Not only that, but I have spent a lot of time in the first city where it was implemented.

Keep in mind, this is how the social credit system in China works, but it hasn't been implemented nationwide yet, only in selected areas.
Digital twins were once a technology of the future. Now companies are lining up to implement them so they can solve real-world problems with virtual simulations. Is it easier said than done?

Futurist Bernard Marr described a digital twin as "an exact digital replica of something in the physical world; digital twins are made possible thanks to Internet of Things sensors that gather data from the physical world and send it to machines to reconstruct." Unstructured data from IoT sensors has made digital twins possible—and these digital twins are able to solve real-world problems in virtual universes.

An example Marr offered is the city of Singapore, which does most of its city planning by using a virtual replica of its physical city. In another example, a supermarket in France created a digital twin of a brick-and-mortar store based on data from IoT-enabled shelves and sales systems. As a result, store managers can easily manage inventory and test the effectiveness of different store layouts in digital twin simulations.

Digital twins can be impressive, but it isn't easy to build one. Each twin is a vast complex of data drawn from IT assets throughout and outside of the enterprise. This data is then applied to an operational digital twin model developed by IT and operations specialists.
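The data flow described above (sensors feed readings into a virtual replica that managers then query instead of the physical object) can be sketched minimally. The class, field names, and restock threshold below are all hypothetical, loosely modeled on the IoT-enabled supermarket shelf example:

```python
from dataclasses import dataclass, field

@dataclass
class ShelfTwin:
    """Hypothetical digital twin of one IoT-enabled store shelf."""
    shelf_id: str
    stock: int = 0
    history: list = field(default_factory=list)

    def ingest(self, sensor_reading: int) -> None:
        # A real twin would consume a live stream of IoT messages;
        # here a reading is simply the sensed item count.
        self.history.append(sensor_reading)
        self.stock = sensor_reading

    def needs_restock(self, threshold: int = 5) -> bool:
        # Managers query the twin instead of walking to the shelf.
        return self.stock < threshold

twin = ShelfTwin("aisle-3")
for reading in [12, 9, 6, 4]:   # simulated sensor stream
    twin.ingest(reading)
print(twin.needs_restock())     # True: stock fell below the threshold
```

Keeping the full reading history is what lets a twin go beyond mirroring state: recorded streams can later drive "what-if" simulations, such as testing a different store layout against real demand data.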
This is the moment I've been waiting for in computer graphics. In this episode, we cover the playable Matrix Awakens demo as well as some other Unreal Engine 5 info.
Human brain cells in a dish learn to play Pong faster than an AI

Hundreds of thousands of brain cells in a dish are being taught to play Pong by responding to pulses of electricity – and can improve their performance more quickly than an AI can.

Living brain cells in a dish can learn to play the video game Pong when they are placed in what researchers describe as a “virtual game world”. “We think it’s fair to call them cyborg brains,” says Brett Kagan, chief scientific officer of Cortical Labs, who leads the research.

Many teams around the world have been studying networks of neurons in dishes, often growing them into brain-like organoids. But this is the first time that mini-brains have been found to perform goal-directed tasks, says Kagan.
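The "virtual game world" is a closed loop: game state is encoded as electrical stimulation into the culture, recorded activity is decoded into a paddle move, and feedback stimulation follows each outcome. The sketch below is a highly simplified illustration of that loop, not the actual Cortical Labs protocol; the encoding frequencies, decoding threshold, and feedback labels are invented:

```python
import random

def encode_ball_position(ball_y: float) -> float:
    """Map game state (ball height, 0..1) to a stimulation frequency.
    The 4-40 Hz range here is a hypothetical encoding."""
    return 4.0 + 36.0 * ball_y

def read_paddle_command(activity: float) -> str:
    """Decode recorded 'motor' activity (normalized 0..1) into a paddle move.
    The 0.5 threshold is a hypothetical decoding rule."""
    return "up" if activity > 0.5 else "down"

def feedback(hit: bool) -> str:
    # Reported idea: hits are rewarded with predictable stimulation,
    # misses punished with unpredictable noise.
    return "predictable burst" if hit else "random noise"

# One cycle of the loop: game state in, paddle command out, feedback back.
stim_hz = encode_ball_position(ball_y=0.8)
activity = random.random()              # stands in for recorded spiking
command = read_paddle_command(activity)
print(stim_hz, command, feedback(hit=True))
```

The interesting part is the feedback rule: making the world predictable after a hit and noisy after a miss gives the culture a gradient to learn against, without any explicit reward chemical.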
Summary: Researchers have identified a neural mechanism that supports advanced cognitive functions such as planning and problem-solving. The mechanism distributes information from a single neuron to larger neural populations in the prefrontal cortex.

Source: Mount Sinai Hospital
Mount Sinai scientists have discovered a neural mechanism that is believed to support advanced cognitive abilities such as planning and problem-solving. It does so by distributing information from single neurons to larger populations of neurons in the prefrontal cortex, the area of the brain that temporarily stores and manipulates information.
A 62-year-old Australian man paralyzed following his diagnosis with amyotrophic lateral sclerosis (ALS) has become the first individual to send out a message on social media using a brain-computer interface, RT reported.

Brain-computer interfaces (BCI) are the next big thing in technology. While some people like Elon Musk want to use them to enhance human experiences as early as next year, others such as Synchron, whose interface helped Australian Philip O'Keefe send out his first tweet, want to develop them as a prosthesis for paralysis and to treat other neurological diseases such as Parkinson's disease in the future, the company said in a press release.

Synchron's BCI works through its brain implant called Stentrode, which does not require any brain surgery to be installed. Instead, the company leverages the interventional techniques commonly used to treat stroke to implant the Stentrode via the jugular vein, the press release said.

https://twitter.com/tomoxl/status/1473809025254846467?s=20
Imagine commanding a computer or playing a game without using your fingers, voice, or eyes. It sounds like science fiction, but it’s becoming a little more real every day thanks to a handful of companies making tech that detects neural activity and converts those measurements into signals computers can read.

One of those companies — NextMind — has been shipping its version of the mind-reading technology to developers for over a year. First unveiled at CES in Las Vegas, the company’s neural interface is a black circle that can read brain waves when strapped to the back of a user’s head. The device isn’t quite ready for prime time yet, but it’s bound to make its way into consumer goods sooner rather than later.

Neural interfaces are already here

Neural interfaces have the potential to support a wide range of activities in a variety of settings. A company called Mudra, for example, has developed a band for the Apple Watch that enables users to interact with the device by simply moving their fingers — or just thinking about moving them. That means someone with the device can navigate music or place calls without having to interrupt whatever they’re doing at the time. It also opens tremendous opportunities for making tech available to people with disabilities who have trouble with other user interfaces.
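The core conversion these devices perform — neural measurements in, computer-readable commands out — can be sketched in a few lines. The signal shape, threshold, and command names below are invented for illustration and have no relation to NextMind's or Mudra's actual decoding:

```python
def decode(samples, threshold=0.6):
    """Hypothetical decoder: turn a window of normalized neural
    amplitude samples into a discrete UI command."""
    mean = sum(samples) / len(samples)
    if mean > threshold:
        return "select"   # sustained high activity -> treat as a click
    return "idle"         # anything else -> no action

print(decode([0.7, 0.8, 0.75]))  # select
print(decode([0.1, 0.2, 0.15]))  # idle
```

Real products replace this threshold with a trained classifier over many electrode channels, but the contract is the same: a stream of measurements goes in, and discrete events an operating system can consume come out.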
ARTIFICIAL INTELLIGENCE could perform more quickly, accurately, reliably, and impartially than humans on a wide range of problems, from detecting cancer to deciding who receives an interview for a job. But AIs have also suffered numerous, sometimes deadly, failures. And the increasing ubiquity of AI means that failures can affect not just individuals but millions of people.

Here are seven examples of AI failures and what current weaknesses they reveal about artificial intelligence. Scientists discuss possible ways to deal with some of these problems; others currently defy explanation or may, philosophically speaking, lack any conclusive solution altogether.
On November 29, CNN reported that scientists claimed the world's first living robots were now able to reproduce. But what sounds like the start of a dystopian nightmare turns out to be a lot less worrying on closer inspection.

Article with more information on xenobots here:
https://www.pnas.org/content/118/49/e2112672118
Your never-ending threads become so tiring. Blessed be the thread ignore button...
For a few years now, Musk has been pushing the idea that Tesla is the world’s leading company when it comes to real-world applications of artificial intelligence. He describes Tesla’s fleet of vehicles equipped with sensors and computers for self-driving as “robots on wheels.”

Through this “real-world application,” the company has also been able to attract world-class AI talent, and Musk boasts that Tesla has the best AI team on the planet. At Tesla’s AI Day last year, the automaker unveiled its latest supercomputer, Dojo, to train its neural nets. It also announced that it plans to build a ‘Tesla Bot,’ a humanoid robot meant to do general tasks and repetitive work.

Now Musk took to Twitter this morning to announce that Tesla might go a step further and get involved in Artificial General Intelligence (AGI): “Tesla AI might play a role in AGI, given that it trains against the outside world, especially with the advent of Optimus.”

Optimus, or Optimus Subprime, is the codename that Musk gave to the Tesla Bot project. This is somewhat surprising considering the many warnings that Musk has issued about creating AGI and the risks to humanity that come with it.

Along with the announcement that Tesla might work on AGI, Musk also added on Twitter that Tesla will make sure to “decentralize” control of Tesla Bots: “Will do our best. Decentralized control of the robots will be critical.”

The comment was made in response to someone mentioning “summoning the demon,” which is what Musk called creating an AGI that would turn against humanity. Decentralizing the control of Tesla Bots would keep this “demon” from gaining access to an army – preventing a Terminator-like scenario.