Smartphone Cameras Might Soon Capture Polarization Data

Normal cameras can process color and light. New tech from Metalenz collects information that could help your phone better understand the world around you.

Imagine a camera that's mounted on your car being able to identify black ice on the road, giving you a heads-up before you drive over it. Or a cell phone camera that can tell whether a lesion on your skin is possibly cancerous. Or the ability for Face ID to work even when you have a face mask on. These are all possibilities Metalenz is touting with its new PolarEyes polarization technology.
Reinterpreting Our Brain’s Body Maps

Summary: The brain relies on multiple body maps, depending on which motor system is being used.

Our brain maps out our body to facilitate accurate motor control; disorders of this body map result in motor deficits. For a century, the body map has been thought to apply to all types of motor actions. Yet scientists have begun to query how the body map operates when executing different motor actions, such as moving your eyes and hands together.
How Infinite Series Reveal the Unity of Mathematics

Infinite sums are among the most underrated yet powerful concepts in mathematics, capable of linking concepts across math’s vast web.
When I was a boy, my dad told me that math is like a tower. One thing builds on the next. Addition builds on numbers. Subtraction builds on addition. And on it goes, ascending through algebra, geometry, trigonometry and calculus, all the way up to “higher math” — an appropriate name for a soaring edifice.

But once I learned about infinite series, I could no longer see math as a tower. Nor is it a tree, as another metaphor would have it. Its different parts are not branches that split off and go their separate ways. No — math is a web. All its parts connect to and support each other. No part of math is split off from the rest. It’s a network, a bit like a nervous system — or, better yet, a brain.
DeepMind has created an AI capable of writing code to solve arbitrary problems posed to it, as proven by participating in a coding challenge and placing — well, somewhere in the middle. It won’t be taking any software engineers’ jobs just yet, but it’s promising and may help automate basic tasks.

The team at DeepMind, a subsidiary of Alphabet, is aiming to create intelligence in as many forms as it can, and of course these days the task to which many of our great minds are bent is coding. Code is a fusion of language, logic and problem-solving that is both a natural fit for a computer’s capabilities and a tough one to crack.

Of course it isn’t the first to attempt something like this: OpenAI has its own Codex natural-language coding project, and it powers both GitHub Copilot and a test from Microsoft to let GPT-3 finish your lines.

DeepMind’s paper throws a little friendly shade on the competition in describing why it is going after the domain of competitive coding:

Quote: "Recent large-scale language models have demonstrated an impressive ability to generate code, and are now able to complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond simply translating instructions into code."

OpenAI may have something to say about that (and we can probably expect a riposte in its next paper on these lines), but as the researchers go on to point out, competitive programming problems generally involve a combination of interpretation and ingenuity that isn’t really on display in existing code AIs.

To take on the domain, DeepMind trained a new model using selected GitHub libraries and a collection of coding problems and their solutions. Simply said, but not a trivial build. When it was complete, they put it to work on 10 recent (and, needless to say, unseen by the AI) contests from Codeforces, which hosts this kind of competition.
Movies. Video games. YouTube videos. All of them work because we accidentally figured out a way to fool your brain’s visual processing system, and you don’t even know it’s happening. In this video, I talk to neuroscientist David Eagleman about the secret illusions that make the moving picture possible.
https://spectrum.ieee.org/neuromorphic-computing-ai-device

Reconfigurable AI Device Shows Brainlike Promise

An adaptable new device can transform into all the key electric components needed for artificial-intelligence hardware, for potential use in robotics and autonomous systems, a new study finds.

Brain-inspired or "neuromorphic" computer hardware aims to mimic the human brain's exceptional ability to adaptively learn from experience and rapidly process information in an extraordinarily energy-efficient manner. These features of the brain are due in large part to its plastic nature — its ability to evolve its structure and function over time through activity such as neuron formation or "neurogenesis."
https://futurism.com/the-byte/openai-already-sentient

OpenAI Chief Scientist Says Advanced AI May Already Be Conscious

"It may be that today's large neural networks are slightly conscious."

OpenAI’s top researcher has made a startling claim this week: that artificial intelligence may already be gaining consciousness.

Ilya Sutskever, chief scientist of the OpenAI research group, tweeted today that “it may be that today’s large neural networks are slightly conscious.”

Needless to say, that’s an unusual point of view. The widely accepted idea among AI researchers is that the tech has made great strides over the past decade, but still falls far short of human intelligence, never mind being anywhere close to experiencing the world consciously.

It’s possible that Sutskever was speaking facetiously, but it’s also conceivable that as the top researcher at one of the foremost AI groups in the world, he’s already looking downrange.
https://futurism.com/mit-researcher-conscious-ai

MIT Researcher Says Yes, Advanced Neural Networks May Be Achieving Consciousness

This debate just keeps getting spicier.

Amid a maelstrom set off by a prominent AI researcher saying that some AI may already be achieving limited consciousness, one MIT AI researcher is saying the concept might not be so far-fetched.

Our story starts with Ilya Sutskever, head scientist at the Elon Musk-cofounded research group OpenAI. On February 9, Sutskever tweeted that “it may be that today’s large neural networks are slightly conscious.”

In response, many others in the AI research space decried the OpenAI scientist’s claim, suggesting that it was harming machine learning’s reputation and amounted to little more than a “sales pitch” for OpenAI’s work.

That backlash has now generated its own clapback from MIT computer scientist Tamay Besiroglu, who is bucking the trend by coming to Sutskever’s defense.

“Seeing so many prominent [machine learning] folks ridiculing this idea is disappointing,” Besiroglu tweeted. “It makes me less hopeful in the field’s ability to seriously take on some of the profound, weird and important questions that they’ll undoubtedly be faced with over the next few decades.”

Besiroglu also pointed to a preprint study in which he and some collaborators found that machine learning models have roughly doubled in intelligence every six months since 2010.

Strikingly, Besiroglu drew a line on the chart of that progress at the point where, he said, the models may have become “maybe slightly conscious.”
https://futurism.com/human-level-artificial-intelligence-agi

When Will We Have Artificial Intelligence As Smart as a Human? Here’s What Experts Think

Robots in the movies can think creatively, continue learning over time, and maybe even pass for conscious. Why don't we have that yet?

At The Joint Multi-Conference on Human-Level Artificial Intelligence held last month in Prague, AI experts and thought leaders from around the world shared their goals, hopes, and progress towards human-level AI (HLAI), which is either the last stop before true AGI or the same thing, depending on who you ask.

Either way, most experts think it’s coming — sooner rather than later. In a poll of conference attendees, AI research companies GoodAI and SingularityNET found that 37 percent of respondents think people will create HLAI within 10 years. Another 28 percent think it will take 20 years. Just two percent think HLAI will never exist.
Abstract

We demonstrate that it is possible to perform face-related computer vision in the wild using synthetic data alone.

The community has long enjoyed the benefits of synthesizing training data with graphics, but the domain gap between real and synthetic data has remained a problem, especially for human faces. Researchers have tried to bridge this gap with data mixing, domain adaptation, and domain-adversarial training, but we show that it is possible to synthesize data with minimal domain gap, so that models trained on synthetic data generalize to real in-the-wild datasets.

We describe how to combine a procedurally-generated parametric 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism and diversity. We train machine learning systems for face-related tasks such as landmark localization and face parsing, showing that synthetic data can both match real data in accuracy and open up new approaches where manual labelling would be impossible.
https://www.protocol.com/enterprise/metaverse-zuckerberg-computing-infrastructure

Mark Zuckerberg’s metaverse will require computing tech no one knows how to build

To achieve anything close to what metaverse boosters promise, experts believe nearly every kind of chip will have to be an order of magnitude more powerful than it is today.

The technology necessary to power the metaverse doesn’t exist.

It will not exist next year. It will not exist in 2026. The technology might not exist in 2032, though it’s likely we will have a few ideas as to how we might eventually design and manufacture chips that could turn Mark Zuckerberg’s fever dreams into reality by then.

Over the past six months, a disconnect has formed between the way corporate America is talking about the dawning concept of the metaverse and its plausibility, based on the nature of the computing power that will be necessary to achieve it. To get there will require immense innovation, similar to the multi-decade effort to shrink personal computers to the size of an iPhone.

Microsoft hyped its $68.7 billion bid for Activision Blizzard last month as a metaverse play. In October, Facebook transformed its entire corporate identity to revolve around the metaverse. Last year, Disney even promised to build its own version of the metaverse to “allow storytelling without boundaries.”
Zuckerberg’s explanation of what the metaverse will ultimately look like is vague, but includes some of the tropes its boosters roughly agree on: He called it “[an] embodied internet that you’re inside of rather than just looking at” that would offer everything you can already do online and “some things that don’t make sense on the internet today, like dancing.”
If the metaverse sounds vague, that’s because it is. That description could mutate over time to apply to lots of things that might eventually happen in technology. And arguably, something like the metaverse may already exist in an early form produced by video game companies.

Roblox and Epic Games’ Fortnite play host to millions — albeit in virtually separated groups of a few hundred people — viewing live concerts online. Microsoft Flight Simulator has created a 2.5-petabyte virtual replica of the world that is updated in real time with flight and weather data.

But even today’s most complex metaverse-like video games require a tiny fraction of the processing and networking performance we would need to achieve the vision of a persistent world accessed by billions of people, all at once, across multiple devices, screen formats and in virtual or augmented reality.
https://futurism.com/the-byte/ai-faces-trustworthy

Scientists Warn That New AI-Generated Faces Are Seen as More Trustworthy Than Real Ones

by Tony Tran

As if the possibility that AI might already be conscious wasn’t creepy enough, researchers have announced that AI-generated faces have become so sophisticated that many people think they’re more trustworthy than actual humans.

A pair of researchers discovered that a neural network dubbed StyleGAN2 is capable of creating faces indistinguishable from the real thing, according to a press release from Lancaster University. In fact, in a jarring twist, participants seemed to find AI-generated faces more trustworthy than the faces of actual people.

“Our evaluation of the photo realism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable — and more trustworthy — than real faces,” the researchers, who will be publishing a paper on their findings in the journal PNAS, said in the release.
Machine learning typically requires tons of examples. To get an AI model to recognize a horse, you need to show it thousands of images of horses. This is what makes the technology computationally expensive — and very different from human learning. A child often needs to see just a few examples of an object, or even only one, before being able to recognize it for life.

In fact, children sometimes don’t need any examples to identify something. Shown photos of a horse and a rhino, and told a unicorn is something in between, they can recognize the mythical creature in a picture book the first time they see it.

Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this — a process the researchers call “less than one”-shot, or LO-shot, learning. In other words, an AI model should be able to accurately recognize more objects than the number of examples it was trained on. That could be a big deal for a field that has grown increasingly expensive and inaccessible as the data sets used become ever larger.

How “less than one”-shot learning works

The researchers first demonstrated this idea while experimenting with the popular computer-vision data set known as MNIST. MNIST, which contains 60,000 training images of handwritten digits from 0 to 9, is often used to test out new ideas in the field.

In a previous paper, MIT researchers had introduced a technique to “distill” giant data sets into tiny ones, and as a proof of concept, they had compressed MNIST down to only 10 images. The images weren’t selected from the original data set but carefully engineered and optimized to contain an equivalent amount of information to the full set. As a result, when trained exclusively on the 10 images, an AI model could achieve nearly the same accuracy as one trained on all MNIST’s images.
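The soft-label idea behind LO-shot learning can be sketched in a few lines. The snippet below is only an illustration in the spirit of the paper, not its actual SLaPkNN implementation: the prototype coordinates, soft labels, and query points are all invented. Two training points each carry a probability distribution over three classes, and a distance-weighted classifier can still assign a query to the third class even though that class has no training example of its own.

```python
import numpy as np

# Two "prototype" points with soft labels over THREE classes, so the
# classifier can recognize more classes than it has training examples.
prototypes = np.array([
    [0.0, 0.0],   # prototype A
    [4.0, 0.0],   # prototype B
])
soft_labels = np.array([
    [0.6, 0.0, 0.4],   # A: mostly class 0, partly class 2
    [0.0, 0.6, 0.4],   # B: mostly class 1, partly class 2
])

def classify(query, prototypes, soft_labels):
    """Distance-weighted vote: closer prototypes contribute more of their soft label."""
    distances = np.linalg.norm(prototypes - query, axis=1)
    weights = 1.0 / (distances + 1e-9)      # inverse-distance weighting
    scores = weights @ soft_labels          # blend the soft labels
    return int(np.argmax(scores))

print(classify(np.array([0.5, 0.0]), prototypes, soft_labels))  # -> 0 (near A)
print(classify(np.array([3.5, 0.0]), prototypes, soft_labels))  # -> 1 (near B)
print(classify(np.array([2.0, 0.0]), prototypes, soft_labels))  # -> 2 (between them)
```

The point of the toy example is that the region halfway between the two prototypes is claimed by class 2, which appears only as a fraction of the soft labels; that is the sense in which fewer examples than classes can suffice.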
Researchers have now directly observed what happens inside a brain learning that kind of emotionally charged response. In a new study published in January in the Proceedings of the National Academy of Sciences, a team at the University of Southern California was able to visualize memories forming in the brains of laboratory fish, imaging them under the microscope as they bloomed in beautiful fluorescent greens. From earlier work, they had expected the brain to encode the memory by slightly tweaking its neural architecture. Instead, the researchers were surprised to find a major overhaul in the connections.

What they saw reinforces the view that memory is a complex phenomenon involving a hodgepodge of encoding pathways. But it further suggests that the type of memory may be critical to how the brain chooses to encode it — a conclusion that may hint at why some kinds of deeply conditioned traumatic responses are so persistent, and so hard to unlearn.
Researchers have mapped hundreds of semantic categories to the tiny bits of the cortex that represent them in our thoughts and perceptions. What they discovered might change our view of memory.

In 2016, neuroscientists mapped how pea-size regions of the cortex respond to hundreds of semantic concepts. They’re now building on that work to understand the relationship between visual, linguistic and memory representations in the brain.

A team of neuroscientists created a semantic map of the brain that showed in remarkable detail which areas of the cortex respond to linguistic information about a wide range of concepts, from faces and places to social relationships and weather phenomena. When they compared that map to one they made showing where the brain represents categories of visual information, they observed meaningful differences between the patterns. And those differences looked exactly like the ones reported in the studies on vision and memory.

The finding, published last October in Nature Neuroscience, suggests that in many cases, a memory isn’t a facsimile of past perceptions that gets replayed. Instead, it is more like a reconstruction of the original experience, based on its semantic content.
https://ai.googleblog.com/2021/09/toward-fast-and-accurate-neural.html

As neural network models and training data size grow, training efficiency is becoming an important focus for deep learning. For example, GPT-3 demonstrates remarkable capability in few-shot learning, but it requires weeks of training with thousands of GPUs, making it difficult to retrain or improve. What if, instead, one could design neural networks that were smaller and faster, yet still more accurate?

In this post, we introduce two families of models for image recognition that leverage neural architecture search, and a principled design methodology based on model capacity and generalization. The first is EfficientNetV2 (accepted at ICML 2021), which consists of convolutional neural networks that aim for fast training speed for relatively small-scale datasets, such as ImageNet1k (with 1.28 million images). The second family is CoAtNet, which are hybrid models that combine convolution and self-attention, with the goal of achieving higher accuracy on large-scale datasets, such as ImageNet21k (with 13 million images) and JFT (with billions of images). Compared to previous results, our models are 4-10x faster while achieving new state-of-the-art 90.88% top-1 accuracy on the well-established ImageNet dataset. We are also releasing the source code and pretrained models on the Google AutoML GitHub.

We observe two key insights from our study: (1) depthwise convolution and self-attention can be naturally unified via simple relative attention, and (2) vertically stacking convolution layers and attention layers in a way that considers their capacity and computation required in each stage (resolution) is surprisingly effective in improving generalization, capacity and efficiency. Based on these insights, we have developed a family of hybrid models with both convolution and attention, named CoAtNets (pronounced “coat” nets). The following figure shows the overall CoAtNet network architecture:

Overall CoAtNet architecture. Given an input image with size HxW, we first apply convolutions in the first stem stage (S0) and reduce the size to H/2 x W/2. The size continues to reduce with each stage. Ln refers to the number of layers. Then, the early two stages (S1 and S2) mainly adopt MBConv building blocks consisting of depthwise convolution. The later two stages (S3 and S4) mainly adopt Transformer blocks with relative self-attention. Unlike the previous Transformer blocks in ViT, here we use pooling between stages, similar to Funnel Transformer. Finally, we apply a classification head to generate the class prediction.

Conclusion and Future Work

In this post, we introduce two families of neural networks, named EfficientNetV2 and CoAtNet, which achieve state-of-the-art performance on image recognition. All EfficientNetV2 models are open sourced and the pretrained models are also available on TFHub. CoAtNet models will also be open-sourced soon. We hope these new neural networks can benefit the research community and the industry. In the future we plan to further optimize these models and apply them to new tasks, such as zero-shot learning and self-supervised learning, which often require fast models with high capacity.
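To make the stage layout in that caption concrete, here is a rough structural sketch in PyTorch of the described pattern: a convolutional stem (S0), convolutional early stages (S1, S2), attention-based later stages (S3, S4) with pooling between stages, and a classification head. It is only an illustration under simplifying assumptions, not the released CoAtNet code: the blocks are stand-ins (a plain depthwise-separable convolution instead of MBConv, standard self-attention instead of relative attention), and the layer counts and widths are invented.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Simplified stand-in for an MBConv block: depthwise + pointwise convolution
    with a residual connection (the real MBConv also expands channels and uses
    squeeze-and-excitation)."""
    def __init__(self, channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.act(self.norm(self.pointwise(self.depthwise(x))))

class AttentionBlock(nn.Module):
    """Simplified stand-in for a Transformer block: standard self-attention over
    spatial positions (the paper uses relative self-attention)."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)      # (B, H*W, C) token sequence
        out, _ = self.attn(seq, seq, seq)
        seq = self.norm(seq + out)              # residual + norm
        return seq.transpose(1, 2).reshape(b, c, h, w)

class CoAtNetLikeSketch(nn.Module):
    """Stage layout only: conv stem (S0), conv stages (S1, S2),
    attention stages (S3, S4), pooling between stages, classification head."""
    def __init__(self, num_classes: int = 1000, channels: int = 64):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, stride=2, padding=1)   # S0: H/2 x W/2
        self.s1 = ConvBlock(channels)                                # S1: convolution
        self.s2 = ConvBlock(channels)                                # S2: convolution
        self.s3 = AttentionBlock(channels)                           # S3: attention
        self.s4 = AttentionBlock(channels)                           # S4: attention
        self.pool = nn.MaxPool2d(2)                                  # downsample between stages
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, num_classes))  # classification head

    def forward(self, x):
        x = self.stem(x)               # H/2
        x = self.pool(self.s1(x))      # H/4
        x = self.pool(self.s2(x))      # H/8
        x = self.pool(self.s3(x))      # H/16
        x = self.s4(x)
        return self.head(x)

model = CoAtNetLikeSketch(num_classes=10)
print(model(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 10])
```

The sketch keeps the key design choice the post highlights: cheap convolutional blocks operate on the high-resolution early stages, while attention is reserved for the later, lower-resolution stages where the quadratic cost over spatial positions is affordable.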
Every browser knows how to load a .jpg, a .gif, a .png, and more. The formats the data exists in are agreed upon. If you point a browser at some data in a format it doesn’t understand, it’s going to fail to load and draw the image, just like you can’t expect Instagram to know how to display an .stl meant for 3D printing.

This is a crucial concept, which is going to come up again and again in these articles: data doesn’t exist in isolation. A vinyl record and a CD might both have the same music on them, but a record player can’t handle a CD and a vinyl record doesn’t fit into the slot on a CD player (don’t try, you will regret it).

Anytime you see data, you need to think of three things: the actual content, the format it is in, and the “machine” that can recognize that format. You can think of the format as the “rules” the data needs to follow in order for the machine to read it.

The thing about formats is that they need to be standardized. They’re agreed upon by committees, usually. And committees are slow and political… and of course, different members might have very different opinions on what needs to be in the standard – and for good reasons!

One of the common daydreams for metaverses is that a player should be able to take their avatar from one world to another. But… what format avatar? A Nintendo Mii and a Facebook profile picture and an EVE Online character and a Final Fantasy XIV character don’t just look different. They are different. FFXIV and World of Warcraft are fairly similar games in a lot of ways, but the list of equipment slots, possible customizations, and so on are hugely different. These games cannot load each other’s characters because they do not agree on what a character is.
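As a tiny illustration of the content/format/“machine” distinction above: many file formats announce themselves with a few “magic” bytes at the start, and a loader decides from those bytes whether it even knows how to decode the rest. The signatures below are the standard PNG/JPEG/GIF magic numbers; the function name and the example inputs are just made up for this sketch.

```python
# Sketch: the same bytes mean nothing until some "machine" recognizes their
# format. A browser-style loader sniffs the leading magic bytes to decide
# whether it can decode the data at all.
MAGIC_NUMBERS = {
    b"\x89PNG\r\n\x1a\n": "image/png",   # PNG signature
    b"\xff\xd8\xff": "image/jpeg",       # JPEG start-of-image marker
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def sniff_format(data: bytes) -> str:
    for signature, mime_type in MAGIC_NUMBERS.items():
        if data.startswith(signature):
            return mime_type
    return "unknown"   # the "machine" has no rules for this data, so it won't draw it

print(sniff_format(b"\x89PNG\r\n\x1a\n" + b"...image data..."))  # image/png
print(sniff_format(b"solid my_cube ..."))                        # unknown (an ASCII .stl)
```

The avatar problem is the same idea one level up: without an agreed format (and a “machine” in every game that can read it), a character is just bytes that nothing else knows how to interpret.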
At Tesla's AI day, Ashok Elluswamy, Director of Autopilot Software, went into detail about the problems that Tesla has getting a car to navigate a parking lot and find an open parking space. Why is this so difficult? What does it have to do with computer vision? And why is auto parking, auto summon, and reverse summon so critical to Tesla's robotaxi ambitions?!
https://otec.uoregon.edu/data-wisdom.htm

Computers are often called data processing machines or information processing machines. People understand and accept the fact that computers are machines designed for the input, storage, processing, and output of data and information.

However, some people also think of computers as knowledge processing machines and even explore what it might mean for a computer to have wisdom. For example, here is a quote from Dr. Yogesh Malhotra of the BRINT Institute:

"Knowledge Management caters to the critical issues of organizational adaption, survival and competence in face of increasingly discontinuous environmental change.... Essentially, it embodies organizational processes that seek synergistic combination of data and information processing capacity of information technologies, and the creative and innovative capacity of human beings."

The following quotation is from the Atlantic Canada Conservation Data Centre, a non-profit organization established in 1999:

"Individual bits or 'bytes' of 'raw' biological data (e.g. the number of individual plants of a given species at a given location) do not by themselves inform the human mind. However, drawing various data together within an appropriate context yields information that may be useful (e.g. the distribution and abundance of the plant species at various points in space and time). In turn, this information helps foster the quality of knowing (e.g. whether the plant species is increasing or decreasing in distribution and abundance over space and time). Knowledge and experience blend to become wisdom--the power of applying these attributes critically or practically to make decisions."

Thus, we are led to think about Data, Information, Knowledge, and Wisdom as we explore the capabilities and limitations of IT systems.