Even if it were finite, you can't simulate it completely because the simulation would then be part of the universe, leading to an infinite recursion.
The wavelet transform is an invaluable tool in signal processing, with applications in a variety of fields, from hydrodynamics to neuroscience. This revolutionary method allows us to uncover structures that are present in the signal but hidden behind the noise. The key feature of the wavelet transform is that it decomposes a function in both the time and frequency domains. In this video we will see how to build a wavelet toolkit step by step and discuss important implications and prerequisites along the way.

This is my entry for Summer of Math Exposition 2022 (#SoME2).

My name is Artem; I'm a computational neuroscience student and researcher at Moscow State University. Twitter: @artemkrsv

OUTLINE:
00:00 Introduction
01:55 Time and frequency domains
03:27 Fourier Transform
05:08 Limitations of Fourier
08:45 Wavelets - localized functions
10:34 Mathematical requirements for wavelets
12:17 Real Morlet wavelet
13:02 Wavelet transform overview
14:08 Mother wavelet modifications
15:46 Computing local similarity
18:08 Dot product of functions?
21:07 Convolution
24:55 Complex numbers
27:56 Wavelet scalogram
30:46 Uncertainty & Heisenberg boxes
33:16 Recap and conclusion
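The core operation the outline builds toward (scaling a Morlet wavelet, convolving it with the signal, and reading off local similarity per frequency) can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions, not the video's actual code; the `morlet` and `wavelet_transform` names and the parameter choices are hypothetical.

```python
import numpy as np

def morlet(t, f, sigma):
    # Real Morlet wavelet: a cosine carrier inside a Gaussian envelope.
    return np.cos(2 * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))

def wavelet_transform(signal, fs, freqs):
    # Convolve the signal with a scaled wavelet for each frequency;
    # each row of the result measures local similarity at that frequency.
    rows = []
    for f in freqs:
        sigma = 1.0 / f                        # narrower wavelet at higher f
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        w = morlet(t, f, sigma)
        w /= np.sqrt(np.sum(w**2))             # unit energy, rows comparable
        rows.append(np.convolve(signal, w, mode="same"))
    return np.abs(np.array(rows))              # scalogram magnitudes

fs = 100.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)               # pure 10 Hz tone
scalogram = wavelet_transform(sig, fs, [5.0, 10.0, 20.0])
# Energy concentrates in the 10 Hz row of the scalogram.
```

Plotting `scalogram` as an image gives the time-frequency picture the video calls a wavelet scalogram; the complex Morlet version simply swaps the cosine for a complex exponential.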
DrEureka might signal the start of a transition from humans training robots to machines teaching machines. Nvidia has demonstrated how LLMs can have immense impact, even with their flaws. This video is about one paper, one concept ... and it's a genius one.
OpenAI held its most recent product update event and revealed a new GPT-4o AI model and a desktop version of its ChatGPT software.

0:00 Intro
0:21 ChatGPT Desktop App
0:52 GPT-4o
1:17 ChatGPT Vision
1:31 ChatGPT's Voice Can Generate Emotive Styles
1:58 GPT-4o Real-Time Language Translation
2:20 GPT-4o Can Read Emotions
OpenAI just released their new flagship model called GPT-4o, plus they announced a new desktop app and massive updates to their phone app. Here is everything you need to know.

Links:
https://openai.com/index/spring-update/
https://platform.openai.com/playground

0:00 Start
0:39 GPT-4o Details
3:35 Free Upgrades for All!
6:14 How to Use GPT-4o
9:02 AI Advantage Community
From new AI features to Android updates, Google had a lot to share during this year's I/O. We got a look at Google's Project Astra, a multimodal AI assistant that the company hopes will become a do-everything virtual assistant. And Veo, Google's answer to OpenAI's Sora: a new generative AI model that can output 1080p video. Plus, a look at Gemini 1.5 Flash, the new multimodal model, lighter than Gemini Pro, optimized for "narrow, high-frequency, low-latency tasks." Also, using on-device Gemini Nano AI smarts, Google says Android phones will be able to help you avoid scam calls by looking out for red flags. Here's everything you missed.

#Google #GoogleIO #Technology

0:00 Intro
00:22 Ask Photos Search
01:20 Gemini 1.5 Pro
02:16 Notebook LM
03:26 Gemini 1.5 Flash
03:43 Project Astra
04:47 Imagen 3
05:08 Music AI Sandbox
05:20 Veo AI Generated Video
06:15 6th Gen TPUs Trillium
06:37 Multi-step Reasoning in Google Search
07:35 Ask with Video
08:12 New Gmail Mobile Features
10:10 Gemini Teammate
10:45 Gemini-powered teammate called Chip
11:10 Gemini Live (video AI)
11:59 Trip planning with Gemini
11:46 Gemini Advanced & Circle to Search
13:53 Contextual awareness
14:58 Talk back & Gemini Nano
15:26 Gemini Pro and Flash prices
15:50 PaliGemma and Gemma 2 availability
16:05 SynthID
16:19 LearnLM
16:27 Gems
16:38 How many times was AI said?
Aha! I now recognise that we have a subtly different interpretation of virtual universe. Any system that manipulates data rather than real stuff is working in a virtual universe, the value of which depends on how that maps to the chosen part of the real universe. AI, as currently implemented, just happens to be a bit less choosy than a human about the source, credibility and relevance of its data.
GenAI is just the beginning; what comes next is AI agents. When Andrew Ng and Andrej Karpathy speak, we should listen! Simple.
📝 Check out AlphaFold 3 here:
https://dpmd.ai/yt-tmp-alphafold3
📝 Or try it out through the AlphaFold server for free:
https://alphafoldserver.com/

This video was made in partnership with Google DeepMind.

My paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD
Results of Neuralink's first human trial have been revealed! There's good news and bad news... And a lot more still to come.
This conversation from Davos is about what the potential limits of generative AI can or should be, how far away we are from other transformative advances in AI, and what we even mean when we say "generative AI."

0:00 Introduction
0:33 The Current Challenges With AI
2:44 When Will Cognitive AI For Humans Be Available?
6:51 How To Develop Better AI Models
11:39 Concerns With AI: What Is The Worst That Can Happen?
17:58 Other Social Technologies On The Rise
One should be very afraid of anyone who uses AI as an excuse for not accepting liability for his actions.
The tech industry's obsession with AI is hitting a major limitation: power consumption. Training and using AI models is proving to be extremely energy intensive. A single GPT-4 request consumes as much energy as charging 60 iPhones, 1,000x more than a traditional Google search. By 2027, global AI processing could consume as much energy as the entire country of Sweden. In contrast, the human brain is far more efficient: 17 hours of intense thought uses the same energy as one GPT-4 request. This has spurred a race to develop AI that more closely mimics biological neural systems.

The high power usage stems from how artificial neural networks (ANNs) are structured, with input, hidden, and output layers of interconnected nodes. Information flows forward through the network, which is trained using backpropagation to adjust weights and biases to minimize output errors. ANNs require massive computation; the GPT-3 language model has 175 billion parameters, and training it consumed 220 MWh of energy. To improve efficiency, research is shifting to spiking neural networks (SNNs), which communicate through discrete spikes like biological neurons. SNNs only generate spikes when needed, greatly reducing energy use compared to ANNs constantly recalculating. SNN neurons have membrane potentials that trigger spikes when a threshold is exceeded, with refractory periods between spikes. This allows SNNs to produce dynamic, event-driven outputs. However, SNNs are difficult to train with standard ANN methods.

SNNs perform poorly on traditional computer architectures. Instead, neuromorphic computing devices are being developed that recreate biological neuron properties in hardware. These use analog processing in components like memristors and spintronic devices to achieve neuron-like behavior with low power. Early neuromorphic chips from IBM and Intel have supported millions of simulated neurons with 50-100x better energy efficiency than GPUs.
As of 2024, no commercially available analog AI chips exist, but a hybrid analog-digital future for ultra-efficient AI hardware seems imminent. This could enable revolutionary advances in fields like robotics and autonomous systems in the coming years.
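The membrane-potential mechanism described above (integrate input, spike at a threshold, then go silent for a refractory period) can be sketched with a leaky integrate-and-fire neuron, the simplest spiking model. This is an illustrative toy under my own assumed constants, not any particular chip's implementation.

```python
import numpy as np

def lif_run(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0, t_ref=0.005):
    # Leaky integrate-and-fire: the membrane potential leaks toward rest,
    # integrates the input current, and emits a spike when it crosses
    # threshold, after which the neuron is silent for a refractory period.
    v = 0.0
    refractory = 0.0
    spike_times = []
    for i, I in enumerate(current):
        if refractory > 0:
            refractory -= dt              # silent during refractory period
            continue
        v += dt / tau * (-v + I)          # leaky integration
        if v >= v_thresh:
            spike_times.append(i * dt)    # event-driven output: a spike
            v = v_reset
            refractory = t_ref
    return spike_times

# Constant drive above threshold yields a regular spike train;
# zero input yields no spikes at all, which is where the energy savings
# of event-driven SNN hardware come from.
spikes = lif_run(np.full(1000, 2.0))      # 1 s of constant input, I = 2
silent = lif_run(np.zeros(1000))          # silent == []
```

Note the catch mentioned above: the threshold makes the spike output non-differentiable, which is why standard backpropagation does not apply directly and SNN training needs workarounds such as surrogate gradients.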
Backside Power Delivery promises huge efficiency and performance advantages for modern computer chips, but it also changes the semiconductor manufacturing process. Let's take a deep dive into Intel's PowerVia technology.

0:00 Intro
0:55 Current semiconductor manufacturing
3:27 The problem with the frontside silicon & metal layers
7:35 Backside Power Delivery manufacturing
11:06 Advantages of BSPD / Intel PowerVia / Blue Sky Creek
14:24 Design-Technology Co-Optimization / cell area scaling
15:54 The Future of Semiconductor manufacturing