Lumiere is a text-to-video AI model from Google that generates realistic videos. It's a space-time diffusion model that transforms text and images into AI-generated videos. Lumiere can create video clips up to five seconds long and can animate still images.
My favorite moments of the day
Google Gemini Ultra is finally out: the most powerful iteration of the tech giant's latest AI. Expected to compete with ChatGPT and beat it, the most advanced version of Gemini is said to be the pinnacle of the current generation of artificial intelligence. But is it? Here's Google Gemini Ultra, explained in 2 minutes.
00:00 Intro
00:40 How to access
01:06 Features
01:58 Final thoughts
In this video, Dr. Michael Chua discusses Neuralink's bionic vision brain implants and why this technology has the potential to change humanity forever.
0:00 Introduction
1:18 What are brain-computer interfaces?
6:20 Second Sight's Argus Implant
12:11 Neuralink
Some sceptics said that cars would never drive themselves.
Elon Musk's Bionic Eyes Are Here.
Quote from: hamdani yusuf on 16/02/2024 03:30:03
Elon Musk's Bionic Eyes Are Here.
About 40 years behind the brain-computer interfaces I saw at Vienna Technical University.
Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. We'll be taking several important safety steps ahead of making Sora available in OpenAI's products. We are working with red teamers (domain experts in areas like misinformation, hateful content, and bias) who are adversarially testing the model. All the clips in this video were generated directly by Sora without modification. Learn more about Sora: https://openai.com/sora
Chapters:
00:09 Dancing Kangaroo
00:22 Snow Dogs
00:43 River Birds
00:55 Petri Dish Pandas
01:08 Big Sur
01:21 Movie Trailer Astronaut
01:40 Coffee Pirates
01:57 Tokyo Snow
02:09 Cyberpunk Robot
02:30 Candle Monster
02:43 The Offroader
03:04 Paper Origami
03:27 Nosy Cat
03:38 Woolly Mammoths
03:51 Lagos
04:14 Television Gallery
04:37 Cloud Reader
04:59 Miniature Construction
05:11 Gold Rush Aerial
05:38 Fairytale Furball
05:49 Amalfi Coast Aerial
06:12 Tokyo Tourist
06:31 Blossoming Flower
06:42 Art Museum
07:05 Solemn Gentleman
07:28 Eye Close-up
07:47 Chinese New Year
07:58 Surfing Otter
08:17 Dalmatian in the Window
08:31 Tokyo Train
08:42 Zen Garden Gnome
08:53 Flock of Paper Planes
09:16 Lost Lone Wolf
Sora, the text-to-video model from OpenAI, is here. I go over the bonus details and demos released in the last few hours, and the technical paper. I'll also give you a glimpse of what's to come next and a host of implications. Even if you've seen every Sora video, I bet you won't know all of this!
So we are creating a "successor superspecies" without the consent of 99.99% of humanity.

I am very impressed that Silicon Valley is still committed to "diversity, equity, and inclusion" in the workplace.

The horse and engine analogy is really apt right now. The mass replacement of horses didn't happen overnight because we didn't just need engines; we also needed tires, cars, roads, a Highway Code, driver's licensing, not to mention massive oil infrastructure and economies of scale to make things affordable. With AI we need robotics, cloud infrastructure, a new legal/ethical framework, much bigger-scale GPU production, etc. So we'll have AGI very soon, but rollout will still take a few years while we reorganize around it. The key difference now is that AGI could autonomously orchestrate a lot of the rollout for us.

AGI becomes ASI overnight.
So: truth in, garbage out. The ultimate weapon of politics, philosophy, religion, and economics. As with paper publishing, more bandwidth = more bullshit.