Timestamps:
00:03 Q* is the biggest breakthrough since Word2Vec
02:34 Q* is a hybridization of Q-learning and the A* algorithm, capable of accurate math calculations.
04:44 The Q* algorithm has the potential to unlock a new classification of problems that can be solved.
07:11 A seismic shift has occurred at OpenAI regarding AGI achievement, according to a leaked letter.
09:39 Qualia has demonstrated an ability to improve optimal action selection policies and apply it to cross-domain learning.
11:59 OpenAI's Q* has achieved impressive decryption abilities without the need for keys.
14:33 Q* can significantly disrupt cryptography and achieve feats that were thought to be possible only with quantum computing.
16:50 Q* is a significant advancement with the potential to solve math problems like AGI.
18:59 OpenAI's Q* has the potential for self-transformation and creative problem solving.
00:00 OpenAI's Q* is a significant advancement in AI technology
TIMELINE
[00:00] Intro
[06:14] ORCA 2 and Synthetic Data
[14:42] The Q* Hypothesis
[28:46] Dr Jim Fan
[30:00] Wait But Why by Tim Urban
[31:31] Dr Jim Fan cont.
[36:44] Jimmy Apples
What is Q-Learning and how does it work? A brief tour through the background of Q-Learning, Markov Decision Processes, Deep Q-Networks, and other basics necessary to understand Q*.
OUTLINE:
0:00 - Introduction
2:00 - Reinforcement Learning
7:00 - Q-Functions
19:00 - The Bellman Equation
26:00 - How to learn the Q-Function?
38:00 - Deep Q-Learning
42:30 - Summary
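The Q-learning and Bellman-equation ideas in that outline can be sketched in a few lines of code. The example below is a minimal tabular Q-learning sketch on a made-up toy environment (five states on a line, reward for reaching the rightmost one); the environment, constants, and helper names are illustrative assumptions, not anything from the video.

```python
import random

N_STATES, GOAL = 5, 4                      # toy world: states 0..4, state 4 is the goal
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1          # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; action 0 = left, 1 = right

def step(s, a):
    """Deterministic toy environment: move left/right, reward 1.0 on reaching the goal."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def greedy(s):
    """Greedy action with random tie-breaking."""
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

random.seed(0)
for _ in range(300):                       # episodes
    s = 0
    for _ in range(100):                   # step cap per episode
        a = random.randrange(2) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Bellman update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

policy = [greedy(s) for s in range(N_STATES)]
print(policy[:GOAL])                       # learned actions for the non-goal states
```

After training, the greedy policy moves right from every non-goal state, since the discounted value of heading toward the reward dominates; deep Q-learning replaces the table `Q` with a neural network, but the update rule is the same.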
The phrase "synthetic data" should strike terror into the heart of any rational human. It's the stuff of Trumpism.
Dangerous corruption of language. Data is stuff you measure. Test data is fair enough: an input intended to test your black box, but it's only useful if you know what the output should be - and that's a problem for the Believers in AI because by definition you don't know what to expect. But "synthetic data" is just lies - fake news with numbers.
It's not synthetic but simulated as close to reality as possible. If it was wholly synthetic it would be no use. The old Hong Kong Kai Tak airport had a very unusual and complicated approach with all sorts of skyscrapers and other towers to avoid, plus various noise restrictions, so the sims were necessarily very accurate representations of reality, and very different from Boston Logan or London Heathrow. You could certainly synthesise an entirely fictional airport or warehouse, but what would be the point?
Google's GNoME AI, developed by the team behind AlphaFold, is transforming material science by rapidly predicting the structure of new materials. This breakthrough AI tool significantly impacts fields like solar energy, battery development, and computer chip manufacturing, offering efficient and sustainable solutions. GNoME's ability to analyze millions of materials quickly showcases a monumental advancement in material discovery and technology.
A breakthrough in artificial intelligence at OpenAI preceded former CEO Sam Altman's firing and was part of a list of the board's grievances, according to sources cited in a Reuters report. Andrew Chang explains what we know about the AI technology referred to as Q*, and also breaks down the gap between current AI technology and "human" intelligence.
What's important is that relevant/significant parts are preserved in the synthesized data, while irrelevant/misleading parts are discarded.
Quote from: hamdani yusuf on 30/11/2023 09:32:26
What's important is that relevant/significant parts are preserved in the synthesized data, while irrelevant/misleading parts are discarded.

In other words, it's a sim, not a synth.
What's generated by Alpha Zero by self playing is synthetic data.
Quote from: hamdani yusuf on 05/12/2023 06:37:04
What's generated by Alpha Zero by self playing is synthetic data.

So AI is a complicated and unsatisfying form of masturbation? Might account for the sort of garbage that chatbots deliver.
I've been wondering about this idea since high school. I built and used a simple analog calculator for my scientific research competition using op-amps.
Since both chess and go are mathematically trivial, it is entirely likely that a machine can beat a human at either.
0:02 At the heart of it, a Large Language Model, or LLM, is just two files. The first file is about 500 lines of C-language code. The second file is hundreds of billions or trillions of seemingly random numbers, the "parameters". But this is where the magic happens.
0:23 Based on current evaluations - which have their shortcomings, yes - the more parameters the model has, and the more tokens it is trained on, the more capable it gets.
0:34 The models themselves are economically valuable. They carry proprietary trade secrets and, when separated from their safety systems, can exhibit malicious capabilities.
0:45 The data that helped train those models is also valuable. Nowadays, good and useful LLM data is produced at considerable cost, often by educated workers.
0:56 If more and better data creates better models, then there is a significant commercial incentive for state actors, smaller and less ethical AI labs, or even just hacktivists to bootstrap their performance by stealing from a leader.
1:11 What if someone stole GPT-4? We should be talking about this risk. In this video, a few thoughts about protecting these LLMs from theft.
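The "two files" picture in that transcript can be made concrete with a back-of-envelope calculation: the parameter file dwarfs the runner code. The figures below (a 70B-parameter model stored in fp16, ~80 characters per line of C) are illustrative assumptions, not the layout of any specific model.

```python
# Rough size comparison of the two files described in the transcript.
PARAMS = 70_000_000_000        # assumed parameter count (e.g. a 70B model)
BYTES_PER_PARAM = 2            # fp16/bf16 storage, 2 bytes per number

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9   # parameter file size in GB
code_kb = 500 * 80 / 1e3                      # ~500 lines of C at ~80 chars/line, in KB

print(f"weights: ~{weights_gb:.0f} GB, runner code: ~{code_kb:.0f} KB")
```

Under these assumptions the weights file is roughly 140 GB against about 40 KB of code, a factor of a few million, which is why the parameters, not the code, are the valuable thing to steal.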