We live in a world that is overwhelmed with data. For network scientist Albert-László Barabási, delving into the underlying structure and relationships that govern our complex systems is the key to understanding their inner workings. Moving beyond the concept of random connections, Barabási's pioneering research has led to the discovery of a more faithful representation of how these systems are structured.

Exploring real-world connections, Barabási's journey began with the vast universe of the internet. What he found was nothing short of astonishing: the intricate web of connections did not follow the patterns of randomness, as previously thought, but instead followed a power-law distribution, giving rise to what Barabási came to call "scale-free networks."

Barabási's visionary work sheds light on the tendency for new connections in a network to gravitate toward the already well-connected. The discovery of scale-free networks, which materialize in complex systems ranging from cellular interactions to social networks, serves as an essential stepping stone in our quest to comprehend the awe-inspiring complexity arising from the countless interactions of the world's many moving parts.
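The mechanism behind scale-free networks, preferential attachment, is simple enough to sketch in a few lines. Below is a minimal Python illustration of the idea (not Barabási's actual code; the node count and random seed are arbitrary choices for the demo): each new node links to an existing node chosen with probability proportional to its degree, and a few heavily connected hubs emerge.

```python
import random

def barabasi_albert(n, seed=42):
    """Grow a graph by preferential attachment: each new node links to
    one existing node chosen with probability proportional to degree."""
    rng = random.Random(seed)
    # Each node id appears in this list once per edge it touches, so a
    # uniform pick from it is automatically degree-proportional.
    endpoints = [0, 1]          # seed graph: one edge between nodes 0 and 1
    degree = {0: 1, 1: 1}
    for new_node in range(2, n):
        target = rng.choice(endpoints)   # "rich get richer" selection
        endpoints += [target, new_node]
        degree[target] += 1
        degree[new_node] = 1
    return degree

deg = barabasi_albert(2000)
# The degree distribution is highly skewed: most nodes keep degree 1-2,
# while a handful of hubs accumulate far more links.
print(max(deg.values()), sum(deg.values()) / len(deg))
```

Running the same growth rule with uniformly random (non-preferential) targets instead yields no comparable hubs, which is the contrast Barabási's work highlighted.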
Luminaries in the AI/Machine Learning space like Ilya Sutskever, chief scientist at OpenAI, believe that Large Language Models are in effect compression algorithms for human knowledge. And people like Stephen Wolfram believe that models (mathematical and otherwise) are a way to understand the universe around us given our limited computational abilities. What happens when you combine these two concepts and throw in Tesla's Full Self Driving (FSD) and Optimus Teslabot? Let's find out!
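The "LLMs are compression algorithms" claim has a concrete information-theoretic reading: a model that predicts the next symbol with probability p needs -log2(p) bits to encode it, so better prediction means tighter compression. A toy Python sketch of that link (the text and probabilities here are made up purely for illustration):

```python
import math

# Toy illustration of "prediction = compression": the better a model
# predicts each next symbol, the fewer bits an entropy coder needs.
text = "abababababab"  # 12 symbols with an obvious alternating pattern

# A model that ignores the pattern (uniform over {a, b}): 1 bit/symbol.
uniform_bits = sum(-math.log2(0.5) for _ in text)

# A model that has learned the alternation and assigns p = 0.9 to the
# correct next symbol: about 0.15 bits/symbol.
learned_bits = sum(-math.log2(0.9) for _ in text)

print(uniform_bits, round(learned_bits, 2))  # learned model needs far fewer bits
```

This is the sense in which a model that captures regularities in human text is, in effect, a compressor of it.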
Access to an I/O interface with the real world is necessary to automate the learning of AI models, so they can learn causality from their own experiences. Otherwise, they will be like brains in a vat.
https://openai.com/blog/democratic-inputs-to-ai

Democratic Inputs to AI

Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.

AI will have significant, far-reaching economic and societal impacts. Technology shapes the lives of individuals, how we interact with one another, and how society as a whole evolves. We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest.

Laws encode values and norms to regulate behavior. Beyond a legal framework, AI, much like society, needs more intricate and adaptive guidelines for its conduct. For example: under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures? How should disputed views be represented in AI outputs? Should AI by default reflect the persona of a median individual in the world, the user's country, the user's demographic, or something entirely different? No single individual, company, or even country should dictate these decisions.

AGI should benefit all of humanity and be shaped to be as inclusive as possible. We are launching this grant program to take a first step in this direction. We are seeking teams from across the world to develop proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow. We want to learn from these experiments, and use them as the basis for a more global, and more ambitious process going forward.
While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision-relevant questions and build novel democratic tools that can more directly inform decisions in the future.

The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant represents a step toward establishing democratic processes for overseeing AGI and, ultimately, superintelligence. It will be provided by the OpenAI non-profit organization, and the results of the studies will be freely accessible.
Instructions for participation

To apply for a grant, we invite you to submit the required application material by 9:00 PM PST on June 24th, 2023. You can access the application portal here. You will be prompted to answer a series of questions regarding your team's background, your choice of questions, high-level details of your proposed tool, as well as your plan for conducting and evaluating the democratic process with these factors in mind. We would like you to design your approach to address one or more of the policy questions from the list provided. Anyone (individuals or organizations) can apply for this opportunity, regardless of their background in social science or AI.

Once the application period closes, we hope to select ten successful grant recipients. Recipients may be individuals, teams, or organizations. Each recipient will receive a $100,000 grant to pilot their proposal as described in their application materials. Grant recipients are expected to implement a proof-of-concept / prototype engaging at least 500 participants, and will be required to publish a public report on their findings by October 20, 2023. Additionally, as part of the grant program, any code or other intellectual property developed for the project will be required to be made publicly available pursuant to an open-source license. The terms applicable to grant recipients are specified in the Grant Terms and any other agreements that grant recipients may be asked to enter into with us in connection with this program.
AI development is moving very fast. Microsoft will fully integrate AI into Windows 11 and give it access to your files and settings. Tesla is using AI to track how you drive to decide how much you have to pay for insurance. Does this mean we are watched by AI 24/7?

But there is also some very good and carefree news: SD XL is 50% done and looks amazing. Nvidia promises 2x the speed for SD ONNX models with Microsoft Olive. Stable Diffusion introduces Reimagine, a way to create more image variations without any prompting.
From the perspective of future conscious entities, the only legitimate way of accumulating resources is by riding the wave of demonetization: resource accumulation is meant to make it easier to generate the necessary resources in the future.
Money Laundering, International Scams, and a man behind the curtains. This is the story of Traders Domain, which started in early February with a strange text about an offshore scam. From there things spiraled quickly out of control. Enjoy Part 1. This video is an opinion and in no way should be construed as statements of fact. Scams, bad business opportunities, and fake gurus are subjective terms that mean different things to different people. I think someone who promises $100K/month for an upfront fee of $2K is a scam. Others would call it a Napoleon Hill pitch.
In this video, Peter explores JP Morgan's remarkable profits amid the ongoing banking crisis. At the same time, the Federal Reserve's risky plans pose potential consequences for Wall Street, all with taxpayer bailouts in sight.
Evidence is mounting that US senators and members of Congress are using insider knowledge on major policy decisions and looming crises to game the stock market. And they think it's totally okay.
The "Knowledge Doubling Curve" is a lie, here's why
New algorithms will transform the foundations of computing

Digital society is driving increasing demand for computation, and energy use. For the last five decades, we relied on improvements in hardware to keep pace. But as microchips approach their physical limits, it's critical to improve the code that runs on them to make computing more powerful and sustainable. This is especially important for the algorithms that make up the code running trillions of times a day.

In our paper published today in Nature, we introduce AlphaDev, an artificial intelligence (AI) system that uses reinforcement learning to discover enhanced computer science algorithms, surpassing those honed by scientists and engineers over decades. AlphaDev uncovered a faster algorithm for sorting, a method for ordering data. Billions of people use these algorithms every day without realising it. They underpin everything from ranking online search results and social posts to how data is processed on computers and phones. Generating better algorithms using AI will transform how we program computers and impact all aspects of our increasingly digital society.

By open sourcing our new sorting algorithms in the main C++ library, millions of developers and companies around the world now use them in applications across industries, from cloud computing and online shopping to supply chain management. This is the first change to this part of the sorting library in over a decade and the first time an algorithm designed through reinforcement learning has been added to this library. We see this as an important stepping stone for using AI to optimise the world's code, one algorithm at a time.

Optimising the world's code, one algorithm at a time

By optimising and launching improved sorting and hashing algorithms used by developers all around the world, AlphaDev has demonstrated its ability to generalise and discover new algorithms with real-world impact.
We see AlphaDev as a step towards developing general-purpose AI tools that could help optimise the entire computing ecosystem and solve other problems that will benefit society.

While optimising in the space of low-level assembly instructions is very powerful, there are limitations as the algorithm grows, and we are currently exploring AlphaDev's ability to optimise algorithms directly in high-level languages such as C++, which would be more useful for developers.

AlphaDev's discoveries, such as the swap and copy moves, not only show that it can improve algorithms but also find new solutions. We hope these discoveries inspire researchers and developers alike to create techniques and approaches that can further optimise fundamental algorithms to create a more powerful and sustainable computing ecosystem.
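AlphaDev's sorting improvements targeted the small fixed-size routines that library sorts fall back on for tiny inputs, where a fixed sequence of compare-exchange steps (a sorting network) replaces data-dependent branching. The sketch below is a plain Python illustration of a 3-element sorting network, not AlphaDev's discovered assembly:

```python
from itertools import permutations

def sort3(a):
    """Sort a 3-element list in place with a fixed comparator network.

    Small fixed-size routines like this are what library sorts use for
    tiny inputs, and the kind of code AlphaDev optimised at the
    assembly-instruction level.
    """
    for i, j in [(0, 1), (1, 2), (0, 1)]:  # fixed compare-exchange sequence
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a

# The same three steps handle every possible input order.
assert all(sort3(list(p)) == [1, 2, 3] for p in permutations([1, 2, 3]))
```

Because the comparator sequence never depends on the data, such routines map naturally to short, branch-light instruction sequences, which is where instruction-level savings of the kind AlphaDev found pay off.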
Time for something different: a tour of ChatGPT getting things wrong, including a whole new category of errors that you might find illuminating, concerning, or just entertaining. From investigating whether GPT-4 does indeed have theory of mind, to how easily it is jailbroken, to testing Inflection-1, Bard, and Claude on the same puzzle that flummoxes ChatGPT, to arguing that GPT-4 will just double down on bad logic, this video showcases GPT getting irrational.
0:00 Start!
03:22 Best of 2022
05:20 LLMs: 100k in 6 months
08:41 Data
09:36 Imitation models
11:01 Customers
12:38 Robots
15:02 Next up in 2023
18:49 Full steam ahead
19:47 A note of caution
22:04 A note of peace
Critical weaknesses must be discovered and solved before the AI models are given any responsibility to make decisions affecting people's lives.
Quote from: hamdani yusuf on 27/06/2023 14:56:29
"Critical weaknesses must be discovered and solved before the AI models are given any responsibility to make decisions affecting people's lives."

Not a problem. All decisions are taken by a person, never a machine. Where a machine has been instructed to do something autonomously, the person giving that instruction is liable for the outcome. I find myself oddly in agreement with the National Rifle Association on this one - guns don't kill, people do.
But who executes those decisions? On whose behalf and for whose benefit? "Befehl ist Befehl" ("an order is an order") has been rejected as a defence since the Nuremberg trials. Machines may give advice, but people make decisions, including the decision to switch off a machine that is misbehaving.
Google Cloud has a new artificial-intelligence tool that tackles money laundering for banks. But how does this product differ from those already on the market? WSJ reporter Dylan Tokar joins host Julie Chang to discuss.

0:00 Anti-money-laundering
1:54 Google's plan
3:22 How is Google Cloud different?
4:34 Response to Google's tool