Quote from: hamdani yusuf on 25/12/2022 00:54:44
It requires us to determine who can access the off switch, and in what situation the off switch is accessible. The determination depends on the terminal goal of the system.
No, the immediate goal of the bloke who switched it on.
It requires us to determine who can access the off switch, and in what situation the off switch is accessible. The determination depends on the terminal goal of the system.
What if he was already dead when the AI starts to misbehave?
No short-term goal can be a universal terminal goal.
Cut the power cable or put a bomb under the machine.
There is no universal terminal goal.
Physicists have known that it's possible to control chaotic systems without just making them even more chaotic since the 1990s. But in the past 10 years this field has really exploded thanks to machine learning.
00:00 Intro
00:47 Chaos is Everywhere
03:08 The Lorenz-Model
04:39 Chaos Control
06:54 The Double Pendulum
08:12 Applications of Chaos Control
09:48 Chaos Control for Nuclear Fusion
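For anyone who wants to play with the Lorenz model the video covers, here is a minimal sketch in plain Python. The parameter values are the classic chaotic ones; the Euler step size, duration, and function names are my own choices, not from the video.

```python
# Minimal sketch of the Lorenz system with simple Euler integration.
# sigma, rho, beta are the classic chaotic parameter values; the step
# size dt and the helper names are assumptions for illustration.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(n_steps, x=1.0, y=1.0, z=1.0):
    """Integrate the system and collect the visited points."""
    points = []
    for _ in range(n_steps):
        x, y, z = lorenz_step(x, y, z)
        points.append((x, y, z))
    return points
```

Running two trajectories from almost identical starting points shows the sensitive dependence on initial conditions that makes the system chaotic: a perturbation of one part in a million grows until the two paths are completely different.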
Here, researchers from DeepMind present AlphaCode, an AI-assisted coding system that can achieve approximately human-level performance when solving problems from the Codeforces platform, which regularly hosts international coding competitions. Using self-supervised learning and an encoder-decoder transformer architecture, AlphaCode solved previously unseen, natural language problems by iteratively predicting segments of code based on the previous segment and generating millions of potential candidate solutions. These candidate solutions were then filtered and clustered by validating that they functionally passed simple test cases, resulting in a maximum of 10 possible solutions, all generated without any built-in knowledge about the structure of computer code.

AlphaCode performed roughly at the level of a median human competitor when evaluated using Codeforces’ problems. It achieved an overall average ranking within the top 54.3% of human participants when limited to 10 submitted solutions per problem, although 66% of solved problems were solved with the first submission.

https://scitechdaily.com/rise-of-the-machines-deepmind-alphacode-ais-strong-showing-in-programming-competitions/
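The filter-then-cluster step the article describes can be sketched in a few lines: keep only candidate programs that pass the example tests, then group the survivors by the outputs they produce on extra probe inputs, so that only one representative per distinct behaviour is submitted. All names and signatures below are illustrative, not AlphaCode's actual internals.

```python
# Hedged sketch of AlphaCode-style candidate selection: filter
# generated programs by example tests, then cluster by behaviour
# on probe inputs and submit one representative per cluster.
def passes_tests(program, tests):
    """program: a callable; tests: list of (input, expected) pairs."""
    return all(program(inp) == expected for inp, expected in tests)

def cluster_by_behaviour(programs, probe_inputs):
    """Group programs that produce identical outputs on probe inputs."""
    clusters = {}
    for prog in programs:
        signature = tuple(prog(x) for x in probe_inputs)
        clusters.setdefault(signature, []).append(prog)
    return list(clusters.values())

def select_candidates(programs, tests, probe_inputs, limit=10):
    survivors = [p for p in programs if passes_tests(p, tests)]
    clusters = cluster_by_behaviour(survivors, probe_inputs)
    # One representative per behavioural cluster, capped at the
    # competition's submission limit (10 for Codeforces, per the article).
    return [c[0] for c in clusters[:limit]]
```

The point of clustering is to avoid wasting the 10-submission budget on candidates that are textually different but behave identically.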
Will Your Code Write Itself?
Artificial Intelligence solutions are taking over software development tasks. Where is this going?
https://www.aol.com/finance/90-online-content-could-generated-201023689.html

90% of online content could be ‘generated by AI by 2025,’ expert says

Generative AI, like OpenAI's ChatGPT, could completely revamp how digital content is developed, Nina Schick, adviser, speaker, and A.I. thought leader, told Yahoo Finance Live (video above).

"I think we might reach 90% of online content generated by AI by 2025, so this technology is exponential," she said. "I believe that the majority of digital content is going to start to be produced by AI. You see ChatGPT... but there are a whole plethora of other platforms and applications that are coming up."

The surge of interest in OpenAI's DALL-E and ChatGPT has facilitated a wide-ranging public discussion about AI and its expanding role in our world, particularly generative AI.

"ChatGPT has really captured the public imagination in an extremely compelling way, but I think in a few months' time, ChatGPT is just going to be seen as another tool powered by this new form of AI, known as generative AI," she said.

It's important to understand what exactly generative AI is – and what it isn't.

"What generative AI can do, essentially, is create new things that would have thus far been seen as unique to human intelligence or creativity," she said. "Generative AI can create across all media, so text, video, audio, pictures – every digital medium can be powered by generative AI.
So, I think these valuations that you're seeing for OpenAI are actually going to go up and you're going to start to see even more generative AI companies which have universal applications across many industries in 2023."

This is all still really new, as applications for generative AI have "only really [been] coming to the fore in the last 24 to 6 months," added Schick.

'The pace of acceleration is so incredible'

The generative AI space is set to get far more competitive in the next year, said Schick, who expects to see companies like Google parent Alphabet (GOOG, GOOGL), Microsoft (MSFT), and Apple (AAPL) do "a lot more" in the space.

Though much has been said about the extent to which ChatGPT may or may not present an existential threat to Google's search dominance, Schick said she expects to see Google compete rather than wither.

"There's been a lot of debate about whether OpenAI is an existential threat to Google – the fact that Microsoft is an investor in OpenAI, the fact that ChatGPT is going to be integrated into Bing, if that's going to challenge the dominance of Google," said Schick. "Although that's a fantastic story, there's no doubt Google is developing its own generative AI tools with the amount of data that they have."

Though the extent to which ChatGPT in its current form is a viable Google competitor is complicated, there's little doubt of the possibilities. Meanwhile, Microsoft already has invested $1 billion in OpenAI, and there's talk of further investment from the enterprise tech giant, which owns search engine Bing. The company is reportedly looking to invest another $10 billion in OpenAI.

Ultimately, look for the generative AI space to start changing fast.

"The pace of acceleration is so incredible that these tools – which are shocking and awing us at the beginning of 2023 – are going to seem quite quaint by the end of the year because the capabilities are just going to increase so powerfully," Schick said.
In the next five years, it is likely that AI will begin to reduce employment for college-educated workers. As the technology continues to advance, it will be able to perform tasks that were previously thought to require a high level of education and skill. This could lead to a displacement of workers in certain industries, as companies look to cut costs by automating processes. While it is difficult to predict the exact extent of this trend, it is clear that AI will have a significant impact on the job market for college-educated workers. It will be important for individuals to stay up to date on the latest developments in AI and to consider how their skills and expertise can be leveraged in a world where machines are increasingly able to perform many tasks.
Sam Altman — the CEO of OpenAI, which is behind the buzzy AI chat bot ChatGPT — said that the company will develop ways to help schools discover AI plagiarism, but he warned that full detection isn't guaranteed.

"We're going to try and do some things in the short term," Altman said during an interview with StrictlyVC's Connie Loizos. "There may be ways we can help teachers be a little more likely to detect output of a GPT-like system. But honestly, a determined person will get around them."

Altman added that people have long been integrating new technologies into their lives — and into the classroom — and that those technologies will only generate more positive impact for users down the line.

"Generative text is something we all need to adapt to," he said. "We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well."

The CEO's comments come after schools that are part of the New York City Department of Education and Seattle Public School system banned students and teachers from using ChatGPT to prevent plagiarism and cheating.

The bans have ignited conversations — especially among teachers — over how AI could transform the state of education and the ways that students learn at large.

"I get why educators feel the way they feel about this," Altman said. "This is just a preview of what we're gonna see in a lot of other areas."

But even though OpenAI has heard from teachers "who are understandably very nervous" about ChatGPT's impact on things like homework, the company has also heard from them that the chat bot can be "an unbelievable personal tutor for each kid," Altman said.

In fact, Altman believes that using ChatGPT can be a more engaging way to learn.

"I have used it to learn things myself and found it much more compelling than other ways I've learned things in the past," he said. "I would much rather have ChatGPT teach me about something than go read a textbook."

Altman said that OpenAI will experiment with watermarking technologies and other techniques to label content generated by ChatGPT, but he warns schools and national policy makers to avoid depending on these tools.

"Fundamentally, I think it's impossible to make it perfect," he said. "People will figure out how much of the text they have to change. There will be other things that modify the outputted text."

Given how popular ChatGPT has become, Altman believes that the world must adapt to generative AI and that technology will improve over time to prevent unintended consequences.

"It's an evolving world," Altman said. "We'll all adapt, and I think be better off for it. And we won't want to go back."
As someone who's very involved with tech, this discovery is both terrifying and exciting for me. The tech that I will be showing in this video has the potential to be extremely useful but also incredibly horrifying.
⭐️ Timestamps ⭐️
00:00 | GPT-3 Is Insane
00:53 | GPT-3 Demo
07:21 | Virtual Machine in GPT-3
07:46 | Final Thoughts
The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?

David Deutsch is a physicist at the University of Oxford and a fellow of the Royal Society. His latest book is The Beginning of Infinity.

That AGIs are people has been implicit in the very concept from the outset. If there were a program that lacked even a single cognitive ability that is characteristic of people, then by definition it would not qualify as an AGI. Using non-cognitive attributes (such as percentage carbon content) to define personhood would, again, be racist. But the fact that the ability to create new explanations is the unique, morally and intellectually significant functionality of people (humans and AGIs), and that they achieve this functionality by conjecture and criticism, changes everything.

Currently, personhood is often treated symbolically rather than factually — as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.

For example, the mere fact that it is not the computer but the running program that is a person, raises unsolved philosophical problems that will become practical, political controversies as soon as AGIs exist. Once an AGI program is running in a computer, to deprive it of that computer would be murder (or at least false imprisonment or slavery, as the case may be), just like depriving a human mind of its body. But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button.
Are those programs, while they are still executing identical steps (ie before they have become differentiated due to random choices or different experiences), the same person or many different people? Do they get one vote, or many? Is deleting one of them murder, or a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different AGI people, either on one computer or on many, what happens next? They are still people, with rights. Do they all get the vote?
I tried using AI. It scared me.
ALTERNATE TITLES:
Crypto and the metaverse aren't the future. AI is.
I just wanted to fix my email.
I tried ChatGPT and had a minor existential crisis
Everything is about to change
ChatGPT is Napster, 24 years later.
ChatGPT is 2023's Napster.
CHAPTERS
0:00 Intro
0:07 I just wanted to fix my email
2:39 Gmail's label system sucks
5:35 Wait, I can fix this with code
7:36 It can't be that good, right?
11:31 Everything is going to change
Originally, Deep Learning sprang into existence inspired by how the brain processes information, but the two fields have diverged ever since. However, given that deep models can solve many perception tasks with remarkable accuracy, is it possible that we might be able to learn something about how the brain works by inspecting our models? I speak to Patrick Mineault about his blog post "2021 in review: unsupervised brain models" and we explore why neuroscientists are taking interest in unsupervised and self-supervised deep neural networks in order to explain how the brain works. We discuss a series of influential papers that have appeared last year, and we go into the more general questions of connecting neuroscience and machine learning.
OUTLINE:
0:00 - Intro & Overview
6:35 - Start of Interview
10:30 - Visual processing in the brain
12:50 - How does deep learning inform neuroscience?
21:15 - Unsupervised training explains the ventral stream
30:50 - Predicting own motion parameters explains the dorsal stream
42:20 - Why are there two different visual streams?
49:45 - Concept cells and representation learning
56:20 - Challenging the manifold theory
1:08:30 - What are current questions in the field?
1:13:40 - Should the brain inform deep learning?
1:18:50 - Neuromatch Academy and other endeavours
Discussing the latest events surrounding large language models, chatbots, and search engines with respect to Microsoft and Google.
In this episode we look at the problem of ChatGPT's political bias, solutions and some wild stories of the new Bing AI going off the rails.
Competition will force any conscious entity to be more effective and efficient in trying to get closer to the universal terminal goal.
The first objective of competition is not to lose.
Leo Dirac (@leopd) talks about how LSTM models for Natural Language Processing (NLP) have been practically replaced by transformer-based models. Basic background on NLP, and a brief history of supervised learning techniques on documents, from bag of words, through vanilla RNNs and LSTM. Then there's a technical deep dive into how Transformers work with multi-headed self-attention, and positional encoding. Includes sample code for applying these ideas to real-world projects.
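The scaled dot-product self-attention at the heart of the transformers the talk describes can be sketched in plain Python. This is a single head without learned projection matrices, so it shows only the core mechanism; all function names are my own, and real implementations use batched tensor operations.

```python
import math

# Minimal single-head scaled dot-product self-attention sketch.
# q, k, v are lists of vectors (lists of floats) of equal length.
def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(q, k, v):
    d = len(k[0])  # key dimension, used to scale the scores
    out = []
    for qi in q:
        scores = [dot(qi, kj) / math.sqrt(d) for kj in k]
        weights = softmax(scores)
        # Each output is an attention-weighted mix of the value vectors.
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out
```

Multi-headed attention, as covered in the talk, simply runs several of these in parallel on learned projections of the inputs and concatenates the results; positional encoding adds position-dependent vectors to the inputs so the otherwise order-blind attention can distinguish token positions.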
That's one of the best deep learning related presentations I've seen in a while! Not only did it introduce transformers, it also gave an overview of other NLP strategies, activation functions, and best practices when using optimizers.
Beginning of the quality standardization

World War II devastated most of Europe. Winston Churchill first proposed the concept of a “United States of Europe” in 1946. As treaties evolved and countries rebuilt, they found that there were many aspects of businesses that were incompatible from country to country. Quality standards were very diverse, and the need for a single standard led to the creation of what we now know as ISO 9001.
“Write down what you do” refers to documenting the processes and their interactions within your organization. “Do what you write down” describes the actions you take to realize your products and services and ensure that they yield the desired outcomes. “Make sure you are doing it” refers to what we know today as QMS auditing. That is, on an ongoing basis, conducting proactive audits to ensure that the processes are effective for their intended use and verify the operator’s ongoing competence.
Along with the widespread implementation of the standard, professional organizations blossomed, and entire conferences were convened on the topic of quality management. TC 176 gathered vast amounts of data on implementation techniques and auditing practices. They also found that the 1987 standard was generating controversy and confusion as it was implemented in a wide variety of countries, industries, and organizations.

There was also strife over early adopters interpreting “write down what you do” as documenting everything in the organization. As a result, many organizations became paper mills of manuals, procedures, and forms.