Do you follow these advances closely to explain what is actually happening?
No, not really. Just enough not to be completely left behind. It's something that is quite likely to have some impact on most people in most walks of life.
Is it simply training statistically a model so that it can put together words that make sense?
What do you mean by "statistical model"? Some aspects are statistically based, and machine learning is very much about training a system on lots of input and allowing it to modify its processing until a suitable output is reliably obtained. The modifications are not always random but are guided by feedback from the output in various ways. In the early stages of training ChatGPT there was human-supervised learning, so humans could influence the processing directly and provide extra feedback to rapidly shape the neural network. In later stages the machine learning became much more autonomous.
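To make that concrete, here is a toy sketch of that feedback loop. It is nothing like OpenAI's actual pipeline, just the bare idea: the system's internal numbers are repeatedly nudged by feedback from how wrong its output is, until the output is reliably acceptable.

# Toy illustration of feedback-guided training (not OpenAI's code).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))          # lots of example inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)    # the outputs we want the system to reproduce

w = np.zeros(3)                                # the model starts out knowing nothing
for step in range(1000):
    pred = X @ w                               # produce an output
    error = pred - y                           # feedback: how far off was it?
    w -= 0.1 * (X.T @ error) / len(y)          # adjust the processing, guided by that feedback

print(np.round(w, 2))                          # ends up close to [2.0, -1.0, 0.5]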
we can use ChatGPT to replace the need for anyone to actually be here and give replies to those (or any) posts.
As a moderator, I'd be very worried if anyone thought the kind of repetitive and unreferenced crap that ChatGPT produces was characteristic of the level of intelligence and clarity that characterises the better replies in this forum.
Obviously ChatGPT is something unprecedented from the machine learning sector.
I wouldn't call it unprecedented. GPT-4 is merely an extension of GPT-3.5, with more input data, a more powerful training processor, and a much larger in-memory model during execution.
If ChatGPT could be Embedded into the Forum
It can't. It cannot even see the forum. It has no internet access. It is strictly a server, answering incoming requests and putting out no queries of its own.
MODS could Utilize the A.I.s rewards & corrections Inputs to Teach it to be Better.
It is incapable of learning. You can correct it and it might acknowledge the correction, but ask the same question again tomorrow and it's just as likely to get it wrong again, but of course worded differently.
it could learn Alot from Us.
It can't, unfortunately, hence it can never qualify as a general intelligence, or an AGI. It's why they keep coming out with new versions with more up-to-date training data and more powerful servers. The new versions still can't learn, but they supposedly give a lower percentage of wrong answers and can hold a conversation longer without forgetting how it started.
It is incapable of learning.
From us out here. In development by OpenAI some of the learning was human-supervised, but the later development was much more autonomous. A lot of this was already done just for the transformer architecture rather than for the very specific implementation and incorporation of that into the ChatGPT system. For example, it was trained on most of the contents of the internet (from an offline archive of the internet as it was about 3 years ago). The precise details of what was done are not known to me, but we can simplify it to something like this. I want to be very clear here: I am just presenting a very simple version of something that can be done, because it will be easier to understand it this way. If you want actual details then you need to review the links already given in earlier posts.
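As a stand-in for that simple version, here is a toy next-word predictor. It only counts which word tends to follow which in a tiny scrap of text, which is far cruder than the transformer network GPT actually uses, but it shows the basic "train on text, then predict the next word" loop:

# A deliberately crude stand-in for "train on a pile of text, then predict the next word".
# Real GPT models use a transformer neural network, not a lookup table of counts.
from collections import Counter, defaultdict
import random

text = ("the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):          # "training": count what follows each word
    follows[prev][nxt] += 1

def next_word(word):
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights)[0]   # sample in proportion to how often each was seen

word, out = "the", ["the"]                     # "generation": keep predicting the next word
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))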
In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT's development and get Brockman's take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.
Why do They not provide IT with Sensors?
They will, eventually. Tesla's Optimus and Google's robot are going in that direction.
Visual, audio, temperature, infrared, night vision, motion detection etc etc.
Then when it senses Movement, it could Analyze what it is, n if it's a Rat, then IT could sound an Alarm.
Meow Meow or Boo BooH!
Then when it senses Movement, it could Analyze what it is, n if it's a Rat, then IT could sound an Alarm.
Why not? Because a dog will get off its backside and kill the rat. And deal with any other intruder. And sense if you are sick or miserable. And keep you warm at night.
In this video, I will not only show you how to get smarter results from GPT 4 yourself, I will also showcase SmartGPT, a system which I believe, with evidence, might help beat MMLU state of the art benchmarks.
This should serve as your ultimate guide for boosting the automatic technical performance of GPT 4, without even needing few shot exemplars.
The video will cover papers published in the last 72 hours, like Automatically Discovered Chain of Thought, which beats even 'Let's think Step by Step' and the approach that combines it all.
Yes, the video also touches on the OpenAI DeepLearning Prompt Engineering Course but the highlights come more from my own experiments using the MMLU benchmark, and drawing upon insights from the recent Boosting Theory of Mind, and Let's Work This Out Step By Step, and combining it with Reflexion and Dialogue Enabled Resolving Agents.
Prompts Frameworks:
Answer: Let's work this out in a step by step way to be sure we have the right answer
You are a researcher tasked with investigating the X response options provided. List the flaws and faulty logic of each answer option. Let's work this out in a step by step way to be sure we have all the errors:
You are a resolver tasked with 1) finding which of the X answer options the researcher thought was best 2) improving that answer, and 3) Printing the improved answer in full. Let's work this out in a step by step way to be sure we have the right answer:
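For anyone who wants to try the frameworks above, here is a rough sketch of how the three prompts can be chained, in the spirit of the video. The ask_llm function is a hypothetical placeholder for whatever chat-completion API you actually use; this shows the general shape of the idea, not the video author's exact code.

# Sketch of chaining the prompts above: generate options, critique them, then resolve.
def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to your preferred chat-completion API.
    return f"[model reply to a {len(prompt)}-character prompt]"

def smart_answer(question: str, n_options: int = 3) -> str:
    step_by_step = "Let's work this out in a step by step way to be sure we have the right answer."

    # 1. Generate several candidate answers.
    options = [ask_llm(f"{question}\nAnswer: {step_by_step}") for _ in range(n_options)]
    numbered = "\n\n".join(f"Option {i + 1}: {o}" for i, o in enumerate(options))

    # 2. Researcher: list the flaws and faulty logic of each option.
    critique = ask_llm(
        f"Question: {question}\n{numbered}\n"
        f"You are a researcher tasked with investigating the {n_options} response options provided. "
        "List the flaws and faulty logic of each answer option. "
        "Let's work this out in a step by step way to be sure we have all the errors:"
    )

    # 3. Resolver: pick the best option, improve it, and print it in full.
    return ask_llm(
        f"Question: {question}\n{numbered}\nCritique: {critique}\n"
        f"You are a resolver tasked with 1) finding which of the {n_options} answer options "
        "the researcher thought was best, 2) improving that answer, and 3) printing the "
        f"improved answer in full. {step_by_step}"
    )

print(smart_answer("What is the boiling point of water at sea level?"))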
Let's hear what an AI researcher say about it.
Honestly, reading about how quickly AI is evolving makes me a little nervous. I'm not a conspiracy theorist, but people seem too invested in this technology and could end up paying for it.
Why Does It Work?
If by "work" you mean recycling other people's text, it works because that is what it is designed to do. If you mean creating useful materials, objects or ideas, it doesn't.
recycling other people's text
Most science is just recycling other people's words.
- Separating the gold and gems from the stream of derivative drivel is left as an exercise for the reader...
If you mean creating useful materials, objects or ideas, it doesn't.
Creating ideas is easy. Just make up new random bits of information never used before. Creating materials and objects requires access to the physical world, which is being developed with embodied AI. By chance, we may find some of them useful. The AI can measure the usefulness of its new ideas through social selection, e.g. by the like/dislike ratio of user feedback.
skbldnk drvkug bwqpn zzicology
Do you notice that Edison tried and failed thousands of times before he successfully produced technically and economically viable light bulbs? He might have thought of many more experiments which he did not carry out because he expected they would fail.
It is left to the reader to pick out the useful bits.
The best definition of industry I ever heard was "organising men, materials, machines and money to make stuff that people want." Note the last four words.
But he started with a specification of a product that people wanted (an electrical light source, safer than a carbon arc)
Then he needed to imagine things that hadn't existed yet.
That was a croat.
We are so prone to survival bias that we often forget that those who didn't survive once existed.
Do you mean Edison didn't do failed experiments?
https://en.m.wikipedia.org/wiki/Franjo_Hanaman
Why Is ChatGPT Bad At Math?
Sometimes, you ask ChatGPT to do a math problem that an arithmetically-inclined grade schooler can do with ease. And sometimes, ChatGPT can confidently state the wrong answer. It's all due to its nature as a large language model, and the neural networks it uses to interact with us.
Jump to the:
- introduction by Professor Harry Atwater: 12:45
- profile video of Professor Abu-Mostafa: 16:55
- start of the lecture: 20:01
ChatGPT has rocked the general public's perception and expectations of artificial intelligence (AI). In this lecture, Abu-Mostafa will explain the science of AI in plain language and explore how the scientific details illustrate the risks and benefits of AI. Between the extremes of "AI will kill us all" and "AI will solve all our problems," the science can help us identify what is realistic and what is speculative, and guide us in our planning, legislation, and investment in AI.
ChatGPT is now the fastest-growing consumer app in human history. Problem is, almost no one knows how it actually works. This is everything you need to know.
The " Noone knows how it Works " refers to the Black Box problem, Right?Yes. But it may change in the future. Some research is developing AI models especially designed to explain decision making process of other AI models.
" Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI and Google Deepmind - have warned. "Unless someone ignores it or pulls out the plug.
A tad bit Worrying...
Nobody knows how it Works.
Opens up the possibility of it Miscalculating.
If 99 times out of 100, it says 2+2=4, & gets it wrong once, is Totally Fine.
Can only Hope autonomous ai systems won't be deployed for Nukes.
99.999% accuracy with an Error margin of just 0.001% will be
Game Over!
Humans have even higher error rates. Nonetheless, we survive, so far. Even with nukes available for decades.
What's needed is some reliable system of checks and balances, and acceptable common goals.
It kept making the same mistakes.
And doubtless reinforcing them, because it believes what it has already written!
I'm not sure if Bard has a rigid boundary between training mode and deployment mode. I've read some articles describing AI models which can still learn from new data even when they're running in deployment mode.
How do transformers like ChatGPT learn and represent words?
Transformers are a type of neural network architecture that are used in natural language processing tasks like language translation, language modelling, and text classification. They are effective at converting words into numerical values, which is necessary for AI to understand language. There are three key concepts to consider when encoding words numerically: semantics (meaning), position (relative and absolute), and relationships and attention (grammar). Transformers excel at capturing relationships and attention, or the way words relate to and pay attention to each other in a sentence. They do this using an attention mechanism, which allows the model to selectively focus on certain parts of the input while processing it. In the next video, we will look at the attention mechanism in more detail and how it works.
We can encode word semantics using a neural network to predict a target word based on a series of surrounding words in a corpus of text. The network is trained using backpropagation, adjusting the weights and biases of the input and hidden layers until the updates become negligible and the network is said to be "trained". The weights connecting the input neurons to the hidden layer will then contain an encoding of the word, with similar words having similar encodings. This allows for more efficient processing and a better understanding of the meaning and context of words in the language model.
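To make the paragraph above concrete, here is a toy version of that idea. It is an illustration only (not the method actually used for ChatGPT): a tiny network is trained to predict a word from its neighbour, and the learned input weights are then read off as word embeddings.

# Toy word-embedding training: predict the next word from the previous word,
# then use the learned input-layer weights as the word vectors.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                        # vocabulary size, embedding dimension

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # input-to-hidden weights = the embeddings
W_out = rng.normal(scale=0.1, size=(D, V))  # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# (context word, target word) pairs: predict each word from its left neighbour.
pairs = [(idx[corpus[i - 1]], idx[corpus[i]]) for i in range(1, len(corpus))]

lr = 0.1
for epoch in range(500):
    for c, t in pairs:
        h = W_in[c]                              # hidden activation: the context word's embedding
        p = softmax(h @ W_out)                   # predicted distribution over the vocabulary
        grad = p - np.eye(V)[t]                  # backpropagation through softmax/cross-entropy
        grad_in = W_out @ grad                   # gradient flowing back to the embedding
        W_out -= lr * np.outer(h, grad)
        W_in[c] -= lr * grad_in

# Rows of W_in are now the word vectors; words used in similar contexts get similar vectors.
print({w: np.round(W_in[idx[w]], 2) for w in ("cat", "dog")})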
'Attention is all you need' paper - https://arxiv.org/pdf/1706.03762.pdf
=========================================================================
Transformers are a type of artificial intelligence (AI) used for natural language processing (NLP) tasks, such as translation and summarization. They were introduced in 2017 by Google researchers, who sought to address the limitations of recurrent neural networks (RNNs), which had traditionally been used for NLP tasks. RNNs had difficulty parallelizing, and tended to suffer from the vanishing/exploding gradient problem, making it difficult to train them with long input sequences.
Transformers address these limitations by using self-attention, a mechanism which allows the model to selectively choose which parts of the input to pay attention to. This makes the model much easier to parallelize and eliminates the vanishing/exploding gradient problem.
Self-attention works by weighting the importance of different parts of the input, allowing the AI to focus on the most relevant information and better handle input sequences of varying lengths. This is accomplished through three matrices: Query (Q), Key (K) and Value (V). The Query matrix can be interpreted as the word for which attention is being calculated, while the Key matrix can be interpreted as the word to which attention is paid. The dot product of the Query and Key vectors, scaled and passed through a softmax, gives the attention scores, which are then used to weight the Values.
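In code, the scaled dot-product self-attention described above looks roughly like the following. This is a bare-bones, single-head sketch; real transformers add multiple heads, masking and other machinery.

# Minimal single-head self-attention: each position mixes in information from
# the positions it pays most attention to.
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q = X @ Wq                                 # queries: the words asking for context
    K = X @ Wk                                 # keys: the words being attended to
    V = X @ Wv                                 # values: the information each word carries
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # similarity between every pair of positions
    weights = softmax(scores, axis=-1)         # attention weights, one row per position
    return weights @ V                         # weighted mix of values for each position

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                   # 5 token embeddings of dimension 16
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 16): one context-aware vector per token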
Apparently the hoo-ha about AI is speculative about the future or what they think is in the pipeline; at present they have Narrow AI, which is basically a computer programme.
It's an old article, which was written before ChatGPT and its competitors. It's highly likely that the author has changed his opinion by now.
https://codebots.com/artificial-intelligence/the-3-types-of-ai-is-the-third-even-possible
24 October 2017 - 11 minutes
Written by Eban Escott
What opinion? ChatGPT is the same level of "AI" as spam filters.
What opinion? ChatGPT is the same level of "AI" as spam filters.
If the third type is possible. It's right there in the title.
If the third type is possible. It's right there in the title.
There is only one form of "AI" at present; the others are theoretical. At present it is a targeted computer programme. The title is
What are the 3 types of AI? A guide to narrow, general, and super artificial intelligence
The same argument can be applied against humans. Currently, there's no human general intelligence. No human individual has even average expertise in every field of intelligence. It's impossible for anyone to beat every expert in every field.
Currently, there's no human general intelligence.
Yes there is, well at least I am: I can use the Internet, then paint a picture, then think about dinner. General. I can learn, which is not something AI is capable of, unless you have a general AI you are not telling the world about.
Basically, in every frontier of technology, there are two main groups of people. The first are those who think that it's impossible. The other are those who disagree, and are motivated to prove that their opponents are wrong.
No one is saying it is impossible; just like nuclear fusion in a sustained reaction, the atom bomb in 1940, or powered flight in the days of da Vinci, it just isn't currently present. ChatGPT has specific programming to search the internet and rehash information; if I ask it to create a picture it will not be able to, as it is programmed to rehash text, not images. Also I cannot ask it to "try again differently"; it requires specific input, otherwise the programme goes out of control. Not general AI. AI bots have been known to stage wars.
Yes there is, well at least I am, I can use the Internet, then paint a picture, then think about dinner.
What makes you think that AI cannot use the internet, or do any of those things you mentioned?
I can do anything I wish given enough time: anti-gravity, nuclear fusion, etc. I can learn; I realise I need to understand something better and try to ascertain the knowledge to do so. This is what general AI will understand also. ChatGPT refuses to paint no matter how much you ask it; perhaps it's going through adolescence.
Can you detect cancer from a radiograph? Or do brain surgery?
Don't you think that search engine algorithms are AI?
How do you define learning?
I can do anything I wish given enough time, anti gravity, nuclear fusion etc.
Can you fly to outer space?
How do you define learning?
Better yet, you can watch Neural Networks Learning in this video.
Timestamps
(0:00) Functions Describe the World
(3:15) Neural Architecture
(5:35) Higher Dimensions
(11:55) Taylor Series
(15:20) Fourier Series
(21:25) The Real World
(24:32) An Open Challenge
perhaps it's going through adolescence
I did not realise ChatGPT could embolden.
It can, if you ask it to. My point is, it's being superseded by newer, larger, multimodal, and more efficient AI models.
It can, if you ask it to. My point is
What's your point?
https://en.m.wikipedia.org/wiki/Neuromorphic_engineering
ps - Memristors.
Significant ethical limitations may be placed on neuromorphic engineering due to public perception.[51] Special Eurobarometer 382: Public Attitudes Towards Robots, a survey conducted by the European Commission, found that 60% of European Union citizens wanted a ban of robots in the care of children, the elderly, or the disabled. Furthermore, 34% were in favor of a ban on robots in education, 27% in healthcare, and 20% in leisure. The European Commission classifies these areas as notably "human." The report cites increased public concern with robots that are able to mimic or replicate human functions. Neuromorphic engineering, by definition, is designed to replicate the function of the human brain.[52]
Without a strong fundamental thinking framework, the public is easily confused between their common terminal goal and the means to achieve it.
#chatgpt is a program that can write programs. Could chatGPT write itself? Could it improve itself? Where could this lead? A video about code that writes code that writes code and how that could trigger an intelligence explosion. Sorry to contribute to the hype train but idc.
The current version of ChatGPT can't do those things. But who knows what they will be capable of in the next few years?
The " Public " or People in general and at large, should Ideally be ' Free ' to willingly choose & decide their own Goals.Personal rights rely on natural selection mechanism. They force people to think and make decisions for their own. Future society members will be more likely to be those who made the correct decisions. If their goals aren't aligned with the universal terminal goal, then sooner or later they will become meaningless.
If i do Not want a " Robot " to carry out a general medical procedure on my body, then so it should be.
ps - I know, either Adapt to Change, or be left out...
To be Forgotten, is my Right.
Significant ethical limitations may be placed on neuromorphic engineering due to public perception.
Nothing to do with ethics, but the law will always require a person (including a "legal person", i.e. a registered corporation) to be held liable for harm. The problem with a robot is that nobody knows who to blame, and any claim for damages will simply engender a chain of argument about hardware, software, local programming, training, environment, patient briefing, patient compliance... with no actual money or even an apology reaching the victim.
60% of European Union citizens wanted a ban of robots in the care of children, the elderly, or the disabled. Furthermore, 34% were in favor of a ban on robots in education, 27% in healthcare, and 20% in leisure.
People are plentiful, fairly good at doing these jobs, capable of adaptation, and do not incur any capital cost. Why use a robot? You might make a bigger profit in the short term, but your taxes will increase to pay for the unemployment benefit of the folk you replaced.
Fairly good won't be enough when something better, faster, and cheaper becomes available. Telephone switchboard operators were replaced by simple electronics a long time ago. Any phone company that tried to keep using them was simply outcompeted and quickly went out of business.
So how do you intend to redeploy two million teachers, nurses and care workers in the UK alone?
Find useful tasks which AI hasn't mastered yet. We should not confuse terminal goals with instrumental goals.
If a robot rips off your arm instead of washing your face (like a human, it's quite capable of doing either) who do you sue?
It depends on the circumstances.
The " Public " or People in general and at large, should Ideally be ' Free ' to willingly choose & decide their own Goals.Personal rights rely on natural selection mechanism. They force people to think and make decisions for their own. Future society members will be more likely to be those who made the correct decisions. If their goals aren't aligned with the universal terminal goal, then sooner or later they will become meaningless.
If i do Not want a " Robot " to carry out a general medical procedure on my body, then so it should be.
ps - I know, either Adapt to Change, or be left out...
To be Forgotten, is my Right.
The Goal might be simply " Survival at All Costs " .
Reminds me of a tune...
Yo ho, all together, hoist the colors high..
Heave ho, Thieves and Beggars,
Never Shall We Die!
Economic competition will make sure that future products and service providers are better, faster, and cheaper.
& Constitutional Fundamental Rights shall make Sure that Technology in the name of Progress, is Not shoved down nobodys throat Forcefully.
ps - Either learn to dance onto the new rhythms n beats, or simply exercise The Right to become Obsolete.
And it has a further weakness: it is unable to question itself.
It can be solved by combining it with generative adversarial networks.
I don't think so, no more than expecting people with different ideologies, upbringings, ideals, values and beliefs to be able to convince each other that only one of them has the truth. It's easier with pure logic, but logic is just one small part of what those AIs will meet.
I mentioned GANs specifically to answer your concern about the inability to question itself. For it to work, it needs access to the ground truths. For games like chess and go, the AI agents need to know all allowed moves, and the conditions required for a win. These may not be entirely available in real-life problems.
Here's first-hand information intended for a general audience.
Thank you... That's very interesting and quite exciting really. My friend was just telling me about this and I find it fascinating!
The Inside Story of ChatGPT's Astonishing Potential | Greg Brockman | TED
Get an inside look at the AI supercomputer infrastructure built to run ChatGPT and other large language models, and see how to leverage it for your workloads in Azure, at any scale.
Go behind the scenes:
-How we collaborated with NVIDIA to deliver purpose-built AI infrastructure with NVIDIA GPUs
-How Project Forge checkpointing works to restore job states if a long training job fails or needs to be migrated
-How we used LoRA fine-tuning to update a fraction of the base model for more training throughput and smaller checkpoints (a rough sketch of the LoRA idea follows after this list)
-How UK-based company, Wayve, is using Azure's AI supercomputer infrastructure for self-driving cars
-And how Confidential Computing works with Azure AI to combine datasets without sharing personally identifiable information for secure multiparty collaborations.
Mark Russinovich, Azure CTO, joins Jeremy Chapman to break it down.
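As flagged in the list above, the LoRA idea can be sketched in a few lines: keep the large pretrained weight matrix frozen and learn only a small low-rank correction on top of it. This is a generic illustration of the concept, not Azure's or anyone's production implementation.

# Generic sketch of LoRA (low-rank adaptation): W stays frozen, only A and B are trained.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 8

W = rng.normal(size=(d_out, d_in))             # frozen pretrained weights (never updated)
A = rng.normal(scale=0.01, size=(rank, d_in))  # small trainable matrix
B = np.zeros((d_out, rank))                    # small trainable matrix, starts at zero

def forward(x):
    return W @ x + B @ (A @ x)                 # original behaviour plus the learned correction

x = rng.normal(size=d_in)
print(forward(x).shape)                        # (512,)

# Only A and B change during fine-tuning, so the checkpoint to save is tiny:
print(W.size, A.size + B.size)                 # 262144 vs 8192, roughly 3% of the full matrix

Because only the small matrices are updated and saved, training throughput goes up and the checkpoints shrink, which is the point made in the list above.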
► QUICK LINKS:
00:00 - Introduction
01:15 - AI innovation building specialized hardware and software
04:22 - Optimizing hardware
05:40 - Improved throughput
06:17 - Project Forge
08:01 - Project Forge checkpointing demo
10:02 - LoRA fine tuning
11:29 - Use AI supercomputer infrastructure for your workloads
12:34 - How Wayve is leveraging AI supercomputer infrastructure
13:47 - How Confidential Computing works with Azure AI
15:21 - Wrap up
► Link References:
Leverage Azure AI capabilities for yourself at https://aka.ms/AzureAIInfrastructure
If a robot rips off your arm instead of washing your face (like a human, it's quite capable of doing either) who do you sue?
It depends on the circumstances. Am I the one who designed and was testing the robot? Has the robot been tested and passed regulations? Was it used according to designated functions? Was it properly maintained?
No to the first, and yes to the rest. Now please answer the question.
It means that the regulations aren't strong enough. I can sue the tester or regulator. Or blame myself for trusting them.
I can sue the tester or regulator.
You might sue the tester if you had reason to believe that the test results were falsified, but your case would fail if the incident was outside the scope of the test parameters, or if it was within the scope but the machine had performed correctly on test. Regulation isn't a guarantee of safety or performance - its function is to create a common market.
But don't forget that human workers also have a non-zero probability of doing crazy things. They can burn down an entire city. Who would you sue in that case?
You sue the person who did the deed. That's the law.
You sue the person who did the deed. That's the law.
What if they commit suicide afterwards?
You might sue the tester if you had reason to believe that the test results were falsified...
Then I can only blame myself.
Who is liable for the actions of a system developed by countless others with the intention of learning and reprogramming itself, years after anyone who might be considered the originator has died or gone bankrupt?
That would leave the users responsible for their decisions, just like normal investments. There's no guarantee that your investment will break even. I heard that 80% of start-ups fail.
What if they commit suicide afterwards?
In the first instance, you (or your executors) sue their estate. There have been more complex cases where the manufacturer of a single product has gone bankrupt and the courts have upheld a suit against the company that supplied the raw materials, even though those were entirely to specification. One result is that you now can't use any DuPont material in a medical device.
Design pressure is the highest pressure that a pressure vessel or other equipment is designed to withstand under normal operating conditions. It is calculated using a variety of factors, including the material of construction, the thickness of the vessel walls, and the safety factor.
MAWP, or Maximum Allowable Working Pressure, is the highest pressure that the equipment is allowed to operate at under any circumstances. It is typically set by the manufacturer or by a regulatory body, and it is always equal to or less than the design pressure.
The difference between design pressure and MAWP is a safety margin. This margin of safety is necessary to account for unexpected pressure surges, corrosion, and other factors that could weaken the equipment over time.
Here is a table summarizing the key differences between design pressure and MAWP:
Definition
D: The highest pressure that the equipment is designed to withstand under normal operating conditions.
M: The highest pressure that the equipment is allowed to operate at under any circumstances.
Calculation
D: Calculated using a variety of factors, including the material of construction, the thickness of the vessel walls, and the safety factor.
M: Typically set by the manufacturer or by a regulatory body.
Relationship
D: Always equal to or less than MAWP.
M: Always equal to or less than design pressure.
Purpose
D: To ensure the safety of the equipment and its operators.
M: To provide a safety margin in case of unexpected pressure surges, corrosion, and other factors.
Examples:
A pressure vessel is designed to operate at a pressure of 100 psi. The manufacturer sets the MAWP at 90 psi to provide a safety margin.
A pipeline is designed to withstand a pressure of 200 psi. The regulatory body sets the MAWP at 180 psi to provide a safety margin.
It is important to note that the MAWP should never be exceeded. Operating the equipment at a pressure above the MAWP could result in failure of the equipment and serious injury or death.
It doesn't seem to be aware of the contradiction between the bolded statements above. This weakness needs to be fixed.
Which is why every decision must be traceable to a human or corporation that can be held liable for the consequences. Except, of course, politicians, priests, and economists.
Or when they are already dead.
A dead person or corporation can still be held liable for what they did when alive, and compensation can be sought from their estate or corporate successors. When you buy a company, you buy its assets and liabilities.
It's their heirs you're talking about.
I don't understand why Matt Hancock hasn't been sued over the deaths of those he mandated to be infected with COVID.
No, their estate. The estate is the sum of monetisable assets that belonged to the deceased person. The heirs are the people to whom, after all prior claims have been settled, some or all of the assets have been bequeathed. Whether a claim can be retrospectively prioritised is, I think, dubious, but in any case the essence of all legal proceedings is to file your claim ASAP! In the case of a corporation purchased by another, the liabilities are transferred in the same way as the claims on a personal estate.
How can assets be held liable?
In this video, we dive into the strategies to combat hallucinations and biases in large language models (LLMs). Learn about data cleaning, inference parameter tweaking, prompt engineering, and more advanced techniques to enhance the reliability and accuracy of your LLMs. Dive deep into practical applications with examples and stay ahead with the latest in AI technology!
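One of the inference parameters the video refers to is temperature. The sketch below is generic (not tied to any particular API) and shows how temperature reshapes the probability distribution a model samples its next token from: lower values make it more conservative, higher values make it more varied and more prone to nonsense.

# How the "temperature" inference parameter reshapes a model's output distribution.
import numpy as np

def output_distribution(logits, temperature):
    z = np.asarray(logits) / temperature
    p = np.exp(z - z.max())
    return p / p.sum()

logits = [2.0, 1.0, 0.2, -1.0]                 # hypothetical scores for four candidate next tokens
for t in (0.2, 1.0, 2.0):
    print(t, np.round(output_distribution(logits, t), 3))
# At t=0.2 nearly all probability lands on the top token; at t=2.0 it spreads out.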
Chapters:
0:00 Hey! Tap the Thumbs Up button and Subscribe. You'll learn a lot of cool stuff, I promise.
2:18 Tip 1: The importance of data
2:43 Tip 2: Tweak the inference parameters
3:30 Tip 3: Prompt engineering
4:02 Tip 4: RAG & Deep Memory
7:04 Tip 5: Fine-tuning
7:30 Tip 6: Constitutional AI
8:13 Stay up-to-date with new research and techniques (follow this channel! ;) )
Hey there, it's Dylan Curious! Today, I'm diving into the fascinating world of Large Language Models (LLMs), and I've got something truly special to share. Full credit goes to the incredibly talented programmer, Brendan Bycroft, who has crafted an outstanding visualization tool for understanding how LLMs function.
If you're into AI, language models, or just tech in general, you might have come across various diagrams explaining LLMs. But trust me, what I'm about to show you takes it to a whole new level. Brendan's tool isn't just a diagram; it's an interactive, comprehensive guide that brings the inner workings of LLMs to life. It's perfect for learners, enthusiasts, and professionals alike, offering a clear, visual understanding of these complex systems.
So, whether you're here to learn, explore, or simply satisfy your curiosity about artificial intelligence and language models, you're in the right place. Let's dive in and experience the mechanics of LLMs like never before, all thanks to Brendan Bycroft's exceptional programming skills and innovative approach.
00:00 - Start
01:26 - Embedding
02:37 - Layer Norm
04:05 - Self Attention
08:38 - Projection
09:58 - MLP
11:44 - Transformer
13:09 - Softmax Layer
14:26 - Output
This video explores the journey of language models, from their modest beginnings through the development of OpenAI's GPT models. Our journey takes us through the key moments in neural network research involved in next word prediction. We delve into the early experiments with tiny language models in the 1980s, highlighting significant contributions by researchers like Jordan, who introduced Recurrent Neural Networks, and Elman, whose work on learning word boundaries revolutionized our understanding of language processing. It leaves us with a question: what is thought? Is simulated thought, thought? Featuring Noam Chomsky, Douglas Hofstadter, Michael I. Jordan, Jeffrey Elman, Geoffrey Hinton, Ilya Sutskever, Andrej Karpathy, Yann LeCun, Sam Altman, and more.
00:00 - Introduction
00:32 - Hofstadter's thoughts on ChatGPT
01:00 - recap of supervised learning
01:55 - first paper on sequential learning
02:55 - first use of state units (RNN)
04:33 - first observation of word boundary detection
05:30 - first observation of word clustering
07:16 - first "large" language model Hinton/Sutskever
10:10 - sentiment neuron (Ilya | OpenAI)
12:30 - transformer explanation
15:50 - GPT-1
17:00 - GPT-2
17:55 - GPT-3
18:20 - In-context learning
19:40 - ChatGPT
21:10 - tool use
23:25 - philosophical question: what is thought?
Human think fast & slow, but how about LLM? How would GPT5 resolve this?
101 guide on how to unlock your LLM system 2 thinking to tackle bigger problems
⏱️ Timestamps
0:00 Intro
1:00 System 1 VS System 2
2:48 How does human do system 2 thinking
3:33 GPT5 system 2 thinking
4:47 Tactics to enforce System 2 thinking
5:08 Prompt strategy
8:27 Communicative agents
11:03 Example to setup communicative agents
@YusufIMO, it could also happen to AI models. That's why we need to make mitigation plans.
Can an A.I. system lose its so-called Artificial Mind?
When it happens to Humans, we have Mental Asylums.
What if AI goes MAD?
Will it be put on Medications, or simply ReBooted or Deleted?
ps -
https://en.m.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
It already Hallucinates.
When there's not enough information to produce a requested output, AI models fill in the knowledge gap with some random assumptions. That's obvious during the early stages of training. After billions of training cycles, it just becomes less obvious. Newer AI models produce fewer hallucinations. But when they do, it's harder to detect.
@Yusuf
But still...if a mentally Unstable mind is capable of atrocities/genocides...
a) I can Not help but think what a super unstable intelligence with access to The Red Button would be capable of?
b) How many times should IT be asked to go thru a Mental Assessment Test?
c) Weekly, daily, hourly or by the Minute?
(All it will take, is just One second, of Pure Insanity...The End!)
d) Anyways...
Do you feel A.I. should be given access to Emotions?
(human emotions)
:=]
i don't think it knows the difference between the Truth & a Lie.
It is a language model, so it sort of doesn't 'know' anything. It is essentially a search engine with a vastly superior language interface. That said, if you asked it what the difference is between the truth and a lie, it would probably supply a fairly reasonable description of the difference.
I can Not help but think what a super unstable intelligence with access to The Red Button would be capable of.
It has no more access to buttons than does any search engine like google.com.
But still...if a mentally Unstable mind is capable of atrocities/genocides...
It can't go unstable because it doesn't progress. It doesn't change from day to day.
How many times should IT be asked to go thru a Mental Assessment Test.
Weekly, daily, hourly or by the Minute.
Do you feel A.I. should be given access to Emotions?
Humans are not even given access to the emotions of anything other than themselves.
AI is to be feared because some of it really is in charge of physical things. GPT is not one of them.
Generally agree, but this is something that comes to my mind:
It will not be guessing about the code, it will know.
With adequate depth of neural structure, it can predict far enough ahead, which makes the predictions indistinguishable from certainty. If an AI model can predict every position in chess, it knows how the game will end even before it starts, assuming that the goal is to win. It would be different if the goal is to make the biggest blunder in the history of chess, which might happen to amuse someone.
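Chess is far too large to enumerate in practice, but the "knows how the game will end before it starts" idea can be shown literally on a toy game. The sketch below (my own example, not from the thread) searches every position of a tiny Nim-style game before a single move is made.

# Tiny Nim: take 1 or 2 stones, whoever takes the last stone wins.
# The game is small enough to search every position to the end, so the outcome
# of perfect play is known before the first move. Chess has vastly more positions,
# which is why real engines approximate this with neural networks and limited lookahead.
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones: int) -> bool:
    """True if the player about to move can force a win from this position."""
    if stones == 0:
        return False                           # no stones left: the previous player took the last one
    return any(not first_player_wins(stones - take) for take in (1, 2) if take <= stones)

for n in range(1, 10):
    print(n, "first player wins" if first_player_wins(n) else "second player wins")
# The first player wins unless the pile is a multiple of 3, settled before any stone is taken.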
I do Not think of You as a Legal Advocate promoting A.I.
I'm not. I just try to predict what could happen with AI progression in order to respond properly, and minimize regrets.
ChatGPT is very good at writing computer code. It is already being used by programmers to save themselves time and it is widely thought software similar to ChatGPT will lead to a massive reduction in the demand for programmers within a few years.
ChatGPT is based on the GPT architecture optimized for chatting. Other uses can rely on a customized architecture, or be combined with other types of AI models optimized for different functions, just like Gemini from Google.
Other usage can use customized architecture
You ( @hamdani ysuf ) are talking about what GPT may become and going on to talk about AI. That's fine, but just so that we are clear, when I said the following in post #112:
ChatGPT is very good at writing computer code.
I am just talking about what ChatGPT can do now and what is very likely to happen over a very short space of time (it will reduce the need for human programmers).
import matplotlib.pyplot as plt
import numpy as np
# Define the range of values for x
x = np.linspace(-10, 10, 400)
# Define the function y = x^2
y = x**2
# Plot the function
plt.figure(figsize=(8, 6))
plt.plot(x, y, label='y = x^2')
plt.title('Graph of y = x^2')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.show()
GPT-4 surpasses ChatGPT in its advanced reasoning capabilities.
This is found on OpenAI's website.
My Concern is A.G.I.
That's also in the mission statement of OpenAI.
Our vision for the future of AGI
https://openai.com/blog/planning-for-agi-and-beyond
Our mission is to ensure that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity.
Ask ChatGPT to create some HTML code for a standard login page of a website that you're building and you'll get similar, instantly usable results.
The interesting question is whether it is writing from scratch or searching for examples of the requested code. If the latter, you still need to check for any clash of variables, like using x to mean two different things or calling for a variable that doesn't exist, when you embed the new patch in something bigger.
You can simply ask it. You can also ask it to check.
As a seasoned ChatGPT user, you're familiar with the insights and assistance a single Language Model can provide. But what if you could harness the collective capabilities of SIX different Large Language Models at once?
Check Out Chathub and Change the Way you Use LLMs!
As a seasoned ChatGPT user, you're familiar with the insights and assistance a single Language Model can provide.
So far, no evidence of insight, just uncritical mashups of stuff that's already out there.
Tell that to those who've just been laid off from their jobs due to the increasing use of AI.
Anthropic just dropped Claude 3, a cutting-edge model that performs better than GPT4 across the board, according to benchmarks. I'll tell you all about it, and then we'll test it ourselves!
Chapters:
0:00 - About Claude 3
8:35 - Pricing & Use Cases
10:47 - Testing
Timestamps
0:00 - Predict, sample, repeat
3:03 - Inside a transformer
6:36 - Chapter layout
7:20 - The premise of Deep Learning
12:27 - Word embeddings
18:25 - Embeddings beyond words
20:22 - Unembedding
22:22 - Softmax with temperature
26:03 - Up next