You really need to define "long term".
For a soldier, the only goal is the next ten minutes.
For a politician, the next election.
For the rest of us, a maximum of 100 years.
Our duty to subsequent generations is to bequeath the maximum range of choice and opportunity to our immediate descendants, not to predetermine their goals.
You don't need a machine to filter out garbage. Just ask "does that hypothesis survive a test?" and "who stands to gain?". AI can't carry out tests and doesn't care who wins.
Quote from: alancalverd on 08/04/2023 09:23:49
You really need to define "long term".

As far into the future as possible, as permitted by the currently known best models of reality.
Quote from: alancalverd on 08/04/2023 09:23:49
For a soldier, the only goal is the next ten minutes.

There's a risk of producing war criminals.
There will be a lot of information generated and distributed online.
When AI models can think better than all human thinkers combined, they will be able to devise reliable and accurate tests to determine the validity of new information. They will care who wins.
A lot of information is distributed online, but it is all generated by humans, or by machines instructed by humans to collect data.
Here is some online information. Putin tweets "Zelensky is a fascist and Ukraine belongs to Russia". Zelensky tweets "I am the democratically elected president of an independent country that belongs to itself". Write an AI program to determine the truth and decide who wins.
Quote from: alancalverd on 09/04/2023 16:00:24
Here is some online information. Putin tweets "Zelensky is a fascist and Ukraine belongs to Russia". Zelensky tweets "I am the democratically elected president of an independent country that belongs to itself". Write an AI program to determine the truth and decide who wins.

The AI models will evolve over time at an exponential rate by collecting information from various sources. They are not static algorithms. The sources will be examined, cross-checked, and corroborated with one another to determine which ones are more accurate.
So if Putin gets all his friends to tweet the same, it gets multiply corroborated and becomes the truth, thus proving that Josef Goebbels invented AI and it is a Bad Thing.
They are weighted according to the results from previous training sessions.
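The idea in the last two posts, cross-checking sources and weighting them by how well they performed in previous rounds, can be sketched as a simple multiplicative-weights scheme. Everything below (the source names, the learning rate, the exact update rule) is illustrative, not a description of any real AI system:

```python
# Hypothetical sketch: sources are weighted by past accuracy, and a
# claim "wins" if the total weight behind it is largest.

def update_weights(weights, verdicts, truth, lr=0.6):
    """Multiplicative-weights update: sources whose claim matched the
    later-verified outcome keep their weight; the rest are penalised."""
    new = {}
    for source, claim in verdicts.items():
        w = weights.get(source, 1.0)
        new[source] = w if claim == truth else w * (1 - lr)
    return new

def weighted_vote(weights, verdicts):
    """Pick the claim backed by the largest total source weight."""
    totals = {}
    for source, claim in verdicts.items():
        totals[claim] = totals.get(claim, 0.0) + weights.get(source, 1.0)
    return max(totals, key=totals.get)

weights = {"A": 1.0, "B": 1.0, "C": 1.0}
# Round 1: A was right, B and C were wrong (verified afterwards).
weights = update_weights(weights, {"A": True, "B": False, "C": False}, truth=True)
# In the next round, A's lone claim now outweighs B and C combined.
print(weighted_vote(weights, {"A": True, "B": False, "C": False}))  # prints True
```

Note that this inherits the objection raised above: if colluding sources start with most of the weight, or are never caught out by a verified outcome, their mutual agreement is indistinguishable from genuine corroboration.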
At this stage, it's important to make sure that their goals are aligned with the universal terminal goal.
In other words, they reflect the prejudice of the trainer. The very opposite of any useful definition of intelligence.
Which is the eternal dominance of Mother Russia. Or maybe something else. Ask any dictator - nobody else believes that there is or should be a UTG.
[Embedded video] Sam Harris & Jordan Peterson - Vancouver - 2, moderated by Bret Weinstein, 06/24/2018. This was the second time Sam and Jordan appeared live together on stage. The event took place at the Orpheum Theatre in Vancouver, BC, Canada, in front of a sold-out audience of 3,000 people, and was produced by Pangburn Philosophy.
Quote from: alancalverd on Yesterday at 08:20:24
In other words, they reflect the prejudice of the trainer. The very opposite of any useful definition of intelligence.

Yes, initially. But it's still much better than random decisions.
Quote from: alancalverd on Yesterday at 08:20:24
In other words, they reflect the prejudice of the trainer. The very opposite of any useful definition of intelligence.
No. 50% of random decisions may be valuable, and the mean of all of them will not be harmful: the essence of democracy. 100% of a prejudice could lead to disaster: the downfall of autocracy.
Our ancestors likely survived by the prejudice that sudden movements of bushes are caused by lurking predators.