The creativity of some dreams astonishes me - occasionally they seem to have been written by an intelligence that isn't me, keeping a clever twist in the plot hidden until the last moment and then revealing it at the right time for maximum effect, but also showing that it had been planned early on. There's definitely someone else in here who can't speak to me directly, but who tries to communicate through dreams.
Like us, your AGI needs to be able to simulate things before executing them, which is part of imagination's job. The only thing it would be missing then is simulating improbable things once in a while in case they pay off, and then I bet it would realise that they pay off often enough to make the habit worth keeping.
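As a minimal sketch of that idea (my own illustration, with a made-up simulate() scoring function and long-shot probability, not anyone's actual AGI code), the principle could look roughly like this in Python:

```python
import random

def pick_action(candidates, simulate, long_shot_rate=0.05):
    """Normally pick the action whose simulated outcome scores best,
    but once in a while simulate a random improbable option instead,
    in case it pays off."""
    if random.random() < long_shot_rate:
        return random.choice(candidates)   # the occasional long shot
    return max(candidates, key=simulate)   # the most promising option

# Example with a stand-in scoring function:
actions = ["obvious plan", "variant plan", "wild idea"]
scores = {"obvious plan": 0.8, "variant plan": 0.6, "wild idea": 0.1}
print(pick_action(actions, scores.get))
```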
That's what I think happened to our minds while we were evolving from animals into humans. If simulating all the possibilities, starting with the most obvious, had been better, evolution would have chosen that way, and it didn't.
We will probably be able to build biological computers some day, so evolution could have done so too, but it didn't.
The way the mind moves its data is slow, and it could probably have been as fast as a computer if that had been useful, but there is no use in thinking a million times faster than we can move, so it didn't happen.
Do you sometimes have that feeling about your ideas or do you always feel that they are yours?
That they always come from your own deductions and calculations for instance? If you do, then it is no surprise that you want your AGI to think like you.
When the right governs, it effectively works for the short term, while the left works for the long one.
Your AGI will behave as if it knew the outcome, since it would never take chances, so I think it will only account for the short term.
If the right always governed, I think societies would not evolve. I think it's the random wandering between left and right that produces their evolution. Your turn now, but you're not allowed to answer that nobody has to evolve in paradise! :0)
You're still missing the point - AGI will explore all those random lines eventually, but it will do so in a non-random order, starting with the lines that are most likely to produce lots of useful results and saving up the least-likely-to-be-useful lines until last.
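A minimal sketch of what exploring in a "non-random order" could look like (just an illustration, assuming a hypothetical usefulness() estimate; not the actual design): a priority queue visits every line of enquiry eventually, most promising first, least promising last.

```python
import heapq

def explore(lines_of_enquiry, usefulness):
    """Visit every line of enquiry eventually, but in a non-random order:
    the most promising first, the least promising saved for last."""
    # heapq pops the smallest value first, so negate the usefulness score.
    queue = [(-usefulness(line), line) for line in lines_of_enquiry]
    heapq.heapify(queue)
    while queue:
        _, line = heapq.heappop(queue)
        yield line

estimates = {"likely breakthrough": 0.9, "plausible idea": 0.5, "long shot": 0.05}
for line in explore(list(estimates), estimates.get):
    print(line)
```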
It's all just knowledge and applied reasoning. But I want AGI to think better than I do so that it doesn't make any mistakes. For example, when you move something north and then accelerate it to move it north east, an unexpected rotation occurs quite automatically because of synchronisation issues, and it's so counter-intuitive that it never occurred to me that such a rotation was possible, so I made a mistake with that a few years ago.
Random is stupidity; not intelligence.
Einsteinists still refuse to do that with relativity
I can usually see the entire route as to where the parts of a discovery came from.
Evolution has a goal to respect, the survival of the fittest, and that goal is not random.
What if humans told the AGI that they are not happy after a while, and that the only reason they find is the AGI itself? What if they got fed up that the AGI always wins the game?
I can't figure that rotation out. Could you elaborate a bit please? Is it a relativistic issue?
If evolution is stupid, then we are stupid and I completely agree with that!
But if there is no way to build anything intelligent from stupid things, then your AGI will also be stupid.
For the moment, there is no advantage for them to change their minds.
Simulations and logic are not enough; we need to make new predictions and test them by experiment. I had one about the mass of particles accelerated separately: I predicted that they would not all offer the same resistance to acceleration, due to the randomness of the changing process.
The fact is that we can't predict that kind of future, and that we should acknowledge it.
The only way to predict the future is to force people to do what we want,
and it generally doesn't last long. That's why I think that your AGI won't work. We don't like to be forced to do anything, including being happy, so I think we would revolt after a while. Did you anticipate that possibility? And if so, how would your AGI react then?
It's very simple. If AGI stops doing the right thing, people will find out how much suffering it was preventing and how much happiness it was making possible. Every time they ask it to take a break, lots of people will die and lots more will spend the rest of their lives grieving (and condemning the people who made AGI stop).
Evolution has no goal at all. Survival of the fittest is just a mechanism by which evolution happens.
The edges of the square are not aligned with the north-south and east-west lines in this frame though - the square has rotated a bit (anticlockwise)
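For readers wondering where that tilt comes from, here is a rough back-of-the-envelope sketch (heavily simplified, and ignoring factors of order one that depend on exactly how the acceleration is applied). In a frame where the square already moves at speed v along x, events that are simultaneous in the original frame and separated by \(\Delta x\) are offset in time by

\[
\Delta t' = \frac{v\,\Delta x}{c^{2}},
\]

so if the two ends of an east-west edge are given their northward speed u "at the same time" in the original frame, one end starts moving north before the other in the new frame, and the edge tilts by roughly

\[
\theta \sim \frac{u\,\Delta t'}{\Delta x} = \frac{u\,v}{c^{2}},
\]

which is the order of magnitude of the small anticlockwise rotation being described (the relativity-of-simultaneity contribution behind the Thomas-Wigner rotation).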
But we're not - evolution is a stupid process which can create intelligence through a series of lucky accidents which get selected for with the innovations retained.
That's better. It starts to look like a democratic system.
What about having two AGIs representing the two directions a democracy can take, and letting us choose which way we want to go by a survey at the end of the year?
There is no other way than surveys to rate the satisfaction of a population anyway, so I guess your AGI would be forced to use them too to minimize displeasure and maximize pleasure.
You know your AGI will have your ideas about the way we should behave, so you don't see what I see. I would not agree with its ideas more than I agree with yours, so I would try to stop it, not just discuss with it, because I think that the way it would proceed would hurt me.
...half the population thinks it's better to proceed one way and the other half the other way.
Will your AGI know why we always want more? And if not, will it feed us until we literally explode? Will it know why half the population wants some change and the other not? And if not, will it nevertheless conduct the herd in the same direction until it falls down the cliff?
If so, then the survival-of-the-fittest idea is also just a mechanism by which the evolution of ideas happens.
I think you conflate a goal with our will to reach it, as if there were a superior mind inside our mind that knew the right way.
You tend to attribute resistance to bad will resulting in poor analysis, but it's not bad will that is at stake; it's resistance to change, a natural law that permits any existing phenomenon to keep on existing.
The relativists can't use their will to resist our ideas, since they're not conscious of resisting.
That's what I call our second degree selfishness: we care for others as long as we can imagine that they will care for us. So your AGI would still be selfish after all, which is normal since it would be programmed by selfish humans. You probably simply imagine yourself at its place the same way we do when we want to get along with others. It works as long as others imagine the same thing, otherwise it can go wrong quite easily.
As I often say, we don't change our mind by logic, but only by chance. Resistance to change is completely blind to logic, while chances to change increase with time.
You think your AGI won't resist change while, in reality, it will be completely blind to our logic, and there will be absolutely no chance of it changing its mind with time.
If you are able to imagine such an AGI, it's probably because you already think like it. You say we should try to demolish our own ideas to be sure they're right, but I think we can't do that; I think we can only compare our ideas to others' and try to imagine where they could interfere. Even though I try very hard to compare my ideas correctly to your AGI's, I always get the feeling that there is no interference.
You can't convince me and I can't convince you, but you nevertheless intend to force people to accept your AGI whereas I don't intend to force anybody to think like me.
It's hard to figure out what makes us so different on that precise point. I can't understand how I could force people to do what I want and still think they will be happy. Hasn't science shown that coercion is not the right way to educate children? Maybe you were forced as a child, but how could you think it was a good thing?
I probably misunderstood, did I?
If we had invented evolution instead of having only discovered it, I don't think we would call it a stupid invention.
It's nature that has invented the process, the same nature we are actually part of. I hope you don't think we are superior to nature, and if not, then I think we have to find a way to give it some intelligence, and the best way I've found is to give less importance to our own.
I know you're afraid somebody might steal your AGI or build it before you do, but it's not a reason to do what Trump would do with it.
Trump thinks it's right to dominate the world before others do, but we know it's just a paranoid idea that has never brought us happiness.
If your AGI were only built to do scientific research, you wouldn't feel that threatened.
Maybe someone else is actually building one with the intent to rule the world, but so what? Let those people think that coercion is the way to go, and keep on researching how things really work.
Control induces control, so if you install your AGI, someone else will install another one to fight it. To me, that kind of software should simply be banned the same way nuclear arms should be. What's the use of developing more nuclear arms when we already know they're too dangerous?
By the way, do you know the software called Mate Translate? It's so good that I could write my messages in French and have them translated. In fact, I don't do it only because I want to improve my English. If it's that good in Russian, I could at last be able to discuss with Yvanhov, and furthermore, he could at last be able to read and write in English without knowing it. I won't be able to use those programs as an example of how far artificial intelligence is from intelligence anymore. They made a huge leap lately, not just a small step. If they can translate that well, it means that they understand quite well too, so they're not far from being able to discuss with us. I wonder if they would be as difficult to convince as you. :0)
It isn't about chance at all. People follow authority. If you change the authority, they will change their position in a hurry in order to avoid being ridiculed by the new authority. The driver is crowd bullying - the herd is right even when it's wrong, and people willingly override their own intelligence to follow the demands of the authority.
When you say "our logic", do you mean illogic?
What you should be looking for is contradiction. Where there is contradiction, something is wrong. When something is wrong, the task is to identify it and correct the mistake.
Being forced to do things that are right is not a bad thing. Being forced to do things that are wrong is bad. No amount of the latter will make the former wrong. It's easy to show up what's wrong by reversing the roles. If you change your mind about what's right or wrong when you become the other person and they become the previous you, then your rules are wrong.
There are bigger threats than Trump, and he's right to oppose those threats and seek to dominate the world rather than having them dominate it.
that's also why there must not be any bias in AGI if it's to be safe. As soon as you put a bias into it, you risk it becoming a tool of genocide.
It's true that we follow the leader, but I think it's an instinctive behavior, not an intelligent one as you seem to think.
We like watching games because we can't predict the outcome, so I predict that we will still like them when they are played by robots.
But since the AGI would be programmed to produce as little harm as possible, it would itself need to try something else the next time it was in office. Can you predict what it would try?
By the way, how would an AGI know when to let us take its place? Would it wait for riots or simply trust a survey?
Politicians don't yet trust surveys when they tell them to go, because like any AGI, they know their ideas are better, so they inevitably wait for riots. Will your AGI do the same thing?
The people that believe in god find the idea logical, and many scientists even think so. Are they illogical or is it only because we don't have that idea that we think it's illogical?
Logically, it's because we don't have it, otherwise we would find it logical.
The only way to know which idea is better is to test them by experiment, but there is no way to experiment on god, so it's a question of feeling.
Is it illogical to take a chance? That's what you seem to think, and you probably think so because you think your AGI wouldn't have to do so. But you probably do have to take chances sometimes, so are you feeling illogical these times?
To me, the idea of god is contradictory, but not for those who believe it's true, so how would they be able to find the contradictions? We simply can't use our own ideas to contradict them; we have to use others' ideas, and then those have to be similar, otherwise we can't even understand them.
Let's admit that what's right is what will benefit others, not just us. Then what's right for the whole planet is what will benefit everybody. For instance, what's right would be to take any economic measure that benefits everybody. I'm in, and I have my own ideas on the subject, but I bet your ideas are different and you think they're right, which means that, to me, your AGI may not make the right decisions.
That's something an AGI could do without having to take over everything. This way, people would automatically invest in less pollution, less wars, and more equity. Things would go better, and there might be no need for an AGI to rule us.
With two identical robots, that skill will get dull to watch after a while and the audience will wander away, not caring which one wins by random luck.
We have the idea in our possession and we test its compatibility with the rules of logic, and we find a mismatch. That's the end of the matter.
It wouldn't let us take its place. What it could do though is agree for us to have our way with a reckless policy on condition that when things go wrong, all those who voted for it to happen will be executed for causing so many unnecessary deaths of the people they voted against.
The way to test God's compatibility with logic is to take the claims about what God is and see if they hold together logically. When you find that they don't, the idea of God is classed as irrational.
Do I take any chances? Yes, but they're chances where a gain is more probable than a loss and where the risk of a loss is not catastrophic.
If you and I have different ideas about something and one of us is right and the other wrong, AGI will, if correctly programmed, agree with the one who is right.
No bias should ever be added into it.
Good! You finally admit that your AGI will face randomness while executing its moves even if they don't contain any.
But I think that we wouldn't get bored after a while if all the players were replaced by robots, and if each robot were different.
We love seeing randomness at work, and we wouldn't if it was useless.
The fact is that any existing thing has to be programmed to care for itself first otherwise there would be no existence at all. Your AGI can't get around that rule, so it has to be programmed to care for itself first, a bias which is the root of all our biases.
That's what often happens when dictators take power: they eliminate the opposition.
At least your AGI has one, but it could still be dangerous to deploy it without testing it thoroughly. The problem is that it would have to be tested directly on humans, which would be dangerous for them, and which would contradict the AGI's program. We may consider it worth sacrificing a few to save a lot, but not when the outcome is uncertain. You may think the outcome is certain, but you probably know that nothing that has never been tested can be certain, so your AGI will know it too, and since it is perfectly logical, it would probably refuse to run the experiment.
You can't program your AGI not to agree with your ideas, so it will always agree with them unless it has a bug, and it will also agree with those who think like you.
If you ever started thinking differently, it would mean that you might have been wrong from the beginning, and if you were only partly wrong, then your AGI would be partly wrong too. It's impossible to be perfect, so it's also impossible to create anything that is.
If it has the bias to care for itself before caring for others, won't it be able to develop all the other biases?
I went through the wiki page about biases, and I realised that they were exactly what I thought they were: they can be anything, provided they prove our point.
What do you think of my way to understand natural intelligence? Can you relate it to the way you understand artificial intelligence?
I use sense and intensity by analogy to the direction and speed a motion can take. I take for granted that the ideas that are made of words are meant to produce words, and that talking or writing is a motion like any other motion. This way, I can take the same two parameters we use to describe motion and apply them to an idea, which becomes information that serves to produce motion the same way light serves to produce my small steps.
Not so - if an AGI system costs resources that aren't available and people would have to die in order to maintain the AGI, the AGI may be expendable - it may be morally better to have to reinvent it from scratch later, and if it calculates that that's the case, that will be the course of action that it chooses. (In reality though, it will be no trouble to keep a copy on a flash drive, and it will fit on one, so there is no gain from destroying it.)
It isn't a bias if it proves your point - it becomes a proof.
I don't think there's any true randomness at all, but there are plenty of things that can't be measured adequately to make perfect predictions, so there may be enough surprises to make it uncertain which robot wins which points, although there may be such an advantage for the server or the returner that the winner of each point is known before the ball's been thrown in the air.
I don't find randomness interesting to watch.
In a phase where innovation leads to new ways of playing the game and new ways of winning it, then a lot of interest is maintained, but once you get to the point where they all have the same power of AGI designing every aspect of the build, they will all become practically identical, and then it gets dull.
Removing good AGI from power is equivalent to turning off every ventilator in a hospital, so anyone who wants to do that on the basis that it will make life more fun should be allowed to try out that experiment only on condition that when it goes wrong they will be executed.
In the case of AGI, we will first have it there providing advice which we may ignore. The more we ignore it, the more we will see the score go up showing how many people were killed by our bad decision.
I don't work to analogies - I simply program things to do exactly what needs to be done, and all the bits of code work together in a coordinated way that gradually adds up to higher and higher intelligence. The components are simple, but you have to get them to work together the right way, and while analogies sometimes point you in a useful direction, you extract the useful idea from it and then apply it in a way that directly relates to what you're actually working with.
What I had in mind though is a riot where half the population would want to kill the AGI, as often happens to dictators.
...as my selfish logic, which seems to be easier to program.
Usually, it's the server that gets the advantage, and I see no reason why it would be different for robots, but that doesn't mean that the server will automatically win the point though.
I'm pretty sure that watching them would be as interesting as watching the two best players in the ATP.
Then you shouldn't like watching tennis or any other natural phenomenon, like clouds forming or water waves for instance, but I suspect you do, since you like boating.
That's precisely what I was telling you about the AGI. I said we would get bored after a while since we wouldn't have any challenge to overcome anymore. It's not true though, because as for two robots playing tennis, nature would always find ways to elude the AGI's certitude.
That's also how laws work: they promise us a punishment if we get caught. They account for premeditation though, which is the knowledge we have that our decision will kill people, which is not the case in your example.
It is not enough to tell people that we are right, we must prove it with real experiments, and in this case, there is no other way for the AGI than to try it, so it should be happy that someone tries it in its place, and thank him for having done so instead of killing him.
The only way for politicians to know if they did a good job is to run the election
Don't call it Nostradamus otherwise people will immediately refuse to believe it.
If the AGI ever succeeded in eliminating wars and poverty, I bet we wouldn't be happier.
To save the planet, we would need to stop growing for a while, but we can't.
Countries don't stop making wars until they get erased from the map.
Facts are for others, we're not part of statistics. We are selfish and proud to be. Every one of us, not just others. Will your AGI account for that fact or do you still think that some of us are not?
Their ideas would take the form of propositions, not certitudes: "let's take that direction for a while and see what happens", they would all say. Like every one of us, they would be happy to discover they were right, but they would know that chance had to be with them. They would unite with other leaders to make a better world instead of fighting them. They wouldn't need to cheat to be reelected. Will your AGI teach us that truth, or do you still think that chance has to be eradicated from the universe? :0)
Comparing a 0 to a 1 is already an analogy, and comparing the good to the bad too, so your AGI does work with analogies.
but as the quantum randomness shows, perfection is unreachable.
but in LET (Lorentz Ether Theory) it is important to understand that it is not time that is slowing - everything continues to move in normal time, but the communication distances for light and for all forces between atoms and particles increase, and that results in a slowing of all clocks, which means they are unable to record all the time that is actually passing.
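For anyone who wants the arithmetic behind "longer communication distances slow the clocks", the standard transverse light-clock calculation (plain geometry, not specific to LET) goes like this: a clock whose light bounces across an arm of length L ticks every \(t_{0} = 2L/c\) at rest; if the whole clock moves at speed v, each half-tick of duration t/2 covers a diagonal, so

\[
c\,\frac{t}{2} = \sqrt{L^{2} + \left(\frac{v\,t}{2}\right)^{2}}
\quad\Rightarrow\quad
t = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \frac{t_{0}}{\sqrt{1 - v^{2}/c^{2}}},
\]

i.e. the moving clock ticks more slowly by the familiar factor, which is what the longer light paths amount to.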
I bet we will be. You only have to watch a documentary about how deadly the past was to be grateful that you're living today instead.
I bet your AGI would do exactly what you would do in this case, which is subjective, not moral. If I can't choose between letting them wear their scarf or not, I can't see how your AGI could.
You probably think so because you think that your logic is perfect, which is not far from thinking that you're perfect.
No more free will...
Our instincts can't change, period.
We will go on making wars and spreading inequalities.
The key to understanding this is to realise that the movement of the mirror will make it behave as if it is set at a different angle from the one it is actually set to. [Any length contraction of the mirror in the direction of motion does not alter the 45 deg angle. It is of uniform thickness.]
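A short sketch of the "effective angle" point, assuming the usual transverse light-clock geometry (an illustration added here, not a quote): for the reflected ray to keep pace with an apparatus moving at speed v along x, it must travel with an x-component of velocity equal to v, so in the ground frame it leaves the mirror at an angle θ from the transverse arm where

\[
\sin\theta = \frac{v}{c},
\]

and since tilting a stationary mirror by an angle α deflects its reflected ray by 2α, the moving mirror behaves roughly as if its 45° setting had been shifted by about θ/2, even though it is physically still at 45°.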
but in LET (Lorentz Ether Theory) it is important to understand that it is not time that is slowing - everything continues to move in normal time, but the communication distances for light and for all forces between atoms and particles increase, and that results in a slowing of all clocks, which means they are unable to record all the time that is actually passing. [Lorentz realized the need for a local time and a different time for objects in motion (relative to the ether). That's why the (LT) coordinate transformations include time. If clocks didn't record actual time, why use them?]
The way things work in LET results in it being impossible to tell if anything is moving or not: He declared that all frames of reference are equally valid instead, [Motion is detectable. The issue is what is the rate (velocity) for a given object.] [He concluded all inertial frames are equally valid.]
Much more interesting though is what Einstein did with the nature of time, because he changed it into a dimension and in doing so turned the fabric of space into a four-dimensional fabric called Spacetime. [Minkowski is responsible for that.]
[You don't know the history of Relativity, Lorentz or Einstein, and the rest of the paper is a distortion of the facts with added science fiction.]
It is a proof that his models are broken. Nitpicking about the wording of the introductory part is avoiding the issues.
Quote from: David Cooper on 17/03/2019 00:59:36
It is a proof that his models are broken. Nitpicking about the wording of the introductory part is avoiding the issues.

Same answers as usual. Do a quick review of history, and realize that a few thousand years of human rule is responsible for the tragic state of humanity.
AGI will be like a superparent of all mankind.
(That question comes from the answer you just gave Phyti.) Why is it so hard to get people to recognise that they are breaking their own rules when the rules of mathematics are so clear and are so clearly being broken by the models?