H G Wells (or was it Olaf Stapledon?) described a distant future where everything appeared to the time traveller as slowed down and torpid, though as the thought processes of the inhabitants were equally slow, they were enjoying life on their own terms. So whilst we can predict a state of total entropy on the horizon, I think the horizon will recede asymptotically.
So you never plan for the future? Little point in planning for the past, and the present is happening anyway. What a delightfully relaxed attitude - and so uncharacteristic of a man who claims to be searching for an ultimate goal! I think the West Indian adjective "mellow" is appropriate....fish in the sea, beer in the can, fruit in the tree, so be cool man!
Making predictions can be done using appropriate models and assumptions; no time travel is needed. What do you plan for the heat death of the universe?
But what is the point of making plans if there will be nobody to see the outcome?
Quote from: hamdani yusuf on 09/08/2021 10:55:45
//www.youtube.com/watch?v=ixIoDYVfKA0
Quote
Self-driving cars are already cruising the streets today. And while these cars will ultimately be safer and cleaner than their manual counterparts, they can’t completely avoid accidents altogether. How should the car be programmed if it encounters an unavoidable accident? Patrick Lin navigates the murky ethics of self-driving cars.
Even with self-driving cars, accidents can and will still happen. And their outcome may be determined months or years in advance by programmers or policy makers.

Solving complex dynamic calculations in real time can be difficult, and can sometimes even give wrong answers. That's why contemplating them in advance may help improve the result. We can make a list of probable and conceivable situations, then set rules and standards for decision-making to build a priority list. Refusing to do so would effectively leave the decisions to random chance. The video above, at the 2:00 timestamp, shows an example. Is it morally acceptable to leave someone's fate to random chance, especially when a better alternative is available?
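To make the "rules and standards" idea concrete, here is a minimal sketch of what such a pre-computed decision rule might look like. The option names, risk numbers, and weights below are all invented for illustration; a real autonomous-driving system would be vastly more complex and its weights would be a policy choice, not a programmer's.

```python
# Toy sketch of a rule-based priority list for an unavoidable accident.
# All options, probabilities, and weights are hypothetical.

def choose_action(options):
    """Pick the option with the lowest expected-harm score.

    Each option is a dict like:
        {"name": "swerve_left", "injury_risk": 0.3, "fatality_risk": 0.1}
    """
    FATALITY_WEIGHT = 100  # assumed policy choice: 1 fatality ~ 100 injuries
    INJURY_WEIGHT = 1

    def harm(option):
        return (FATALITY_WEIGHT * option["fatality_risk"]
                + INJURY_WEIGHT * option["injury_risk"])

    # Decide by an explicit, pre-agreed rule rather than leaving the
    # outcome to chance.
    return min(options, key=harm)

options = [
    {"name": "brake_straight", "injury_risk": 0.9, "fatality_risk": 0.05},
    {"name": "swerve_left",    "injury_risk": 0.3, "fatality_risk": 0.10},
    {"name": "swerve_right",   "injury_risk": 0.6, "fatality_risk": 0.01},
]
print(choose_action(options)["name"])  # → swerve_right
```

The point is not these particular numbers, but that the ranking is debated and fixed in advance, months or years before the split-second it is needed.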
Many exercises on moral decision-making emphasize mitigating incidents. Of course, those scenarios are more thrilling and less boring. But in real life, preventing incidents from happening in the first place is an immensely more effective way to get the desired results.
If we assume that we or our descendants will be there, our preferred goal will be one that would appeal to us in the future. So we have to imagine ourselves in the position of a time traveller. You do it every day: you have a vision of yourself at work or on holiday, and if the vision is appealing, you get on the train and go there.
Quote from: hamdani yusuf on 25/06/2021 06:40:00
Quote from: hamdani yusuf on 25/06/2021 05:39:22
Moreover, being altruistic prioritizes the well-being of others rather than harming oneself. In your example, one of them can commit suicide, or go somewhere else to find another apple.
This problem is better answered in the discussion about the universal terminal goal. It would be like jumping to the final step here. But I'll do it anyway, lest I forget about it later. Here is another expression of the universal terminal goal: the universe should be kept containing some form of consciousness, but it doesn't have to contain me in particular. I'll just call this the universal altruism principle, because, why not?

Apart from universal altruism, there are non-universal forms of altruism, e.g.:
- Individual altruism. A gecko drops its tail to distract a predator and save its own consciousness.
- Parental altruism. Parents sacrifice their own lives to save their children.
- Kin altruism. Someone sacrifices their own life to save their siblings or close relatives.
- Tribal altruism. Someone sacrifices their own life to save their tribe members.
- National altruism. Someone sacrifices their own life to save other individuals of the same nation.
- Racial altruism. Someone sacrifices their own life to save other individuals of the same race.
- Species altruism. Someone sacrifices their own life to save other individuals of the same species.
- Genus altruism. Someone sacrifices their own life to save other individuals of the same genus.
- Universal altruism. Someone sacrifices their own life to save other conscious entities.

Altruistic behaviors only make an evolutionarily stable strategy if the one being saved is more likely to carry consciousness into the future than the one making the self-sacrifice. Otherwise, it wouldn't be much better than suicide.
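The evolutionary-stability condition stated above can be put into toy numbers: self-sacrifice only increases the expected number of consciousness-carriers if the beneficiary's prospects without help are poor enough. This is a minimal sketch with invented probabilities, not a real evolutionary model.

```python
# Toy model of the altruism condition above: sacrifice is only "worth it"
# if it raises the expected number of survivors carrying consciousness
# into the future. All probabilities here are invented for illustration.

def expected_survivors(p_self, p_other, sacrifice):
    """Expected number of individuals surviving into the future.

    p_self:  survival probability of the would-be altruist without sacrifice
    p_other: survival probability of the other individual without sacrifice
    sacrifice: if True, the altruist dies but the other survives for certain
    """
    if sacrifice:
        return 1.0              # altruist lost, beneficiary saved
    return p_self + p_other     # both take their chances independently

# A parent (90% odds alone) saving a child who would otherwise almost
# certainly die (5% odds): sacrifice raises the expectation (1.0 vs ~0.95).
print(expected_survivors(0.9, 0.05, sacrifice=True))
print(expected_survivors(0.9, 0.05, sacrifice=False))

# If the other would probably survive anyway, sacrifice is a net loss
# (~1.8 expected survivors without it vs 1.0 with it).
print(expected_survivors(0.9, 0.9, sacrifice=False))
```

Under this toy accounting, sacrifice pays only when the combined unaided odds fall below certainty-for-one, which matches the intuition that saving someone who was likely to survive anyway is little better than suicide.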
The first, last and only lesson of chess is that it is all about beating your opponent's king into submission, regardless of the cost to your own troops, whilst avoiding getting stuffed yourself. It is total war distilled into a non-contact sport. Not a good starting point for developing a moral code flavored with altruism.
Then make priorities. Don't confuse a terminal goal with an instrumental goal. In light of the universal terminal goal, seeking pleasure and avoiding pain are just instrumental goals. They are generally good things, as long as they don't obstruct our efforts to achieve the terminal goal.
That's our terminal goal. In a chess game, it's winning the game by checkmating the opponent's king. ... But if you look at the list of the best chess games of all time, you'll find that players who sacrificed their queen often ended up as the winner.