AI Ethics And AI Law Wrestling With AI Longtermism Versus The Here And Now Of AI

  • 📰 ForbesTech


AI longtermism entails looking at how humanity in the far future will be impacted by AI, along with what we should be doing today about AI regarding that far future populace. AI Ethics amid autonomous systems wrestles with AI longtermism.

The most common undercurrent about longtermism entails the weighty ethical issues that arise. You could assert that this is a fully ethics-immersed endeavor. Predictions over extremely long time horizons are bandied around, and we need to ponder the morality of today, the morality of tomorrow, and the morality of the far future. Tough questions are asked about how humanity will fare in the long term.

Longtermists are construed as people of today who shoulder a serious and sobering concern for far-future people. This is not as easy as it might seem. For one thing, the odds are pretty high that the actions of today's longtermists will be long forgotten, neither known nor remembered by those far-distant people of the far future.

Unless, of course, there are other people of today who likewise share your vision of the far future. In that case, you could witness reward or admiration from those around you now. Whether or not the far future ever learns of your efforts can be somewhat discounted, since you at least gleaned recognition today.

We have nearly 8 billion people alive today. Imagine that, either on Earth and/or via the use of other planets, we expand to 80 billion people. If that number isn't impressive to you, I'll up the ante and say that we could have 800 billion people, or maybe 8 trillion people, and so on. From a sheer numeric perspective, some longtermists suggest that we of the 8 billion need to be doing today whatever we can to ultimately support those 800 billion or 8 trillion people.

The catchphrase “existential risk” suggests that we might do something that could lead to the destruction of all of humanity. Or we might fail to do something that would have prevented that destruction. We don't necessarily have to be dealing only with total destruction. There are plenty of other frightening outcomes, such as remaining alive but being reduced to something like the zombies of movies and TV shows.

As such, we might get blindsided by AI. This could happen due to our heads-in-the-sand posture of heralding, or at times carping about, the AI of today. We can't see the forest for the trees, nor be mindful of the far future. We aren't allowing ourselves to step out of the weeds. Somebody, somewhere, has to stand tall and look out beyond the nearest horizon of AI.



