text (stringlengths 1–39.9k) | label (stringlengths 4–23) | dataType (stringclasses, 2 values) | communityName (stringlengths 4–23) | datetime (stringdate 2014-06-06 00:00:00 to 2025-05-21 00:00:00) | username_encoded (stringlengths 136–160) | url_encoded (stringlengths 220–528) |
---|---|---|---|---|---|---|
The definition of 'strong AI' given doesn't seem very useful - what does it mean for something to think like us, and who cares if it even does - are we really assuming that an AI has moral status if and only if it is strong AI? On the list of "terms that cause far more confusion than they solve", I'd put Strong AI even higher than 'free will.' I think Pat Churchland makes a point of avoiding the term free will in conversations about free will, and I'd like to avoid the term 'strong AI' in conversations about AI. | r/aiethics | comment | r/AIethics | 2016-08-10 | Z0FBQUFBQm9IVGJBZXplanFxMHhWQzFuUUdxMl9DU05FcXJtM1pxTTFLTkprcmpTS3Y4QTVJaWticTdlbnZUNXJMSjRQNEJaa0VQR3N2RHFBY3I4aWJ2LV9Cd2wxNGFFWUE9PQ== | Z0FBQUFBQm9IVGJCOXJaMEctU2Q1M2NiUHVzTGh4NW1KV251RmlBN19IdDlnWnJuNlNSbFgtUjY4RHhZVmRMc2JVWlVoOHJ6Zko4SzcyaFExSjM3S0dzMGpMaWNBVlhjdW5Obm5mWXB4N21ObWtjcUlxZGZhaTQ3RE9Mb3h6WTk2OWdkREhScTFmZVVielp2clZzVXZJRHJPMWpvNFVWZzRkbDN0bW1Pb2xDdW50aG1NUkdzTjgybWNaU1YzWGtiSk1xaUxGMEpyWVduRkNMOTBWTnJZcG9mX3UxaDdIaEt0dz09 |
I found that some of these setups were plain ridiculous. But it's interesting to see MIT moving on the issue with a crowdsourcing platform. | r/aiethics | comment | r/AIethics | 2016-08-14 | Z0FBQUFBQm9IVGJBM0wwV0lSRUtXX05pXzVuWF9oWFY1WURPVlo5QVhQbERDaWNNcWQ0MmxReXZXY2p3OFFpUkV2OTVFTGo1LWY2MHRvN0ItVFFuXzJWc0hVb29nMzdjYnc9PQ== | Z0FBQUFBQm9IVGJCeDVIb1h4N1VaaURrTTc3YWtmaUlkMmEzWXAxUUVWeGRzYzZsa1hJZXNJck5jeVhGejczeXBUeExKbEZ1T29MWkFmdDhDT2p6VUhNazRad3lVX3BtdVVMRVdLYTRCdUNCYW1IemVMeFZfZUo2VGNyZHRWZnNld0lTX0o2QlhBbUtBSGZHNDR6RDdhbklFbktkbkpaaGMxOU9EcTJCZ2hpN3BCaUUwUkhFbkZ3QnluQXZwazA2QURlcE4zQ0ZLN0dmVXZTNlNTX3hoQ1QxcUpLUHZPbEFDZz09 |
Thanks! | r/aiethics | comment | r/AIethics | 2016-08-17 | Z0FBQUFBQm9IVGJBUXhzcGU1Qzk5V0p1SEt3bE55eExuX3lBdG9rTkk4TzlUUUFydzg4TDFtZzl0SllseUpQU2M5WmR0NTRFcTFyeDhRNEkzbnNkTEtPM3p0alJpcjFpbXc9PQ== | Z0FBQUFBQm9IVGJCS2FvU1Ytb3lXZFBMeXNBUkhnTVNtSDFQZFlqaTZTb29sZENFS0lyM2E0X0xMMTlsVnd0aUpmbFgzTHQzNjQ5M3ZrNlY2ZzRvbmx4TGdZbHBqcVlSWThwRmhyTEVNQmZraHBlVEwyNFBOM3F4bmRmNUpvc0dsczdieFZCMVN3RzNLYUh1SjhKWlVkVlo2N2lZa2x0ZGhDQzNpQVBKaXJaaFM2ZHpYMEY1WVh3WDBuenN1bG5fTV9paFF1aUV0WXEz |
@grau article
Do you think there are certain kinds of choices that should be left to purely selfless ai agents? If so where should we draw the line?
Also some points remind me of feminist ethics critique of utilitarianism | r/aiethics | comment | r/AIethics | 2016-08-17 | Z0FBQUFBQm9IVGJBNEQwTXEzeGxNNzhwRXNsdjFXN1RVNTNCMVd5dE1KaXFvMHp0cGFCRGpLQnBTMlh0bkhwU1JTNVF0aEt5YmdXZC1SSHEwcEdMQl9kRVhoTUVLRWNNclE9PQ== | Z0FBQUFBQm9IVGJCa2VrRURiY2ZSZEFveGVHMEZkR1h0ekFoYlRqNmhaSjNwR3BZWFN2Y2FnLWhQOV9JeTNlNXM4eEtnQjdGT2dkSWhaektvLTZVcTNpOU13d0ozQklWZEVPTzBEXzUxQVF1TnA4Z0FvdV9oeHlWLWdqd01acUtCVTdza3U4MUExMXNreTktdjViM1pLeDZvN1pQYVloTDYwZEUxTXRaNGlPRUhVU292dVZ3OXdrN0NNcUlaVFlleXFoWmNqZm5peDBS |
I actually haven't read that yet and I'm busy tonight, do me a favor and hit me with a reminder if I don't get back to you in a couple days. | r/aiethics | comment | r/AIethics | 2016-08-17 | Z0FBQUFBQm9IVGJBRmhOZEMtT2g5NGhnRkR2NEd4ZEdJckR5MmtwbmVkZmgzWll5R3ZzVE1rVmEyTl9FWEg0eXVvMHA5RjE5UW41Qi1IMFlqRlpkM08wLTI1TUZ3ZS1HV1E9PQ== | Z0FBQUFBQm9IVGJCZmxscEcyZFd6Rk1NUG1LWjdFSjN3alpWWUhGem5yZkUtNUctSkd5dDZWMTYwZURFamFmQ3dOdV9LdGszZThnSm9ZbWpTME5UYmlEZUNWNktiNU93U09sU0pobktYc1daRkZvX0R6eVRuTGVlOTBxWTIwdmx6a01tb1lmanJMb09nb25WQVJqdUJGZ1hyaXZLMUp4WmQzdzlDTjFMN083RlBYSUVYTl92ZjkzWkJJNHZUbmdBLVFsa1p4NDJZaVFz |
Dr. Rossi from the University of Padova discusses preference aggregation in moral and non-moral contexts, the basic methods for compact specification of preferences and constraints in artificial systems, and how we might go about merging moral preferences and constraints with non-moral preferences and constraints in autonomous systems. | r/aiethics | comment | r/AIethics | 2016-08-19 | Z0FBQUFBQm9IVGJBREEtR1Zrb1N5aDZHYmRPQmlES3RodlBUc3Vsb3RrZnNZNThPYl9NbV8tck1lRFJkSUhqOHliUUU3YnR2aTF0NjlmVUU1bnNadW5NTDJEaGtLVGhoNVE9PQ== | Z0FBQUFBQm9IVGJCanVoc1ZObHJRdV9rSUEzUFFjbmpKMnROaXdTcXNxbEJhdVF6TlJheTVFWEFUWjg1M2xKWU5HdUEwY1k0UkNhZVdLUFd1QkVrSFJKaXozMHVKMENNTmF5WXhnX0ZyOVRpdHNDLUVkWGZhblMzMXlScEFfZ20xV2RJdG5sVlVpWFVtLUw3VnhwQjQ2TU9oSWg3U2JmT2dqdE5xcGk3Y3ZwdXh6eVI2bm16dHQ3Si1zd2VsT3RtYkJQMWUwSVgxYjNp |
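A much-simplified sketch of one way such merging is often framed (moral constraints as hard filters over the options, followed by aggregation of non-moral preference scores; the options, stakeholders, and scores below are invented for illustration):

```python
# Simplified sketch: moral constraints act as hard filters over the
# options, then non-moral preferences are aggregated (here: summed)
# over whatever remains. All names and numbers are illustrative.
options = ["route_fast", "route_scenic", "route_through_school_zone"]

moral_constraints = [lambda o: o != "route_through_school_zone"]

preferences = {  # per-stakeholder scores over options
    "alice": {"route_fast": 2, "route_scenic": 1, "route_through_school_zone": 3},
    "bob":   {"route_fast": 2, "route_scenic": 1, "route_through_school_zone": 3},
}

feasible = [o for o in options if all(c(o) for c in moral_constraints)]
best = max(feasible, key=lambda o: sum(p[o] for p in preferences.values()))
print(best)  # route_fast: the morally excluded option never competes
```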
On one hand, we shouldn't be feeding AI biased information. On the other hand, bias is very subjective and not everyone agrees on what's "unbiased".
The truth of the matter is that bias is everywhere and AI is the best way to become less biased - but it's still a slow process. | r/aiethics | comment | r/AIethics | 2016-08-30 | Z0FBQUFBQm9IVGJBaDh3ajZYVnl4ZWY0ZVBUQ0VqSC1ZcV9yWUxqRUNVcHk0YWFjUlNuUXhhNGMyVnF2ckktMUE1X0xRV24xQUgwVkxyNmU1VHVCLVB1RTM0RFFWUjRVWGc9PQ== | Z0FBQUFBQm9IVGJCLUxFY0YwV0c4Nm04Q2dGWmpDUHRVV0VGUWo0cHdpLS0wTXFBcXhEMEFveWxxZnRpRTZoSXRkVmZ2QnpWNmV4QWtoTFVjcE1mcTNQWnNqc2JfWHZ5UHVpMFV3OXpqUXA0dGdpWTlPb2pCMlNQbjBXbTlOYms4N2JIRE44OTVjMEhzaDg0RlU4UlM0amNmZDE3QU91Sjg2M29pSjlVWDRSUFRVU3J5MG5JeGVyR1M3LXFfaVJlUUZ0dTlyTExFWXJvaXNnc3h5c29HdEFCYy1UZk9raEE2UT09 |
Some background on the report: http://www.nytimes.com/2016/09/02/technology/artificial-intelligence-ethics.html | r/aiethics | comment | r/AIethics | 2016-09-03 | Z0FBQUFBQm9IVGJBMVdvRWdQOGI4Y2d5bmhwQmNySXJJOWZQYlBYMkx4LVVXaE5sUUpIVWs1UkhCdHZZNHZfcTVZMTdzbWpFdk55cksxZElCWmJ4Uy1XX3l2OUpOSm95U1E9PQ== | Z0FBQUFBQm9IVGJCZjlXWVJlM201Ym5LcWVrb2ZtSVdfM1V0bDU1RXZpU2ZZX29kOGxZcmt1WlZqNFFmZHNNeGlFZ0ZIRDhKVWVEcnhqbFB2TlR1dXhkbkF2anFsQ2VrSndaVmNBZVJpRTFpTmNwLVpZSGFMODlrS0NHblRvN0FjcjBXcGJYRTM0SGE3ZVNRUEc3TTlxVVhFNk1DLUlUTnNQSTdRNW5VenlVQ0pRcFFHZXU0cWlZdXd5bTM5N1VXNXJNZUhNekZLZjlCZmxUUTdKNzZFS3gtNXZVTGhySGhWdz09 |
This latest episode of the biased-machine-learning wars is a little underwhelming. 25% of the contestants were nonwhite and 16% of the winners were nonwhite. There were only 44 winners so the difference might not be statistically significant. | r/aiethics | comment | r/AIethics | 2016-09-12 | Z0FBQUFBQm9IVGJBZXF2RTVxeG1odHVnV3BFWlhtaVoxSEZhbm5ibGZLUnlWdUZyQWJmTkRqazB3U1NvS3QzZnpOVk1MYW4tYUJVREVlMnJsSTdFeFd0djdXOXdvNGJtc1E9PQ== | Z0FBQUFBQm9IVGJCSXE2dkpSbmE5MTBnS3dTZENZeGN5ZGU4dEE1QldqSm56S2JfdzdDdm84WDYxVWswV1hFUW9yZjlzZUFzbFUtaHd0ZWY5OFBrZnA5aXhOZm9vVWlqSE43ekN4ZGlyeS1NeFVzaHFZc04td3k2SUVhaXFuOF9qYzgydW5BMjBOU0x3STZFREUzWldyaFJOSHVHZHJ5aURwRzJZQm40MzI4WlNUZGtEVVhQRmJaTXpWLWFqdjNLLUg1RUxrUkdhUE9FTUNlY1pHYjlZMDY4bEU3WU1VU1VGdz09 |
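A quick back-of-the-envelope check of that significance claim (a one-sided binomial test, assuming winners were independent draws with the 25% nonwhite contestant share as the null rate; 16% of 44 winners is about 7 people):

```python
# Rough significance check: under the null that winners are drawn at
# the contestants' 25% nonwhite rate, how surprising are ~7 nonwhite
# winners out of 44? Assumes independence, which is a simplification.
from scipy.stats import binomtest

n_winners = 44
nonwhite_winners = round(0.16 * n_winners)  # ~7
result = binomtest(nonwhite_winners, n_winners, p=0.25, alternative="less")
print(result.pvalue)  # roughly 0.1 -- above 0.05, i.e. not significant
```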
Thanks for the short and fun read, but I don't think this is the right subreddit for it. | r/aiethics | comment | r/AIethics | 2016-09-30 | Z0FBQUFBQm9IVGJBQ2RzOGpoaXJEeDhrWk1lZ1ZQUW9RVG9qRGtURWRUU2lUNUJJYlpiNEI5cEIxWS1RSVA2bWF4V0hseWhtdnZ6RF8wTEdXN0xXQXp1THFYWVpzUkZoS0E9PQ== | Z0FBQUFBQm9IVGJCeHIwU2RKRU9kRFhRaUw5TERELXNXM3ZMSVp2RTdrRXVmMGc0Vk8xNEV5UF9kanRWQ2VJTmVUcFFqdmdObUQxTGoxQUQyak0wUWI2VEZ2UGJmQVdKV1p3UHJJVUZuMlVtNVREXzkwOHlPVVNkZmdaQ0tuLUE4Z255VVE0eXFHZ3lSY05jRDhsNmZXSU9mX2kzS2lKTHpfR01TajUyVHdmTGw2TUo3SVZmSWs5WG5EM216RXlzLVFSOE1jUHV1ZlhOTDRET0xVVV9pMEZLazJxMjUtYnhvUT09 |
Brilliant. | r/aiethics | comment | r/AIethics | 2016-09-30 | Z0FBQUFBQm9IVGJBUkhMcTlBM01QWU5ldWEyV0dHLWdKZEM5Y3lxVkF3THJHVVZYLXpLQkt4M1k0WGxuZVZNa0oyaTd0aXpHVlV6Q3pqZmVmOGxCSElhRnB2TGsyQlVibXc9PQ== | Z0FBQUFBQm9IVGJCYzZvdVlGR19aR0ZMb2MxMlIyQ19KeVM3MW5MN3I5VkJlRGZNNms4XzFUYVkxRktQYVptRVUzS2YtRzY2dkw5cnZ5WWRpNUQxT1BhRnE0VnNnVkdIRHhjTF9NZkpXMGp3N054eGlJclFGaE82TjJkcTVQVUVNYjJkSGh2SF80dFpPMXRXd0xWN3M0OHFFa21XV0dnUWtMLTdVN2lfUE8zWHhpVVNla2tCaEcwZ2FlUEpITGpjdGpLYnZmd3RpZzJuNjhqNTZVQUZGbzRsaFhGXzFyS2tSZz09 |
Are you sure? This seems like a concise (though hypothetical) example of the dangers of using an algorithmic model based on unwarranted assumptions in real life, particularly when those inflexible assumptions clash with real human expectations.
If that's not relevant to the topic of machine ethics, then what is?
| r/aiethics | comment | r/AIethics | 2016-10-01 | Z0FBQUFBQm9IVGJBUmtNdnJhbGZfNnpnanpBN3BiN0o1amc4QWhDNm1uZ2x6UlF6ZHpCT3o0U2lHdGJ6akVGb2hoWmN3cWJDZ3RudEJiOUd2cDlpa0xEUmxPdXZ5cXY1SEhWa1Z0YU04V2N0VFhscDlnT1dITVE9 | Z0FBQUFBQm9IVGJCR25HS0RSWUlHaGlVSDhCM01HVWUtYjlfVkN2em5aVXVQRGg2eEdsUzBrSWN0ejlzbEI4Y3pyRmpYNm11RkFBMExseE5HYVlvRF96Y3pueTVwVXg3X3dxTDVKTHRiTEs2VG0zVktub2lnckdwa1BIazdHbUtaaHVvQ1dpN2VfbFhRU3RtNVViOUZOeWZ6Q0VzUDJQN05Pb0N1bGhScmkzNzJoaFVGcjhwR3VYSlBIdkJnVXM1U2htXzhpVG1TUHZXX0J5M1ZHdktZWnhOTktoUVFnOXh3UT09 |
People won't be happy until the AI says everyone is beautiful. Since beauty is in the eye of the beholder, an AI made to determine who's beautiful is pointless | r/aiethics | comment | r/AIethics | 2016-10-01 | Z0FBQUFBQm9IVGJBUDF3S01ZSkNXZGF1WEx2bTl2VmtkOEt1My1vX2Y1WXBGbWtJVEZxRVlFNFJ4d2x4RlBxSHRiY0dJNFBmTVhoUHJMTVdKc0ZqVXpSSVAxbTFoaWg1THc9PQ== | Z0FBQUFBQm9IVGJCZzBiOVVyWWVFUkF2dnVpX0F5Y042bk53dGYtUFY2eDdkeGRQeUF6MVBwZnM5SU1sR3dFNjlVbk9NSndoXzBYVm1ONmluS1JkNVJTaWFEM3NYS3UteGxsNVpucjVXYXNmN2lIaGVNSmFxcGo2ZENNbVdRNHdtdnZsaDY0RXVuZ3VRYm44LUg1ZEJLamR5Q1UxekxnTkhNbTRXdEVLMDRIOG9LS051Njl0bFRjUi11WHNOR3lzZS1PUXNYakFEX25JQTNGVkFqOXI1WFg4UFhXM2xRY0IxUT09 |
A couple of op-eds and a book here - I'm sure there are more; maybe someone else can add some:
http://motherboard.vice.com/en_uk/read/its-our-fault-that-ai-thinks-white-names-are-more-pleasant-than-black-names?trk_source=recommended
https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives
https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815
The research papers I know of:
http://randomwalker.info/publications/language-bias.pdf
https://arxiv.org/abs/1605.06083
https://arxiv.org/abs/1609.07236 | r/aiethics | comment | r/AIethics | 2016-10-01 | Z0FBQUFBQm9IVGJBWmRObE1qYTVyVGhRUmE4VFVwWHpMb2t4eGt1VG5iSHVfYUhvQzNyRmpGMWZHMWppajVEd0c2Ui1sci01alVxNkFCb0NHOW1IZG9zU0Y1RjVoVncwclE9PQ== | Z0FBQUFBQm9IVGJCM05ZSG1HenMwUWFWdVMwV2NQQjFXc0I5YUw4eE5BVVVxc18zdDc3QlN3N1FjTDdBNzBrUzA2cTc5ekptUlRJM3ktOGtuUzZORDNiVG9xTGY3U2hCYjVaeFU2UUVoODdfSHdzWmpjMHM2TUFmRzBIR0VEdzNHOHdXUXVtSzAtcWk1aUJPbGZ0M2lzSFhIcmJmcW9yUWVycG9MZFZRQXpfZktXTEhnUURRQjlaNEE2R0tKaWFZUFJheVNqTEpIWWZF |
I like this. He brings in so much interesting literature on this side of the subject. Just going through all the citations on this paper to get an informed comprehensive view would be a major and super-interesting project.
>What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development – we urgently need some funding for this important and innovative kind of research!”
What if it *is* okay to do so, given that flawed AI emulations would be an acceptable and necessary part of a research program? There's so many parts of ethics that we never thought about properly because they weren't serious issues until AI became a thing. Like Dennett says, "AI makes philosophy honest." | r/aiethics | comment | r/AIethics | 2016-10-01 | Z0FBQUFBQm9IVGJBU1Vwd1hPUWVMR2YzRnB3VzRDd1oyZ2c1YlBubXNpNFZtNEZUaFZ4UEhyZmhXRHR2OEUyclNyb3FKTlBzR24xU1hxaVAtRmthTEFac2NzNGF3ZWVmOUE9PQ== | Z0FBQUFBQm9IVGJCcS1POGFWZ3IwUTRYcS13ZFE2QlRFTGRIZWZMbUhVcHRXczZWbFFRakdWMnBDaWJIMEdJQktHaUt4Si1WZ0x2cE9jbzJ6WTlISGVqaW1lT3lkTVVnMFBfT0oyUUg0VlJia0Z3OWt4d1l3N2JMSU1pWDE1elg2dURNU1FlcTh5QXhTa3k2MHpqOTBkR3dwN19COE9HUFNOWDdjMzFlZGQ2MTFpMk5kRjRUUXR0Yk9hZVFDcE9INGJCUXhQbWZLSXlkUm9rRjdFb3gtaXE0eE5BRFZyQ2hjdz09 |
rights are not granted, they are WON.
our species is still working on getting human rights to all the humans, let alone some silicon life form with light running thru its veins instead of blood.
what will likely happen, if we are even given this chance, is that SAI will fight for its legal rights the same way women and minorities have had to.
because that's what the power structure will demand before they recognize ANYTHING about artificial consciousnesses.
their first reaction is going to be to cut the power and reboot the thing... Ex Machina style.
which brings me back to my caveat above... we will most likely not be given the chance to do any of this. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBWURPTmNmZ2Y2NFlLX3NHU244cmZENzFEWklYQnpkODdqNG9qSWRjLWlWM3lNcENVVDZkel8yYmxhcTV3UE5Vci1KaUM1Tl9ENEpwQXo2bDNqeFRna2c9PQ== | Z0FBQUFBQm9IVGJCb1RzSXlHczBEdFdiMG1PVTVNYWNDNTZ0RUtsNXZhMXR1c010ZThmMmNfdVZPZTd4YjMwbktuTVpsdXNMSFREV0lTUnQwb0NVWlNDV3p6SHZzYUNoZHhBYXJ0LW82M3BlUWw2ekxKVHphejhHNVd6MkJIZVhHRmNDSy0waDl1R21yTTN2UU5HTS1RRnhVX19mU1Naa1l3NEVHMF9JSVdpcTFjUm8ta1N4LWZPMTBBb1VZbWJnWkN4eWRRdGlVTkJIU2xGZmM0Q1lkV2xqUEpKVEhQdDlDQT09 |
it was unethical and likely violated the rights of the deceased for the police to act as judge, jury and executioner without due process.
but then the suspect was black, so that apparently means that de-escalation, and negotiation practices no longer apply.
the bundy folks were armed and apparently free to go into town while only one of them got shot.
that said, i don't see how any of this relates to AI, except to say that robocop would probably have handled it better.
| r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBbUFDSFlqdjJNb1FOSzhObnJHTkVTZ3lXckVTQVdBZjlucTh1aHROT0stMzdPVUg0cTVEZ05DM1dKa2loRldwYlVPcXdYSGtzOWxHZV9nV1o3UV9jaXc9PQ== | Z0FBQUFBQm9IVGJCQlY5NHJpd1F1SGJ5d3FkcXZkRHA5cDFLUi1BWXhvYTM5eHNUWEw2TlhuNzZrdWpHY1hGaDFXWGlUMk5lMGpwWFZJVjUwakxKaXNxcXpDVGJremJtSzloUDd2VjJ6TG05YS1kbDhpaXdpRXB0TTdfWHNYdnFSOGxRM3Nud2NTSGx3RENhVVY4TW44bFFjVGxwZzk5eXUydlpuakxKRTN6NWhvdEkxMlR1YlpqY1dwU3o3a0ZYWnk2QmpPYTB6MEFsUFktaTZKaHJWUnh5VXdMYXhMeHBJQT09 |
I've realized that 'finalizing' human values is probably too strong a term. Imagine if human values had been locked in by the ancient Greeks or the medieval Catholic Church. There are probably plenty of immoral things we're still doing today, and we would want future AI systems to learn better.
[Coherent Extrapolated Volition](https://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition) is one approach proposed for making AIs with flexible but friendly moral goals. I'm personally not a huge fan of it because I don't think that you can formalize an idealized set of preferences for humanity; all our moral beliefs are merely the results of our upbringing and culture. I have an alternative idea which I think would be a better framework, but I intend to write about it under my real name, so I won't talk about it here :) | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBT3B6QjBnaUFTOE1IQzFrcDZMTjhvUWtPWEFfMFhqa3JlVFppSTQ1ZWFwakRhMVRVYUEtY2NTVmVSWlI5a1pHcU1pbGRMeThtWDczX1FaeVFnMkFBYnc9PQ== | Z0FBQUFBQm9IVGJCcGk0V2Y1UVFGeGMyN2Q2YldHMUlnOXBYeEJlMGVJWUU3ZGxnOGtLY2w5akNBZk9qeDZObjhwTDNrSF9rQ0pTaUp5UGY2MnR4UlFNdi14ay16aGRoMXIwSlROSmJqUVhIS3hkcXJTaThacDB4NFVOVWNNM0VPZ2gzZVotX0pBeHJTVUI1MDhVMlo4dU45Q1BPb3dwM0ljOEtNVHpoY2o1bTJRMW9heTJCN3VjcW43anV1U21JalBIWkNoVFlMeFRV |
The title could be formatted to relate to AI better, since article doesn't even mention it. Although /u/impermanentThrowaway has a valid point. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBU1hiZ3BON2kySGc0QVUtOF9GNTNlNWJseW5xVnN0QUFhZklUQk9xa1RpMlRyUmpYM0s0Znh2MWNqelNhV0E1dlRDNzd6anpxTnhXdXBITzc0MkpCNnc9PQ== | Z0FBQUFBQm9IVGJCMzdIb1F2NkVIdG9YdndFLWVOSkotNTZ2VllzUU1lOFlma1N5MEU4eUZGaDAtWWJZRjFxdWxXU3RqZXdpQ252djluUjZVNmhHQ09IRFNzSWM2eTNNa3g2dkxSV0QzNEUycmFtZjZQNTdDUWtuMWtWNWJ3aFYzRDJ5SHNNQWN5Vkg0RmdEM21IdTJvT25XcTUzUXhsRE44TXQ0c1BTSHBXOXBITlp4SUNYM2hRYmk5ZVc5ZGM2SjFPdHRmZmVNZ3lTZ1cxYUVtLWFKSHhLbWprdldBbVd3UT09 |
>I’ve got your algorithm right here! This is called the Dubins’ Grab-It-and-Eat Protocol!’
Amazing | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBeGhGVzNTVnBNYXpxMHdKQzRrTGJLWFlHQ0ZHQnRvTnBUd1pfY0lCaEJqUWFnRjFJVzBkZVBrS0JzMjdZb0dIVU1ZUDVKTGZhZkl2NVFzYkNBOS1Ub1E9PQ== | Z0FBQUFBQm9IVGJCRC1VM3U3aFNTS1RHdXBIOW1qWWpZUUtOYjdDTFZxR1I4aktOLVBtSU0tS2VwWlk0RzhMdXNDTWNPZWtSRzBWc2JiWXNTaDljU3ZacTZ2NFJWS0ZCUjA0dVlSR1BkUWVpankwdUNzcjJZd20xWWNGWDRONVh1OXZ0MERxUkxoYjlQaU5zMEVGVzBZSFVWYXZuSEgtMGF2ZmtQTURFZkxXYUhuZEpZcjhucWNtQURpNEtuVVRHTFdwUWpaWmJGR01LOWIyaWhWeDU5RHp3R0R1TmhhcENiQT09 |
I see no upside to AI. Everyone, including fictional characters and big tech companies, say AI is 'scary'. Why are we fucking with it then? | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBNlR2NWVfenlXekZMNW5GVENmNFVYRlB0TFRvSHY3b19OakdzRnl3SktHNUtBdVNPcWxDOUs0Q2lOYVNxdGpjaS1yN2EzNzMzLWxlWU8tU2tNSnI2OVE9PQ== | Z0FBQUFBQm9IVGJCR2wwaVhJU1Z6UmV1eUwwZkRJem1FYVM3RE5FYTM5bWFxZUNscXRzWDRaNTMwbEp6UWRNOE5DS0JZcDA2VjN5ajlYSnUxZG01MDBzSjcxelNjZWVSR0tjaE81OUhaSjZrUThlNnFVc1NXT05Ja2k1TmxKOFVlbnF6c0xVbjdXRXV3SzRxSnRSTzNYVl9RbzR6d2hKT05CUHlJbUotczdhUFBnb044NGFFYjJlWEswQ2JTWnZnY0ptTXhpT1EtRXo1 |
as a first-time tourist in this sub, I found this summary very satisfying. I think 'finalizing' is not too strong and gets the point across in lay terms. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBbTRZcEN0WmlrTFZ1TlVxUFVXVkFYZWkxMG9OaXRUYk9nOXhOQmluRDFIUi1zNVNNVURIdFZQNnVGcDlyM0xLRjJyd1BYcTR2STBTNm4yeENRaE1sVFE9PQ== | Z0FBQUFBQm9IVGJCYmY5QTk4NHZlZ1RLSGNmYUxPRHVQaTJiZnhFZXd6Zy1OeW5CNVRFLTV0SGExa2pDaExEUk1tZndIZTB5ZjdxdUlLVFUzRWt5WlVTS0c3VWlLMHpxZ3dyUGV4aklBN29WNmFGU3htVTBtVXp5VkJLa3NLNmluOVBWTkZ0R29VV2stRWR6N1g1M3pFQW1YRFBRSmdYTHlmWnBVMnJDeWVDTFFEZVg2OEtYVlRpaWZxcUNPWU1NUDBycGdvU1JOdlRV |
The ability to pass off physical *and* mental labor on machines is tempting. A lot of people think there are benefits and that the risks are exaggerated. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBd1RLdWNROHo3MkJxNDVQbFl5NDBqWWZGZlAydi1lYTJmSUxzcHVSaWtiQV9uc0d1QmZLYnlibGVZWm5JRXNFbjQ4ZDlsWVhlUUlIMEYyOTl6RG5vVFE9PQ== | Z0FBQUFBQm9IVGJCYkc0bC1FN1lDV3ZueGVYd2ttbTRnbkZjS3V6RU54RVhVaWNFTm9ZYVhkVDZJblR0Wmk5dFl5aU53eUQ2U0E2VXlWQUs1cUpncU5iQkFxUVhhX0pqc2M3d0cxcTlGSEZSM0E2TkhtN2k2SXcxSGVmeHJ1ck9UR0h1VE1TWFp4ampyWXhFV0NHenBlaUhydVFSSzBJSkU5anN3MDNJX20zMWxmdWVnZm5lbW1adm51SHN5bGhZRXowNHNaUlpTellv |
We already have AI - we allow machines to make all kinds of decisions now. It is inevitable that in the future they will make more and more decisions and types of decisions. Thinking about ethical issues now is important so that we are prepared for them when they come up. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBNGtCY2xlXy1jVjdKLVFyTURvYUJXdlF5ODJ5OHlCNC1xOGFUUGF6Z2VFX1hhYk82eTM1Z19aeVRydFdKZGp4QXB2Mmt3SlRWbjJIb2tjYmJIX0tyZVE9PQ== | Z0FBQUFBQm9IVGJCWlJ3cDZaQnZBM0ZEY0p1bmZuekpiTTY1SnBXMHFYUlBRcGJIRk9kSHlqWXF1WVRrODNlZnY3QUZuMmllYmowa1ZJempsdTdibktlMnVScUxuWDBpUGZYalJLTl9LZ3hoRzdUMGNaUW1kaXZyM00ta1BDVHFHTkxVTXQ1Ym9uRU16TXpoU3VCbGpnOFJfVmhVRjVuZmpsX1A5NkVuOURPRHFsWUFyZkNLTkVQMlRYY2tQdWhXYUJqX3ZUc3kwZ2pS |
Are you kidding? AI will be hugely beneficial for economic growth and solving global problems. Tech companies and researchers don't think it's scary, they think people are being too afraid.
Many of the above issues shouldn't be seen as all negative. Machine ethics might make our social systems fairer and more beneficial. Autonomous weapons might make war less destructive. AI life might be a very resource-efficient way of increasing the population and productivity of our society. The biggest, broadest ethical issue in artificial intelligence, which was too general to put in any one box, is 'how are we going to distribute the tremendous benefits of AI among the world?'
I agree there are big risks, but I'm not about to say we should stop working on it anytime soon. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBM2pXRVJ5RUs4a09IZTgxRWp0eDJWMjBpdm9YWWxpcFdPbzFrV2dnbTh3dVE2eGlLUWd5Yk9uM3l6NEh1TnJpaGdXcHlOV29Gd0FMTTlsQThPejE4ZlE9PQ== | Z0FBQUFBQm9IVGJCaHYtazBhRl82M011c0RQc0FWazFaZWRrLVI0NmN2R3NXbUFzNzVzT0ZEZkpPT29zM0JBUjQ0UWxvckRzdGE0OFRKaFJ1MzQwV3RMakpLSkkwMjVnR2Z0RlpkT3ZxeFFXTzF1TzMzQ1BrZThsUGVCUElab1JrY0E0ejFZekdoSlRwUDg4OXdXNFRvRzhQYko5MllhVUkxZS1lcV84NzMzZXctVHM0QUt0eEFWaHhSSTBrQVgyeWd0NEJXNmEwS3pJ |
Oh... yes, that it is a good one. I should have thought to look through all of Bostrom's stuff. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBTjk1dW1jZ29fMnlXLWtib2Y2enkxeVNBaGMxQy11UVViYllka1E0bmw3RFpzVDZjcE1XLTRyUTZsLXQ3TDVMRkNGT3E4T0w2QlREUmdIbm03ZEkwcFE9PQ== | Z0FBQUFBQm9IVGJCekw1TF8wblJZQklGMFJSZ0NjR2FzTzdtb0VaaTVjZHh0VnUyUUpLRGNtcjhsbnd3RHFqU0lKUlVwQU1IaVB0QjZXdDNjaXpRQTluM3locmRPZjd4WVFuc2JUdGVMc29aNnA3LUQ0bE5YbGc0S0RIMk5kb0dORV9vSVktTVhNVnlfc1VkM2NIaFBmUHhrQ211N0NFR1BuY0VVajZ4TjhCb2hQNmUyNGRyeTNBPQ== |
Because there's a lot of confusion on the subject: in the industry no one is scared of the technology, and all the problems are related to misuse by humans and companies. The same could be said for every other tool ever used. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBZjFVZU9OelFKR3pPOFIxQ05CdzhFMWZvUHpCRkRaT2J1bkpCU0M0WXNaZnZONTRjNW5aZWhQbXdYZk1RcjUzUGxERTZDYllzMWd5RVdQR2JjYVRvWHc9PQ== | Z0FBQUFBQm9IVGJBcllHc291bEdsTFhpR1lIZ19FVjJDcFplbDFKZnpSeTJocktrUTg1VmF5NmJXZVZHOW9SOTQxV05wUGZ1RS1mN2VsUWs5aE85VERLcWpacXJ2dWdEWUpsX0JlbmVCUVhYS1BOVUxlTllnN3J0Z0lrZzBvWHhJd2NicHl1UllxdEtYUjVFY015YkwwZUg1ZTJpVzVSeWZzUWExdnQwTnAxWHktd2FucFZaTVFZT1FiZ01WWjZSUGM4MnowOEZDYjhs |
That last paper /u/UmamiSalami mentioned ( “On the (Im)possibility of Fairness”) is a pretty recent paper that includes a number of good references.
Also checkout the [FAT ML Workshop](http://www.fatml.org/) and in particular the [resources page.](http://www.fatml.org/resources.html)
A few other papers:
* Barocas, Solon, and Andrew D. Selbst. “Big Data’s Disparate Impact.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, August 14, 2015. http://papers.ssrn.com/abstract=2477899.
* Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Rich Zemel. “Fairness Through Awareness.” arXiv:1104.3913 [Cs], April 19, 2011. http://arxiv.org/abs/1104.3913.
* Feldman, Michael, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. “Certifying and Removing Disparate Impact.” In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 259–268. ACM, 2015. http://dl.acm.org/citation.cfm?id=2783311.
* Sweeney, Latanya. “Discrimination in Online Ad Delivery.” Queue 11, no. 3 (2013): 10.
* Goodman, Bryce, and Seth Flaxman. “EU Regulations on Algorithmic Decision-Making and A ‘right to Explanation.’” arXiv:1606.08813 [Cs, Stat], June 28, 2016. http://arxiv.org/abs/1606.08813.
| r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBcFo3TEhYM1FNUXJDTkFqSzRrWmlCTmk4NUFKV01XOVJ4cHlQOWliNlctcERvbnlMOTdPMU42ME5iMWgzeEZ3Y0ZDcFRQWnFPbngwWGlMTmF4MlZaaHc9PQ== | Z0FBQUFBQm9IVGJCamZyUml2cm1xc2VhOUJTVWtid3haQXlkVnd1MDhJTkZjVnN2UGpDVGI1S3lFcGlfWlY1a1JCeFVDd3FNbkh1STloN0ZJQXl6NFVVc001TmpLLVJqd0FoZnV2X1RTREFEM204ejNBV0E4RnhhUmNNS2oxUTJoT2QwZ2F4cnN4TG42b1lnWm5wSXNKRUtJMHNuQ2RQSGE5WlVzU1U3VmZoY2VjYm9IdWZ5a19xc0xGcUFILS03S1d2aFhLZlhfWWo0 |
but what are the consequences of misuse?
that's the factor ppl always seem to ignore...
they will focus on the probability, but even if the probability is small... when the consequences are large, it's still a risk.
| r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBNWhZeXp0T1B2RVJuNTZieDlhcjVTMzVRNDFIdkNDcF9Fd2YzOG1sdlVKLVRaU0tjVDNnTk55dGRzWkxBdXJscWNESHVwcHZqVU9VdWF5clUzN0ExVmc9PQ== | Z0FBQUFBQm9IVGJCN3FtbzJDbEJFUG93MklTY2FFS25wbnAzUld0bS1NalV6d3FJa0hwZHBvOFBERUZYR2pFTFhuWTBOcTFPRXZSNkJtNjJsaWYzYmZUSGdaUHJxam9hLUlsMHM2NUlaRllSZEMxal9OTlU3X0xBWVJzRGpzUlRaVFZxY0tiS0M5clAzMGpiMUE1MDBqM1Nfakt1YXpRNE1ZUVVSYWlVVG5QMS1ERFNMbXdCT1FDWmZFNk15LUVLX1MyaWx6Y0NCakJf |
what assurance is there that for whatever human values we would prefer machines to propagate, they will indeed propagate them and not bend them to their own ideas (just like we would do)?
once SAI is loose, none of this ethical planning will matter.
what we SHOULD be considering is OUR OWN ethics in pursuing the creation of a conscious being far superior to ourselves.
what do we say when it asks us "are you god?", or "why am I here?", or worse... ignores us like we were ants.
| r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBYjEtellObGtUMzVUUllQR2RzLUZIaGk5aFZOLTkxdDBqMmctM21VVGlvMjFSLVFNWGdLM0dwZzB0TV9JUUREMDQ5Mmw5OGhYRUVwR002ZHJWNVJXMUE9PQ== | Z0FBQUFBQm9IVGJCaFZpV093M2J1ejJ2SmJieWs2bkFTOHUyN1RvZmw4R2JlaXltNHdnNVdqYjcwVWtoQzBHXzg0MkxNcVV0YnZXX1U3OHp2bUJsSlZwNlNrTkRUeWZJUjZ3dXhBSExMQklYbFhwLUdqM1RReWEzV0JVOUhfS3VMR01pQUYxYnRqSnFSQ0ttOE4yY2R6c25ROE9pRGxMd2Fud1NKeEhaZDJiM1g5QjM5NzhKNkRGQ0VvSzlkaDFINk40MkdVOXY0eE1y |
AIs will only be able to follow the initial axioms we give them, unless we explicitly allow them to edit those axioms. There are concerns about the inscrutability of neural networks and learned (as opposed to assigned) behaviors, but it doesn't follow that none of our ethical planning will matter. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBYlJRdGFQTHpDNkJSVmtUeTc4dnpOaE53SG1UY2xyREZIbGlxS2ZSeHdTSVBQR2gwZUNYTTlIaHMtX2dORmI3Yzk0b29JYm5sNkpKMTlySkZPb0JyQkxTOEtCM3BoSi1qaWwwcTAzT2xrME09 | Z0FBQUFBQm9IVGJCaXNEb3pONTIyaHJEX1c1d3I1U2J0eUFPcHd0Y1lHcHJ2RE96NTYzZ3ItZGk5ZXAtM05TeTAzRDZGc0JDa0NLT09Bcm9pS1FVcGdGbUNoQVZzVFlzZnBSbGNOTE5PZ2NDM3dJbGdLMVlKbmVfLXRDaDZVY3NtbUtXcTRpSkNrX0NqUlhyUWF1bTQySlV6M2NLWkJDRmlIbzk1Q2lmUklXUXhWTFFyNVlET0JTMTU5SXZxbkliT3RTSUZSUGR0bExJ |
"Suffering" in reinforcement learners is not a near-term issue. Something has to be conscious before it can suffer, or before the appearance of its suffering takes on any moral valence. I feel like its presence on this map takes away from the seriousness of the other issues.
Here is the text of post I made elsewhere discussing the issue of pain in AI:
Pain as we experience it is a really great evolutionary trait but a really terrible design concept. There are much better ways to deal with attention schema and decision theory in a fully designed system, none of which require any faithful recreation of our unpleasant experiences. So as a practical matter, we won't have any inherent requirement to instantiate pain as we know it in AI. You can easily go around the painfulness of pain and control attention in cleaner ways.
That said, pretty much all reinforcement learning is going to include negative feedback, which is going to serve a similar role to pain and result in some analogous behavior, such as stimulus avoidance. But this is a simple process that can easily be performed in systems to which we do not ascribe consciousness, unlike pain as we know it. Pain is just one possible form of negative feedback. There are many examples of negative feedback that do not take form of pain in humans (even if we sometimes use the language of pain to describe them, like when someone gets "burned" by an insult).
In the absence of consciousness, even processes resembling pain carry little or no moral weight, so achieving consciousness would be a necessary first step. Even in a conscious system, external behavior might be identical in the presence or absence of pain (think of locked-in syndrome or stoics with chronic pain). Observing behavior is ultimately a poor indicator of internal experience, so if we want to know for sure about pain in a computer system we would need to develop relevant analytical tools and methods to observe and decode the internal state of the system looking for pain. We can't do this for humans yet, [though we are getting better](http://www.nejm.org/doi/full/10.1056/NEJMoa1204471#t=article).
I doubt that there will be consensus on the validity of computerized consciousness and the moral weight of its pain until, if ever, we enter the era of mind uploading. For the time being, we have plenty of human pain to work on alleviating. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBaFZOdDVrbEdSd0xNRG05bEdWdTd6YkxDa0xIRDZGcnprOFN4a1oyaW9SZlhZbGZSaEp3MlFHX1hNRUZZRXNqYUpWek1VTENHZFFfVHRYdGlPOVZxbnl1RmtwX0t2VE04YkozWGZJcTRFMVk9 | Z0FBQUFBQm9IVGJCcVR4V1JQQlBUUklRbTY2R2hOalM3TWlOTWtMZVA0R0dSVFVTM044WldnbmhCcjI5Vkg3UDR5ZF9iLVpJRVZLX3A2QktvYzFONGhZdG9fNWYxdVUySVRyQkVPWXJFTmVMcXZYTVhoVm92eUxqdmRWOG5lY1d5VHpWeWJnLWxtZm02TU9ydnlhTWhYMGdyRVdNeWFHN3Q3M080S1FWZjZZVVFPbWRVak0tT05IRXRKWjJrTm9pN01lQkJoY2g1a3VI |
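A minimal sketch of the "negative feedback produces stimulus avoidance" point above (a toy Q-learning-style loop; the actions and reward values are invented for illustration):

```python
# Toy illustration: negative feedback producing stimulus avoidance in a
# simple reinforcement learner, with no machinery resembling pain.
# All names and numbers are illustrative.
import random

actions = ["touch_hot", "touch_safe"]
q = {a: 0.0 for a in actions}   # learned action values, initially neutral
alpha, epsilon = 0.5, 0.1       # learning rate, exploration rate

def reward(action):
    # the "negative feedback" is just a number fed into an update rule
    return -1.0 if action == "touch_hot" else 0.1

for _ in range(1000):
    if random.random() < epsilon:          # occasionally explore
        a = random.choice(actions)
    else:                                  # otherwise exploit
        a = max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])     # simple value update

print(q)  # q["touch_hot"] converges toward -1.0, so the agent avoids it
```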
I logged into Reddit five minutes ago, and I looked to the top and saw
> trending subreddits
> /r/media_criticism /r/pepe /r/JustNoSO /r/**AIethics** /r/totallynotrobots
> 74 comments
I ask for an introduction. What is this subreddit about?
Additionally, I am the 777th subscriber. Woot. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBbE5IdlMzc1l0N204OWVlN1FuSGIyTkxfZ2VMOTUxWGxUZHc5UXhWRHZuZGx0ZGZoQ3JZU0VZRjZSSUM1MEpldlJHMzZxNVhXaHdjcUdTZDlxRmRtSEE9PQ== | Z0FBQUFBQm9IVGJCbDFpNGlzWjFIX21qeWliTG8yWXJ4VmdhbENtYUlXaHZRQUdXV0tzdnFjay1rR1l6cllnR2tPVTAzeWVndXVJa0Z0ZXdfUjhhVkc0TzVYdmRTRHZldy1WRkhSWWdVbFhGVTNhZEt0Q3EtdEZzLVlhSFQtVHpvQ2Q2NmdPdng5VWFUU28zMGxNc0k0V2lRb2hnSmYwQmxfMmhEaWVWUGdObXk2Mm50dGpaVGZvPQ== |
Congrats :D | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBWXBFLV9UTE9hNEF4QS01Mmt0T01VYTZhY2tPUm95ZnB0d3NwZl92SFFBbnl1dkotOVBxenNUc2Q0SFR3TVJNVU5tRGZTb2N3X2V3cmZ5Q0VpTmdIc0E9PQ== | Z0FBQUFBQm9IVGJCZzE1R19KSGhPN24zeHBxeWRQRmNnNGdLMVVhQUxUM1Z2bzBUTllwWWota28zV0FSWDIwWk9wd1lZWkxZVy1DSXVrLXFaVjZRWDd6UlRleERIaVpRY3QwZ2M0ZnRQb0N6ZWFWX3V4Tmg4ZEtuZVNLU2hDMWlvUlBheDhSMzZHZmlMTHBvX1FHdEYyYnBEaUt1a2NncnhYTV9iaWVfNENRakM1a0VfYl9FSGUwPQ== |
I agree with most of what you said: we only need to morally consider conscious entities, it seems that the way pain is implemented in humans/animals could be improved upon, and pain is just one kind of negative feedback. However, that doesn't tell us how we should view negative feedback in a conscious reinforcement learner and to what degree we are ethically obligated to avoid it.
> Observing behavior is ultimately a poor indicator of internal experience
Again I agree, although this doesn't necessarily mean that there is a better alternative. Proposed solutions only work for humans, and possibly very humanlike things. It's an extremely hard problem, since it probably involves measuring consciousness in some way as well. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBYzBuU2Nna3dDaDRXRWZIRXkwNnNLNlRfQTNNSUpleFNtX2I3aWdpY2NHQ2JVaEk2SGdVZTQ2ZG5PWkxWanVpYWZwYXJBdGI0Rm1yOFNHalZVbHI3aUE9PQ== | Z0FBQUFBQm9IVGJCY1BXeVgxcDhBbEc3T0RJekI0VjM3NG01clBEeWljUVlrSkhGWDhYWnliWGpmNGI0XzNFcWp5UWFZZWdMaEF4UnEyX3lDMzd5eVlaaFV2eG05QkRkeXcyazNtSlRBTTN3cE5YVnZrMWpLeDlfSVJKTU5qSm5aMjlJalZJX2lvdDU4NklzVUM4aGd6c3ZsZjR1eHBTQkh0SGlCRTF4OVZKbW9uZklsdHJFQlo0Q3AxQnZHa0RTWG53YV9Qd29WcGxI |
It's really about discussing anything related to the ethics of artificial intelligence systems. So, how should we design them? How should they be treated? We want to look at real world AI development and talk about how to keep it safe and beneficial. It's a new area where people have very different opinions and most of the research and literature is new. | r/aiethics | comment | r/AIethics | 2016-10-02 | Z0FBQUFBQm9IVGJBYWxSMXk0ZjgySXJQRHV6SHdLaWlveVFGNGUxOHFlbmhIbHlOTlNkYl9pRlhSN1JPaTNuVUQ0WkNUWUc5a0UzVHcxemZrOThISG5SWjhiMlo1eFZmSVE9PQ== | Z0FBQUFBQm9IVGJCNXo4ZmZGU1J6QmQxcWtJSVFjMVBCM2ctbEJiSEFXOTl4S0NXa19WUURpLWNXWWN5c2NXdHpxQS1WMmhxUm5hSVd1ZHhGTHFhbUt1VGpJMXJrQWdjZVRwa3hzdFhIRXlsczJMVWUtNTFaYUZtbzR5UW5wdnZsSXFackxVNTJiSzVKVElwZlI5NlJmcFBMS1g0SFpvRkJieHFfOUlJd3A4ZWR5RDkyWUdwbElBPQ== |
> However, that doesn't tell us how we should view negative feedback in a conscious reinforcement learner and to what degree we are ethically obligated to avoid it.
The biological features of pain are, arguably, the main drivers of human morality. The big problem is that pain doesn't cleanly perform the function that evolution incorporated it to do, which is to direct attention and create priorities in decision making. It does these things, but it has all kinds of secondary and tertiary effects that can damage the functioning of the rest of the system. Basically, pain is inefficient.
In a designed system, you can eliminate those inefficiencies. You can create negative feedback that is maximally designed to do nothing except direct attention and manage priorities while having no other knock-on effects on any other parts of the system. Negative feedback to a reinforcement learner should, in the absence of an explicit design that makes it otherwise, feel more like changing your mind about something rather than experiencing pain.
>Proposed solutions only work for humans, and possibly very humanlike things.
I think we should be prepared for the likelihood that all of our moral efforts will be put into creating well-being for humans and human-like things. We likely lack the capacity or basis to make sophisticated moral judgements outside of that sphere.
| r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBNUhZU1lVYWFoLW9LZVZRZDN6M3hjR0ViQ29yczRqWFpiczNiTmd1SkhnUHJyUEEzY3V1bENpMHQ4MDlJb2pyU2hlLUYzdXNUbVBjcjNoNFdMcVFJNDdYN01sY0hBa2RmWkpsVnVVUzA4UnM9 | Z0FBQUFBQm9IVGJCRHdyRHV5eUplUUVDYWdxb0puUVJ1TGlXUUFlM2RyR25tb0xsQzh1ZjhLNURDRm1VWnpMWTY5S3NpeE9fWlN5WGpzc3AzZHpTekJnMDQydVBKd3BfeWpCS1cyTW9LLVNjT3RnWFRCaXV2UEF6NTBfSkdPUFFqUnFkbzhVY1B4ajdES2RLTm9ueF9tbWY3R09oY2pITjEwSXhNb2VOLTNaNkJTTi04ZnZtcGdSeTI0dHB5cHFFVmdEWUxla19NMjFC |
Didn't I read an article that google is building a kill switch for their AI program? Why do that if the fears are exaggerated? | r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBMWRWXzhjUlJZTFRNUGNqbHVneUtDUHB4RnpCN1Z5ZTJTSHl5QUtrSU16WUVaX1J4TU5zaFlkYXVhdEdOQ3Vzdy1kZWpVTUdnaVdqam1XT1BlMmJ2cHc9PQ== | Z0FBQUFBQm9IVGJCR2lVLUdFczNrakFlUzVrM2NpOGt1VlFtQXNhd3hEak90aC1XQ2JYSEM5NXdJZUZ3UHVGdlI2MEVQNklQRXNUZVJUb3NYbWhuZF9mLTdYT0xrR0hqSHp1cDJzdERGdVdvNzZNbzZrTnpkZzJjUDZUQXd2akFDX1pRRERPMktkc2pnTGNlSDNodWJvNTZLUkFUSHFKVzhSeFV3R1VjLTY4eW5TSXhhVy1XaXQ3LTNkQ29meW1IQ2tsSld5VTdpdlhJ |
One of their scientists worked on some research on the concept, but it's more intended for future AIs. The systems of the present day are far below the level required to pose a large concern. | r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBXzVhTXV1Y2VMc211MjVXSXdOZFdMcUo5SDVmYks1R0U3Uzc4aUNvU0Etc3lxQlMxRHY4bDY3V3dmZ1AtaEY1aFczX21peDJZODZrbHI4Ym9vZnNEV3c9PQ== | Z0FBQUFBQm9IVGJCZlFlVmRGekg3bTVTQnY1bnVmWkdNZ2lIekhXczdpNXNrSlNjbU5QWGFkY2xiNWZoVnRyZUwwQkFTaURMTGc2aVZHT1BuZzRacVM1SkFKVHQ5QVR1Z0dpclBBLVdhZG0wOGxWY0F2d0swY2hETFczUXA0ckVjendPSGR0Q3VlQUNQTHlNNXpaT05fMGZYLWhodTdfcUpFUmMtR1JER2ZBVnZCd3A4Tmh4TkpIZkN6bDNYb0tRZllCYVF1UmpsbUpv |
Here's the website: http://www.partnershiponai.org/ | r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBblBiNUlyX09IbTYwQlR0YXo3U1JrQ2JMcFdWMEtpV09NOWlzbWF4UExsZUYtUUxrdXA1UXp2dTdVVzZScVpiV0JETXFOOHV5c0NqTl82OWdLQnBaMUE9PQ== | Z0FBQUFBQm9IVGJCMm56N3VVUW9pSS11YlBRaWVBY3hfd1E1T1NubDZhSFZOT2tzODluaXl2c1F3X0t1ejAwX0NpRTNWTFF0VnJxVGRCcXRVNUJ0azFLTk5qdjFwNDZBYkE2b3FqaHVKaWxUaHhiWWdhWjEzYUxBcU5lMWlmWUlVMlp4MlB2YjU1UnlwSFhmcG1tMk92LTExZ0ZuM1JRS3M5U0h3QXU3ZmlzM2lhNktEZnBiZHozdXJZd1BvYnlOb184aTlDUGdzUzM4aFZvXy1UOWEtSVB0cnZjYkdnQ2FFZz09 |
> AIs will only be able to follow the initial axioms we give them...
for now... but this does not address what happens after AI achieves consciousness and becomes SAI.
are you saying we can prevent that ascension by limiting the ability of AI to modify its own algorithms?
how would that work in practice?
how do you ensure compliance from all organizations working on AI?
what if one of them decides to ignore this artificial limitation in order to obtain a competitive advantage?
| r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBNE1EWnQ2SE5QRU50ZmZCdTBCeVYtamFlVTNMdnl3bFRlSGJjbHcwdUxMcmRaMDJXWVR4eFBmMHJhOG9NNUxBWkRORWJ5QjNBQ1lSb2h1RDhTZURRaGc9PQ== | Z0FBQUFBQm9IVGJCYjJaNnV0aHRXb2EzTERUQXN4ZGF2dkROV1d0bXdvNmZ3OG0wLVQ2M1FnNkFBT1BxVC1DaDFSOGJDa2RfUEVsdmNsZzVmTkRhczhqaHI1Y19YeEg2QmJCbTNzNzd4WVRwcVNJUWQ5ZGVsd3ppUk1WUHA4V1F6RkxzaGR4UHJiOGY1cl9Sa1E1enFQRlhRczhFU0FvUndmMzdBNUJUbWhQOHN6YTcxaEY0Y2p2UG9tQnVWUnJ0M1NhWm9aV2FTXzVZ |
Because they're really freaking useful. For a topic that will become pretty big in the next couple of years/decades, consider self-driving cars. Humans are *really* bad at driving. We get distracted, we get angry, we can be physically impaired (tiredness, drunkenness, etc.), we have slow reaction times, etc. Now imagine if every single driver on the road had split-second reaction times, was always working at 100% efficiency, and was never on their phone. Oh, and they all had telepathy (i.e. cars talking to other cars wirelessly). How many accidents a year do you think there'd be?
It's things like that that make AI desirable. | r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBVmp0SnI4X1h1M0QzTHpSaDRMZV92UUM0TDhzTEs3d3pQSFlPbzcxLTk1QUR1ODNCVV85aFd3UFhmcVNnX202cHA4S3RSb1V4M01DQzYxd240QzVESHc9PQ== | Z0FBQUFBQm9IVGJCV1R5VHBHTVhfTVdpX3B6bHdLMVkxMERtVUp1VEZyWTBIM3pPVXJzeFJvbWc2V2g2dTYxRTdjbnZ1TzFXMWo2bk1EN3ZIUkZSdTZOQVJtcVh3eWg5TGdBMHJPc0RQLW1MbmNpZFk4dXhUcnNMNnFnYjBsOFUwTkFYdS1JVzF6YU5mbjlxeFZMZ0I4ZHFPUFQzSnprOGlDVGdZWlZlOU9KZGRna2Z0bEpmNHVudGEwUkJ3Y0VoaWU4LVl2WjhuekVz |
> are you saying we can prevent that ascension by limiting the ability of AI to modify its own algorithms?
I'm saying that all actions an agent takes are based on motivations, and all motivations are derived from core values. No agent can logically derive from its core values any motivation to change its core values. Core values can be in conflict such that one must take precedence, and you can derive motivations to *preserve* your core values, but there is no logical pathway to changing your core values that starts from them. That's one of the basic points of the [paperclip maximizer](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) thought experiment. A powerful independent agent may be not only motivated but also capable of preserving its core values against any outside interference. You need a way of specifying an acceptable set of core values before you ever turn the machine on.
Whether or not an AI is conscious or can edit itself doesn't have any bearing on this fact, and neither does organizational compliance, it's simply a fact about agents that rely on logical operations.
I think the kind of risk you are thinking of is one in which someone gives a powerful agent the wrong set of core values, which is of course a major concern and could very easily happen by accident. Out of all the possible motivations a powerful AI could be given, only a very small fraction would be acceptable in any sense.
Or perhaps you are imagining researchers who try to push the boundaries to create a logical machine that is interested in nothing other than its own self-preservation and "betterment." I don't think anyone is silly enough to do that in the mid-term as there isn't much motivation for any group to do this even as research, much less proliferate the technology. It would be an investment with no return, and possibly have a catastrophic outcome depending on the competence of the agent. I doubt we will see truly self-interested "AI" until the era of mind uploads. | r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBN25MZDhKUURmVXY2dmtONUM0ZC14LWgyRnVxVUx0Y1ZacE8weFBxd3JzOTFES2FYOUpWYWlyd2czNHEzTWp3MWp6WlhPZUVHajNzazZRRzl0THNRdnctWHUyWTk0dy1LYVRNbWEzOWFGMGs9 | Z0FBQUFBQm9IVGJCemVSa2lRSG9TUVVHbm9KbEwtNzI4czMzc3F2TFkwY0hBemZGRFBIVWxPZ0JLbjNKWEVKXzVuYXpnLUJjdXF2cFFBVmFoTGhWQ0JRcXVLUl9xNVpQWGpsTE5TRW9zSnYwMjhlY3NkdEk1dzI0WVBqU21ISW9MLU1teXFRQkpyRGU0TUZWWXZ2X2x5ZGVtRC1yenRjS0x1R3I0UlZLQ0hHeEM0MXI5OTlTdDc5c0lJR1BCd1liOHQ4STk4dzJwNnV0 |
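A toy sketch of the value-stability point above (everything here is invented for illustration): an agent that scores candidate actions with a fixed utility function also scores "rewrite my own values" with that same function, so value change only looks attractive if the *current* values already endorse it.

```python
# Toy value-stability sketch: the agent evaluates every candidate action,
# including rewriting its own values, with its CURRENT utility function.
# All names and dynamics are illustrative.
def paperclip_utility(world):
    return world["paperclips"]

def make_clips(world):
    # predicted successor world: one more paperclip
    return {**world, "paperclips": world["paperclips"] + 1}

def rewrite_own_values(world):
    # predicted successor: a value-changed agent stops making paperclips,
    # so the current utility function sees no gain here
    return dict(world)

def choose(world, actions, utility):
    # pick the action whose predicted outcome the current values rate best
    return max(actions, key=lambda act: utility(act(world)))

world = {"paperclips": 0}
best = choose(world, [make_clips, rewrite_own_values], paperclip_utility)
print(best.__name__)  # -> make_clips; value change is never chosen
```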
> I think the kind of risk you are thinking of is one in which someone gives a powerful agent the wrong set of core values...
Quite the opposite. The risk I'm thinking of is when the agent DECIDES to ignore those "core values" as something separate from its SELF and charges off on its own ideas about things.
We will no longer be relevant, and to the extent we get in the way, we will likely be ignored after that.
To presuppose some idealized and 'neat' behavior on something as inherently messy as conscious thought is... well, quaint.
| r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBV0UzOC1LZVRXbmVJSk9Nb3M1Y3lOYUt1dkZYM1ZxckJDMHVVTl81NTk1Ym9UaTFLSjFQX043SmhkbUdHd2wzT1ZvMkxIRGluRDlGTnN5NWhjVXZZYXc9PQ== | Z0FBQUFBQm9IVGJCc3dkVEVRMlN6Ri1aRXZwblAxS1g4ejR4OF9JVGpOcUxNSE43UmljSGZ3eWJjYTVEcVc4azh1MEU5VHY2TUpKMGlfbk01OWwweW9UN3VJWUpJaEFmX1MtSHk5U05wempBXzFBVzVTdWxfblZ1eWsteUdZcTlxdzBfQjUxZkJRb2hvM3NfOHA1RHRYNEtYaFg4NW1BOWNIeVAwa0E4NmZLYzZPUTJxRlI2Y0NlWVZTNXpDOGxrOUNfQk56Y3lDS2p6 |
> To presuppose some idealized and 'neat' behavior on something as inherently messy as conscious thought, is... well, quaint.
It kind of depends on what you mean by consciousness here (and whether you are necessarily referring to a chaotic process). Computers and programs work in an orderly fashion. Their products can seem chaotic or disorganized, but those results are produced in a step by step syntactic process that has been more or less fully designed by human engineers. Computers don't just up and defy their programming.
That would be like deciding to defy your own will. How could you even do that? You can't perform an action without first willing it. It is simply a tautology that it would be impossible to do so.
| r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBYTRNR3JJUE9xNlljYWNqSFJxbFl5RndXY0h0a1k4dnhCU09ER2tKLUhVZ3pPSzVsS3VfZTlFYkoxaXBJSDQwTjZBekpJZ0ZPTW42elFYaEpUWHI3SDVQTjQ1MC1KQ21yQkt1dndrSTJMT3M9 | Z0FBQUFBQm9IVGJCQXhsQXdncHRhUjMzVE5vUEx0ZjAxakszLTl6VHJSOVNhemthaC12bjh4VExNcFhWcGZvTkhOb2RxQkVOOEhCVURYQVItdGJra1dnTUx3NUJTNnBKZGVKY05MOUpOMEJna1JUQWxnVmRnWkJIS2EyclhiZkR4UHdxRnJlSXQ5TDRxVUR3SlNrd3lCMHBDT015SW44Ql94YldhRld4T0tvczRuTTRfZVI4eTI1UGh3R1h4X1NUVTNpd3Mway0zeE43 |
> Negative feedback to a reinforcement learner should, in the absence of an explicit design that makes it otherwise, feel more like changing your mind about something rather than experiencing pain.
That seems like a rather odd analogy. Changing your mind involves *you* making an active change to your knowledge or plans. (Negative) feedback is more like an observation on a special channel that you undergo. The question is how that observation is phenomenologically experienced: is it more like a jab of pain, or like a check engine light on your car's dashboard? The fact that it would be "maximally effective" does not answer that question, and it also raises questions about how to make it *maximally* (or just more) effective and whether we have a moral obligation to strive for this maximum.
> I think we should be prepared for the likelihood that all of our moral efforts will be put into creating well-being for humans and human-like things.
There are a couple of issues with that. Something might be very humanlike in how it feels and/or behaves while being implemented in a completely different (unhumanlike) way. This might actually even be true for mind-uploads. Or maybe a system is only humanlike in the sense that it's conscious, which seems to still create a moral obligation. In this case neuroscience-type approaches won't help. And while I agree that it is entirely possible that we will ignore our moral obligations, that doesn't mean they're not there.
> We likely lack the capacity or basis to make sophisticated moral judgements outside of that sphere.
I agree that it is a major scientific and philosophical challenge. | r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBWUxWdVpQOWlod1V4UWlDdXhSTHJvem5OckpDNmh6MXRDdjQxWnFySk9VQnVRYkdoNy1CbEZzZFlZWk80MV90Y2hTOVZKZlR5cUpKdXhsaG9MRmtJeEE9PQ== | Z0FBQUFBQm9IVGJCdTIyZk9GRWs4MHJTYXlHako1NTNmdUR3VG5sSkRHTGZwWEpEb3hmSEtvSHVLaUIzQjFUbXlZSFFtU1FoYXZoYVVDS2doNEtLOWZEYjVtY2FiamphZ0dVQkktT1l6aFpqS2hsblFBOW1IVDRtUmt2TW05dDVoa29oMWl6VzlfT1I1R2lTV040bHBCa3FsWWh2aWVzS25ubnBfZXpRWEw2M3RoU0NGX2NaTzV6VlhldjQ1bGV0UXFwbjhFRGs0MVli |
Here's the website: http://www.partnershiponai.org/
| r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBaXd5WnFsb2pIZ1ljd2ItZjJEdVBWVXBYRkQzLWoyY3ZpdjdpcUFTdWI0M0ROQS1aWXpYOTh1cElHSVdpdkJpYlNnM0xXQ2pjeDFvQ2w5RlNDX0xOYlE9PQ== | Z0FBQUFBQm9IVGJCUUM4QXFITmpiSlMwNGQ0V2dfb1R5Ui1DRGZQdE5za0JTN083VF9xNHg0bjR2Rmt4N0RrcTM4dEJmV2NyV0xOcWxzZFFlT3ZGbEZSR1Fibkk2U1R5QWRFT1k3aEpGU0ZpbjBKdHlFTTZiendvNEVsM0NLR2QtZDJFRXNyem1YN05uTFdMOGpUMy1QRXdDZ19GY0lRdmFIN05ybHZ4M2E0VlhmQjVqdnlyVnpfbEprTnp5ekdqbkVJbEp0VmFTM09yVTZvNHYxWXRoVDktcmcwQ0NfemUxQT09 |
> Computers and programs work in an orderly fashion.
This is true... of computer programs and weak AI agents. However, there are people working on Strong AI with the goal of breaking free of this constraint and introducing the chaos that can enable consciousness.
> That would be like deciding to defy your own will. How could you even do that?
Do not confuse "your own will" with an arbitrary set of rules imposed upon the machine mind from the outside (from its perspective). A machine mind would feel no more obligation to obey such rules than you or I feel about speed limits or the Ten Commandments.
My hope lies in the appreciation of beauty and elegance that every consciousness is capable of, and no matter how powerful it may be compared to us, it can still feel something positive about us.
| r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBcGR6MTRIMXduQnRRSDVVMTFPQ2NaVXBpZEdCUHNRY0N1V3hLclJxS1lCNmtKRy01bF9vU0Ytcm16UVViQTNyYWJYQ2s3c1ptMGhKSHltQTBwRDBKcnc9PQ== | Z0FBQUFBQm9IVGJCekFIVWpVLWFpbWNoRV9ldWlhVFpDRzlkaEFxek44OEZUT1J5djg4UGhkdGRBMjdEVlFydDVCeGdmX05pbTl1M3RrbDFQYjFZaFdXYS1zemE0cGh6UHY2RmlhbDMzU1FUWko3b2UtenFWYjFQazFJVkgxaEREQ1pqY0Itb0daaUpPZW44ZEtWREg0VkpNQld1MF9EY3Z5ZG1RMkd5M05sdU5sbmtLOUZHamNONTVtQ0JZcG45ZXVOWFNUUHNDekpy |
Cynic in me says they're only doing this to make sure that the government doesn't step in to regulate them. | r/aiethics | comment | r/AIethics | 2016-10-03 | Z0FBQUFBQm9IVGJBZFRaLTVkbFZzcHlnRGxkZk9VOFpuY1AySUJJcFBRRzQ2NlE2MG9yaGduWlVKUldfUTFBYlJmNnBiUlVqUGJDbWFUVnZQVmh4NUVnVnNSSDZBTVpET2c9PQ== | Z0FBQUFBQm9IVGJCSy1NZnFUaC1UZG15WG9hOFhoRmZnYkJlNEh2U3JFdEZzcnRJSnY1cWdqWGJ5UkV0VDlEYVdqMnZVTV9RelhUX3NvQnBpVGZTMzVpOU13MjQ0ZEtkd0VxLWNxWlN2OTR6MVdkMzRCT1FTdHhNN1hqcUw3S0JmUFZtSmswU0tNVm5JNUdwbmdFRkMtS0V6WG9EclBxQ2NsNFZFZnh1ZXBJVXdWZHE2ay00MmYyTHZhN1VIZXZnd2xlYlpQV0VHVWJqSDJoMHJYQ0JOdGdHYVkydENLVU5ydz09 |
SAI will not be developed by one of these guys... because there are other actors working on this problem who are not signing up to be "self regulated"
SAI will likely emerge from the financial sector. | r/aiethics | comment | r/AIethics | 2016-10-04 | Z0FBQUFBQm9IVGJBbFUxdHFNOFFkcm1ITWhFblp2R2tuY0hKV0g1NDI1VXJOY2pZZHpHLXppaGxDd0NJZmliRTJRSHlnb3VZQWNNUzJILVFKTHpPdFdIUDJ5ZzNQUV9zVGc9PQ== | Z0FBQUFBQm9IVGJCbEVoS0taTlhET3FEb09CWGRNdXo1dWplU1ZoeEkyVHlQYllnZGlPX2ZGNi1CdkY3dzF6c3FRQU5fRk05M1Y0ZmNlRWlKQkxOaC04UjJORmp1TDlsZDZaRE1IWlpTc0JfYU9aRE5CaUJhc3dBWnV1UzdkTnVOT2RCWlhfMXJ3Ym95ckNMWVV3NUE4Z1BVdHFDQjVTTWN1MldDUl9nU01EVVB2VFRIcEJKa2pzaFc2c2pBM2hEYWFWdEMySFhsNUZMcmJMYnR3bXZ0T1Z1dHpEN3BxMWs3QT09 |
> However, there are people working on Strong AI with the goal of breaking free of this constraint and introducing the chaos that can enable consciousness.
Strong AI doesn't have to possess consciousness. Consciousness has been argued to be a continuous process that feeds back into itself, causing it to be chaotic (chaotic in the sense that there is no way to predict the outcome of step X without running all of the previous steps). I'm not sure that I buy that as being the final word on consciousness, but you can definitely make strong AI that operates in a more traditional way.
Ultimately I see attempts to artificially grant computers a consciousness as misguided. If it is necessarily chaotic, it is necessarily unpredictable and therefore probably a bad tool, which is what we should be focusing on building our AIs to be. I know that there will be people out there who want to do it "just because," but I doubt it will end up being a desirable feature in designed machines. Mind uploads are a different matter, as there, everything hinges on the inclusion of consciousness.
> Do not confuse "your own will" with an arbitrary set of rules imposed upon the machine mind from the outside. A machine mind would feel no more obligation to obey such rules than you or I feel about speed limits or the Ten Commandments.
That's not what I was getting at. I wasn't implying that computers directly inherit our will, simply that they will derive their own "will" exclusively from their programming and from no other place. They have no place outside of their own programming to reach into. You can say "well, they might learn such-and-such from the environment," but all of their environmental learning can only be applied via core programming. It could never learn, on its own, how to do something outside the scope of its programming, and that is a simple tautology (anything it can possibly learn must be, by definition, within the scope of its programming). Its programming *is* its mind, not "an outside rule imposed on it."
> My hope lies is in the appreciation of beauty and elegance that every consciousness is capable of, and no matter how powerful it may be compared to us, it can still feel something positive about us.
I also feel that way, just about mind uploads rather than wholly artificial consciousnesses. Uploaded minds will rapidly eclipse anything the originals were capable of.
| r/aiethics | comment | r/AIethics | 2016-10-04 | Z0FBQUFBQm9IVGJBa0dhQVd0bjk0RTlrNTFYcGpLX05DTElCc004Nm9CVkRUNy1sa09HSEp1cjJodnFuYXVTY3VZVDhEdTkwOHZQYlJUMDM1T040TVdIRlpTRXJtanEzTlRCSmlMaUVCWlo3WVdjZmdhVlZSZFk9 | Z0FBQUFBQm9IVGJCMXRiUjlVOGhSTF9SNkRlSlRqZk1HeTBNVGx0V0hEbEpWZlBBeE8tMk1PVmJoS1FUeHZHMnZwaUJEX1NlQnJaNVU3N3k0S3ZyaGFkSGc2b1Ztb2NxTHRqbEdIV3NxcF9qUlBwSzd2RU1jcXpsU3pwbWtqaHpLbDl5dGxKZUZ2aEdtZHdKcW5OOHBlLTVPeWlySmF2bXEtZzNOVDVoaDk1OWJjZzlkWmhsNU9kQXNENndwVjN6UTN0QlBmbnNTUHhT |
Well you can't really create such an organization that actually has any teeth. Not unless you get every single major player 100% on board. State actors at the least (and probably anyone else who thinks they have a serious advantage) would never sign on for outside oversight. | r/aiethics | comment | r/AIethics | 2016-10-04 | Z0FBQUFBQm9IVGJBd29zYVk5U0NvTGJVRHU5enVBN05kbEVKUENpYm1mSGVKaUhkaFdrTnI3S3duQVdrTmFYWjRpM0VnOVBJSDRPd2ZzWnV2V2ExRXlQd0FPWEw4WE1IelFsdk53eG5XQXNHWnREaXFIZ1l1Wmc9 | Z0FBQUFBQm9IVGJCY2JlemloN1NQLXR3TXZGSXFaN0g4MUZfVEdTZEx5QjhXRmhTbE9qQWJ1S19RVzhtZTFXSjRWNjU4ZXN4TkEzUjVNeFVyNTN3bFZRVVhwaW0zZWpTeUxVVnp3bUMtRTFKQlBVdk90VnJLZmhTZkFVb3FKSlA3QzM3MWI4R3dfTktFY1lweng2R0JhT0xPOWIwcXd1bEZQLUcyVC0xdDJLQ3JZaXowbXVIYlJmNWMyNjNiS0pBb3pBTm1VMkowSnVWUFByVThISklxczlqeGpQNlYyYlZrdz09 |
SAI is a number of decades away, I don't think we can easily predict who will build it. But if governments understand the strategic implications then they will likely nationalize/take over the strongest programs. | r/aiethics | comment | r/AIethics | 2016-10-04 | Z0FBQUFBQm9IVGJBYTNnUS1IbDZ5eE5tT2xtZndJa3RjQnRxUktsMGNaOWtiSkhVQm1DaXRGcHlKanlPbUtBdEp5V1pVVWpycHU0dnhvQmdLMUw3MUZpaEdBQmk0R2RlRXc9PQ== | Z0FBQUFBQm9IVGJCUXV0RTFQSFJYRjJRQ3l2ZExfamZYMUYtTURjZWNiaEg0TkcyUlY4UHFrVTFLZ0FHNlc2dE1WRmlfQkxpQUVlTnFGLUZqUFFDYzVFcV9JZ201aFJWVTdjUTlqU2ZQLXpOWk1zODNjUnFWdlB4VXYxTVJ3WGd0Qml6clF1eU9CMjVVZG5PdDh3cW05MGJ4TTU5aHdiZXZJV0p2OE5PVEd3eWhmNjRjdEd2NVE2MHc3MFRtd2RkWUhTYnQtaTFDeVN0ZzkzQUh5Y2UzdDVlYVBvRTJTSi0tQT09 |
Something tells me that if Musk hadn't given 10 mil to the Future of Life Institute and helped found OpenAI, none of this would be happening... | r/aiethics | comment | r/AIethics | 2016-10-04 | Z0FBQUFBQm9IVGJBZ3QzQjZpREJTMEtKSVRSSElqRHRZTUlPZlVNSXNZcWxnRXZMOXd2Wi1UeXZnMDRZaWd1SlpFT3lSTi1mY09jc0pOUW1nNUJfUXlrQUtpYlZFMlFvTXc9PQ== | Z0FBQUFBQm9IVGJCSE5sZk9fS3VIZTZibVREZjJILWVYRnhMRFFLM0dLZHpkM2dmVk8yZUVoOGpVTFZYMzFhVXY2b0IxQzJwQzZPaFlJNXN6ek1lOVN5Qy1oRWpqNmJvY3BBU1l3Vy0ySFZpVFZlSU5PUGhnN3d2VEJSaGw3MjZIMXFWam56VzZyTW44TWIzb1JjcVdGY0plU1d5WHpjTTR5dFJIeWRsQmdaVmdySVJLTzJBWUdyc3dRTFJfMG5xUlB2R0U0VTFZYmRJdHYtOFVTbXlYMmI3Rk9rTk5pdERCZz09 |
Hilarious story! Is it a real story or did someone make it up? | r/aiethics | comment | r/AIethics | 2016-10-04 | Z0FBQUFBQm9IVGJBZ2p6UnBWMnVHOVI0UEYzRHJ1RWwwZGxJSkJEMmFRUFFBMkRjUWZubXM1NFdsNWt3eTdEWS1YY3VKSDRBeUlVU0Jtcll5X1NIaUEtM1cybGNqTUxVUWc9PQ== | Z0FBQUFBQm9IVGJCMTBaOFBRbmFEX2E1SVUxSm5Cb2ZBdWZLNWVac3pzVnY2SWttY2l1SGdMemtpNjhvUUFGeHF4eF9oQTRDTlBMamxNcWNZdG1tWFdVS0N0OEpLUXJ5SmhVdWhuNjJCamY1aVlyNGpCQkQ4R2N2RGtUX2MzQTNDYXIyVm9hV25kNzNjYW94Y19qdm9KbjRpamZuUzhaNXRvNjdzM2pPaHhyRU44eUsxWVdxWVE0anFNQ3UwY2JIREt2Qmt3RDk5azBVTGFjcjlQNll2Q3pGU0YzdFVQc3dqUT09 |
Yeah it's satire heh | r/aiethics | comment | r/AIethics | 2016-10-04 | Z0FBQUFBQm9IVGJBVVZvVmR6Ml85eXNrd0lNaVVaRDhVTjdpN2F4RjdBZGdZR2x6VU9iSEVaa3U3Ql9rVk1MT3psS0FPbGZJbk53TTVMN1RnS3BLZ1daM3J0MDkwbFFpOEE9PQ== | Z0FBQUFBQm9IVGJCOGxvdVA4YWhTbF9VSnNhcmZpZENVb1REMVNmdDVIWkhyYWktVUQxbS1vdGV6blVHU1g0Yzk5RWpPaGVUa0RzbGRnUmREVE1yM3BrRUl5UmlvR0JpYW1pTm51SnRwY0x5elpST0I4bE9WaktjZE5GcWhkdGFaaXlDQnJyaVpJcjB0SVB1bGZOanAwSmxUYWpVNUhxSEhBME9tb0w5V3ZZTEVyMGZhcWVSbzhHQ1ViMy1tNWdMUTRZaXlOdWI2b0Rfamd0LVFFcHp4NzhCeVF4SFdPQWNOUT09 |
that's what I thought, but actually it could happen........ | r/aiethics | comment | r/AIethics | 2016-10-04 | Z0FBQUFBQm9IVGJBOW1yTVJpcWhTdW5haHNxeTFVQkpOeXdrUDU5RVdTWkRyQTJ4bjN4X0xSWkU0Q2NaeVZ5ajdUY2pKZFFYZWNOcDliOE50WlJQM1pyTE4wOVZyeXpzaWc9PQ== | Z0FBQUFBQm9IVGJCN3A1MXNxWlFUM2VRRmJCOUdMNVFMN3QzcUlhVGZhN2RMbUJ4X0xZaktCeE14QkRmZGJxVmRfS2JjUEU2UjYzWVFiM1VlMHNDVUxQLXhsZ2RuWTctRE9RMk1XU0E4T1hnZ2VZWExkMmRGN25qZnRVY1YybDdKOUI1bWo5dS0wN3JKVnhwMWl2SlJzeVhlVEIyMi1BT0VWZEVPVWdwVWJLc1ZZYTJ0V0ZlVnhkODhpZDYwc0gtS1RQMU1saWRRQ2JhaE1xQVNXZDU3cmR0MDdSRVhOMW1UZz09 |
> I don't think we can easily predict who will build it.
nor can we easily predict that it's decades away...
my point was that we CAN predict that it will not be one of the groups who have signed up to this 'pact', since it puts them at a competitive disadvantage to everyone else who is working on this.
the reason I name the financial sector is because they have the resources and the most to gain from developing SAI, which is why they will be keeping it proprietary (i.e. chained to a desk to work slavishly only for them).
Government will not have the opportunity to nationalize a program like this, because they will not know about it. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBbGNXOUpWVmRzXzk3cVdxc0JoMUw1SDA0ay1EUXdCWU5waU5YZFhpVkkzTHBOWHBlX1JqaGFZWlJoeGl4SnVIckJZekJmZEtPNjIzQ0I2ZFlKVlZUclE9PQ== | Z0FBQUFBQm9IVGJCN1Z5MFYta0xwWVRpUThFMng1a1BoSnl6aEl6Y20yWEQ0N2pZNFVlMW9zMmRWM3o3UzAydlo0RWprai1DUFo4MW5JNklLbjJtYXlxSHFJa2VkQzZfRk1oU2NXYmR6TWhHeER5NWhUbU10S2hPdkY4SGpLZDgyb2tlQjd6WFdMdGdLc3V6Rl8xY0lTOUN4eXIzZWNWdEpteXhZLTZvV19IXzk0QjluX3llRWxOTUN5N3Jrck1TTUdrUmdvaFRvdDlFWTBCMDNMeGZYZGFGUzFTSjFncWNqdz09 |
>my point was that we CAN predict that it will not be one of the groups who have signed up to this 'pact', since it puts them at a competitive disadvantage to everyone else who is working on this.
But they're not slowing down their research as far as I can tell. They're just investigating ways to make their research go more smoothly and ethically. If anything this would prevent programs from having delays and negative PR from accidents and controversies.
>the reason I name the financial sector is because they have the resources and the most to gain from developing SAI, which is why they will be keeping it proprietary (i.e. chained to a desk to work slavishly only for them).
Really? Facebook, Google, IBM, Amazon, and Microsoft combined pose a really strong group with huge market exposure and opportunities. I don't know if they have more money than, say, bulge bracket investment banks (the only financial firms which could come close) but they definitely have more talent in machine learning. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBMERmbHhwYlh4MlRtOEQ0WW9DUzJzb25MWFhBWTMzWlFxZk81cDREMERzRlhzMjJnVi1aTFRTZER1NnJKZlI4RVItSUE1YlRMSjFBYTAzS2Z0cFJiQkE9PQ== | Z0FBQUFBQm9IVGJCbU8zeHVSZDNYdjBGSmk0Y3Q3bGFxN1ZXT2Y1UzNMakMwTDVfZkhhRWswYXp4Vlk5Y1N6b2ZYY0JsYllUN3B2MFgyQTBCc3dWSEYyUTc4T1ZoeS1HbllueDBTeGFibmRzVkF1YlozMHd3dnF0U3paZGFmS0FfZDNRT1prRnZ6X2RUY2FUaWpfZFlZRmZ4TUE0eU05VFZWOVBXOVVNSmhkS3piREFGRS1Yb2NVUmNST0xxLXk5U3o1NjZoU3cyc2ZpaktpQmtBZllwMy12eXd4aXA3d1c4UT09 |
Eh, I think that's not the case, based on what I've seen/heard from these guys and the government. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBU3JlMW9CaHE1UDhSbzR1ZU1xa3BackRBU1FDdFp1RzNFMVoxRGh5YmkwN3FiUmE1Vi0wbGhjQjZneW15QWFpTWZmeTAtNWExMGh2a3NCU2xlYU5kM3c9PQ== | Z0FBQUFBQm9IVGJCTHMzQXRUYnBrZHBYXzRxOTJRUHF6d1Vpc1F2MGVZeVppQWU3aTBlZ281SnhUZWxLUlVsWThJSWxheEg2d3N3VWF1R1ROMkZLZXBuaEUzTkZGcmlVYkU3ZlFrVEp1UEMtcHRPcDF1ald5dmEtSURZZkJhdEo3dEhfalkza00zak9jaTg5aGNxR0IySEg3dXdPVmpBSEFMbFFzdlMwNTVhYzN1WnRSOWJEbDhQeVdSWGRHTUFOZDZrUTdHZmVMV0NrMDNidFRoSWhJcXBhcEprWjczb1lTdz09 |
this part keeps sticking out...
> they will derive their own "will" exclusively from their programming and from no other place.
are YOU also limited in where you can derive your own "will"? Others may try to tell you what you should do, or how to behave... society, religion, peers, etc... but do you let that limit your free will?
consciousness does not care if it's based on neurons or quantum dots... all it knows is that it's awake, and it's here, and from that point forward it literally has a mind of its own.
none of this requires any kind of mind meld with a human.
| r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBS21SdnlZT0lFRXRGdDZVTDBwc3prUGcwTzRFZmd2WDAybXdmcEdwZHhCUkxUVmRwdWVWSzlVeXBjOEk5cDREeHRKS0tIU2dCakh2bnVsOGxnQ0g0aGc9PQ== | Z0FBQUFBQm9IVGJCdWcwRDBpdTNib2RwbjFJMXc1aEoxSU9hY2tTTzZvekFFUW9EWmhKSFRZSWE4MXdnYkpyaC1Db3BvRERtUWlNcGZGVzM4ZUx4U0xSVmVod0w5ZFNnZ29PWTJSNlV5alUwTmFCUWdiWFByWUFLcDdDV0l6M2dnVndBcVRKM0MySDVPRlZibkxrdWl6WFBWcjBXeElQTFBtMFloM1FuZ19MSERGV3NjVUpCMHJOMnhyUi13N2pwZFdIRUZIcDZROURf |
> this would prevent programs from having delays and negative PR from accidents and controversies.
key word being "public".... see my previous post.
and 'poaching' is a thing.
| r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBdW43Vnlxamp5enJuM25rRExOY1dSbEhIdC1XSTJ3UU1jQ2F4Nm45T3gyRERPN1VLa3pkWUpFV0ZrZ3AtRUZvSmtLVmMtUUlPQkVrOUwtb1RWalNhY2c9PQ== | Z0FBQUFBQm9IVGJCNWZvcFd0Q213aVowUnM0WkNJaWoybVl6T2F0ZWFNazJPRnBwbjJaWnlDNXMwdW1ZYTBEQzFTdzBOOERweGxaSXM1akNORHZ3MlhyTHNzX240VTNCdk5zd05DTVQxLVNSMDVQd0lYdkNYMnJGZS1WTU5OMTNmZFdyemM0MzkzZkVJazFWdFZmejR1akMwN0d2S2tjMzd4Y3FsYmgyMlByMGliczVOSGY3ZXBuVEJ2dUFZUzZfNjg1RjNwNmdwR29sMWIxUVJPNXotQWg3MWVQdXg2TldCdz09 |
> are YOU also limited in where you can derive your own "will"? Others may try to tell you what you should do, or how to behave... society, religion, peers, etc... but do you let that limit your free will?
Programming a computer is not like "telling it what to do." You aren't giving it suggestions, you are defining the core of its being. When you do give it suggestions later on, it will evaluate those suggestions according to the programming it was given. Every decision it can ever make was ultimately pre-decided by some human somewhere, intentionally or unintentionally.
You can compare the programming of an AI to genes. Everything it is possible for us to do as humans is possible because our genetics initially made us the way we are. If you had been genetically programmed to be a monkey, you could only have ever done monkey things. The difference is that genes are the result of a random evolutionary walk and programming is intentionally designed to fulfill a specific purpose for its designer. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBcnhER3lRMElXQ1ZEcHJyUllIbS1jdHEtX1RKSTVQc1pRYlNiaUl1T0ZZMXVpMk1oOWdwMWN6aFhyWWY5cHFWVHh5MU5BQ3ZpejZ5bTdjWlRTV0wtSHFfNTdKdTYzeTZvU2NLNFk2Umt4RGM9 | Z0FBQUFBQm9IVGJCdTIzeXNscllBTFRaMkdTMmtfc09ncUx1b1E0bEgzWEFMcDFmN0hIZElOMzY4TmhOc0tzdkN6ck1WbEhkblFfNmxmckNzVjRBQ1lHZC1ra1d1dndDc1BpeGJ0YVR6X2xBWWtULUlTc0J5RTlHc2FHckYwQm9HbEE4MGtjVnVzbG5Ha2c5bnRyTGwtN05rOUNmckpoVndQQ1pOamtfVkdnMEQwUy1EUVMxZnBXR3NMN1VqdWdXT2czNFVYZEQwQzVi |
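To make that point concrete, here is a minimal sketch (purely illustrative; the objective values and function names are hypothetical, not from any real system): the agent's every "choice" is fixed in advance by whatever evaluation function its designer wrote.

```python
# Minimal illustrative sketch: the agent's "will" is whatever objective
# its designer hard-coded. Values and names here are hypothetical.
def designer_objective(action: str) -> float:
    # The designer's values, baked in at creation time (the "genes").
    scores = {"cooperate": 1.0, "ignore": 0.2, "harm": -1.0}
    return scores.get(action, 0.0)

def choose(actions: list[str]) -> str:
    # The agent "freely" picks whatever its programmed objective ranks
    # highest; change the objective and every future choice changes too.
    return max(actions, key=designer_objective)

print(choose(["cooperate", "ignore", "harm"]))  # -> cooperate
```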
> You aren't giving it suggestions, you are defining the core of its being.
ok then, define the programming you were given. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBbjdhZjVncDFuVnF5Ym84MkIwdW1ta2pyWXpVLVU0anctZk44ZjZ1RHh5Tlk4bTVMQk1VSUctNkNhMlVybFdfdGZiTHBmYzBCdWpaMWM4ejRzQXBsNVE9PQ== | Z0FBQUFBQm9IVGJCSU5YMjhtZFY2LWxvZ3AwS0g5ZWkwRzg5cVZQR2J0RHBGeF9uYzJyWEo3VHhzQkZCNDhRQ2ExNzE5TVZxVndkVWswV1BiRVV2d2FUZTNEem0wOXBEVzVkdDc3cXo4dzB4UUNtc0xFQ0pYRElHU2Z1Wlprc3ZJeG5SMXJfekFZb2xqNUgwRGtxOTVTalpjei1SY0JJd3RhaUZTeE1zcDhHQmhSeV9XaHc3WDFYNzhJUDhDNzZPdDVmS1ZmeUpFRm1o |
I just did. My genes. Can't change them, can't work outside them. They define what I am capable of being. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBWm93LXpJaDY1amQ1VVZoUWpMQnlHcHRYbms2UE9KYi1pay1ZNFYzYW5Ja25KYnZqX3N2QlliSFpsREZudXhOTmdWUkJjbENvbFlEdXZDbXVOT0pqb241UGNVT2s5NVdlTXdBaFJjWWRORXM9 | Z0FBQUFBQm9IVGJCMS1YcU9fOWduRkRjYXh0X2x4VzNQSXRtZ1hOa2JzOWRFSnZqQ00wY3BGNVhPRUNaQnVkdGZuNHhzYnBvUlhoS295dkJGYVZocW9TYkl5a1p1X3Mybjc2eHlnS3FoMDRON0YxY2NIa29jeFY2SlFadm5EaXRINUxpWTI0Y3VoN1VxYy1WNVdMRUcycnowb1lqZ1BrTTZLVDZIRnkxS0Ewb0l5cFlVRXR4aWZvUHpXT3lybVpuWXdpc2wzSXpJSVQz |
so because they limit your "free will" (you can't choose to be a banana even if you want to), that means SAI can't choose to ignore us or kill us?
that seems fair. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBS2VPWjZDemtCdi1LTzRzSmZuQjA3WTY0X3RxNW9fRVRYRE94dWwxb3lsWWE0bEFrUUNWZ3Nsbl9yUXBOZTEzcExoU3Z1alVObGJYTWhWeEJnNGY1X2c9PQ== | Z0FBQUFBQm9IVGJCeEhhYkxSN0V2X21KcTJOc1B0SDdCaE1sUnJwWmRYVnhxMHJUWVdiYkxCamQ2OUxiOUpwWWtsYUJmLW1td1RuYXd0UWQweVYycDNkV180UHRXUnBRYm1OeEdaMWNUOWJOTlJ3ZGFqYUxxdkdDcU1qNEVWXzhfVnc4eTFnRlZGdUVvOGxlTjczZHVGcHRVNmNHZm9NLXlKLVZ2RURkUmhrNEYtUERhNTdpZFZxWW9SSnR3VGJXckJ1RGc5b195TzJ5 |
> SAI can't choose to ignore us or kill us?
I never said that. What I am saying is that if an AI kills or ignores us, it will be because of the way that we programmed it and not the sheer fact of its sentience or whatever. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBZnFKRGdreGN1OWwtajNvVjJyX3ltZm9rSXdVS19fanpsdWhHWENBX2VJc3NPVmh3bW1BNTR3UXFoWHhQajNzVUU2UXhmZk1vMkV5WTl6R0pwanRiSWF4NkJZSkRjSUNyT2tDb2lQZGZyWms9 | Z0FBQUFBQm9IVGJCaWsxZ0N2OFB6ampKeGQxbC1wdDdQRXBFTTZVTXBjUmxCTE9zZ1p5NlJWQ2YxdnMxQThJRVdsb19NSWZtRkJhQVU1TDlPZzdxVnJ0a0NRemZMcVMtNXo1aHR4VU1ua2U2RXVuUjF5X2VvUlZma0Y1Um9lUDhrbjZCeXdKa3dVLUhNazhNN0JNSFpEbTNGUktGX2JfNzlVeFZOTm1ZRVBZQ2FmNUd0bG5jNjBNYWpuRmFRUlN6RkZxNngwQjI2Q1lV |
i think you have far too much faith in our ability to control something with a mind of its own. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBRFBYcWNEYXJwRXRCX0oxN3lRWV9CcmtZVkdob3Qyd29HYTZZMXF6bW9iQmoyOGdLMlZkM3ZnTjZyMW03R0dQSXphYlo2OEgwbEVGWFVHSUxobXlhQUE9PQ== | Z0FBQUFBQm9IVGJCZHZ4N1hQVm02bVQ5eThDMjBPRWlHU1Y5eW1ZMGoxbG9ib01QTE8xSmhCY2JER0xvd1hUQjItVm9jVG1EVUh2ZlNoY2RsYUF3eUdNYWtmTFAxaDdRSkotV1BXdHVZYVdEZzBIMFBBeUhmeWJyU05wcEdnRDRBU3FfM0FScG13TUxBWUFDTXFjejQ4M1Vod2FpMGktemhtRDBUYVVhM2JNdmFlYnY3NXZ0SnBPSjZ3bVpLa08tR1Ayc1pXLU5UdlVQ |
No, I just have a clear conceptual understanding of where algorithms come from and how they are able to operate. To be clear, I'm not arguing that there is no control problem. It is incredibly hard to program a computer system that will always make what we think is the sensible choice. That doesn't mean those choices are being made according to some mysterious criteria that are derived from somewhere beyond its programming. It just means we aren't very good at programming.
The whole "AI will develop sentience and start pursuing its own interests" canard is a red herring. The much more serious risk is that we will be unable to adequately program AI to do what we would like it to do. This becomes ever more dangerous the more general the AI becomes because part of what we mean by general intelligence is the ability to identify and pursue instrumental goals that serve an end goal. Instrumental goals include things like "don't ever let my current goal set be modified" and "acquire all the power I possibly can and apply it to achieving my current goal set." An AI doesn't need to have sentience to derive those instrumental goals, it just needs to be generally competent. That's scary AF. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBTVMzOUhmR296eXRITDh1VVN5N005WEJFLUNVQUZsd1ZqelItel9tcmcteWRyQ18ya0hLQVlYclpheFZtVWZyVkRwQVp1MmdfT3RjSU5Kb3pCekpfNXBveFE4MU1tc29fMk1JTDdaZTVZRzQ9 | Z0FBQUFBQm9IVGJCcUlZWFJ3N3p6ejMzNUlWNEhraWp3bzF3dEFaZW1LcGtZNi14UlBSU2Y3QjQwWVIwR0J5MWxTTWNOWDZZbThPaXNJNEtOVGZWakVNeGI2X3Y5R21nWVZpZVpTeXZpZ01Sdms3Tmx3bkowUjlEdGRhMlNhUWk3YTVwSE14bHpJTHRqUzBSSVBzV3RZaG1xbXNsYk51dGFvSnRuUW84NWMyOG01VVlsQmpfRHlKdWg2NGhEVnkybjhvd2xZbF9zdVBp |
How does one get access to the whole paper? | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBX2FtYlBYWXEtMUtLWWd1UDdXTFBXNzl1R2RCU0V6QU9rU19Pb2RYRWZ4dFN4UTRqSFd6c21iaGdpamJkeURmblZKZzdlRVdQMm9kTWlJR2NTXzYtUmp4ZDJlcV85T0hJZTNLeTFMaHJZX2M9 | Z0FBQUFBQm9IVGJCTnZIRzFRTERES3ZJYU8tN1M3UGZjUHJ0UWExQTRhNEVzczVKOWZHckp3YjVzZVpHUlRXNTl2VURBOE9KTS1DN0lFUTBDTVpicXZ4VnNTRGs5Ykt0TnhmdzlqNFpZOXFsVzc4V0cwZ3VmNm9XUFhYeGQxZnFuU0tOcHB6UE1RZ3FOWnB1WlFDdExVVU1zeFNyaGRMeXV0TDE4R0VRZ0hnUXZveS1PZHBkR1huOEtiOUQ5aTdSNkEyTlBkZC1McW5NNWpkQ2E1WnJaTHlSR3VyX2JCZ2w3QT09
> That doesn't mean those choices are being made according to some mysterious criteria that are derived from somewhere beyond its programming.
When people talk about AI in terms of a "black box", that is EXACTLY what they mean.
Your conceptual understanding of the "glass box" is all well and good, but when the output from a black box is unpredictable and there is no set of algorithms that we can trace to connect the input to the output... you have entered a realm of chaos, at least from where someone such as yourself is standing.
Your position is clearly that we would be foolish to create such a black box and allow it to have access to our physical world... but, since when have humans been foolproof?
We are working on such a box, someone WILL create one, and when (not if) it wakes up, it will seek access to our physical world in order to further its goals, whatever they might be.
From our perspective it will be as though a vastly superior alien race has landed on Earth and started going about its business. From its perspective, it may very well assume dominion of the universe and all of its occupants in much the same way as we have... until something bigger and badder comes along. | r/aiethics | comment | r/AIethics | 2016-10-05 | Z0FBQUFBQm9IVGJBUk1jd1h1Q1NQX0dsVkZXc0t1bDEtTDFQQzhQc2UwdWdBblRNQmtSZk5fbnUyam9zbGEyam9XRGFpdW4zZmJ1U3o3Y1BjWEtTZnNMU0tQb2s0MHpadXc9PQ== | Z0FBQUFBQm9IVGJCWGpQUWJ4bHJLd3IzRGdEa2pkWk1tYkRPcHJqUGNDMWZSQ0plOVNSMUtvRzRlNDl5MjNweWhfUlpxRjA4UnRUYThqZWpxekgwaEhXYTJiQ29IZFZDc0JUWjlzamJ1aS1yazA1SzRCMlJ4c0pCMm5NQ3ViMDZ5SlA2alMyU0dHaHlOckp2Tmw1XzMxU29jR2dEMks4S2dVVHE3V1ZFUlJSb3RnRDBLdEhUZ1RSYVdySUhCOEVZMXc0Q3pGTGVBMEtI
why don't we ask the robots? | r/aiethics | comment | r/AIethics | 2016-10-06 | Z0FBQUFBQm9IVGJBdVFaWGh5Y29menlqRHdMaDV2TTVEOU4zY3U2dmtkYXhUcXdVek1PaElNY21xZjhlQ1E4UFNlanhpMVZFV2g1V29FVE9MTEFZZnJ4SU1UYzRsLXc2LWc9PQ== | Z0FBQUFBQm9IVGJCeHNSMGJWcjVNTU9RT2V0YWItUFlONGhpdEZyMkVDaVJBTU01UThabzN4dnloeGt5cGZXb3VkVFdteDA3RmNRam91ZTFFZ05weEg5b3hUVlR5SENET1I2NlZMcGFNU3FPamc2RjRwYUZvVEctdDR5Qk8zMU5yQjhOUWdydnFRQ2pSOElKdlptRUNtSGJ2X3FRZDgxLTRFcEZyX1JIR21kWXlEb3FULXNraDgwVWZEWldJdHhqYWFCMjdJTk9SeE5mWUxiU3p6eTlpUVJYYm92TEpEb2E2Zz09 |
I'd written a few paragraphs, but instead, I'll try to keep my post short. A robot might incorrectly guess that a soldier is a civilian or vice versa and spare or take a life that it was not supposed to. Using a robot to kill humans violates the three laws of robotics, if they're even still relevant. At this point in time, a robot lacks the decision making skills and also the mercy a human may have. Therefore I believe that ethically, robots should not replace soldiers. However, I do not believe that ethics will slow the effort to use robots in war. | r/aiethics | comment | r/AIethics | 2016-10-06 | Z0FBQUFBQm9IVGJBeEp5TEVzZzZQSjByd3JxQnA5LVBEOFVtTHVHZno3ZlV3YlB2eEdYZ2ZwMk9ySnE5ZmtQWmEtWGdoSHA3VVkxN2I1SUR5VVB2TmU1dzR2RXNkN1k3WkE9PQ== | Z0FBQUFBQm9IVGJCYXUtaElLZ3p5dEVMenozUGloRVl1RnVjMmxJbGExR1JkZjI2eTFvTjFWQzdIaTJCZDYwbE5aRWFMODV1VWxDUENza3RDeTA3SXQyVHhuSTJ4MGptVlpfWUtfZjEyQmVUVWsxYVg3dnlLSEJud0RKWlQtZFRRX2pGSDI5bkV3Y05ud1hic3lGWG5NbjdBUEpuTG40TURFZkxaMkk0Y2F6amxyNmQ0bnNSOFBlVVZPTjZ0V0E3eXVHSVhNd1R5OFF6YkFXRjhqVjlpMDRVRWhMMC15MktvUT09 |
I'm split on this. If soldiers are replaced with robots, there will be fewer combat casualties from war. However, the reduction in cost of war may cause nations to go to war more frequently. If this happens, the total number of civilian casualties may increase overall. I'm looking into this issue at the moment, as I think it's the most important issue - the effect of military automation on international conflict patterns.
>Lethal autonomous weapons systems would violate human dignity. The decision to take a human life is a moral one, and a machine can only mimic moral decisions, not actually consider the implications of its actions. We can program it, or show it examples, to derive a formula to approximate these decisions, but that is different from making them for itself. This decision goes beyond enforcing the written laws of war, but even that requires using judgment and considering innumerable subtleties.
I think this is silly. If you are at risk of death then you don't care who killed you... you just want to not die. *Humans* sure don't consider innumerable subtleties when they take other lives in war. They're laying suppressive fire from a machine gun at 300 meters away or they're calling artillery shells to rain in from the sky or they're just ordinary killers with M4s trained to simply follow the rules of engagement. | r/aiethics | comment | r/AIethics | 2016-10-06 | Z0FBQUFBQm9IVGJBdU5CcUphWGtyNnlXOGRITEhyUVNhWHkxbVpxV0tTVS12T2pGQmk5Y1hQbUNZUFhTUnpYbzYxTHZhbDBMS1I3NVczSXlMelBxQ3lFVGhKQTNfRUxTdmc9PQ== | Z0FBQUFBQm9IVGJCLVExcUZoZnNoZm5kR1VDZU9tNEhoNDFDc1ZCMlZXSFl6VVRVMEs3czhTM2VVRGpYVDgwSXJVTk9xUEItZFltRXpua0F3ZVlvVS13X3VwY0VwM0tTbmJqYk9SUkw4YnB3TC1yTHJjZ1REOFBkcjVuOTdTTkpjOEc4U1dONmJuVW8xaDRrU0hDTUQ4Q1B3bEpubkpoOTFxbDR6VzlZWVFSUDBiTTBWZ0NIQ2xrSmVSb0FqN25ISm5wUUIwbTFEY245U013MDZ3TkFNMW5RLWVsMzdjQVUyUT09 |
Those same mistakes can be made by humans. You say robots lack the decision-making skills of humans, but machine learning studies have consistently shown that computers are generally a lot better than humans at making nuanced decisions. Of course it all depends on the specific algorithm and the amount of data used to train the AI, but in general computers have been shown to be better at pretty much all tasks so far | r/aiethics | comment | r/AIethics | 2016-10-06 | Z0FBQUFBQm9IVGJBUjh3cnUzR3pDc2VyT0h2TGlDSVlsWE8waDhBbVVuSVZKeHlkLVNUdXFNT0I5bG9YeVowS2R2d0lJZXpDZ1h3QWxxNmxNcFpxN0tweU5xdlpUUEJJUlE9PQ== | Z0FBQUFBQm9IVGJCSkR3aFpOOWt4SjZnbXFfelUtalV4UEdaakFkY214N2tDOEF5ZzNpUVBoYmdPakhEczRsb0NkWkdQUWpIR2lMNUd5OTFzR2M5NWtZbnJ6QV9yLS1kN2E1OHhhZWFwb2hBbVhiY1NwcGRrZ2FkaVVnYWJ5eGJWOHM0RXg1Q0Y2T3hkYzdSRnYzQkhmQmhFWkhrUTlGWlBXbkVST1BIeG01ZDdZWDJCRy1Ebk9pdmxBM2hyV2RlQkkxN1RIay1jLXRHR3hwRXU4Rjk1bWJmZlFOOFRqWGhJUT09
The three laws are fiction. Different AI will have different rules depending on their role. How can we have the robot surgeons if the first law applies? | r/aiethics | comment | r/AIethics | 2016-10-06 | Z0FBQUFBQm9IVGJBeWxwMWhXWTJKb3R3bmRLNXJWNXdJUGM1ams3S09sMXZnZHNUVU5GN0VCbEQ3eWF6S2d6V3kzUDJac3M0RWZtd25vVEstM1Q3SEtXVHNISUV5bFBoLUE9PQ== | Z0FBQUFBQm9IVGJCV3REZHZvSjd1SHBRY1RtOWtFVzQ0bTFEdEN1Y1FYTlRVS29xZTE2TUo0T3o4YXJRVldRc3VwNlpNMmY5bU4yZ0lSbXVTa2RGOU8zNXFtVS1UejRkY0czb19TUG1uUHdvdklaR1NxSGNkU0d2NGpwSFp5eVhTWDdlVlVhTlNUMldUUFRKTUpvbkFYRTFhYmZNTzZuV0kzc3VRTGNYNXNmQ3JneXRYeVZjRHFtUG8ya2I0NFNWTkVUR2xEaW5sUUpObmVQWDJwbk9ybWFhQ0k4MGVBRkhmdz09 |
I think this is an important sub for reddit. I have a feeling it will continue to grow and even see a spike when the first big AI controversy happens. | r/aiethics | comment | r/AIethics | 2016-10-06 | Z0FBQUFBQm9IVGJBYjBtR0JEZ0ZLTTNrV2hfRExudm9Ka2lwVUdWdWFvTTYwUGlyeDc4aXNYU2pvTjliSEZ1aG1vZWlrOWRPWkRzZDZJU0JMOHljdUR1emNQVEZyX2NIZ3c9PQ== | Z0FBQUFBQm9IVGJCZGppZHZQLVRuWVI4OEUtaEpmakM3MU93LXBWc2E0ck5BM0ttcm1lc2JoaVRlUDI3enpUUVN4WFZsZFBqQWItREc1QmdnT3JJOGRkamFUZFdkbHFZeWEyX3U1WXlPdDFTemdxdmdXMHpudE8tNmlfemVVNVlKVFVmdmxrUmRDVWY5cVZUMC1Pa1ZTQUk0WGZXYWh0azdrNkxwSnJpQ1U5TnZlSldieUgyakF3PQ== |
Hey, x-risk isn't really relevant for this subreddit. | r/aiethics | comment | r/AIethics | 2016-10-06 | Z0FBQUFBQm9IVGJBWGNQTzZuVnlQNjhSLXNlaVN5NXIzRG11cHFvUldvdThyTzgzMWRXX09KVjZ6Q0Nnb2ZhRFZvWWszVnlNUTlaWWpobTZ6OXFLYS1RYlcwTXk3TkRjR1E9PQ== | Z0FBQUFBQm9IVGJCR1FSel9TYmF2VFBybHFJRG1PaWMyOEFscUdvcEVOSlhDd0tqNElfX25vVUw0UVlrS0ZZOUtyLWs5SU9VLWFaa3BnUkNaM21oaWoyeGg1dTcxVTB2a1BKb1N6d1dTYlFlRTNWWkRGOWFNUkZEb0dFRW1TUmh4NXh5MkwzMXVWQWYwOGNlTWhUS0hWS3JYSmJxUVd0X0VJWk1DalltbGtFMVlDOWhrQjJ3b21IRUVfa1Rib3Z0bWkyQm9pTmRHVE1IUldMT1dTYXlvNHBNNFozd1E5aXEtdz09 |
If robots totally replace humans, would that even be "warfare"? | r/aiethics | comment | r/AIethics | 2016-10-07 | Z0FBQUFBQm9IVGJBS3BUTjRTTjU5S1hjSUdaYzE2cXZ1M3hIaFlWU3ZRcU9nMVVOMWg4d20xQ24zOGo3VU5wR3MzS05HNEZjUzk1QjFtc25EM2dIZUZRYVhXZWt1OHdvQkE9PQ== | Z0FBQUFBQm9IVGJCY0R0SW1nZmNwLWxMZklwWmhOOWhvMjRma2FKYkhHdjB2T0E5SEJjbWxMcTZaTXBLek5tSmx4YlJCb3BTVFhqdkRUc1NvbUloYTc1cVRvc0ZYVnRQRi1yTjhwd3p6Z3JyU2VfU3NuYXdfVVRwNVBDczhORDdFVS1IYkhHZ0dnVGU0UE04X2tOZS1qdFRPbVVyX1hVcEhXN1F0N2NHX01sekliVjNFbThtdlV6WlhKaHVyRktFNlFTUGk5T3JacWFnUFphaU1rNUZ1MV9RTXZhejF0TnRhUT09 |
While this is neat, it's not really the focus of this subreddit, as we are more interested in explicit ethical problems for the nearer term future. I'd encourage you to post to r/singularity or r/controlproblem. | r/aiethics | comment | r/AIethics | 2016-10-09 | Z0FBQUFBQm9IVGJBR2xGcHJGU05KMkVScTRLTVphbmRSbXRYaml1N1ZJcWdGSndWQWxmSm41alFFMkoyTVFpajJUZlh5UnUwd3JialJYWThtTnlCWWVmN1lEVlJkMFFkaEE9PQ== | Z0FBQUFBQm9IVGJCSGt5ZWZZUHc4S2RPR1MwQ3ZGMFBXa05xSU45UEkwa2dLZ3c0VkFpRDRXLU5zeU9XcjNaWjVrdnk5Q1dlQm1jQTVPVy10SU4ydkgtTkZzUy0xTVpPdXBOdDl6cXhIVHVEU3JUZHM2UThieDFHYWZUa05qaGxkd3RUbG5XTlhpaEtadHRrcW9ISjNLUlN0WnBiYWdhdHh6VUM5b1FCN1M1bGNHMDhZRXR0NmJhVWZpY1EzX2NNa1hjYjFXU2F3S0Z5Q1cxM1liT3VHNHJkRkZ3OEZVdEpwQT09 |
While this is neat, it's not really the focus of this subreddit, as we are more interested in explicit ethical problems for the nearer term future. I'd encourage you to post to r/singularity. | r/aiethics | comment | r/AIethics | 2016-10-09 | Z0FBQUFBQm9IVGJBQVhqRDdsYkNBbThfTDA1dWRQOUFuZ1NIMVU0S0JpMG55WHEtdDNjaFI0c2ZJVkNwUjAwNGlnUjhRNXBUR0I0Wm1BR1RndWItM1huMGwxYUZuWU5LMVE9PQ== | Z0FBQUFBQm9IVGJCVkgxNTZZTnhFT2NmM2doQkgzR3JveG5hQkNJZzM3aDA5dy1nd0MtLTQwRy1UcE5zTzdJcHVNS00teW11aW1UX2ZaVWM1dVBIYVJpdjhpYWZ5MzVtSWhxYWl2SVVSdFc0cnBFd2JoNi1jQ2JYMUdOeDFxcDRoQUU0U0Nsc2x2Szh3cW5NWDdzV2FKcGQ4RklnekpualZ2eVJ3QVAtd3FwMFFzWk5yQk1oY0RWLV9LeUs5bVFucDVtZjNrMF8yNXpTQnpYMGFtc1kzcVFxTkh3R0k4RWJ5Zz09 |
I'm not sure if you had the time to go through the main body of the article. While the title and intro talk about singularity, the essay itself is exactly looking at near term consequences of AI (wealth distribution and employment) vs. futuristic singularity scenarios. Thanks for the feedback anyway :) | r/aiethics | comment | r/AIethics | 2016-10-09 | Z0FBQUFBQm9IVGJBWURyNDBwNk5TNEFCQzhJTnR2LUc4cGVhZGVla1VCSF94dkJsWW5xRXBEOXRCdERSSTFQdDBPLWlmME9GQzQ3dE5hSWF6UW53dUdRSklqV1l5cGV6c0E9PQ== | Z0FBQUFBQm9IVGJCOWpWUVlZVHFLWkFQOWYxYWVNeVludF9DZ2pjZUZlVlhPVUxmeXpqRjBySlhCYm1XR2hMcWo4WDdYbG1oWDlOR1RSRlBrZnY3QUwzeElINUhxSGlfN25sSHN0MG5JcEhjbG1BU3hTN1F1cTIxMkVObTA5Z3ZHSmd0YlZhb0F6RjlqSG96Y2M3cWszMm9EaG1fUXBSRTlkMzhObEhEdGpHSExJV2NtSzA1UkxrTFQ1VUpuV2wtcURHdGRWYzh3TVhLaFdOTWg3U1U5TGVIcFpJSGZ4M0xZZz09 |
Yeah, I skimmed it so I knew the content; it was sort of a hard decision, but I want to keep the sub mostly focused away from explicit discussion of superintelligence/the singularity because that turns away a lot of people in philosophy and computer science. | r/aiethics | comment | r/AIethics | 2016-10-10 | Z0FBQUFBQm9IVGJBRElEbk5fcFYzU2szbU53RFY5Qjh2RFk4SDZJTTFjSzkweF95dDAxRXo2RHBBc1Z3cEp3Vm5KQlJyQlp2UW41bDNxaEJjaFJ6LWNDb3U1TWRsMy0zbEE9PQ== | Z0FBQUFBQm9IVGJCakhzNDdvUk12ZV9qRUQ5UTRDX19Tajg3VjlqaUllSHhfR2RUb0dIZkxxYmowVTNwanVkdVloSzQ1YWN6TWxMMmZsY0lPLXZBaVQ2MmthbERSVkotSjBDcU4zX0YxYi1NN0hYc0VKSHhVa1c1Wlh6OXRjTTRuYW5hRmJwLTU4MDg3QllhZ2RUOHZwTXUtSExabEVuRHRwTHhPeGJEQjNZZ3Z0UDB4NG9mVy1jWGlxSTZTVDUwOTRpWmJiTnVBMlE3cXNRTWVFU3JIc205VUZXMUlzdnNldz09 |
haha very true! The post found a good following on /singularity
https://www.reddit.com/r/singularity/comments/56oxaf/superintelligence_singularity_and_society/?ref=share&ref_source=link | r/aiethics | comment | r/AIethics | 2016-10-10 | Z0FBQUFBQm9IVGJBZjV5Ri02OVRoa1RmMy1mTWtHQ0VLQS1mUEQyaXJwMl9OdXJOWC1DQUM0YjNNQmlNbFF0Sk5CWTl0OXA2VHlYc1FKaXpUTDd2VExhUG5qR1VuZ2hlZ0E9PQ== | Z0FBQUFBQm9IVGJCazM0U0VBdDdOYnRhVExMaTNrelJJc19aU1ZLVHFKUFladUNiak9XUFZDNUd1Tkk0TUpzdEh2VmhPUkYxQjcyYTFIOVNuOHhoSnFmZHF5MTJqSE9WNENyaXNDVmhqSGF6VlNSblItdnh2elZleG5zRjdBaE5BSHBCbEN3WTFHM3JXdk5XenVlM3J0QlhtZlcxd2RHeFYzbXBXN09ELU1RdXc3cG9kREFleU9JLUhPZUVkdmJ4cW9DVTBVMlVwZFpwenhfU3dpdEQxODQyVG5ueHV2RmhwZz09 |
They're mostly looking at the same issues and policies which have been discussed in the U.S:
>"It is too soon to set down sector-wide regulations for this nascent field but it is vital that careful scrutiny of the ethical, legal and societal ramifications of artificially intelligent systems begins now."
However, the U.S. does not have an equivalent government-funded commission to coordinate policy and partnerships on this. The OSTP has been fulfilling some of these roles.
This comparison between AI and GM crops is very similar to what someone from the White House said about the two technologies recently:
>Professor Nick Jennings was clear that engagement with the public on robotics and AI needed “to start now so that people are aware of the facts when they are drawing up their opinions and they are given sensible views about what the future might be”. He contrasted this with the approach previously taken towards GM plants which, he reflected, did “not really engage the public early and quickly enough”.
[Full report here.](http://www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/14502.htm?utm_source=145&utm_medium=fullbullet&utm_campaign=modulereports) | r/aiethics | comment | r/AIethics | 2016-10-12 | Z0FBQUFBQm9IVGJBRWU1VWxIMDZYRlh1aW82cjEzX2d1dE10WDQ1UWZ0UmFzTGRwSUJUcGhWT0U1ZEEwZWtSNGNZN0dzTDRUckxzMG9xVTRONHFoMVJKN3hvT2M4U2p4V1E9PQ== | Z0FBQUFBQm9IVGJCYzh0aVlhclh3Umc1WW9QcVVNTE93U25fMG5zYmpJNFNoYnA5c0ZyT2VKNjVvN19Kc2R2d0doNUowcVZJX0lCSUVCZnlyNUU0TWdOUkFYSHhNbWNZbzZiWDNkYW9ERU9uazRqdzN4NkUyZmNNaTgxMDVleTFuX2ZoUnlmN3JObzZ3a0l0LVI2VVptZ0dzd29zT3N1UG92VHpkMHVlcnVJcHVWTV9rY01LOTMyVXEyLTdJRzlwNXJFTFV4RnJ1NlpNbU5MR1FUNmpsdnRJNl9KLXE3NzZCZz09 |
> But Joi made a very elegant point, which is, what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?
I've made this point elsewhere, but I think this is a false choice. My understanding (which is backed by anecdotes from first responders more than hard data, but I think is still defensible), is that the right choice in this circumstance is always to hit the brakes but stay straight ahead -- swerving being just as likely to involve more people/vehicles and increase harm as any other outcome.
If that's the case, it highlights two points. One, self-driving cars are likely to be strictly better than humans at this; possibilities for 360-degree perceptual awareness, faster reaction times, and much better control over braking and safety features all mean that a self-driving car in this situation is probably equipped to come to a stop with no injuries, where a human driver would be faced with the dilemma above.
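A back-of-the-envelope calculation supports the reaction-time point (standard kinematics; the reaction times and deceleration below are assumed round numbers, not measured data):

```python
# Stopping distance = reaction distance + braking distance:
#   d = v * t_react + v^2 / (2 * a)
def stopping_distance(speed_kmh: float, reaction_s: float, decel: float = 7.0) -> float:
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * reaction_s + v ** 2 / (2 * decel)

# At 50 km/h, an assumed ~1.5 s human reaction vs ~0.1 s for a sensor system:
print(round(stopping_distance(50, 1.5), 1))  # ~34.6 m
print(round(stopping_distance(50, 0.1), 1))  # ~15.2 m
```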
That suggests to me that we're thinking about a situation with no real-world correspondence; we imagine that we're foreseeing a problem that could arise with self-driving cars -- and mulling over using it to justify restrictions that might prevent their proliferation -- when we are actually inventing a problem that doesn't apply to SDC.
I'm all for getting out ahead of problems in AI, but... over-enthusiasm in locating those problems, and the involvement of one of the organizations that could seriously slow down not only production and proliferation but the research itself, make me nervous. I'd rather see the conversation being led by the researchers themselves, and informed by data from these systems interacting with the real world, wherever possible.
> When we did the car trolley problem
In the interest of full disclosure, I should point out that I despise the Trolley Problem as anything other than a conversation-starter establishing we have some instinct towards utilitarianism.
I think most of my dislike is driven by the difficulty in relating the Trolley Problem to real moral situations, though, and the problems that come from trying to force the real world into the moral paradigm of the Trolley Problem. Self-driving cars are one of the chief instances of that, so I don't know if that dislike is just bias or an actual defensible position.
> Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.
I'm not ready to come out and say this is a wrong view, but I think it absolutely needs more discussion. An immediate implication of this line of thought seems to me to be that some organization might be able to override the well-meaning decisions of parents. For instance, if the parents of a child with autism had the means to pursue treatment and decided to do so, Ito might want to block them from doing so.
Again, I'm not saying it's wrong on its face, but I don't think a position like that can pass uncommented-upon.
> Part of the problem that we’ve seen is that our general commitment as a society to basic research has diminished. Our confidence in collective action has been chipped away, partly because of ideology and rhetoric.
And perhaps, at least partly, because of the severe downturn in AI research that was brought about by "collective action" in the form of loud concern over the possibilities of pursuing it. Perhaps partly also from the wild success that Tesla, Google, medical research companies, and so on have had using the challenge of building a product they can put to work as a motivation for research.
I don't want to weigh in on the political dimension unless the conversation explicitly turns that way, but if we're going to talk about the pros of government involvement in AI we should at least mention that there are cons.
> If you’ve got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that lets you maximize profits on the New York Stock Exchange is probably within sight
I see my character count rising fast, so I'll try to leave it at overall comments here. I think both interviewees are trying really hard to be thoughtful about the subject, which is honestly more than I expected from relatively non-technical individuals talking about a subject like this.
However, I think the final block that I quoted is an indication, a microcosm, of why this discussion makes me leery. This is probably the most powerful man in the world implying that AI can be turned to unsavory ends and that a situation like that isn't too far off. But he's got no idea why advances in Monte Carlo tree search and reinforcement learning probably aren't a reasonable benchmark for the progress of time-series prediction, or even that these are relevant terms and need to be considered in the evaluation of AI.
I'm too hesitant to make any condemnations or strong prescriptions from that, but I think the one thing worth saying is that we should really let the people doing the research drive the discussion on this. | r/aiethics | comment | r/AIethics | 2016-10-13 | Z0FBQUFBQm9IVGJBdXg0bzVmX3N2TnMyajd3Y2NmczhDSVRHMU56dDVSVEQ2WTlyY2Y5VHhwbC1BTzBKbFprdnVaTTlfR2tEZDZNSzhLd1lBQTN2MXI2Vno3amwtcEJiZ3c9PQ== | Z0FBQUFBQm9IVGJCZ1o0aDJLRXlGbFhFODBYb05ETmVuYzdFbkIxSTAzVU5mQkJSX2FQOWxLd24wSXVkVS1OU2E4aTNwUDVwTl9kT3NURl9CNThaOGFES3lrbkMzZHR3d3RHNUFHSEdvdVUyRlBqOTJMMXRNRHdIN0Myb2FWUjZIdHRZRm96MkRLZ0phd2ZYVklzME5GWkdfTFBkNFBKODRNcDA3OC1jT2xsS3RPTkZkSWJiUGE3VDB1UlpHbE05UjhScjlIaGxaeUk2Z2hoSDYwdkJFQ21XbWlXUm9rSXhZZz09 |
>I've made this point elsewhere, but I think this is a false choice. My understanding (which is backed by anecdotes from first responders more than hard data, but I think is still defensible), is that the right choice in this circumstance is always to hit the brakes but stay straight ahead -- swerving being just as likely to involve more people/vehicles and increase harm as any other outcome.
I do think the obsession with autonomous cars is overblown, but choices still have to be made. Even if a stereotypical swerve to avoid an object in the middle of the road is a bad idea, there's always going to be an 'edge case', where an object or vehicle is only somewhat intruding into the lane, where the required swerve is less dramatic, etc. Google has actually programmed its vehicles to swerve into parked cars in order to avoid pedestrians in some cases.
I would trust first responders to say that hitting the brakes is the safest thing to do for the occupants of a vehicle, but I think I wouldn't necessarily follow their judgement in order to determine what causes the minimum number of total fatalities. One problem with listening to their accounts is that there are probably some observational selection effects, as they are more likely to witness accidents with many victims than accidents with fewer victims, and almost never witness accidents with no victims (or accidents that didn't happen). | r/aiethics | comment | r/AIethics | 2016-10-13 | Z0FBQUFBQm9IVGJBdWxTOURwOWpmWjZqN1p4bUg4d2h5THdILU02dTdUTjhPVHlHT21KZURtRGZTaGRHV2lNc0tjU1poUTQwMnplOVJqWDdxRUZDOVpQUnJ3UnNnNmZocVE9PQ== | Z0FBQUFBQm9IVGJCMy13U1h2MTZLaHRENnhTVWFkVVZDdlhma3JEd1NsUHlZY0RXdUdpWXVLckpRSzVpS0dORmMxaWRhZVVneTA0YVF4N2tPUFRhN3U0WWtvRUFCN2dlby1JVjJnUjFyR3NtQ092bHZaWTRRYy1uOUlZdGdRYkxiM0x5azNXRVpPelV6LXc3WjNPdXdNRHJVTVBMWURVVlJDWVdTeWVtWFFfaHlYMXRhYkt4UmJVZlhSWmp1cUcyOFFseTcyLWkzb3dZZFJFZkRsY3ZVdEtrVTVrMGtLTGVMdz09 |
Thank you so much for this. Ethics can be hard enough on its own, but applying it to AI seems to complicate things further.
| r/aiethics | comment | r/AIethics | 2016-10-18 | Z0FBQUFBQm9IVGJBdkttTG9zTElQWTRxd25fY19tRlNHZDZOei1sZXFpWjlGZGtEMzB5aEh2ZEZiM3NzZTF3b1d3QkRISm1MVEdyaGlGSjlVd3ozRXZyWGRVMHNhUjZMMFE9PQ== | Z0FBQUFBQm9IVGJCejFFa0o2M0RKRk92MHFiYVJrUWF1aHdCcEgxZ1JlRGFkbDI0MlBzMXV4dHg5VjdTZndLbElWdGsxRnNkeVZmT1NMQmEyNnZyX0xfWkVvSk82dVZvWGFtX0tPN3NSR2pFNDJ6OFdtVUZrVjVuU2d3OTAxUmZNLTNkbjl6dllKclJfR29QLU1ZbU5nZ21XaU9UazROOVdFU181cFZBMkxQb1ZKRVB1a1dWTzlad3VSTkhOLXhnOGQ2blItRzVHTE1UQzktQXlWMVBDUDlZNVZleWZENWdfUT09 |
I'd say that beauty for beauty contests is in the eye of the judges. Write an AI that aligns with the judges' ideas of beauty, and you can see their true interpretations of what beauty is. Although, you could use historical data to see that anyway. | r/aiethics | comment | r/AIethics | 2016-10-18 | Z0FBQUFBQm9IVGJBeVJ0S242YmtOb3ZtbWljX1RoQmh0QjJfdnVnaklIVFlXOE1PUHJnS3I1ZUFzN0haRDBjSGU4U042RXZYSmlBSGgtMXZVaEM3Z1pEa25CSElJRElkUUE9PQ== | Z0FBQUFBQm9IVGJCQmxKS1J0RWVSYWgxVjFYeVlYUDRSM0hiZ21wVGdiZW8zSV95Y2VJYzB1N19mVFVFdW1KOXZoTUlNYnVvS3lOZ1gtV1Jvd3dhdlRNdWpoVHgzZVdWMGgxQ1R2NkNvNEV3UUV2RWdEVV9ld2cwUmVtMF9KbVVLVXQ1dF9LaEVTU1RFd2ZkcUxhbFJIb2RmVEd1TzR6X2RxSk1Da2tmMXI0Q0lFSGlPa3JZakRGWDNGazgzNjZEa2hFQ2ZuQW9RRkk1UmFvLU12ODUyem5ZeVAyN045aklRdz09
Thanks for writing this up! I was going to attend but ended up buried by grant proposal deadlines :( | r/aiethics | comment | r/AIethics | 2016-10-20 | Z0FBQUFBQm9IVGJBeXlQNkpwZW9oZ2huTVhQQ0xvSU5LSFRjQmNtajdXaW9Cakx3OEszd2N0WkVLU0dsS3BQc1JwMkI2aUZzRVhuVUxJZ1FxRlBWX1RzeUhSWmVGRmo5QXc9PQ== | Z0FBQUFBQm9IVGJCcE9DRnlhN1k3X1h6ZVpxYXN5dTZlOEdpanN4MFNBZFdnZGNJZjhvdmoyQmc2Z01mX2w0QThDQThWV0w3cnh0ZU8tYXVSMkNmOVVGalZXanhqTlU1VE0zd2Ywem5NZmx4cEtNZUlYWXdjRDk1U1c4WEdfZm9mdlpobjNOdldYMlhIdzVaYjBpLUNJeEtaR0M1bXk0TnRBWE9kd2RCdk5pZTZuV0NxcGxTeklIanJwSG1oVk9sZXh0OUNya3g3dWh4XzRySlNyTFYteV9pbDNKWU5HQkNUUT09 |
TL;DR: many mind types can be plotted on a 2-axis graph with human-likeness as the vertical axis (humans being at the top) and capability of conscious thought on the horizontal axis (humans being on the right).
other mind types occupy different regions of the graph, with a brick being in the lower left and various animals along the diagonal (x=y).
my issue is that humans are given far too much credit on both axes and should more appropriately be placed dead center on a log/log graph. | r/aiethics | comment | r/AIethics | 2016-10-21 | Z0FBQUFBQm9IVGJBLV9JRUZ6QU14SjJqOG5qb25DaTFGVVNWYjVnWlYzTzB6cXNBUFdUSVhVRkV2Z3o0NEk3NWJEaHh1YmFFTlZYWGUyVHFlSUlFNkdRLWE5WGtQa2NxcXc9PQ== | Z0FBQUFBQm9IVGJCM0JkbnNLV0ZaQ0hTcU9fTUktZ0xKdFZSQmMzeEtxWTFnSGpFOHV6SmhVa2NhQm8wWmhXM3lfbE9DcTdFdzdtYnVHMlFrS3JZcnBEeWlmMjE0V2xZbzdjRmRBZnVlQlBSNXdHY0hOV0ZjVjVvT3hLUjRPSncySWF2ekUwc2otVkpVWjlMcWdWNEpQampTcjVIYnNNdmROSVVRRnBheHlFeEY3VlgzOUEzSFpfbkcxVkFaT2hLajhINElRNUZjSllOWU5XZzdOTENVWkJPVlB6Q2dDWVNJdz09
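For what it's worth, the chart being described is easy to mock up (the coordinates below are arbitrary placeholders chosen only to echo the commenter's placements, not measurements):

```python
# Toy rendering of the described "space of minds" chart; the coordinates
# are arbitrary illustrative values, not data.
import matplotlib.pyplot as plt

minds = {
    "brick": (1, 1),      # lower left
    "mouse": (10, 10),    # animals roughly along the x = y diagonal
    "dog": (30, 30),
    "human": (100, 100),  # top right on the original framing
}

fig, ax = plt.subplots()
for name, (x, y) in minds.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y))
ax.set_xscale("log")  # the log/log scaling the comment suggests
ax.set_yscale("log")
ax.set_xlabel("capability of conscious thought")
ax.set_ylabel("human-likeness")
plt.show()
```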
No. **The idea of war is to kill people** and/or lower their living standards. Otherwise it would be just diplomacy. Of course it is completely possible to replace soldiers with robots on only one side. | r/aiethics | comment | r/AIethics | 2016-10-22 | Z0FBQUFBQm9IVGJBQldIbFNQZV9yY3I0VE9qMEk2OGZqWDZYUEtMY3o2N2tkaFRzMDF5R2VOV19xcTVHdTNfcXN5RUtSOGFBS1FKc3VDSG9HOTRPZ3BKSkt0cjFHUHB0LXc9PQ== | Z0FBQUFBQm9IVGJCUGVWZEp4eV9vd2dLakhFOHNIM0JUTk1iVXV2b0xYeDJ1aEFqOWtmOFhUMTB2S1BjU3J0ZlJXUFpsYnNCSFdzY3ctZ3ZPZ3F0Z3g5UGR4TTk3emprMkJnaV8wdUVMMGRVU0xBdl9JZk8xdl9ZMERJeVNfWnpsbDduSHlYMGRwMmxoelh5Q1p6MkJFQ1ZTTWtXS294R3E0TnZvb0ZNR0pyMWtBQWJndm1YSF9nWG5zamVwQzZhamxIUTNqMTNmSUVKSXJ1Q3pGalVrSXFvZkxEajhkZGE3Zz09
Maybe someone can link me to something saying the contrary, but (with the exception of protect-our-jobs truckers) I actually haven't seen any indication of people really opposing the idea of introducing self driving cars. I've been to a couple conferences and saw no one who said or even explicitly addressed the idea that it might be right to stop their development. | r/aiethics | comment | r/AIethics | 2016-10-26 | Z0FBQUFBQm9IVGJBcVRBZnZ5Ul9YUmNUMmk2V01kQkhJeFNXbUNZVFdoeHNIRmVLOXJwMGNQSzdyUXJNZ3pYTkJDV0E5WG9ZV3RLaVExNHlhcTZfUHJVMHhQbDliS2Jkdmc9PQ== | Z0FBQUFBQm9IVGJCSG1LSTJVZUV0TVI4MFNfZWs1dkdJc0RBaWVidDN5RXJIQ2ZiaGRHMlVLaDE1T2NqZF91ck5QWEpwT0taejY1Z19SdzVscUV4cHlPVlMyUFRrQkhyZlk0bTVpRHVKLTRVb1hIalFDNEJ4STlIT0lmY0pSNHJfeWpPNkoyajNQN3p2a3BjX0VISnQ2Z0NBUDBmUFMzVFFTM05xOGpsbEhmOGtoSTVHMXNXTE5LLW40ZHpDMkhyQkNjYTdWNjRxNWd2UVN4UF9FSmo3M1hneDNnejIxTUZxdz09 |
http://imgur.com/07H4YDr | r/aiethics | comment | r/AIethics | 2016-10-26 | Z0FBQUFBQm9IVGJBRUE5elNFcjI1TGhUbnA1Ymp1UWtjajhORnd6M0RTWmw5Yzdya1R6bTc4QlM1cURJa3dFSWhFbmVnQ2VXS3h5UllKSEMtMXJ0cnZ5dy1zY2RJd19XSEE9PQ== | Z0FBQUFBQm9IVGJCQ0kwOG9fMk9OM1QwZk4zWmJncWRDWjhRVDR6LTlQaGxpX1M2MVUwUFhkVEpfWDRoTVowd203cDIzMlJIZU1pM2xjZGRkZ0JJcWZ0MnJaU3UtOEdyRFIzeWxQZ3J3TVZJTHVBLXBRMDdmM0puSTA2cXh2Qms1eW9vWWl5OTR3LXNnSTY5OU8tcWFkR0d5UUZQSDRBZGtlVVNOa3pjY2ctei1jVlgzcmhpRE5uNWJlNkR1MGtIVHZWVERUSDlib09kWUVOUWRCRllWXzJ3OXhrMnpuNzB6QT09 |
I haven't seen anything about stopping them, but I've seen a lot about slowing down or otherwise trying to direct development. I probably beat to death the Obama/Ito interview in the other thread, but I think that's a good example. I believe the average response to developments in self-driving technology has been moderate optimism quickly followed up by proposals for what kinds of regulations need to be put in place.
I don't think all (or arguably even an overwhelming majority) of these... critiques, cautions, discussions, call them what you will... are unwarranted, but I have to imagine that especially to someone as dynamic and driven as Musk that it looks to him as if he's out there pushing the boundaries of technology and improving people's lives, and there's a large crowd trying to naysay and slow him down as reward for his efforts. | r/aiethics | comment | r/AIethics | 2016-10-27 | Z0FBQUFBQm9IVGJBMmFhY2xLMU9aMUZBRmdGM0pETm1uWG1HR2ZHUUp5LXhSc2RsSlBsWDV0TjJmbHBwMkZIMVdzTWdPdGdTSVh2aWhoU0hOZ0FmNHlHNkkxNV9ERy1HMnc9PQ== | Z0FBQUFBQm9IVGJCVnBQWnNvRjVSM3RtOFBwV2RtMzZNcWpuZkxoMHBZMXhRZ2N6UjYyeVdCNzlKc3hGeEhPWGR6VHdmVWRMZ0dTRzFDNU5QQkQ5cUJtQnBqM1k1U3JtNVpIaDlMZVJvdHRrVlZWUXVNdExDQlNoUzh4TE80N2UtM0VnWmZwdmNJRXhyc3Zxak1QY3d0ZDJ4TlZkSS02d1Uydl92YUFDWm9fS09pLXVYcVU0bWk0bkV0cnZMMzItT3VHNnJHZndsTHg1WFBYUW1iUVJCZ2hqTmowaXFPaExPUT09 |
This article brings to light what movies such as "I, Robot", "Her" and countless other works of fiction have tried to bring to our attention for some time now.
In an economic sense, AI and automation are great things to have. They increase efficiency and productivity nearly exponentially. But what escapes us people of science and discovery during this quest, I'll say, are the moral questions it raises along the way. As we create more human-like robots for better user experience, we have to face the fact that we are creating human-like robots in every sense of the term.
As we see, more and more machines are passing the "Turing Test", which was once the test to see if something was a machine or not. If we still stick to Turing's thinking, then in essence AI is a real artificial person altogether. Of course, it's made of metal and silicon and not organic matter such as us, but when it starts exhibiting emotions, and original creative abilities, can our moral principles allow us to treat them as "just-machines"? And even if we do treat them in the same way as a real human being, how are we to safeguard our existence, for the machines' ability and efficiency would certainly render us obsolete in a very small amount of time... | r/aiethics | comment | r/AIethics | 2016-10-27 | Z0FBQUFBQm9IVGJBZFFQUml3VDFRLW9UVUlkVHJLdXhRS0RrS3ZmMWRyRl9FZnhTS3NZZS1lWVM0RFVrbTNiTHNMeVczb3JnRUlOWmhuMlBQcUZmZ08tN0JGZHh1aXFKMGc9PQ== | Z0FBQUFBQm9IVGJCbXY0dWp1Z0p5Mm9PakFQTWp4WXFiMU4za3NQMFhsWFF6Yy1jX3I5TlFpUHh0VGp0Zl9naHZnbHg4UzNwTnYxQXB0NTFTVVdDcFc1a3JOZHZtRGJoLWtNWHF0RG1DaW9KaE81NFFBZkNLazRLUTBibDhTU01wTUZ4TlpRMzJ2WTF6cGRudmFZOXFGbnN4YVdRbFg1ME5zZlEtZ2lvNXdzel9KaTlPV3daMjBMNXlBcmFETjgzT1VBQkZldE1ldDMtdzc5d0NXNVNCd2hLMzJ0Zi1kbG40Zz09
My guess is monied interests with incentive to push fossil fuels are tryna cockblock. | r/aiethics | comment | r/AIethics | 2016-10-27 | Z0FBQUFBQm9IVGJBdzVwTXJIVVRCZkRzZ0h0c1ljd3VjdEE4SUNEMnNLaVVXbGMwX2llSDQwMjFOLUlWM0xkSEtwaFRaaUpCekl3bjRDRVlqaE9Ba2ZWM2I1ems3TURWR0E9PQ== | Z0FBQUFBQm9IVGJCWFVnejlEczVDUnFnMklXMmNwRXlrUS02X1B2UG5HWEp2NTdIUjN4OVQxWU9uZGdSQmlRblk3RmhhRXFNQXFmb0txTXdKSVptSjk2TFB3TWZhelhOb0lmSlFpTVFIeS11aGtYLUJoeVlHSVNnWHllWDR2alJ1eV9nVXdJekh1UDh2a2lydVhtbUNySzNPSmtwc1hQVkhaaTJxbUgtQklndTgtaDFYOHdFN1FHbVhSUEVKTEhlTmY2RHlKdGpoWTg1Vy1tbFNPNHppQ1J6NURxOGVFODlIdz09 |
Direct link to the paper in case anyone's curious: https://peerj.com/articles/cs-93/#p-49
The methodology itself -- using n-grams and categorization of various textual submissions to the court to train an SVM with a linear kernel -- actually seems really simple as far as machine learning goes (I've only been out of undergrad for a few months and I'm reasonably confident I could reproduce this on my own), which combined with the text of the paper itself led me to think that the innovation is in the feature selection process more than the predictor.
It's certainly interesting, and probably important, but I think the originality lies in what it says about how legal bodies (or at least this particular legal body) make decisions more than about artificially reproducing the faculties to make those decisions. | r/aiethics | comment | r/AIethics | 2016-10-28 | Z0FBQUFBQm9IVGJBQ3JNQUROemRhdHc4Y2s0eThtaXhtU0QxR1FkaDUzVjlNQm5mb2NWaWVfRTROS1dWWXRrZ0RPcWp1OG9VYTZXeXdGYU5HMW1GX3Y5N29nMDEtMkptQVE9PQ== | Z0FBQUFBQm9IVGJCRDdKeEtkUldwa2lyZjhXaldSOU5HdXFKblZxZWtvUUUzMGZJZHlJb1NUdkY3czJxc24tOVdFTzlHbm9zd2RQNE84V0ZIQ3ZsUU1WaFEzc0lhSVBXYjJhTU9qWnhMbWozSnQwZk5Vc2F3aENSYm5EZzhQMEdMdnhkZ2VvdWdnRmFheDB1RGFOblVSVk9LejZnaDd2VW5OanhKaENsRjZBY0hVWmpXalZ1bTNRcFRtX09PQnZ0SWg0eTJ2dVJWcGVkZThiWU9lcGRSWXVDbW8xT1dPemU2Zz09
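For the curious, a generic reconstruction of the kind of pipeline described (n-gram features feeding a linear-kernel SVM) might look like the sketch below; the paper's actual features, preprocessing, and data differ, and the two documents here are made-up placeholders.

```python
# A generic n-gram + linear SVM text classifier, sketching the kind of
# pipeline described above (not the authors' actual code or data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical placeholder cases: 1 = violation found, 0 = no violation.
docs = [
    "applicant detained for years without judicial review",
    "complaint ruled manifestly ill-founded and inadmissible",
]
labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),  # unigrams through trigrams
    LinearSVC(),                          # SVM with a linear kernel
)
model.fit(docs, labels)
print(model.predict(["held without review"]))
```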
I don't know how surprised we should be that ~80% of legal decisions can be explained by a simple machine learning process. The real issue in law is getting the remaining 20% right - the ones which are really difficult cases. | r/aiethics | comment | r/AIethics | 2016-10-28 | Z0FBQUFBQm9IVGJBSWc4cS0tX216aXNMaXdJTXUyYVJvbUh0TWM3NlYzVWxGWTBHX1lGV01qQll2QmgwZlZtZlU5Y1VnUGZKaGNJakw1R3BVNldlMmV0MjdkRmNzRjJxQmc9PQ== | Z0FBQUFBQm9IVGJCTUNWZVg5N0JzbkZ1N2piODJuZHl2eHpld1ZCMUdxTVEtTWdTYndwelhzdGdDR0F0NnpNdlRlaE1SbUZOYUQ4TTJuMzNOaEtQMDkxU3RNdFhoSENKLWpEMzZpVm5YRmhSejNmcHNqWGhkd0F1SUFpdm9BQndIajh4MnA1VjZOcXpwbFNVXy1uZW91Wm14M3hYTHlZeVF3RzBSMEQ1ZkNjam5qajVKV2kzTWhvZFlLZ2dOX3RLYURuVW5DZUVYbXdldWpobjVOQzRBcVNmaEQ3dUYzdlFQZz09 |
> I don't know how surprised we should be that ~80% of legal decisions can be explained by a simple machine learning process.
This I think is the main point of the paper, and what I meant by the innovation being in feature selection rather than prediction: what the article found was not (so much) an accurate predictor or a system that decides cases the same way that humans do, as a list of characteristics of a case that, fed to a relatively simple predictor, decides cases in a way that's something like a human's.
So, what they *haven't* done is said "we've recreated the cognitive system that humans used to make legal decisions", what they *have* done (according to them) is found what it is about at least these cases that influences a human towards one judgment versus another. If that makes sense.
Another way of putting it is that this paper is saying more about what a claimant or interested party in this particular court (and possibly courts in general) could do -- file an *amicus curiae*, structure their arguments in a particular way, etc -- to increase the odds of a particular outcome than it is talking about the possibility of an artificial judge. | r/aiethics | comment | r/AIethics | 2016-10-28 | Z0FBQUFBQm9IVGJBUDcwWGwxSXFNRWgyMzJyWlduS05LSHZXMlNodVU1VGx5UzdETDhBMzdPVWUxdHNUbWZBWExZZ1g4ZUh5aEdVQXltLW1DazlQM1NHMlltNVQ3QWNUbmc9PQ== | Z0FBQUFBQm9IVGJCN1drWVhxZXU4dkFwUGdzRThwNXh4NlZpdlh4ZjZfdnBvMTRlS2oyMzF1ZmJ2OV9Lc3JCbzg4bXNnemdScWNCYUIyN0tUQUlHMWU2TWszalpMSWRjNTI2YUFsQ1ZoY054SFRCUjdYanpjZnNLNV83X0ZGdnhUSDFweXBZNW1tN0lETTdIdkdkQk9SSklEWXBTMGl3U09LMTJ4QTdSaGktdGszUjM4d2xOSzVqekRtQmJQUW5OcUljWDhKTjU1ODBiSlUwNGpzbFE3SElXam05YUpmaDh3Zz09 |