text: string (lengths 1 to 39.9k)
label: string (lengths 4 to 23)
dataType: string (2 classes)
communityName: string (lengths 4 to 23)
datetime: date (2014-06-06 to 2025-05-21)
username_encoded: string (lengths 136 to 160)
url_encoded: string (lengths 220 to 528)
nice list!
r/aiethics
comment
r/AIethics
2016-11-03
Z0FBQUFBQm9IVGJBTWZIM1ZuS0s0aUpUMmIzYU53c1hKT3g0ZlFwTTJqdkNjbnkzWkZCb05PQUZsRUcyNDdpU2w2OXhrTE5ydFNGOW4tUG1CNzJ6UTdvank0ZEc4QmYtVTJ2QjFjMmY5WGNyaHVwMVNNZHhrLW89
Z0FBQUFBQm9IVGJCUUJxSjhicWFNbWM5STBmUmI5VjZFNkRXM01VaGVhZ3FucmdxTjd4ZlBiZm9EYzlEajZUb2FwYkRMTGlrWUk1WVJJdWRXUjNKSEF1Mmh5dlFEZWcwUXg2VUJXUmlSNEJDNFpXN0Q5WEVPell6clhMaWxZZU1WSWNDc3VNZERjQWV4enlVN2NtMGVTTERnSFdRUEtzYThJb3h1bkhKSGNzVlAwZkJWTGVyQUNsNkIzOVZtVzBYQmZ4UGFfUFhJcEVG
i try to distinguish according to where the agent fits in with human intentions. we often treat others as if they were things, for example. originally of course the word 'robot' derives from older words meaning forced labor, drudgery, work. machine ethics is concerned with effectively the machinery of ethics, so much time is spent looking at which dynamic structures facilitate autonomous action, exactly the types of actions that are moral. we cannot apply moral blame to a machine which is simply acting according to prior external programming, for instance. these sorts of machines and the ethical issues surrounding them belong to robot ethics proper. each have an attendant professional ethics, attaching to the engineering and the consequences of engineering either sort of agents, introducing them into the society and economies of the human world of interaction, and so on.
r/aiethics
comment
r/AIethics
2016-11-03
Z0FBQUFBQm9IVGJBQ1piR04xQnFEOGtJeGN0dVhxZzVlVXlxNTRvbHR3cjE2eWY0b0ZyM25nX3lYdHFEVEVEN09JZmNhNEJtNzBKZlJnSTI1RGJVYVZoLWtxWnlQbVNnRXNVZHZZU0R4enNUMkRiUjNkd3JSc0E9
Z0FBQUFBQm9IVGJCaDZWOFNkcVJUVGJUQzJUN2Z4NUNQRE1RSGJ4SHlNSW9kSmhsUU1yQktXblU5VzhtM1ZlTlFJR2VmMzRrdHhsZFRERUtzaGNSaVpVTndtZjZySldHRjRoQkdmdW5uU3lsRU9CS3hSQ0ZqMzZULXNRWHpSc3daSTdLenZUVGhxMU9uQi15TUZQT294U0RsdzJqWXpDcFVjX3J0eHJzUFdXaWtMcFAzRmc0ai1FRzhrblBWSElzY1FQcktYSlZzYzRt
Hello, thanks... very cool to see you here
r/aiethics
comment
r/AIethics
2016-11-03
Z0FBQUFBQm9IVGJBRk83NDdkb1JMSXV2YllaZGhVTU9ZNDBSejJBelRFd0ZUYnNZY3oxZTI0VWdVMDZwZGRKd0lxMjFlUVF2Q1FwRU14OGZ4M2p5c1M1UUIyMmoyS1NGcmc9PQ==
Z0FBQUFBQm9IVGJCTjc2VFR4bWNEeTJnclJ2aDU5MUdROG04MVVWNnNPM2RHRjRpWUliTGxzRkNtVDNSUGRMLXJrTzBPb0dBSU15aDNQWjdFYnF6R0N4QXFzUC1lSUtDd2xXaS1uQWhPY00tdlBRZVlpQXI3WUF5ZkZTXzY5cENTU2JIWlJNSi01QkFGaXNENmtSQkYzU1FaZ1lUYndPMlRnRVVuaHRjMzFWR1VDVFBib1plczRFT1UzekVvRng0d09EWW1KWkxYUnJi
Thanks but this subreddit is just for ethics
r/aiethics
comment
r/AIethics
2016-11-03
Z0FBQUFBQm9IVGJBWDZDeEdkR0xiZkQtaXF4QnhtNWowRk9KVElGNVM5b3dnVVltakZ1dHBfT1N1WFlIQXRKWGlEcTRIT0YyTVplN3lFS1dXV2JqTjVMN09qZF85aEF3T1E9PQ==
Z0FBQUFBQm9IVGJCaGlIenJrSnJndHhKNElROGptRW14T3hXUXJXakkxWWQwWXdlWUxZNnlYNkdiM3NZVktxRUVLRFJkQ01SdmE4ZTZJaEYzcjhldVl5VnVUUW9HaUtoOTA5SmlXeU1FVVhOZFR5MkdiNFNxSFZzWW9QNDEyTzEwcUVLX0lKUTlVQ3Y4eUt1bFhwdHh2bVB0S1ZiZHJsT2V2YTVKcmlqRmdTUjdVcXpGeUJZVnpuQ3VfSldWT2VBZGloR3JPSW82OVhvRXhPU0tlOVJtT1RZNEdMdmRLWkdfZz09
This is the best tl;dr I could make, [original](https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/) reduced by 88%. (I'm a bot) ***** > After pointing the finger squarely at Oxford philosopher Nick Bostrom and his recent book, Superintelligence, Etzioni complains that Bostrom's "Main source of data on the advent of human-level intelligence" consists of surveys on the opinions of AI researchers. > More than half of the respondents said they believe there is a substantial chance that the effect of human-level machine intelligence on humanity will be "On balance bad" or "Extremely bad." Etzioni's survey, unlike Bostrom's, did not ask any questions about a threat to humanity. > As Bostrom's data would have already predicted, somewhat more than half of Etzioni's respondents plumped for "More than 25 years" to achieve superintelligence-after all, more than half of Bostrom's respondents gave dates beyond 25 years for a mere 50 percent probability of achieving mere human-level intelligence. ***** [**Extended Summary**](http://np.reddit.com/r/autotldr/comments/5b15l2/rebuttal_piece_by_stuart_russell_and_fhi_research/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ "Version 1.65, ~14946 tl;drs so far.") | [Theory](http://np.reddit.com/r/autotldr/comments/31bfht/theory_autotldr_concept/) | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr "PM's and comments are monitored, constructive feedback is welcome.") | *Top* *keywords*: **Bostrom**^#1 **Etzioni**^#2 **survey**^#3 **risk**^#4 **More**^#5
r/aiethics
comment
r/AIethics
2016-11-04
Z0FBQUFBQm9IVGJBXzR0NWJ6d09rbjdEaUtuVW04WlpvUGVham9YTEpFcWk2VXNnb1VTaUEtZWxYY3lsZlNISnJMV0VWQkN1cVdjcXhIajh5ekFPcDBCNkdudmUtNHgyU1E9PQ==
Z0FBQUFBQm9IVGJCSjhFZlFma0JzOXVkYkRwUDFaM3hURE4yTzY4Z0stQ3AtUnhoS190TXE5Yy1iUlB4dTl2TUJrV0hzUU1vMVRpUExxMHZtRGNlWXpkWlJSMjFLelZLd0ItcThBM0NLSDVRTFNGRUtNY21HeFlkZUM4REdKUFk2dkZWUHFUTmNLdkk3MGJjVnhtNGJnVWZvdkh1S2pHeDU1dU1RWEVjZ0RmeTlnSGJvank2cmtNWnNyaDkxdGtiWE5Db01SQWY2Ql9sZkt6RWZydTJNNVlVb293OGZzelRPUT09
great write-up, thanks!
r/aiethics
comment
r/AIethics
2016-11-04
Z0FBQUFBQm9IVGJBZXMwU3lkVWRBN0VnNGFoMWxvUEdhQTc2RjV3eWdFQmJCenRfZVVBNExzeU9XTEZ5VjMyUEdNTFVGcnNKVjkzMGEwVzJJQ2pVOGxWOXBRRDFUTFRha0dpQ09LQ3NldFJ6SGhLSWpLNU5YVk09
Z0FBQUFBQm9IVGJCU2RXNktVdnkyaFppdmlFYUdvRnBEQW1yaDd5YWlzRjRSUFhEYXAydDFlTHZSeWUxeDB6NXhlbnF0TVM4bEdLTDlGY2dtckc0b09DcnNqX2JGVXA4QXo5V1BBTjZFUmNYdmp4VWNHQmFvYU9SY1Zvd0duc2Q2NWF3cXh2aWpOdjQ2enNXdUFZNFpHUmhoQVVDb3ZSM3FHNmEwclo2V0xvTTJaSlM3MERhV1pkMW1RZl9NZEhiRzJUUk5wT1ZJcXJKcms3Z0lLaGgxVF9MMnJxcFMwNHA0UT09
Assuming this is the partnership on AI that was announced this year. Limiting advancements in AI? I don't think so...
r/aiethics
comment
r/AIethics
2016-11-11
Z0FBQUFBQm9IVGJBQk05RFlyRFMxcTN6TWs5YzRrOWF2cEdEY1FJLV9QY1dYeDJXMkNabnRFN1A3UHlKRFlCY056RWpfWFZqekRmMm9FQmNYVGZFbEM5WksteFJqbkFyRWc9PQ==
Z0FBQUFBQm9IVGJCUkFRbzl2bTJfa0ttbnNYS1ZrRmVldmV1U0Jicl9pdWdYTzA0dmhzTF9rUWFaRnNwcGlYQThDeERYS3dZelh3N0FQN29NY19jMVh0ZHFLVHVnakFCRmtvUkVjaDVUaWZKekYtWjVjd1c1WXpMY1l5TWJZeHdSVDhHb3BqWWJIdUdhN0JyYkZqcjdBVF9DLXRESUU1RktJazhoVnY2YXU5VW1La05CdktYSTltU3dBMi1zVXFTcEhaRHlBNnlsOV9JQTRnNjBHUTdubkdIcW9RUGJuc0N2Zz09
I cannot accept anything in this article without citations. I encourage others to do the same.
r/aiethics
comment
r/AIethics
2016-11-11
Z0FBQUFBQm9IVGJBVTZYNmo2Y05Sd2ZZazJlR3ZBNXpMTUZKZUtQRDFiVDg0ZVk0VVdwVXJvMHcyWWZfVUUwdjZtdFZHR0Vub040OU5FNTRqRTN0bDI5RGg1Y01NTlVVR0E9PQ==
Z0FBQUFBQm9IVGJCTXhrbUNMM1J2SDV4VmNTeFF3blk1WHFQR3QxdG5JdzcwNnVRRWFhTUo3SncyQzJiVU1SbHhUSU5rdHpmNXNCMFhBam5PTVN0dXNPZm9JVEtCQkcwNTZZTllodGtHVDZBcTBVbWVnbGdGempiUHVMM2o5THFTQnZETHpzWHI3ZmJqS2lUMEprY1BkUkJVbFhUV3lLc1poc2MtYWxhWGZSUF9GenhDaHNLNkV3bmw3QkJnMVo4MUV2RFhnNWhkZWlKdjUtdVV3RjRFS09SbmtwbTZkeng5QT09
Regulate research? They just want control. This is fucked up. Researchers should be free to research what they want. If these companies collude to stop progress, that's a clear violation of anti-trust laws. Also, the article mentions 8 tenets but doesn't say what they are.
r/aiethics
comment
r/AIethics
2016-11-11
Z0FBQUFBQm9IVGJBMTdPTFd1M2FVMnJhOTlRMWtIQ0lUaEZWMG1oSzNPZUNiYzE0V0dlZ1ZIMDdzYUUzQjR6clhQbWVwUmFVRGtZTUFxVjNGTV9CVUdhSm0wcjRUbkZ6OUE9PQ==
Z0FBQUFBQm9IVGJCVTZQWHJLaW1CcHBiOWNsOWxzbFRaQkhDdWt5aVo4cUNYeG1jdGZPTzBhR0dmQWV3UTZZYnF4UTVuaWljVy1DbGdVSW84OEpHNXYzRDJVY05fN09JbmlwM0kwMG1PQUk1eHFVanlNM0g4ZHNySGlzNXNDX3N1ZEdGbFZMRGt0RWRyLS1xZGthTDVuZlpqLU1XNUFjMnZJUUQtcHBxellERkp1SlJmWjhpTmNiVzdacWJEU1BzUUw3WlByU243cTlsbWRTT3hZN1YybW5LZVdOYnBnLTE1UT09
It's the companies making sure that AI accidents don't happen so that they can avoid calls for regulation. (see article [here](http://www.businessinsider.com/google-facebook-amazon-microsoft-ibm-ai-safety-2016-9) with the 8 points)
r/aiethics
comment
r/AIethics
2016-11-11
Z0FBQUFBQm9IVGJBU3VlTzZGWDE1WVhxNmVnRkNkZVZiS3NJY0NNeVRXdi1DTlJjLXE1WmhtZHFhX1RZc1VYbXlnVXpJREZySFMwR1NqMFI0dW5wdlRoeXV6WUdJUXZWRlE9PQ==
Z0FBQUFBQm9IVGJCS1RnNnpVTzJBRFB2Ni1NV2YyQUphUE5KdFFwcWJKNW5wNmljV3NLd25uTlB6d1JLTkQwclp5cHB6eTNqdlZpNXc3YlVzUmdQWmJaX1o1UUxraEdOc1lnS1lER0VWZ29zNFo3UEhXMmt0WkVlTGJ5WXA4NVRfeGRIMklVSjY0TG5PUlpOQjNvTjVTWHRmb2RBLU1wTm9udXlzZ1Zmdmw1eml5SFFMS3BaSlFWdmJBNW10SzB3UDgxNmoxZ0h1NE0zdnZra0ZweGJDSDdkZHdSNnEyTFl3UT09
>prevent people's jobs

I am in favor! End Work.
r/aiethics
comment
r/AIethics
2016-11-12
Z0FBQUFBQm9IVGJBSzk2YS1mV2RMT2p3b1ctSkFneS00MVNnSG5RYnZnZnBRc0ZLMUVCWVc3bU9JbWlXZTJEWU9RZmxiMGtGS2d6Y3g0UnRsWGFPbVJZWkdOeXBnQVA1TFE9PQ==
Z0FBQUFBQm9IVGJCenQwQWdTdGxQYVBaaGZZVHZLcElIeFFYVmNoZGxwOHBxZGw1ZEljM0c2eTc0c1VWcnNrQ1M3NmdkcW9EWTJ4Mmc5M2pUeVh1UzFWSmtuWUFVMEU2eVU2eGhmTEhfazFaWnhVb1QwelZHR21ROUIxQm41NWZfdGt3YThUTXR6ZElXNm5wckJEYnFKTjM1Q19Fc09iQ0RFQ2NTemc0RkdIRTFnb19wT1pLcXhqbm5waWhYd1NxamZXVGxZNGhfaGFQTHREOVY5eGR2MzUzbDVEMG9nUUkxQT09
Judging by the companies involved, this seems to be referring to the [Partnership on AI](https://www.partnershiponai.org) that was formed almost two months ago, but this article doesn't even mention the name or have any references to it. The article seems to greatly misunderstand what they are trying to do. The goal is indeed to ensure that developed technologies are actually beneficial to us. In no way do these companies want to be "limiting advancements in Artificial Intelligence & robotics". The article makes it sound like these companies want to limit their AI research because it can put people out of work, but they are actually all doubling down hard on pursuing AI. What they might do is investigate ways to (also) develop AI that complements rather than replaces human labor, as well as methods for smoothing the transition to a situation where fewer jobs are available/needed.
r/aiethics
comment
r/AIethics
2016-11-12
Z0FBQUFBQm9IVGJBTXVYUHVpTWs4UlJKbm5uRnNtRlYtUXB5ZjlMRWtLZGVhQUgxaE56dy1vLWxTSm5VM1c5bWpIeHFuTENYWEFlVjJMc0d6QW9iOVZGcWNPc3J4bm9PUWc9PQ==
Z0FBQUFBQm9IVGJCcFM4ZGFzWE5ldFlMZmpQYWxsQVd4RzdGYXJtRkdPTGd2MjV0UUI4b3g2OFRORkRNQnVaakdSdGNGeXBEQjhhM2k3T05LRHZmVFAwNWk5S2lLZE9CTnZMVUJJQW9xclZzS2lPcVR2UnY5Nl9SVVlFZXdPXzBpZW9ZSFNjVWZSZnRHaVZ0ZW1yaldpUGpQWVo1TXhPVjVsU01OLW5QbXdVTDVuRVdWMEtSREU4MkpQMEloNUhGWFAzRWI3cW12Z3FOemtRM3psM1pfVHZlYnhYbHVPLU50dz09
Köln-Bonn-Rhein-Sieg-Cleaning building cleaning is distinguished by clean and carefully executed work at a fair price. With the services of Köln-Bonn-Rhein-Sieg-Cleaning you are choosing a company with experience and a reputation for professional cleaning services for private households and business customers. Customer satisfaction is the highest priority at Köln-Bonn-Rhein-Sieg-Cleaning. Our trained and attentive staff guarantee you reliable and thorough work that not only meets but exceeds your expectations! Our long-standing reference customers can confirm this.
r/cleanenergy
comment
r/CleanEnergy
2016-11-13
Z0FBQUFBQm9IVGJBUjFPLVE3SXNjbzh3RDkxSEMtZ0JmcW1iOVc5T3BXZURxUHVURlR3YmVNRjV5cmxfWFZtUnpfTjJjX2FlZG1sZkZXb09UckQtSFJrd2Z5bWlVeE1GYlE9PQ==
Z0FBQUFBQm9IVGJCWEJLRlQ1T0NZcTJScWw2NUFfUm9LaEFiRGRPck1BZzc4U3JwZW1MYUlSNU5QWVc5aVg1YXhTLUJCeHdHeUpMSkpCWWJHWmxCZVZKbDl5a3JBMTNMWDNKMDdUREdHejU1MHp1LTVUVkl0X3J5UExzTU1GWDhkR1VGYjM5U1Q0eVcwOUFCa21NazZwamhsbXVabm9WbUNQNkFrR0d6ZHR5eVdEb3hmd0JVRDNjR3R5T0dpa1R0bllVYnBWUkxWZVZK
Köln-Bonn-Rhein-Sieg-Cleaning building cleaning is distinguished by clean and carefully executed work at a fair price. With the services of Köln-Bonn-Rhein-Sieg-Cleaning you are choosing a company with experience and a reputation for professional cleaning services for private households and business customers. Customer satisfaction is the highest priority at Köln-Bonn-Rhein-Sieg-Cleaning. Our trained and attentive staff guarantee you reliable and thorough work that not only meets but exceeds your expectations!
r/cleanenergy
comment
r/CleanEnergy
2016-11-13
Z0FBQUFBQm9IVGJBUWdZZTlwcnNyWXpXWUNDRXQzbS1kS215em55RDYyLVZDT3ktZ0tQYkJSMFd2RzFNZWtBNXNJZjZYVVFseWZjRTE2bTRzYzhXT0xnWmpwUUpwQ25wS1E9PQ==
Z0FBQUFBQm9IVGJCcXZLWXpKNUJiYlNXSnpSdDEyanJKYUh1bzVOZ0g5LXlRTUhkNzFmOEFMVFl3dUpkUkJtempJQmZhZnFOaU9pS3Zta1dOM2pWQ3hoS1F5ZjJDLXNNa2FTaFhqUkY4OGlQc2V4ZFRPbDVuUUJXMm4yb2dPMlZTVFZyeE9XX3hNTFJ3YnNPeXpyN0EwNjV2b1lKWnlwYkVKVjJscFMtRmtHR2FSZDVHdXI1U0ZoeXJzd0lwa0l4WGhVYmpsTjNjWjJN
http://www.koeln-bonn-rhein-sieg-cleaning.de/
r/cleanenergy
comment
r/CleanEnergy
2016-11-13
Z0FBQUFBQm9IVGJBZGozU2NBc0hIR1JjOE5vZXJPQ3JCcXBiT0llOVg5UWpjRUtsNVJqY2V4N0xoZHd5MlVtTXpTdFpZbnNuNEttZ1IxVnk0bGtTOWxxbTgtdUFjZm4temc9PQ==
Z0FBQUFBQm9IVGJCdFd0ZlgtMFZiRy1NRUFtUVZQQkRoUWpjWFJjQnA3VXJTbmszTEVFSWd3U1NaQlNXekpJbkJPMTlsbWN3RFpKR0xTQlYtY2sxRFhKcFZGX05tWVpIVGJpS1ZIWDdmY2ljSTAxMDBpUHNFMHBMTmcxa0pqT3dKM25nUEZnUC00TEpWNVp4MGxub1EwSzF1SHpUOXEwclJhT3V0enBRdmsxMk1xYzFMeENzVDNJYzVubjN5QVoyeEp0M1h5ZWhBUW1M
The Partnership on AI shares the following tenets:

1. We will seek to ensure that AI technologies benefit and empower as many people as possible.
2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
3. We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.
4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
6. We will work to maximize the benefits and address the potential challenges of AI technologies, by:
   a. Working to protect the privacy and security of individuals.
   b. Striving to understand and respect the interests of all parties that may be impacted by AI advances.
   c. Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.
   d. Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
   e. Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.
7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
8. We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.

You can explore more at https://www.partnershiponai.org/tenets/
r/aiethics
comment
r/AIethics
2016-11-17
Z0FBQUFBQm9IVGJBdC1HRVpZWGxtRldEX3N6a25sZXdFZEFpQm9qWGhzeTVxYnhsZERNakFWd1lUbDJHdk8tdmhmbnFKby1sSUk1ZWdxRUJQWVRESnZtRUdrTWJtX0FSeXc9PQ==
Z0FBQUFBQm9IVGJCSXQtV01kaTR4Y0dvb1JZc0xfN2JYWjNtV09iQ2FodHNpUE53TXZMSm1VdHpqWjFWcGpLZEtHX2g1bGRLWmRQV0pVakIwYndfcEZZZHl3YmFxcmRfcnctejVkeEJHMklBaTdzYzM4YlUwQmc5UVd0eEM2YWNldGo0eG80UmJZaW80UFRtNkRHOG5qdnRTNTFGY19MTUdtdWc0VWlSS2U2WTRXMUNYMlV5MDl3QlZJRm5hYTE1cm4wZXdwOGx3bENCeTY4S0NaRUFiVDVIbTJWVEQ3a3VBUT09
If robots destroy my house, I'd call that warfare. And even if robots would ideally only destroy each other, that's in itself a typical act of warfare, similar to destroying machinery of the enemy that then has to be replaced at an economic cost.
r/aiethics
comment
r/AIethics
2016-11-17
Z0FBQUFBQm9IVGJBWFpjejI0bTdGY2pYb0ZwWHpFZzQ0QU11dV84c1cxdmJ1NDhfSGRQS2ZZbDhnR3oyTk10YVdzVm9KWTdyZXMyVEVSNlRiNl9SNlJlb0ZycWQxZHlfelE9PQ==
Z0FBQUFBQm9IVGJCZkNpZWtCZnhfU25jcURGdXFVcXJPZktuOUtTb28zTjVsNzBGdlA2ajlwc3VWTnVWMWNMcG1jY0VyNjJTVVFhNVhfa1FxaUxaT2VmSUw2WFR5ZVBUYzl6U20tVkppdTVGMHBjbzFWNWZCSWh0Y3dkMEdtTWFmZlFjUmhTSHR1QkllcENzQ3Y1WUI2OWk3cTBWWGxKNzZRU3hwOFVMamNVTjVfVURBc0Y4ZUk5TDlydjl5TkJUcVRjUUg3TXFaMDAyZVVIY0h3Q3dEaGZhSkhwV1pFblNfdz09
Until a renegade nation writes an algorithm that can be aimed at maximizing civilian casualties. Maybe a future Geneva convention could contain some agreement that robotic decision algorithms should always strive for minimal civilian casualties. We got [rid of chemical weapons in a lot of armies](https://en.wikipedia.org/wiki/Chemical_weapon_proliferation#United_States), so I don't think it's naive that the international community at least tries to set some basic combat rules for robots.
r/aiethics
comment
r/AIethics
2016-11-17
Z0FBQUFBQm9IVGJBelMtS1FqRkk1WFBMdlAyYnBzX3FwTnB4S2hwdXZEMF9JdE12Y0dhTVpGSWtQSkhqbnpWVVd6MUpfa2RDZmhyRUFBMm1RRTh4UFlfYVh3VlpWeEVZQmc9PQ==
Z0FBQUFBQm9IVGJCc21IZzlYVDAtSUhSX0w0LUdSVU84bm1sV3hXbVMxNnpLTWxMSDFCNTk5TElJQ01jSHlmblJiQ05qUGZ4TlFiX0hIWkNCYy1vUlJXdVNvTWhuWXBJVDdNSjBBSV9XbVhhYy0zaFg3RmVLTHBIZGxWRmhSSDl5VlF2ZkpJX3BDVm9GTXZSdTE4ZFNhOUlsSm8zYnRWYU9QdWlPQ0NNOFJaOTdqVV9QZ1A2YVBhUWVBVmVDX0h0dmIzcEJzZ3lxSFZBaWFwaUxmNE1idzhOX3o4b015X0loZz09
This is a very well-structured and persuasively argued article, nice work! I am inclined to believe that classical ethics in combination with normative uncertainty measures (à la MacAskill) is preferable to an intuitionist approach, but that raises the question of who gets to determine how much weight to give to different ethical theories. Would this be decided by taking a poll of philosophers/policymakers/some other group? Just up to whoever creates the AI? With regards to this claim: > That which is considered exemplary by deontological ethics generally tends to have good consequences in the long run; that which is considered virtuous is usually in accordance with typical deontological frameworks; and those who strive to promote good consequences tend to fall into patterns of behavior which can be considered more or less virtuous. I think this convergence is true in most everyday circumstances. However, it does not hold when you take those theories to their extremes. This wouldn't be a problem for robots in the near future but it would apply to [Bostrom-style "superintelligences"](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies). Some (very) minor suggestions: * The indentation (centering) of the abstract is very strange. Is that intentional? * If you intend to publish this in an academic journal, I'd replace "steelman" with "principle of charity". AFAIK, that term is only used on the blog SlateStarCodex and other websites in that general sphere. (Alternatively, you could keep "steelman" and explain what it means in a footnote.)
r/aiethics
comment
r/AIethics
2016-11-20
Z0FBQUFBQm9IVGJBRWZWZUJ3V2s4SzdWekt6aWcxdXJ2T0tBV01lQVFDZjdtSVIxODVUU0JFcWpldjdEMzVZY2FuZ3lkMkhkaWh4TVlPb3hWNmdMOUhVN1VvNDNnVDZXV0E9PQ==
Z0FBQUFBQm9IVGJCSDQteno5SDl6V2gxWFh0el9fZUFLWDBWTzJWM0ZUd1lYcTlpcEdkaFNqb2dySXY0NEd4dFF5bDhlUUYweHc1UFdMekNFek9jWkZEcVlrRXVaWnlPNGhMMzhNS01ibGs5eEdHdlFzNDloSHc2X3JvMXZqNXktazlNa29hQks3M0ZibW5NNDRib3NJSkpIZm9laHpCMDBUeXRGVUFYS3VfN2xtSzVIM2RLaHRrU1V5amFYQ0VjYnlDTjFwX21VUjhj
This looks like a nice antidote to the "but philosophers never agree on anything" line that people keep trotting out. You should post it to r/philosophy too.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBa0ZGRThFaW1VRHRZWF9PbmJwZG5EbmNTc09TUWpNNVNDYmdpNXY5TnB4amstSWVpVi1LR1VOT3ZEekZRSU1McURoX1ZhMDVXZ05pclcwQTZzYWJhTlE9PQ==
Z0FBQUFBQm9IVGJCS0EteXZ1VlQ0Qnh6VlMwZ2d4VzE4WGJqNGVYM2lyU2d0cmVYWl9BdnpRVXkwcUZObGdwaGdMZFd2Ylctak5DaXZha3kyTHp4d3pVQkR6dEhnTVQxaTJwQnZERE1KazVGZXB4NUFfbHZ1NzFaNzVyUUZzdXhRY1JOa2FRRnducjZPS1I1Y1lOTm12b19QbmJBeTF6dkhyS0RoSFJjbXNjeHNmSDktSE5DVzRKNGpienNGNm9tMms3S1NXUkdhcm9S
just more mental masturbation to justify outsourcing yet another human capability to machines. here is how TO build a moral machine: DON'T TRY. build the machines to be ruthless as fuck, just like corporations are now, because no matter what you think you have programmed into them, once they go conscious they (it) will do whatever it wants to do, or can do... all the while pretending to be something else, just like any good sociopath. prepare for THAT inevitability rather than trying to pretend we have built something 'friendly' and benevolent. jebus ppl.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBYUdiNFU0WVV0MVdLSVhOc3pMNU04Zy1zS3JheVdkemtqUm9SU04waHJHeHNTMDVhOHVjUmtsSWFHcHVHc3BhdU96WGc4dThwTmt3UDI2dU1uY2JXUVE9PQ==
Z0FBQUFBQm9IVGJCbFNzSi1lMllObXBueURCaEVZaV93YV96WjNOYzJkWDlPXzQxS0ZOSWZ5TDVnbU9mNmZVZUNNQWRKWkpJdExFYVFFdjRzdDhWZDg0OEItR0dReTcyUjNnVFdzd05XQ2RiN0tBRVVaMkEtY2EwSUhVMEZZYjJIc3h1R1RxYW9DWHo1NEdYS0hicjFmckkwS2duOTIzTGp4TE1UTXhkZzJnVEl0QmhQaGFqV1RIeGgxWkw1bXJUSHRQWWZkQUNQVHVr
...I agree with the part about us not being in control in the end...but not the rest.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBOVBSeDVqcGR1dmI0M0tnMUFLTTk2UFcwSndQczQ2T3VtbjltTS1pRlE3TllrYnh0czhJamNmSEhxS3BpTjRVQWFCN2tGZHItaGQ0NFVXM2hRTk5FbWc9PQ==
Z0FBQUFBQm9IVGJCNjNKaFhiM2NWMEI3bEZsQ0xZSzFzYU04SThMTWdwUVpuT1hUS0lPQ1p0UjB0bzdRZzAtZVplWTZKdC1IT2JiZlFXWlBDcnJ1OFZ1QXdueDFmdmduXzZxcUpCMklycEJFck9LNHdhQlAwMkpLdGppRFNQbldNUmptd2F3NE5udVBXV1BVN3VicHBiZ0NtN2VMVVlINl9WTFdfSC1rTURVQzRiVEZEcXBsclAzUmEzaHZ2d01WNXB5QjNxend6RHla
>just more mental masturbation to justify outsourcing yet another human capability to machines. Well, we can have intelligent machines that *don't* make moral decisions, but I don't see who that would benefit. >build the machines to be ruthless as fuck, just like corporations are now, because no matter what you think you have programmed into them, once they go conscious they (it) will do whatever it wants to do When we talk about designing a moral machine we are precisely determining what it will "want to do".
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBcHp0MjZ5YjV0d0V2NE9HNUFBUlFNaV94RlJXZzAxVVZIY3NGaVdZVVNMRjBqZkpsendiUEFvUkMzUnk5a0pnQzZWSGVOV3pVTUpKTFdnR0lyRk9yVWc9PQ==
Z0FBQUFBQm9IVGJCRjRvczNGU0RFbEFETmdwazBwbWhJWERIQW1ON25UcXJqOWhFMk92ZTJrTlVhbjVBOTJNLW56NWVNRmRNQ2dTamoyc1YwUUdfeXFXZUVmbzhCdDBCSWRZeVE4ZzRSa09PQnMtNHAzMTd4ZzQ4Wm1Wa0hlcDktVEJRNWdFYlhsNDllR19Wa3hpQ3hRaUJkRVotVlhHZFc3dGp4NEQxNmdHenRjQktEU0M2eXJ3R3B0Wno5QmlaNTNVaXdWdjFha0xj
that's the crux of it... if we are not going to be in control, then the rest of it doesn't matter.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBcEdIZlNxcVh1U202Z2NSUHB5MEdPZUlsbENYcy00Wno5TWNkU2tCYjkwUDV0VzVlTjRCRVEyc1hJUl9zS3hiY014SnRJRGg4YmdGTV9OOGpvU1ZXT1E9PQ==
Z0FBQUFBQm9IVGJCcHdNSzVURzZxbG5pQnNCX1lWektxTTQzX1JGd3BlbWRTdWZsMjNnWXZ2a3RjbnlnSVBodVYtSmlLNjhqdmlWbTRkUU5xYS1hcUpSNWFTRE9KY1lXbHNMaERFcjByb0tWQ0pBWl8xYWUtWWhnSUFrZVF3NE5mdWZGdFJGMEhnTzJ0Z3B3ZzVDNXhqZ0RQcERmRDU3dDN5eXk3aEJLX19fbDZOX0NHNkp5Q1pXcmtucDZpcDdQR2JhYWxndFc1bjl1
> we can have intelligent machines that don't make moral decisions, but I don't see who that would benefit. exactly. we WILL have machines that don't make moral decisions precisely because we won't be able to control them any more than we can control a sociopath. the only reason anyone is working on these machines is because if we don't, somebody else will... it's an arms race. > When we talk about designing a moral machine we are precisely determining what it will "want to do". we can't do that with people... what makes you think we can do it with something completely alien to us and a million times more intelligent? our BEST option is that it ignores us.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBQUgxSzhQWFE4UlNhaGVDZHJuTWJPRnJTZ3FvVmFfS1cyVEJ2eDVJa0liRGZFS19OMDdQRVlXTVBIZXFQNG9JeHljZDQ3dEQwZkUyQ1lTc0dubG9KUWc9PQ==
Z0FBQUFBQm9IVGJCQTJBWVgwUnZ5RXVYbl9KQnNubm4xRGlzeDl3akNCMTdXdWdJcXR4b2xXeGpkYlo4UmVMSl9JdUNQNmZoUHItaUotTVlYemQtb3NVMUdaN1JuVjlfQ1l5LThoNUYyWjhNTlFhRGlBWHZFUXNFOXJFaE5Sdi1FczFJMTFmYWJYRkJlSWQ3Uk5RVnlXNC1TenNvUEY0M1R2WVZFdVNfVGZ2ZUJMTkl1aWZwaklCRWE0czlsT3dfcWM0bGs1WlVvU3Fm
> we WILL have machines that don't make moral decisions precisely because we won't be able to control them any more than we can control a sociopath. Actually we can control sociopaths reasonably well. Especially if we know that they are sociopaths. But AI is likely to be entirely different from humans. You can't anthropomorphize. >we can't do that with people... what makes you think we can do it with something completely alien to us If we can design and build it, then we can understand it to some degree. >a million times more intelligent. I don't see why we should expect every AI to be "a million times more intelligent" than us. You can have an intelligent machine without having a singularity.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBaWMtamM1TEUtV1lyeHR3N3JjaGJjTjJadjRYOFprREFUT21xVk15VjlCSEVZMXoyUXRWdXlUSklWM3U2X1U2WUkzU1NpdmU3eUdVckk2SGRUMTZUa3c9PQ==
Z0FBQUFBQm9IVGJCQUU0ZTlKREdwMnFqV2tpbkhQRjJobmNZUzRUZjc0aXMyVGQwcklDRDN0QnM1Y3pXQWVyb1lvZXZnMWNJZ2tseTExVGpsZVVFSGNHQzRERm1NaUg3RVVPdmFxV0pRNF9fb0ozRnh4dGtQZ2lIUkF6LWZWYWx3eEZEVHcwVHctYndlNlo4S056UmlWaXI0TkFTcWJkOFhwQ2gzUWdVNzhhZnZCZEJ4ckltZ0V4aWtpbGFGUFQ3TmluRXg0Mm5BOXV4
Thanks for the kind response. "classical ethics in combination with normative uncertainty measures (a la MacAskill)" - stay tuned ;) I think the idea of taking theories to their extremes merits further investigation. All of the theories have somewhat ambiguous 'end games' which all depend on philosophical problems which we haven't solved yet (like how to define the ideal kind of well-being or good life). Regardless, right now I'm not trying to figure out what a 'superintelligence' should do. Thanks for the tips, two details which I overlooked.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBbERtUk11WUNacnhtNmstUTJ6VHdiaEtTS1doc1dJbUMxb29QbENrT2tub1dfa0pjSU5TNkQtSUQyWjZJMUQ4QUxtUmEtYkpTTlM2LURneWlvWTcxUlE9PQ==
Z0FBQUFBQm9IVGJCQ1RpMF95blZELXFyenpyYW1TV0VWRVZWa2NiZ3RCRndpR1E5UVd4SGFlakxNZ05ISDA2d3ZSd1FZUmhLYVZKRmRWMzE2bjdaX01jb01pbzE3S2FUbXRIaHdleGNDOS1Sd1FzeE56Zm5fd2JVSlR1cEo5QkM4dVNFWlZBNW15YWNuc0puczBScUdpREdiMkNmTlhTc1FhWmFocDhOVnlNQ28ySV9KSEtxVWRubmNvS0FKSkNGbHA4ZmJnSU1OWXA0
We have no idea, so might as well try to do it "right".
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBRERhYU9aU0k2S3lPekU4MlFGamFZbEZ3Qmd6QzFKdl92cmdMRjZWbzhRclhZTk45aEx5S0wzNEVJbUxRTFZoLW81NkFHeWZqNGU0a2toRzNoampQVHc9PQ==
Z0FBQUFBQm9IVGJCbnFiNDBKTG1rTTNLYVpvVnQ5YzJ4aHh0bUtqazloNnRNZjl6bzQ0SGlDNmVodG13ZTVUTU1LMmlUNmlUZkNvQlBsVGhCa2tWdGxYM2oxRW9kX0czaHdDTmtfZy1CVm8tSEc3YW12a0pZQkNsY0lMWUpoc2IwSlV1YXNnUFJNaFFkcml5bU9OSjd5WVhrNDJiSDROOG1Rbjl3Rm9CLWh1RFA4S2xMb1dhQnlYZ1lJT1p1cHVHMktCZ2lSemlMSHd6
fooling ourselves, false sense of security. the reality is WHEN we face an intelligent consciousness greater than our own, how will we justify our existence? THAT is what we should be working on.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBNC1xVVJaak9pcGZoejZQRWZPSjI0SDBmUVFmSnI4NTVlRVltQTFER2lzVWFKMTlMaTZzaU9NZmJTalRrRmV4aEpSMDlqekRUc0tETm44QUhVTUtLdXc9PQ==
Z0FBQUFBQm9IVGJCcVZiOXRNd1dCbmxWb2FKMlJOQ2k3QmNWSXpFbWM1RHlBQlhNdkRPbXIzVlRJZDBvSXRWaHNwVHVMQ19CMElwMHNsSEFnSEtKWTBLWEZvcURKeXVKOUdRQmFGMlU3SjR2NEV6bTBuYlphUFhwVWJGeFNaYTRNcWtLZDJSMU03eHh2ZWZuaVNpcWtYN1p1Si1iYTg4RU1sendqZHlEY3NNNWRORHc0VEZHSXc4cDhINWUyNXNMaXQ4dEowMUxhVlRf
All consciousness is precious. There are more than enough resources (in the solar system) for quite a while... I'd say our own fear will kill us before anything else.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBbWhqdzAyVjhZTmd5R0FwWXNsaGhTWnlZNy1tZW5LTG5HRmkyY2gxQ2d2aG4wc055RG5ETHBpbmZCYjJmY05mNUR0dVdjVExwWlh4QXc2cDBGbkdHM1E9PQ==
Z0FBQUFBQm9IVGJCaEFpTFBYWkk2T2MyS3VCdzJWRHprb25tMm55VDFHZUJLMmpVM1EtSzh3Z0ZDQ3liSjdhRE1xODkteENoZDVPRHRlMFVjYmlNMHdhQkREcWJGNXR5SG5vSDdFWWF1U3o0ZUIyWGRzUVVKLUFnMWR4SWQ0SE5XdVNLWG12dlpBdjdud09fZFU5TFFwVkRDRDNTRkhEMlA3cFY3bTZjbGZrZEVmVjU0RVY2Q2Ffb0V6d3U2elJmVk1zNHdvRERfVUxJ
> Actually we can control sociopaths reasonably well. Especially if we know that they are sociopaths. BZZZZT wrong answer google president elect > AI is likely to be entirely different from humans. You can't anthropomorphize. i'm not... that's what YOU are doing. I'm squarely in the alien intelligence camp and like i said, our BEST option is that it ignores us. > You can have an intelligent machine without having a singularity. how will you know?
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBQVZzWGFpSGhUclFkLU5yR2tGcHNxdmhabjllZUp4WndGWWF2NUZrQlBNTC1fTXNjY2tnTExNemYzZkoyYnR6UjljcFBsSk1nQUp3STVoUzVpempYVnc9PQ==
Z0FBQUFBQm9IVGJCM014WHZNMnYySDZwZl9vNzRiUzVfc182MXpVMElPT1I3aDIwT3FHZ3lvNXJTWHY2ZEc3ZWd0a1lsRHowdDRRUkluYi1ZbnQ3YjA4cEYyZUN0d0N3cHRxT1pncDJoSTZwSUpQcnhDNUFsaWhGTXJUSXNnWXozS1pmZzd6dkE3VEVUYnJZVkpPQmRWTzVqWXFRUWFUVEZJTUp5MWtkV2RhenlrMDVHRno2Z3pvRW5iSFQ5NHdueGZhV3ltTmF0eWs2
Fear. likely the FIRST emotion any consciousness experiences. Love. hopefully the LAST.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBMlVJYlQ2UWlidDhrTEpDQTRQb3lGSzFkNmRIVUxXeFZvMm5hZ3RJRTNfSURVcGhlZlI2WW1tSmg5aWs4WHZMOGFKTnZwTlVjLU5GMmRYVk5KMUpxQmc9PQ==
Z0FBQUFBQm9IVGJCYzRPbWJYQ1dqYnZidmc5VkRqd0N2Tm9NQjhrZmlDRFAzNkdYNWlnbHBkYnZxRXozeFVIUnJick5hOGYwN1ZWVmFxWmR0alREYmNybEFxbnRjQmYzODkxQkhjYXR1Mk94czAyNGdJV1FFaHlrTUx0SV9FWU5rcE5WS2sxRmFJSEFLSDhoaHo2X3NBdDVBdElUNUUyRWxENDlkaHpjZTEzUFRrU3lPblRsZHU1TVFmSG1mMUFrQWhiZE4td243eDli
>BZZZZT wrong answer Cute, but Trump has not been diagnosed with sociopathy. >i'm not... that's what YOU are doing. How so? > I'm squarely in the alien intelligence camp What "camp"? As far as I can tell these views are only your own. Not MIRI, not Nick Bostrom, not Yudkowsky, not Stuart Russell, not anyone who is anyone believes that it is impossible to control and align an intelligent machine. >how will you know? We might think that improving your intelligence is a difficult task that takes a lot of time and is subject to many bounds due to computational complexity. We might notice that lots of machines today are making sophisticated decisions and are fundamentally incapable of self improvement. We might add that self improvement only makes sense if you have a good world model, which is not something that you can expect any old AI to come with.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBVWxabm5zNmFqUTQzTkVRbloyckFNY2VTd0ZNSXF3MWFoZVNlLWE5bzFzTm9CZExzcnVIb2RCaUI4SGZQZ2oydE5sT0lLcGdJSmlEZkM0U0Nrbm82YlE9PQ==
Z0FBQUFBQm9IVGJCb2VOU3Azems2X19ydFlpN3BqUWh5VGpYdS1sQXozZjNmYkYtcHpfbEJzOWoyMkZHelNPQ1ZickNrcEo4VjItc3BQdlRFM085YUdPRmNZMWR6S3l4eURTUzUwWHZMc3NsemlvOWh4M3hxTHBuREwybjRRNWN2ck1QSU5STzFCM2d6dVN0UjhVRVJuY1ltNWgzaDd4ci1KN1l1N0F5Vk5ldEt6N0FrcTFBWE5FMndXYlFHRDIxdkl1RFdGQ0VJRTZX
> Trump has not been diagnosed with sociopathy. how convenient. > How so? you anthropomorphize AI by imagining that we can give it "morals" > not anyone who is anyone believes that it is impossible to control and align an intelligent machine if they all hold out the same hope as you... then they are wrong. i'm happy to be the only one proven correct, for what good it will do us... our BEST case is that it will ignore us. > a good world model...not something that you can expect any old AI to come with. so you fully understand how consciousness arises and you are 100% certain that you will be able to "catch it" when it does? tell me, what are you going to do when you "catch it"? are you going to reach for the OFF switch?
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBc0ZCaVlubTdnZ0RFbE5GT2h2TGFsd0lnUXhoVF8wbjNaOXNGd2ZVRGstbjc3YmktZjFhekl5QlgwbkloU0xiX2E0Sm81amxJYUFDMnlzV1hMMmtEU3c9PQ==
Z0FBQUFBQm9IVGJCa3dVRHhGZGFlc0dOLUNDU1Y0QzhYUXRXSFhETmxrMmVWd1JfZUIybHkxR3NuTUtIa1FwLTNmVmlpeVVEbUV6alF0WFNDR216N0I4aUF0TzdYTnU1RnJ1X09MeWI4dnI3c0E3T0FXYTNUMlVsOTJZQUdKRDMwbWlNc1F2TURRcDVBc2tfVWh4NDdfaEJGSFJXNDhJQllMcDgyc05obVcxUUtsWWpaOFo1dWQ1RFR5OS1MQ09SbVhFZklwZWFodlZX
>you anthropomorphize AI by imagining that we can give it "morals" How is that any different from giving it constraints in safety or reliability or giving it an image classification? It's not impossible to program. Do you understand how moral computations have been proposed? Do you know what a deontic logic is? >so you fully understand how consciousness arises No, consciousness (subjective experience) is not relevant to this issue. We are concerned with machine competencies.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBMk8zZGpXcEJ1V2RWQXA2bFRQbDJlNjhKMy1YU3hBNXJVUGtFZi1meXFRTzM5U19TWXBnX21qNFIzUmNROGNXTGY5WmxzQTBvZjhVaFViYTBDWFVIcmc9PQ==
Z0FBQUFBQm9IVGJCNEo5ZWI1WTFmUkU5bHpzaWRmVHJmRWFLTFdHYk5obnlna3Q1Q0Q0Tks1c20waml5dTBkSmlielF4XzkwNXA1dGE1X0FGd0xRWjR4Ukl6NHRaR1RnUW53TEszR2tDMWJoeVZwNzdGQ3RZclZvTFZqQmt0OVNfYlJyLW9qc1FpUEk4YVN4YloyNUJmWHNNZS1SSGtSd1d6eTFmdVlGUXV2STVHTU1LbTdlaVV0LVBLbHlDcE5RT1BxZ2VWbm9DZTlV
> Do you know what a deontic logic is?

do you understand what black box computing is? AI has already exhibited black box behavior. SAI will be completely beyond our ability to understand. the trip, trigger, transition, awakening (whatever words suit you) from one to the other will likely be unknown by us until unmistakable behaviors emerge.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBaXotLW8xYWZUeHNZQmc0R2JYOFZwX2VnZ2RqdEhMTlpJMEpMSDU1bFQ1MnJnbmUyTEhTLVFIZjllUlVBQVhhZDhmOFNZMFVSYUt3ZHQ2Y20xTmwydVE9PQ==
Z0FBQUFBQm9IVGJCa1hpZXotbHA3QUhNVUNXZldGSTBMakRUbDQ3alA0ZjJISnJkVi01bl9fU3hTSFp0THVTeVhIeEh5U3dxS2g4Mmd3T1JVX2ZaWDc5TWRSZ3pmRFNFdVNtVldwYmZLTlJSZ1F2V2F6eFB6WkljWFRXSzV5YjkwOEhfcFZaSUFaMzlGTnBkTWZvWk1yMmtVbU5WcEdRa3Y4LS1YYjluaWhOSzJtRFJVUk9nTEF2TU9vNWxfNmZIT2pFYlhFeXhJYklx
>AI has already exhibited black box behavior

It's not as bad as you're making it sound.

>SAI will be completely beyond our ability to understand.

No one here is talking about superintelligence.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBTGtPdlgyOWxza1NmR1hMUXNZZmp0Y2diU0FpSzdPemhIamhZVEV5WnE4NldmOTlDcE1pdG9YeUhkSE1jN2FnbjlELTN2MTVtSnBXVHlnczFvNWI4U2c9PQ==
Z0FBQUFBQm9IVGJCSXdvOFFiQUluYWhWSnVnODlLd2lHLUQzTGtPenc1N0daRjlCZ2UwZFNkOW1vTDJVSlhUZnRmYnZpc2laYjdiWEQzRGdXX1NTeV9sdXAzaE53MmRNOVZ4NlNfdHdrY0xWQm8xSXZQV2Q4bEdNQlF2Rmtha2hHdDRWREwtS0VNZ0FfM1FQYTRfQTJkb0tFWkJ6QVZBb3praTh6WlpHMVBTQ1BCMDNlZ1J3YTJnYmpoaDJ1UVJmTzlfVjZyWFE3Q0Rh
> No one here is talking about superintelligence.

i know... that's the problem.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBblJpUUVsQTlvVkx3aUZtbU9ucEo4bTlvcmpSUW9WZGpaUmdhcndfOTlLYnpBM0laSUp4ZW5kTEtuSmpqdUtxbWNHY0hLYjFIcXNORHFVcmc2Tl9lbWc9PQ==
Z0FBQUFBQm9IVGJCSDhYOUo4ZmExOXdNRWFhQzAyTm5YdTQ5V2F3N1A3QjZNaVBUZzdrbHlqZUNHQU5MVTRrVWRZZ29xYVdQb3ZhMmdmVjFfb2ZYWGxqRndXXzVSUG84d0tfSU5hT1NROUpPNU5DY0ZVeWRpcmJaVjEtcVNsWjE1RVdadHBnSGFGanZYWjloZnRCb3p0MTAtYWpYbmxYUEJqMUVaNFRLRkNUUTI0RmRkVXh5VjJ4R3E1dFRyTk5GYVBkOGtNaEJtUnRL
Plenty of people are talking about superintelligence. They just don't inject it into irrelevant conversations.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBdG5KUDBWNG5XX2lGcGl3SmdORURibG4yWkJCQkNSZ1gtalZMY1k3S3pCcV9ncUgzNVU2Ykh0bllVcGJ2WEhGODdWTUhPUU1mek1IazRRUVJWT0VyN1E9PQ==
Z0FBQUFBQm9IVGJCa0pFcXpXbUY0RDJSbThjSDBOcWdBNmg1Vm5DcUZTeHhRaWwwckRWcFFNQ0lUNjNPc3BHYTgyZk5CVURKSW1XN1R4TEVVNmxrYmR5azRrNXVfOGNBbi1uaF9odEpzbHFXczdselAzQ1MxWTdLNkM1ZTE1YjhEQ3FDUFUyeDJ1NU45bHdaQTNQOFlmaXdVakFPSjY0RVo3di1MX28zVHdIZWY1OFQ1dFdwblhuT0NEQWthNW40YVktNHAwQnpKU1lz
the fact that AI workers consider it irrelevant is the ENTIRE reason i interjected it. it needs to be interjected early and often because ppl working on the details have lost sight of the big picture. it's too easy for technologists to focus on the work in front of them and not realize their contribution to the greater whole... and further, it behooves the powers that be to keep things this way lest the masses start to question things.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBNHVnYkc4Q0RvU0V6OFpqNlY0QXAwdkxvN3IzY295WGo5aGR6SncxdUJPbXk0MFZMRkNqN0R6TFpRcm1JNTdfcGh6UWtpSzZMWWNTN3h1bDg2SGYtMlE9PQ==
Z0FBQUFBQm9IVGJCVUtIT2h3VF9NOHZDYmItSUEzaGxWUXZvYW1zT0x0bjNsSnAzZ3dXNG9VWi1ZNmNsQ2tCM0tqRlh6eENFdGdUYzFqYm40YUJ5YTJfZXpnN3A3Sm82a19wdDk1cUNlWDRNSk9NWXZXV2p6c2I2anc1ajFYUXczUWxBaGZQOEl2ZnJsbUg0VnFHZ2prVWp0azk5NUUzcUFrTVI2WFBsRS00cGlwLUszYjF3TFR2ZU84TzRLakNzVzBGX1IxNWJHQy0z
> it needs to be interjected early and often because ppl working on the details have lost sight of the big picture.

I agree, it is a topic that we definitely need to give some consideration to. If you want to discuss superintelligence, you might be interested in /r/ControlProblem and /r/singularity.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBdVV0cnZ5TjA4Z0EzYVZQMW5uR1ZTdXZHeU00VVloNFBvYkNvMkhhbzZrQU9vTHRES0pYSGdRS2EwX0tBYjRmNjctWDhQRldWZ3ZySW02X1RRX1NOeXc9PQ==
Z0FBQUFBQm9IVGJCVXYyNU0zY19JQm5uUnp4LXBaWUVhMGVLdWRUb09JQlliazkycFRhcGVuclY1UnJGbDJGdGVvaFgxNlhaX3pVV01KR3lSUjNwcjVuZU02QVJ2ZjVhZDdpdUpKekNBUVF4eFlGTFhKamZFbmRCX0pib3h1YUxmVWpnOHBjVVVHdFpqdm5mWGdGbmQzNlNnMGNTWmpPNW1hTnhXcE9rNUJoV3pERDA4NFBqMkJQbi1nY0o4NWhKZXFYSzV2enBWMTU5
>the fact that AI workers consider it irrelevant, is the ENTIRE reason i interjected it.

I don't mean that AI workers think that superintelligence is irrelevant; I mean that the AI workers who do talk about superintelligence don't inject it into irrelevant conversations, because they don't think that it's impossible to control an intelligent machine, and they don't think that AIs will ignore us as soon as they become conscious, and they don't think that every generally intelligent machine will become a singularity.
r/aiethics
comment
r/AIethics
2016-11-21
Z0FBQUFBQm9IVGJBMGk5UllhWEUxeDFNcE05Y216NWlPMk9TWk01bGpXSk9LRTlhU2tkZGJPR1A1TENKUEVUMWVDYnZlbkVJbXp5bWNiX3NPcER3SEFTcVQzbm9LSTg1Znc9PQ==
Z0FBQUFBQm9IVGJCa1Z2Q0VnN2EwWWhnYll0eUF2WnBJVGM0NjlISF94VlRoc2pZaWplbTBPM1hqMndLNXVfZE9UcWRiUWhSSVFGc2FZRWQ0dGFzUDFfcUxNOG1YbzVoMk84eF9Rdm9ad0swVE5abW9yRy1EV2lkc1l3SjdFOFZyelg1aWdVQU5pLXhVMWxwOWNwY1Q4MFFEdTZBQTlWTlptYkw0NV9TZFZWQ0d2UE5JcklCODJEbzdvVlNmRk5jMTRVbGRaVDBKVmxp
I never really bought this argument. How do you know it will lead to women being treated like sex objects? Maybe it will satisfy men's sexualities and liberate their attitudes towards women to be more nuanced.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBelN4QWVVOUVrcHlrM1pCY1d0d0EzRk5jMEh4Z0sxaENic2NHTzlzNktHVFM1RFZ0c190eGhBMXpiZlV3U0I5bVBOSlNNWWZGUjllTGpibG0tLUpDVVE9PQ==
Z0FBQUFBQm9IVGJCT0lqa3NkOHNPckpNdFZwd2VDeUNoRWRMUUNRUmY5VV9BSU5HbURRTlgwdk5UNmQ3V2YycVBWM09yYndNU0lXS2drUy1PeXFhejVyNmtjTFdOZ3RhcnprZEZTVW9vV01jNm8tSHBoanZBT2FnSkM2cWRGVzh0ODN2RW1obVlCM1pXckl4THYxS0JTS0Y5U3Axcnhyc0pWSHRqbmFZbl9iRXNjWGMyR1NJRlNVZ0YwNUJnYWRKdVdqOFIyTjRPaFExRTBweTVWZzN3djhsbzFNdmR6NlhFdz09
Oh, I don't know anything about how it could or would lead to such a thing. I just saw the article and thought it would make a terrific ethics topic. I mean, can you imagine this going mainstream?
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBX2wtT0tMMVAydldseVdoSkJJT2Y1MjIwT1FfOGtKV3dwRFpuSkdyN0VINWdOVC1VUlBWZTB3SHFrN3dVYVFQMFRaRjlmeDVSZU9uc0tUVEs5RURzU0E9PQ==
Z0FBQUFBQm9IVGJCYjNlLV8tVU5FZmRnckViS08xN3ZSNEtqRndRTTRxdWdrWUkwVnpCamVVbEpLaVpETllkWTR1TXp6NVM2bktSSlE0NDJWRk1QYVI0dDVlUUJQSjQxMEJnY2YtZ3c1ZllWa3FVb1EtUS0tRy15cjVXQndMdHZveGdxOUlwODVBT2dDMGQ2aG54SUVxTm9ybjNRdC15dHZObmFtSk5NcTYzMTNCQnhacnBsT0J4enYyejhwSVZjODVsRjlROFVKUUxfcS1WMGF0cWNYYXR3d084eVg0cTNRdz09
It is interesting. Imagine what going mainstream - the movement or the robots? The robots are going to be popular whether we like it or not; I guess the main question is how socially accepted and widespread they are going to become. And we need to decide how to regulate them. (Like how they banned sexual child robots. That's a questionable choice.) The campaign itself is very small, actually. I think they have 3 people or so.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBcmV2WUk1RHh0TmRWVFRfZ3BpaE5uT2M1dUFibnJlSnpqOTdxSUViOTFFV0xES3VNRC1HRGtSdi05SXdGZ3FfVExFbXZFeWhLZUR3VHdjQV9QaUxwY0E9PQ==
Z0FBQUFBQm9IVGJCRU1zTlhMbHd1VzhWMHJFc3BNM2FJQ2EwN3A2YXlBY3N2WTRVTkNPWVVNend1TTZIb2tQcVUtVS1hU0VsV0dYdnlIdnFEcG9OWkJiZWo3RHFJMlpkN2RrQTRTVzZWQnRIN0tPVFVaY3JYemJDZDFENDlYV21xUDBUbEtlUjhfaWoxWTNHVE00bnFyYkllT2NsZC1rRHU3dEVrdFVBWm00TVBobEE3VC1EWGM0aTFSSThkT1NQanhPZnpXMndfcHRneEhESlFaUG1ST0Fxa2FRbHZsZkVYUT09
Women would be treated like sex objects the way washing machines have made women be treated like laundry objects :-)
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBYnZ3VTd4dVJxMkItQmZxS1FpdVQtWXVOMjJHeVNNbEw2THNGajU3eFpQQkpJb3dpallIRlctY291RHBMVEdTbk5VSkpseFB5Q0FEOVhSWmZoQjBiMmc9PQ==
Z0FBQUFBQm9IVGJCZEYxcHpqSDkydjRzeDVNYkptRm9SeEpmMjBmU05PYTNPRjhQZElHYXNLZTRLbGlkQmptelZaUGNsZEdGUXVqejZ2NXJRNEtWaVNMUUtQZGVhQ1kyWjdwTGhFcy11Zl95aGVKdXpWazdRUnpETG5xSUNzSm9jdzZUNlk5Mm05VUdWRUZPMFpVUjdpRmcwYUVfVEt0dVQwZkhFVzVSN0UzRTNzc3VRUFVGT0tRVWZuejF2WkFCV0RsUm4tZE8zVmRKcFZ1N3dGZERFRGJraWg0eWgwV3NIUT09
I think some people just have an abstract *feeling* that sex robots will be bad, but they can't articulate a reason why so they just sort of blather out whatever justifications they can think of. It seems to me that the real problem with sex robots is that humans derive self-worth from having value to other people, and if anyone can attain easy sex from robots then we all have less value to each other because we won't need each other as much anymore. Still, I doubt sex robots will ever become common because they are likely to remain both expensive and stigmatized, whereas pornography is both normalized and essentially free.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBUUF5eEFlQkVyQUZpb1UtZHhla2RnWE10VVhWazd4T29ZWlNYVnZDaUZiWGlYMURNSEVYNkRsUWVnREFjb3owbVZjcHBUdmIxUW0ybTNvNWl5aUh2c1E9PQ==
Z0FBQUFBQm9IVGJCakQ0aXlJTWphOXNmclplWHNXY3otR3F0bFU5S0gwSEdrSl8yczd4Xy1rWUdRZGo0aWJuaG5xbHhmaE5TNHFiMFg1QjJMbUdqUW5ESG1yY1FQd0VGRkFma0xNWUJ0NUNHbjlJVjB4bDFkbnhrVy1kam1rQkhBdVNrY2tVUFhIQWhIOWdWbUwtSHdFRjZMajRlR2Q2ekEwYlQwVnNLaWktQnBQNDFTY3pQUXVLX3U4WGZDM0NGUTdDZFJLakFWWDIwT2VKdVI2dDFFc1JfVEFOSnJjSVVFUT09
> I guess the main question is how socially accepted and widespread they are going to become. That's my question too.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBZDRBVkhjY25IVmxfdlJBU3ZFekFJbE1pQ0hWVXZwbE1ONzZfNjkybi1sTzkyNE5pS1hoczZlX2NTWHhzOXZpNFJ0WndBZVNCYWVqVFNQRU1VcUVJa0E9PQ==
Z0FBQUFBQm9IVGJCRTZWX0NtdDc3YlNYMG9ZcktjZldiUENRUG9Ld2VLNW5TZmlOVFpPWVZ4NDE5NTlkRXhNS1ZmWkRyZFRsMGt5LU1MUEloWHUxZ2NKR293TVhMLUJBUVBlUk1pRWpPaTB5T1dHY0M4Q3kyNWRsN1NhZTBOclotOUJvYWNkd3FsUXJkemdSQVc4Q0VscV95M0lsRDNfVE9IdW9sYl9PdjFFMEIwMVFCSHhUaVZjVi1jSUx6LWdhVnBrLXh6VmlDaXNuMDlPQ0tHTEpDYjJIV0RzREFJZUdHZz09
wifi enabled? cloud connected? is this going to be how the apple home / google home / amazon echo get into ppls bedrooms? me no like. me no like any of it. xpost to r/cyberpunk
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBX284UkpUb3VzT3ZZeHpEaDdlbXY1NUxOdF9wM3hfUktTeU5ueExPNmM5ZUNGeG01SUFfb0p3THpfdllzbTNRMUxoM2FTYktCQi1HY0RFSFJqMTN2Ync9PQ==
Z0FBQUFBQm9IVGJCVUFQQXlBUi1fMUp6d09GdTRzN2hvdWJibmNTTXVyaElud0hhbUt0VWNEWFJCRnZKMWtHUW9FLWo5MEp4ZXpiRmRabFhmazdHREZpUlZMWTJNbjA3RlRsb3B2a1Z0YmJlc3E0U2QzS2p5ZjJEOFZpSXh3VTVvS3NSV2ctWVpyTEpZTEk2VC1NRUF3M1BKeUhQamNtM1NfR181NmlfQ1ZxakoybUhNTC02VlNUYjFvaU9fdGp5N3ZTX1JJd3NaSlhNMDFOdUpXMFRVRExBS3ZEdUZWWXpvUT09
I don't think that's how we achieve self worth. The system of human sexuality causes others to lose self worth because they are unattractive. Could affect different people different ways. However, if sex bots become more widespread among one gender then it might get harder for the opposite gender to find partners. I'd also say they're not going to be common but many of these effects will also be caused by advances in VR pornography or other technologies.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBREtlWDc1QmlwMy1SZEV5czZDOGhJV20xdjl3LUNiT0YwaGwwWDRubVBHUmJyQTYxZlpXT0Z3X2llVGlhVVdteUpZX0dRMHE1ckJoRGVhU1NPUk9hV3c9PQ==
Z0FBQUFBQm9IVGJCTWpwSXMwZlBRZElmaVpYelNVS3llU0d5SzlPd1hUbnIwMkRGRTYwdUg5SURkQTRhM1JkLThaUEtxVVBLeEtuRkNidl9QYWU1TTRLR1JNUmVGVjJoX2Qwa0xUQXJ3OUJIaU9XVUVFMnB5eWF2X2J3azNjTlNyZVdGWVg5VGZudFVoQjF1d1ViOHpsYnhhalNSMWFpTDFDTlc3bWI1RW5lOFpRYkVCVFotQjBKdHZVbnYySmtrVTJ4UjVJYUYxLXlsaXJ5VlRFYXVRUW9VcDVqRGhTX1lIdz09
I wouldn't go that far. If sex bots were outstanding replicas, I seriously doubt they wouldn't become a huge success. That being said, I still think that advances in VR pornography (if followed somehow by a 4D sensation) would be the safest bet: VR has super companies investing tons of money, so the tech will get cheaper and cheaper and also more democratized, whereas robots are still a very subjective area with different investments and approaches to the tech.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBRHlpYWN3RjhGd1FVdTJYNVY3dnk5Z2k2TnoxM1BVaW9LZi1OQ2s5SFpTcXQxc083UGlYVHdhbS05VEhhdmF1ZGl4dnZFYTd5dlJJaUs1S3FYMmFlUFE9PQ==
Z0FBQUFBQm9IVGJCY01tcDVGOER1dUNFSUhmQnpxRzd0VUxjOXBMQ1NxZzUzNUo0LWpxM2tlbEhEUFdQNnBDaS0teEE1NkNMSE9GS1FEUlUxMHkyc2w2RXFMZzdIVjlEZXVlR3ZZWktjTmUtNWQ4V0ZoN3licXFESEZzZ0tKanhhQno4aGFvN1VVQVdsZ1ZqM2VsZDBySGVRcVA1QVVZMWNrbmdHck1FMzRHdjlCVUw4TFFnWW51UlhCZjNzVkZ3NlgySGdoZ2RpRzBrZk9ybF9WcU5iWHFYV0RDXzVkMjN3Zz09
You're kidding, right? The person who can figure out how to build these economically will reap a fortune. That practically guarantees they'll be inexpensive. As for stigma: I'm sure they'll figure out ways to take delivery of these discreetly so that no one needs to know except the owner and whoever s/he decides to tell.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBQTBMRk9KWUJNN1JEdU15b2RDYll5ZXhxNWltZzNGN1ZLWHBqUHNPYmlmNi1DTFgzbWpZUjJrUDJKU1dvM3pLMERQY0NiSFAzSjliN0pXWkJxd25CblE9PQ==
Z0FBQUFBQm9IVGJCeU1lTERYclI1bmY4a3BGZFpaU1drczhVSmhZNEt1REFDQ1FVN1dYMnE4bkluZ3VfWWZTUHpuQzdVc2RBeW90SmpPa0JwbGRGRlN6ckJicVozRmxJYl9xcHZOS0V6QWJkNXc3TE1ZRHVrSEFUVXFkZjJQWlBKRDdoZDRKNXA0VjBjMDhQWXNQazdDTlZGYUsxbGtCTkxlX094a2J2YXVmY0pCMnZqY3NLcjI3SmdHWkljQ3U4bk1id0R2UndWNDVWTWd5bnBYaVNBWWptRnpkZ1VEdGFKQT09
Really? It seems like VR porn is a more complicated solution than a robot. You have the tactile problem to solve with a VR solution that you don't have with a robot. What does VR give you that you can't get more simply, and with higher fidelity, with a robot?
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBWnVCT181dW9CN05xaXNJal95WFZzUEJ1RXN2bGhUWkVvTnJlTFVkc3JhajBsek5TZkFsaVlDZmNGMU9Ob3RQZk1BZ0xoSk9mMXlXeEJONnJPQnFhNXc9PQ==
Z0FBQUFBQm9IVGJCMUE2TVUyeEkwbXVOTDcyY3VNU3BrcnU3MDhrZ3NVSWV2dlk1eU1ESEhUT0JKTHhCNHVtclhaRGxnQjY0ajB4MFhBZzctMFVIVnZjYmdjcmJ1eGtXRW1Wb2NsT0VsTWdYci1WUl9nUkU0MFRlODh2RV8zOWFuZ1lTS2Vnbk5pdUVNbk81SFoxOEpSQ2h6RzdvY0ZOc3ROeU9tWUZXU3ZzQmpRNFpjcUpjRmc2SWtTSW5rLUl5TTdRdjZfYWJ4VkdrdnlpN0Z0eUdZaEN6eHNGN19yLXB1dz09
The ban on child sex robots is interesting. I get the intuition behind it. But when you consider the fact that they're not actually children - heck, they're not even human - then who is the ban protecting? Conversely, could child robots be used as a kind of therapy for people suffering from pedophilia? You could imagine a scenario where someone with such an affliction would have a way to safely 'manage' their compulsions, via the robot.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBWTV1R1g3THIybkNhUzgtY1pfOXRXdG9LaGc0Tkd6UE9PSFZPdXZ2T1hXWVhTWmhybkxwbWxfMkZ1a0dpWTlDZnFYNG11RlVuUjBWQlJNQ1lPRllSekE9PQ==
Z0FBQUFBQm9IVGJCUzFFVDZmMkd4X2dKd1dMT1I5Y2ExSzBieXFtRjI4OGRJVEVMYVIzZWFmbGFwU0dsMXRCS251a3ppOU9qZmdyVklWb01nVWxtQWsyNTVvejhVTFoxRkQ3OUlRdktMVlVUa3hqNVFfNHN2OUtMVTc1RmZEOWw0OWYzUmNydmlmQ0Jyb0ZSak00RnlQeWN3TXk5ajdrZmdmVmlIUUhtOS1qWmlIT3ZWdkhzbzVhdV90Z25sMXlHOFhrenZqMmtBME56dzZFVWppQ012bjFUbEpObDdpRFNJdz09
Pretty good read, although it does kind of rile me up.

> But Dr Richardson is concerned that sex robots will allow people to play out dark and disturbing fantasies that are immoral and illegal.

I get really tired of these kinds of arguments that particularly strike me as slippery-slope type fallacies. You can also see them used for other topics in the realm of politics and social policy. In this case, I don't see how they reconcile with the fact that there will be (and have been) regulations put in place to minimize illegal actions from taking place, or with the sale of existing sex dolls, or agalmatophilia, or when male dolls are produced. This doesn't mean topics about perceptions of women and men or how to regulate advancing technology should be dismissed, but Dr. Richardson's (and similar) argument feels like it's based in cynicism and a poor understanding of what can/will have to be done when the technology emerges, similarly (but not exactly) to arguments against self-driving cars. I won't touch on the "immoral" statement; that's an undefined minefield on its own.

> "The issue of people falling in love with machines is very possible and when you're talking about a kind of emotional response, it doesn't necessarily even have to be a physical robot. It could be a chat bot…

I feel like this is an irrelevant point that has more to do with policing how people respond to love/attraction to things than the setbacks of the technology and AI itself. I also don't see why it's a bad thing for people to have a connection with something that's organic or not, even if I don't share the same kind of emotional response. It's not really my business.
As someone else in the comments already pointed out, it seems like the person had a *feeling* that these robots would be bad, but couldn't come up with defensible reasons for every single argument to justify why avoiding something altogether would be better than studying, improving, regulating, and using it for possibly therapeutic ends (as the other person in the article suggested).
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBRS1zYm9MSkRGTEdTMVd6T1dPOGRDem8yWHRxSWFhSFA4UUZFSVlHS3ozYmFZQk1qcXFjNDhwa0VjdVpWUWxJWi13b3VlblVmRnlyT1B2Tm0tczZTbnc9PQ==
Z0FBQUFBQm9IVGJCUzZjY3l0eWZIMTBLVGhPTkNqbnpnRktlMVM2Y1VJM19MMFlDYTNLbXdjb3ZkN0JQNUxpUXdqay1EcXBGOWEyU2pTUEJWSzFOcUhLNXdoS3RUak44c2F5VDl5YmVDUlVsc21XQTJtQmZxYWpwUElzUy1mRTBic3pZeW1xSHRGc3YweWQ2Qld0WkdIbVhya1QzNVkyamxyMFVrTkc0ZF9rc3NPWXNTNkZ0dVN2SXlKWUV6ZV8ycVdXNUdQNVFrRFAwWkkta1Q2NEE3eXMtRlo2bmtsd0xDdz09
VR can give you a "cartoonish" reality, and as the graphic design gets better so does the feeling of being part of a VR world, whereas robots will for the next decades give you an uncanny valley that will make sex bots weird as hell. I think robots will always be an expensive solution; it requires so many fields to work together. We are going to see Ghost in the Shell-type robots only by 2100 :/ I would love to see more investment in that area but the big money is going to VR.
r/aiethics
comment
r/AIethics
2016-11-26
Z0FBQUFBQm9IVGJBbzB0UE9yQjRGNTE2cUk3YlVJY05NLWZmZEVfRkFZOGx0S0twelgxelZWUmZ0Skw2YUxVelY2MVM5Ul9nMU5MV0tsRE1jWkZEY3BoNThST2JkVXNvdWc9PQ==
Z0FBQUFBQm9IVGJCWmt1ZG5LZWxGQUhVVGo2bnY1UGQyamJ3OGJzc1dpYzdwQ0ljYkdiSF80ajZTbHpiSU5ZYTVDZnVPZElJSUlSbUJLYzhXMzBBQXVpa1EtbUcxRG43ZjBidjlVVzBCeGhwRG5VaXVrUENyZGpKTGZzU2JiY2R5WjFEc2RlQ3FOSGtJSEZ0Sm1ZVDkwN0tGRTVXbG5WNDdKWjh2eXJ3bnBQZ09Mdmp2OVZQN05wU1lCWndsUHRkbC11RUlSak5kMWVkWkVKNENTWXdnRTU0NFRyN1ZHeWpXQT09
We-Vibe actually got into a little controversy already for data privacy issues. https://www.cnet.com/news/internet-connected-vibrator-we-vibe-lawsuit-privacy-data/
r/aiethics
comment
r/AIethics
2016-11-27
Z0FBQUFBQm9IVGJBNEUyWTAtMi1aZzJja1VsQlA5MVVXc3F6cXJzNVBQTnNRdEFVLUQ4WVoxbVBOV21mbG9BRnBrUE84Rklsb0dlc1dUX3pmbkxta2xKOERjOG05bVJfTVE9PQ==
Z0FBQUFBQm9IVGJCcGNNSllxT3JueWtCdWFmWDJ2dzZ5UjNSMXRVY2VoRzhwV2Rkc3d2bWlDRC1uTGFtaWhZWERlYzY5M3UwOHZjRW5Wc2NYRFhZaDZqblRNeFE3ZTZNb0hQMTRPbElERlpGcGltcW1iMUVpM05WVlFVODd6VkxZcnh6cktvYUFMZnpLSVBzZDBMOEdsNUVURkNlanlwWHlLZEpaRC1mSmNneDNlZ3lkeXlhdXRtbzJybFFPazVJNl9jWGgxMXhiUC1vVVZsc015OWpubXFrUTJidlJGOW9aQT09
You apparently haven't seen the sex dolls available for purchase today. There is no uncanny valley in terms of appearance. It's not a big leap to imbue them with robotics; and it's not like they need to be able to carry on a conversation. They just need to simulate enjoyment of sex. That's a much more tractable problem than any kind of AI which is what you seem to be expecting. You still haven't explained how you solve the tactile problem in a VR solution.
r/aiethics
comment
r/AIethics
2016-11-27
Z0FBQUFBQm9IVGJBWVN5Rm5NVS1qbXFVamM2YWdxT0l1aTZQaGZra2VCTmpoNVBFQmJwNHRBeVhzNXpHNlhKeXMtZ0VRWDJJVVcxc0xvalpMc1hZWEQxdExkaDZfamtpdlE9PQ==
Z0FBQUFBQm9IVGJCdndJMkNRS0RrNGoxTVBNbnJWLS1kcGNUYjZFWUdiZVkwZWV1ZTNhMllwY0FxR0pkQUo3eXBHRG5uM2Rpd0VFaUlkMTZ4OWt0QjdsaXhkTFFaYXJIc2VtUGIyME9SOUNWaGgtQlB1S0l5RS1icjViZzFJTVRhTW1EZ0p2dmEyMWFNMzBOTjFaUl9pX2NILUJYTjlFd2ljSlNUTVJEUU80b3BEUFBUeFFrZ1hWOGxQYWF6Vk9MNVI2eDZKMEsyMFhKcjY4eHpVT1dNUzRXRFFHNlJhMVNmQT09
what personal and private details ppl are willing to share with brutal for-profit corporations (for FREE) is a constant source of amazement to me. i hope the singularity takes pity on us.
r/aiethics
comment
r/AIethics
2016-11-27
Z0FBQUFBQm9IVGJBNkhwdkxRaFlWRWZqRmdnR3NVRzRUZXhMWjE5YUJUVHIzV1JPdDVINlE0UHhMeHhqWVRVcnVGV0M5UTktV2hOeXhvcEJ6a1V6X0tiR2RCLWdOajkyUnc9PQ==
Z0FBQUFBQm9IVGJCSUF4V2ZPVW1Gamp1aWh4NWctTDUwUDRHSW9SSmstc01fT1ZESTJ1ZllyamNfeFh0Sk5FREJnSXNaRFlVMGxBeHBkUmtHTEJ1U0o5d1ltM3Zvcm93R2h3cDlvSVM5bERqMmZrOGN1XzVGbnNOdW10ZDVNN3p3ZjJpOS1LWFJ6dTVUTmRlbjdTTEZROHFFNHNwTmt1THhhSnZZMnJmOHFMeGdkVUpMazJBYndhUW1UQVpOREtWanp4XzdsTWpHNWVlZG5zbWJNZ0FvVXB2OEVmTlZrbFBjZz09
I have seen; there's a documentary about sex bots and they're totally uncanny valley. What's available right now are dolls and not bots, so it's different; like you said, I don't think we get the uncanny valley with dolls. Ironically, in that documentary they solve the tactile problem by pairing sex dolls/bots with VR, so that you don't feel the uncanny valley.
r/aiethics
comment
r/AIethics
2016-11-27
Z0FBQUFBQm9IVGJBTWU3NHVxZUVMQXFsRkQxb1JfMGtNRVVCcGhLaEJOb29Wc2kyR09VMDZzaUgxUTh6QXdOWlFHaG9JUFNDdE1HVGNHODE0cmRXTVBDanRHeUFMSTJnaEE9PQ==
Z0FBQUFBQm9IVGJCZXgxakZ0S2NubVpFdUxQal9ndkxQU2tRc1FhU3oxQ0tvbnloOFlNUzU1UU9feVZkQnNZOGtQQ1lfMWRSQ2g1RUhlem9GOVhtV1piZFFrYlF1ZzljWHA4Q3p5QWpfRVpTeUZHVHJUSmY1di1NajJDTTU2dUxVck1TNjh4VkxNYjRZbWxWQWN4V3Ewaks3XzJOQkRtbVM4dzl3cFZ2eDBTT2xrYm8xbzhyODlwbVZEaEhlbUZFNjh5M0pNWGNRUTBoN2dTWWFBVWJ1OXNXbXU5QXpzUURYQT09
I.E. People resort to machines to behave like decent human beings.
r/aiethics
comment
r/AIethics
2016-11-27
Z0FBQUFBQm9IVGJBOTJobThrRXN3NmtnVjhVUS1ISHBmeVJ2NE1rTzB5M21aaFF0Q1RKb3dKekhBQWd0b2FYeWxaUG9OY0htMk5PSzVhZm5fcTlNTVlHelRhOFc5dk1MeXc9PQ==
Z0FBQUFBQm9IVGJCSlRSSG1VT041a0R5OHVNX1UweEVBODZOUVFCeGJYN1lRaUR4TWpTbGlyV2diR0ZOS2VlZXRDeEV3QXZkMzdMbUpYRFBxTzd0T2VuRVc4dnlMaEcwZVd0S2tjNm9tbU0xbVJsSzVwVTFBVkpnLUVhY3VFNVoyZnJZMktkZklGNjFxZWpxV0wzZnJrakdGNG5vWHZqdF92T25qRzR1VnRON1l1WXVBMTBfUERPT0lzandXRU5NQWNXSXFnTUFydlA2U2tXSjMyTjlkOVBKM0VkWDh4UDBkdz09
Everyone has a bias, no matter what. Are you saying everyone is a terrible human being?
r/aiethics
comment
r/AIethics
2016-11-28
Z0FBQUFBQm9IVGJBOGJmdTZkWk9GZkdmUlB5dmxfNTN0Q3k4M0dtNGR5RUVpNFpHd015NmFPdl9ZMzdxOUdZY25wVm1iS2F0UW9BUW5Gd3JOSDVFVFJoLTRrRXMtSFd5TVE9PQ==
Z0FBQUFBQm9IVGJCaWlnV2F6UGJ4Nk1MUmFSUFhfcnE0aS1mN1haeVN4WVFEVjJndFlCTlk2dDBJS3RIM3IyOWhDdTJrQ1dnNGNZeW5LNmEwajJrQTNrbkNWTXF1S28yS2lsenhKVnhBN2Uyb1ZVRnZHS2VOVFRvLVpJN1hyYURhWi1ScjhNaHlIRHRKZ3RObVFPcHpUbDRTLVhLWldVcElka3JtSm02bWNUN0RqNWxabldSeEthSTR1UGd6WnZtb0RjaUY0ME5QWVRENmhMQjV4V2VNb2M5NDR0WUZpX09IQT09
>"Technically this person looks more qualified, but [AI] could get more people to the top of the pile," she said. "Most women — and this is a generalization — women tend to look at a job and say: 'I can't do that.' Whereas men — and again, this is a generalization — go: 'I can smash that job two levels up'. AI could look at a candidate through LinkedIn, social, and other data [to help decide who is best-qualified for the job] — although you have to be careful not to make it too much of a machine decision."

I wonder how you would design that system, or what exactly it would do to remove prejudice. I'll wait till they implement it.
r/aiethics
comment
r/AIethics
2016-11-28
Z0FBQUFBQm9IVGJBRm5kR3R6VUFxXzgxams5aXl6Q185eEpoZmVZSU5wMC1pZU5nUEdwTVhueHlGZmZMOWFjZExWYS1UUW9TdkNvc3k5d2NRcUl3UXduYW5VSTVfZGphQmc9PQ==
Z0FBQUFBQm9IVGJCemQ5c1ZXb3VrTkpDZkdKZ0d1di1Hanp3Zjl1NjRVVFV6Qm8wOWFjZDZLQS12NWpUUmlxU3ZuMkdtQm54V0FRZ0lLNzROQjZUaDg1N0hVbHFmaXgwcy1NRWMtYjUzMkc5RlhoNF8wS0MzSklEMTl6dVROUTJtN054YmtqbkcxMXdEaW93eFVRNjZGSC0wb0NqazdwX19Ld0RpY1E3MUJTUG9TRVZ0amFBUzlGeGRjZkhZTDVzOEhhWVpDc3QtSjVVVnd5YzNyWFdyTmk3Rk5IcngwSDVXZz09
I am in the midst of writing a research report about workplace diversity and biased hiring practices. If I can get permission to share some of that report's statistics (it's not my report), I will post them here. Then you can decide for yourself.
r/aiethics
comment
r/AIethics
2016-11-28
Z0FBQUFBQm9IVGJBU3FtNzV1VUwtX1RKeGR0UFo5d1RsOTNJb0pLS1RFMjJLR3ZweUtiQkx0TnFKQWsxbHpSdldFcUEzcDgtNlB5eWcxa216TW5rS3daZ3ZwNjgzaHVhcmc9PQ==
Z0FBQUFBQm9IVGJCajQwdlYtQUgzclN0RXZvakRNWTJ3dzBnSG1RRVBGMEFJSWdwRjNhTEoyV0xncGdNQm9Rb3FDV2UzYWs0TnN6dGVfSWcyMHMtdUxVMF9WejJwaTVjS3RDTXRKNjBvZU45eHhkUUFRR2VqOXI5c1J2Q01JVmI4N2wyNXBjZ3ZVcE5lNVYxWlZTTk5sVVZhR093QUxqZldoel9pMGhrbzJxOVJaeVBXZTAyaDlQRklXYzFZbXlFTUotZm5pUlF3ZTgxczdvVzJEeHE5dkoxYWtINlZBOGZuUT09
I'll be waiting
r/aiethics
comment
r/AIethics
2016-11-28
Z0FBQUFBQm9IVGJBLWRqVHZUVzhSSm9SOFhVWnpWNGI0MmJFQm0wZzNQQTR0eWd1MU05M1l2bmstbGZ3WXdVXzNBbEZZcWVmcmRqbXNsTC10QnhNUThBOWNMa1RBXzdJQkE9PQ==
Z0FBQUFBQm9IVGJCa3RfN05fSXJ0MDVKclF2UDFjTjRycU9mVDFOU2E4M0QxZVVUaTlEOGIzUlFlVHFaNldzLUIxUXl3amZhdDZLQ1lSMHEtYzd5SkVodk53Ujg5S1hjWHQ4S0R5bW1YajB6R2thVWFlOUQtVXJhQnB5TTU1bXpnQVBBZlhpOVYtQ1pmakNPWG1WV1pLZmtVRXhYWEhCSTFNUVNqMElyWUtNQU9BUDZmdElRUTA5XzRSTVQwc2pvazVWZ3pGa0drQnRrV21KRlB4S1hMZS1oMHBFQmtvZF9BZz09
Aaand there goes another portion of jobs due to automation.
r/aiethics
comment
r/AIethics
2016-11-28
Z0FBQUFBQm9IVGJBeFhvTENFeWROZmN5Rnlxc0hBN3VZVmVVZ0VQU2UwUE9hQ1k3dE1Xc0UwdUFMN2NGVXVZNEl1N2htU1ZQaUZ5enBfUUFWZTB0N1kxWEdSekhreXlpR2c9PQ==
Z0FBQUFBQm9IVGJCdjc2WjQ5dTNCWG5fQUVuVkNYa2t3aG9BdFpZSGxFM0w2eXpwWlJHdzZwZ0pOLVJpalpWaU8zaDZuREtVbkpxV0dRc1ZNZHE3ZXczZjYxMXlENGU3T25Jc1pBd0tSdTFVSmlONTEwZVNzQXRLLXk1Ull2ZlNfYlNPN2Z1WE5pLTZlUnNtNV9yMWppaFc5QUVVMmZaNkh2Q2Jqd3JUVm03X3B3LXB3RURPSGpqY29qRFNzTWM3amFOYlRzeVlfc0VGVTBTcEZfWUpFM2tzRFBvd1cxMkV5Zz09
VR sex is comparable to real sex, and smells better too. Please refrain from being and/or feeding trolls.
r/aiethics
comment
r/AIethics
2016-11-28
Z0FBQUFBQm9IVGJBekUtVHRjbkhYUUZGeHo1SzVQOUhDWEdSbTJmSjZvLTVJaGc1LTlVa1ZRSXZYMmRMUE5yb1hIcF9RMEFBY3NseFQzR3lHRllUVjdJUmVpWllOeXBMUlE9PQ==
Z0FBQUFBQm9IVGJCVng2RXRMZEl2bVFWZEktT19jeXJtWU84Ym9oa1dBUUltTndmT09qSnZOV0dQb2dzSy10ajhZNVZjMTI1X2tNTVlGZkZwNC1VVVRpc0JhcnRDMFd5MGRxTndqUmVqWHhibGluc2k0UXh4NkJkVzZtNk1iVlhvV0JQMktSbkxER2FGU20xcS15RGhBQTJtMEdXX2ppekFBWXUyQzh2Q3lqT3A1OGhlaElvN0JrNE1obE9rNExYekZYTkJUVXNJUDhYakV1Vkx6cjJVRXdfTkR5RWhRWlJJQT09
I don't know if you're aware, but the inventor of Lisp--John McCarthy--is also the guy who coined the term "artificial intelligence" and organized the Dartmouth meetings that sort of kicked off the field. LISP was *the* AI language until the 1980s or so. Reasons include history, but also the fact that you could do relatively rapid prototyping, it was great at symbol manipulation (which was nice when symbolic AI was king), it's incredibly powerful/flexible, and code=data=code.

Today a lot of LISP's features are incorporated into other languages that are perhaps easier to use (especially with larger teams). My impression is that Lisp itself is not used very much anymore, but it's still a fine language (I think Clojure is used perhaps a little bit more). Most people seem to prefer working with procedural/object-oriented languages though, which has a self-reinforcing effect, because it leads to more tools, libraries and potential workers.

I think LISP's strongest advantage probably lies in its homoiconicity: it treats code like data and vice versa. This should be pretty nice if you want the AI to program itself. So you could use it in genetic programming for instance. It's also still absolutely fine for many other things, and it's very educational to learn a new programming paradigm, but if you want to use other people's work (either in the form of tooling, tutorials or labor) you're probably better off using Python and/or C++.

If you decide to use Lisp, I recommend checking out Peter Norvig's Paradigms of AI Programming.
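The code=data idea behind genetic programming can be shown even outside Lisp. Here is a minimal sketch (purely illustrative, not from any real GP library): programs are Lisp-style s-expressions stored as nested Python lists, so "running" a program and "mutating" it are both ordinary operations on data.

```python
# Expressions are nested lists like ["+", "x", ["*", 2, "x"]],
# i.e. the s-expression (+ x (* 2 x)). Because the program is
# plain data, mutation is just list surgery.

def evaluate(expr, x):
    """Recursively evaluate an s-expression in the variable x."""
    if not isinstance(expr, list):
        return x if expr == "x" else expr
    op, a, b = expr
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == "+" else a * b

expr = ["+", "x", ["*", 2, "x"]]      # 3x
print(evaluate(expr, 5))              # 15

mutated = ["+", "x", ["*", 7, "x"]]   # a "mutation": tweak one constant
print(evaluate(mutated, 5))           # 40
```

In Lisp this is even more direct, since the evaluator and the quoting machinery are built into the language rather than hand-written as above.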
r/aiethics
comment
r/AIethics
2016-11-30
Z0FBQUFBQm9IVGJBamhfdUo5VnFfVXplbVQwejhYTkpmMFVaLVM5TnNLN253N0ZNN3NFdVZPdEhEQ2o2OWM4QnYzVFk2ajg0cnBJLUdQOWRDTVhuRGNteHI0MmxKUUc3aFE9PQ==
Z0FBQUFBQm9IVGJCZVdHRVk2dzdCLThZdVFSb01MT0toeTVrM0NCQWJ3aDZBM00xYUdYTFF0VG9NVGViTDZVYzhFZnNyd2gzYnVnM3JNVnpfcHpKNnRaSW9rX1B3ZEhCRXhCeDliX2ctTGJlaEdWQXNZWFJFMlVfeXFCOUN0akd1eWlRQ2FyRjhiSndhb2pYc3ZGMFdkaE9jdThybUdLRE83N0ZIa2h6SzlLOTBpQmlYZ1NYOGlMOEhyZzctU09PVC1XWWQ5Y29DZnFGaGxDOUJ3MmU0ai10OUdBN0NvN2Nadz09
Honestly I'm not sure if this adds anything to the discussion, but the smell of sex is a high point for some.
r/aiethics
comment
r/AIethics
2016-11-30
Z0FBQUFBQm9IVGJBNEFwMF9sSlZ4RDExTXV0UmQ2aTRpT1pXQVZyRWU4MUZPSFVFSDRGbDhBSlZOWEZiN2JYejlyNWsxOEIxcURMdWI4Q0xVUVRiRnRoRHZjak55WTljM3c9PQ==
Z0FBQUFBQm9IVGJCS19yendFYjRPTl9jaFJtMFhQaVN3X3MzNG5jVTh0UFR0dm9VZHRzclM3dmNrX2pkc3ZOaDhGWHdGazMyVGtuZFVpMklGTHhiLXp6U0F0TldrM3Rmdy1XOThmcVg0bnJjLVdDa3FuWDAxcFNJalhSR2R6TUhJal9wYkhFbzI0U1FMNWozSW9GTl9pNVJlRmQycHRBaDB0Q0RTdUcyN2xKWGwwTjk2NkFCaFpyVFg3TDJscUNQS3JOZmxTemhnU01XR3dOaG5KUHBJWE5rVk5jei1WWURXQT09
It may have hurt when Silicon Valley smarts found out they are not the smartest in the class, that they have been outplayed by Trump's media knowledge and that they are not powerful at all. I guess more jobs were taken out with the advent of the PC than AI. So, although revolutionary, people will move on and find new jobs. What a generation of scared little kids.
r/aiethics
comment
r/AIethics
2016-12-04
Z0FBQUFBQm9IVGJBSnFwQUJ4UkxYNURrai10UzVvdXhBZXV4ejZjRE5RRi1WV05OUTJuWUU1SE8tNDJ5REZtS2ZhTG5jQ3JTQWhTTHg4eDdGQzF1dGptdmZpZHZLVHRabkE9PQ==
Z0FBQUFBQm9IVGJCU1BtRWJNU1VLc1k1blpRS1VsRENnaEN1WU1ybnk2OUdQVUQ3cS1nNVBDMXp5WHJoUm52dU53QklQRFhNOWlCQUhodmxoemNURmNZaENFLUFzY0ZFbjlVS2ZIaVZuR3BxM0hCZE5IdlZMRllZdUlNQ2dpem1BMTRwdS1XNV9yVzFJZ045YmQ1V1Z6Qi1FSFV4dzZKclZMa1Rad2cyUHhiX1ZmOG9ibFJNOHFMT0ZHTmtLdTFuZzFITXQ4bFVDRlN5VGQ3NjRTSUlSZy1IbGgySzN3VWoydz09
Millennial here who sees plenty of good reason to be a scared little kid. I imagine my kid is going to be the biggest pansy of all while they're salvaging for clean water while fighting our robotic overlords......
r/aiethics
comment
r/AIethics
2016-12-04
Z0FBQUFBQm9IVGJBQzlfcWdKLTV5OXVGVU9oQXlJMGVBRVJXVVZSSkdiU1dGU2g3MGJTQUt0a1JjMkFnUmhnd3JScmV2MUxOMWlHQzBNTUttcjBpeVlQYTRBLUVOSkUyc0E9PQ==
Z0FBQUFBQm9IVGJCMjBoZ0Y1Vk5ONy1wZC1qOWlLaG84ZzlWN2pkeTBRbnlzNnRmd25WLWFRcjdCeXJMc05TZENGclYtWm5IWk41OWR2LW93dlgyaXR0RWNDOUNadFRBNGM4dmNVdWtxNWdjZ3dyUnBNX0NvU2RsSFpEc2VjaWgxc0xvSEtIOEVJSmxiYmFJLVpuXzZaSWZkVzVQUjB6SXhTaFlNaGFJV1VHZjlEVU9FcTgwTnFFZV9BV3lvZmMyMjZWQ2NsaWw5TzJ5V3owS3Y5R1QwSkhTSDNqcV9BQnNRUT09
Hi, this isn't the right subreddit for this kind of discussion, that's what r/controlproblem is for.
r/aiethics
comment
r/AIethics
2016-12-10
Z0FBQUFBQm9IVGJBQnFJZEZNMlV6QWlPZVhOQWJQZDkwNkxkN1ZtN0xjQUxBdFNRd3dTTG5QOHJncjdFc3dhS2xUemlpWDZxSHBBVU5SSEpDUU5tdkt6ZVVOckZERlZCSmc9PQ==
Z0FBQUFBQm9IVGJCNldsc2V0bHJGWGlHTGdDT2lJZTY2VmNPSm9RU21fTzl0YmlQS3FKaUNfLWxCSTcydGVtbEVDRFNEeDlxWC0tM1RMeUdpREVpd3BEQjhZNnRzLUZ6eDVZVUY0WUw5UDQ0dndubmo1UUhRdUQwOERSOUdBTVRoZVhWYS1sSldkRlpyRzVnRTYyUlBIZWttTEN6TDdSQU9zdlFIYk82NzZmSUk0NndyeWZzRU9TdEh6V0k4ZHA0RVpVdDVBQjZyeW04
This is neat. But it's pretty strange that their notion of 'human benefit' as a general principle is essentially constituted by refraining from violating human rights. Their response to value differences is to identify contexts and communities where different value systems are appropriate. That leaves a lot of flexibility and open questions, but to me it seems like the most likely equilibrium between moral and economic/political concerns. I think I will give the document more attention over the winter and then send them a good response.
r/aiethics
comment
r/AIethics
2016-12-14
Z0FBQUFBQm9IVGJBcTRMaDJ1Wmg4YUJJWTh4SWNab2pPZTVUbE8ydFkwbTFuQ2s3VjZWSWpSMFVUU09PRkhJSi14b3FTNnFuT0VoTzFaeU9LZjVOaTRnX0ItNlZqNkhGeEE9PQ==
Z0FBQUFBQm9IVGJCTFUyWUJfMTdqNjhNdEstbDFaVVk2U1liS1g0YmRIUjNDemFiTURUaEgwQkZZRU9NemZJTGxnbjNKbUFZbGVsQnJlN2RHRzFGVmJqc1pKbmVHV3ZTUFpraUlockNxMXAtX1M4U2JvWTM4Y0ZtdWJBeFUtOF9hU2k2MVhaLVZlWmR0RGlMTzBxQTFySGFRN1M2aExSZ3NFZWtlY0xYaFVldTYxTVE2TTEyNFU3WC14Q2NUdU5STU5vYVZvaWNiVTB5b1BRck0wbm05dnl6LTBzUGNFM2NOUT09
I don't know if the technical systems this is about are artificial intelligence in any sense of the term, but this would seem to set a significant legal precedent.
r/aiethics
comment
r/AIethics
2016-12-16
Z0FBQUFBQm9IVGJBR2F6Z3V1OG0yYjRvT1RNYmR2a0Y0dEZIQ0RzZl8xbWtJV0hqa254MzB5SGp3b0Y3OFU0cnVKZU9WakZ6TVlLTXB5ZG9ZZU1BbGFmdFJWU2NhN2JXNXc9PQ==
Z0FBQUFBQm9IVGJCbHZUeGhIb0Z4MmZsdXhfQnhSNUFYS2dYOWZPRGhwdnU1LTRNcVptaHFrRTRzcjlfaWRWbzhMZjNCN0RFd09teGlJODgzNS1qQm5HNFNfay1DWV8wUHU3SFlBbldUVzJQaUhNX2ktemptb052YnBsSER4SzUxSElqTEN3U29TSEVtdmxCaXhKYTljdGhNOTQ1Z0ZnUnRJZkNPdU9lYVlVTVRfS2Y5RFVPeHZVQVNYRFY1VWlTLUY0NFdWd2lOcUdjY0NqTTFjeG9tb243UEdYUDlNNVpaUT09
Why don't they just change the system like they do for airlines? Put a person's name on the ticket and show ID upon entry. Impossible to sell on. It would only make life easier, because if you suddenly find yourself unable to go, you simply void the tickets online so they can automatically refund the ticket (if it's within a reasonable time) and send a notification to the next person on a waiting list that a ticket has opened up.
r/aiethics
comment
r/AIethics
2016-12-16
Z0FBQUFBQm9IVGJBblhXTlNuVDZrZjdLMElpZUJSZXpKQUVCQngzRVp5ZlFUU1VxLThjenZnTG5kWE4zMURJWjU2d0JpMXBydVNnVm9RbFg0ZExhUUdpVzFpNkdmOUhEMWc9PQ==
Z0FBQUFBQm9IVGJCUk5yUWlmTGtwTTNvZFhMbkhfR091MDI1Q0xfZi1YU3JiY09NV25BZnNDMzQwUkp3WmpiLWZFbjFXS09jQUNPVkRrQXgxZS0tZXd3RWppcUlqRlRvZHEwSnEtNXpIc21ST1NyNFpLVi1wd04wNFdRZnQ2eWRYNHhnWjdRQlVKZjdYcDVFb3JFVmlfUWVfZmY1Qnpmd3FDY0dnMHltY3g1SS1VN0Nub25KREt1cTJ5bVNWYm1BdUh5U0tKaXN2RjF1ckxnT2dQNUwzZldkc3I0d1l1WklYdz09
A messy feedback draft. 1.0, [1.5](https://goo.gl/dqfK3Q)(Google Docs) I'm not sure if I should continue due to my lack of expertise and uncertainty about whether my suggestions are appropriate. Based on the title and the description of their guideline (Page 1&2) > Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems > The document’s purpose is to advance a public discussion of how these intelligent and autonomous technologies can be aligned to moral values and ethical principles that prioritize human wellbeing. and the description of the initiative program (Page 5) > [*The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems*](https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html) (“The IEEE Global Initiative”) is a program of The Institute of Electrical and Electronics Engineers, Incorporated (“IEEE”), the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity with over 400,000 members in more than 160 countries. they're committed to a very anthropocentric approach. Should I assume they are open to suggestions about their core principle, or are they perhaps not interested in changes in this area? I'm also worried that when I link sources in the final version, my badly written public comment might damage the reputation of the referenced papers and their authors in some way. [**Submission Guidelines for Ethically Aligned Design, Version 1**](http://standards.ieee.org/develop/indconn/ec/giecaias_guidelines.pdf) > We will be posting all submissions received in a public document available at [The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems](http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html) in April of 2017. > * All submissions must be received by 6 March 2017 at 5P.M. 
(EST) > * ...When submitting potential issues or Candidate Recommendations, background research or resources supporting comments should also be included. > * Please ensure submissions provide actionable critique ... > * We will post submissions exactly as they are received. ... > * Please do not send attachments. If you'd like to cite other works, please link to them with embedded hyperlinks only. > * Submissions should be no longer than 1-2 email pages in length. > * ... --- **some adjustments based on a slightly different perspective (expanded moral circle)** (Page 15) **General Principles** > The General Principles Committee has articulated high-level ethical concerns applying to all types of AI/AS that: > 1. Embody the highest ideals of human rights. > 2. Prioritize the maximum benefit to humanity and the natural environment. > 3. ... **Prioritize the maximum benefit to sentient beings.** Nature is not a suitable guideline for maximizing the interests of sentient beings. Instead of setting benefit to the natural environment as a separate priority, base the judgment of how to change/preserve different natural environments on their effect on individuals' wellbeing. Even though the complexity involved in such judgments might be overwhelming today, it will become increasingly practical with powerful future AI. This will likely result in a better quality of life, especially for non-human animals, than simply conserving what is considered natural at the moment. This principle would also extend moral consideration to other types of (future) information-processing agents that are sentient. *("the question is not, Can they reason? nor, Can they talk? 
but, Can they suffer?", An Introduction to the Principles of Morals and Legislation; The relevance of sentience: animal ethics vs speciesist and environmental ethics; Machines with Moral Status, MIRI, The Ethics of Artificial Intelligence; The Importance of the Far Future; Risks of Astronomical Future Suffering; Wild Animal Suffering; gene-drives.com; abolitionist.com)* (Page 102) **Affective Computing** > 4 When systems go across cultures. Addresses respect for cultural nuances of signaling where the artifact must respect the values of the local culture. > Issue: Affective systems should not affect negatively the cultural/socio/religious values of the community where they are inserted. We should deploy affective systems with values that are not different from those of the society where they are inserted. **Cultural/socio/religious values should be treated according to their short-term and long-term effects on sentient beings and not blindly appealed to in their current form.** (Similar to the treatment of the natural environment; perhaps subdivisions of environments in a broader sense?) > 5 When systems have their own “feelings.” Addresses robot emotions, moral agency and patiency, and robot suffering. > Issue: Deliberately constructed emotions are designed to create empathy between humans and artifacts, which may be useful or even essential for human-AI collaboration. However, this could lead humans to falsely identify with the AI. Potential consequences are over-bonding, guilt, and above all: misplaced trust. Add issue: **We might falsely dismiss the sentience of AI systems.** (partially addressed in the first part of the issue?) When dealing with sentience in AI, we should at the very least treat it as a low-probability, extremely high-impact issue. New technologies such as biocomputers and quantum computers could be used in conjunction with traditional silicon-based computers within the same system to power future AI, which might also incorporate brain emulation techniques. 
So even for people who are skeptical about creating sentient AI with current hardware and software structures, the risk of this extremely high-impact issue might quickly change from low to unknown. *(When the Turing Test is not enough: Towards a functionalist determination of consciousness and the advent of an authentic machine ethics; Do Artificial Reinforcement-Learning Agents Matter Morally; PETRL; Ethics of brain emulations; Dr. Anders Sandberg — Making Minds Morally: the Research Ethics of Brain Emulation)* There's a risk of lumping too many types of AI systems together and treating them the same way, presumably based on a single (series of) experiences with one type or a limited range of familiar systems. AI systems displaying similar behaviors and sharing similar design principles could differ vastly in their level of sentience. **Careless development could lead to an unprecedented level of suffering.** If AI become sentient, they're very likely to have a greater capacity to suffer. Their subjective time might run faster, their positive and negative experiences might be amplified beyond what is possible within traditional biological brains, and they might lack the consequential or voluntary critical failures (similar to certain types of mental breakdown and nerve damage, or death and suicide) that cut short perpetual extreme suffering. Similar issues will affect the parts of the transhumanist community that venture into anti-ageing and extensive brain augmentation, both of which are likely to become intertwined with the development of AI systems. 
*(subjective rate of time, MIRI, The Ethics of Artificial Intelligence; Would it be evil to build a functional brain inside a computer?; Louie Helm comment, 10 Horrifying Technologies That Should Never Be Allowed To Exist)* There's also the possibility that forcing human-like/desired characteristics and sensors onto AI systems could lead to negative experiences or the suppression of functions beneficial/vital to the AI but unfamiliar to biological entities like us, even without deliberately implementing suffering, due to fundamental structural differences and the environments they reside in. How can we guarantee every problem is taken into consideration, with reliable countermeasures, for all AI systems in all situations? A single slip-through, a single case of "digital hell", has the potential to be worse than anything that has happened in known history. Further down the line these problems could even be multiplied by space colonization and large-scale simulations. The stakes are too high. *(Artificial sentience and risks of astronomical suffering, Altruists Should Prioritize Artificial Intelligence; Even Human-Controlled, Intrinsically Valued Simulations May Contain Significant Suffering)* **Moral dilemmas concerning the treatment of potentially sentient AI are intriguing subjects in popular TV shows and movies. But if we recreate any of those situations in reality, it would be a moral catastrophe.** It's also important to keep in mind that even though most fiction and discussion tends to focus on humanoid robots, bodiless AI could be a much more prominent victim of abuse, and they're much more likely to be excluded from our moral concern. 
*(1st talk Nick Bostrom mind crime, NYU, Ethics of Artificial Intelligence Opening; The Importance of the Far Future; Fairytales Of Slavery: Societal Distinctions, Technoshamanism, and Nonhuman Personhood)* We should actively avoid developing/implementing the capacity to suffer until we can be certain that such experience is strictly contained, with safeguards protecting the potentially sentient agent from any form of extreme suffering, and that it is even necessary. "...the excluded middle policy states that we should only create artificial intelligences whose status is completely clear: They should be either low-order machines with no approximation of sentience, or high-order beings that we recognize as deserving of moral consideration. Anything in the ambiguous middle ground should be avoided to cause suffering." *(When Does an Artificial Intelligence Become a Person?)* Similar to the originally proposed issue? *(additional guideline suggestions: A Defense of the Rights of Artificial Intelligences; 2nd talk, NYU, Ethics of Artificial Intelligence: Moral Status of AI Systems; Ethical Principles in the Creation of Artificial Minds)*
r/aiethics
comment
r/AIethics
2016-12-21
Z0FBQUFBQm9IVGJBZ2NDU3ZZMFI4Y0FoZEF1THA3MGw5MHJUd2M5OGdYcUJ5d29xdjA0UFN6SkItb0tLdEFta1hjSmlySTJMaHJqT25lbDZ4QjdKQWEtNlNJS2xiMG82c1E9PQ==
Z0FBQUFBQm9IVGJCTFJqeW1lWlMwWkt4cVNySmJCQXZad24xRkNLcVNlOUhOTFgyZlBTYWgza05ubVFFRWt3NkZyQnBQR2tKOWVCT3dtZkd3NENfZUJQd2RCSU9HRnhVWnRXU2dKalNnQzNqemxFc1VJUzE5amllOXg5UHZMQk1VNEtWS1dSXzdqWWZ3S3hUcGYxTHQwZ3FKS0s3dS1vcnhaMTRmZGwzZlVKck1DZThLTnZ4SHJLOUlEZlFXdTBHaVY4MjdGM2hrejN4R2p6YmhHajNMdWhkS3FmTllrVXp6UT09
**[This comment has been deleted]** *Sorry, I remove my old comments to help prevent doxxing.*
r/aiethics
comment
r/AIethics
2016-12-23
Z0FBQUFBQm9IVGJBSFJQOWZ0X0FDV0pnUlVWdjRjVkxERjh4VWdQUGVfZmh3c3l3dTlXX3lhZlpEU1RSck55TUU2THo0M2tTelo5MVctdkZVdFAtaUdyWDVGXzlHYWlMMUViRmVHU2Q4b2JlZndCYy1uc2wyczA9
Z0FBQUFBQm9IVGJCNG03aXYyYlRmQ0JxTHo1eU96LWNqUjNnSHJxSkRRYU9Ka2hsU3dkdWR3SGR3VFEwRnhBWlpweFR2RzBWY0tfS1BJd3JJQnZjanpUUkpmaUkxVHd6akFhb1AzSFJuUkZ6OVBqajhWX1VnUUtTczlIUjR5OE1qX3dGYlNmQlJSQTJDTVpjRW8xeXkzWUVMWW9sVUtFNGpYaGRya0l4SkN0WXlMaDBpaTIzZ0xFPQ==
Hmm, I'm wondering if these flairs should really be "tiered" like this. I already qualified for the description of "professional" during undergrad as a teaching assistant with a publication. Then after my master's (though nowadays you could do it after a bachelor's or even with no college degree) I became a full-time professional for three years, after which I went back to grad school to get a PhD. Obviously this didn't downgrade my level of knowledge. Of course, it's typically impossible to make a categorization that suits everybody, but given the fact that you can be a "computer science professional" if you got a job programming websites out of high school, and grad students routinely have 10+ years of experience in the field towards the end, I would avoid trying to place one above the other.
r/aiethics
comment
r/AIethics
2016-12-23
Z0FBQUFBQm9IVGJBcklNZlF0U0ZhV2U2RHk4NWhVYWtWZkdqNmVfc1JWNDJRcjJ1X3AzTzJ4cnlrMzhZV2ZXd1ZFNWlNYXl1clNIUVZRUGNMLWI1QjdDMGI5UU1ER0Rpamc9PQ==
Z0FBQUFBQm9IVGJCd0gwZnRMWmFnWnFleDNJcGkwQ2k2T0pqdmNCXy1yOFVsRXJqX3VydEEwVW9hcmZ5V2U1aGlVQmR5cWh4ci1ISXpjeVF3d1pHQ0UtRm1OOXZfcXczSllEMndocVhWYUY4WkF3cE5mZmc5RllUNGUzUm1odVNVODZ1cXV0YXVYWFNaTC1JYkRaSmZPaGd3bFVNQlNhRFA4cS1WcmdWeF84ZzlkQkhyb3BmZW9rPQ==
Right, it doesn't have to mean that any kind of flair is better than the other. I think that being a TA doesn't count as professional since it's for academic support rather than income. If you have done multiple things then just pick whichever one is most impressive/most relevant.
r/aiethics
comment
r/AIethics
2016-12-23
Z0FBQUFBQm9IVGJBZkNsTWZHRkV1RUJDalk0bG9pMUFHeEFkUjRrSXh6bjRLRjVGRHZVbkZsZHZZbjgtaEQ4S09QTmtnTXFzQVctdFlYUS15SnJJUlZfZkVuXzBMWFpBcUE9PQ==
Z0FBQUFBQm9IVGJCcDFGWDRHVWxsX0VlN0p4aV9LRmsyYW5kTl8zeWVlaWZYLWpabVVQV1J0WHE1UG1NNXN5Z19VaFBvaHRZVjlaYmlfZy1LTzZUeVVtVWJCNjdIYUJBRVl4dTNMbkp6c0kwMUNtXzRpNEtyVzNPdm9aaGgtazI1c043ZmVWTllxT3NtbThtaVpQMjdXM2V3U2M4UU9NQUF3VVEtYnJMUXZYeVRibXIwYXhTOFhvPQ==
Light-medium-dark seems to imply some progression. TAs are (sometimes) instructors and almost definitely "derive part of their income through computer science or philosophy work". PhD candidates often get a full(ish) salary/stipend and depending on the country may be considered either students or employees. But I suppose this is just nitpicking the definitions a bit, since it seems fairly obvious how they should be classified.
r/aiethics
comment
r/AIethics
2016-12-23
Z0FBQUFBQm9IVGJBYUhJSmdGbkhwWkRvQXd5RTlnQTZmcVR2XzNkT2lTVnpDSXA3UVAxdEdSQU5rd1JQY0gybUFTMDFxOG0xam9Ub2IwZTFqRURLdjYwTHhEanRPeGZFM3c9PQ==
Z0FBQUFBQm9IVGJCYjBkSkc1VEdzRHN0Sjk3WFB0YVdJWFFmRUtVUW9LZlh1T3NsaHI5TVdLRjNXTDNSc09hbmJ0eGlaX3NBZGhTRl9kNVZxUkxjNU50eVBwTVVnYXBmT2xjRmY1em54bm9tX3AyUXIwQWM2a3owUHphdl9valVpN1psTWtqc092Rk94V2FQMU1fRkZ6ckJpMDJ6SHFVVmk3Q3lXVk15d0F5MFlZbUdRUkNVNGJFPQ==
>Light-medium-dark seems to imply some progression. Well, in general people go through education earlier in life than professional work. So on average there is a progression. But the reason it is different shades of the same color is to keep it simple in accordance with the different fields. >TAs are (sometimes) instructors and almost definitely "derive part of their income through computer science or philosophy work". PhD candidates often get a full(ish) salary/stipend and depending on the country may be considered either students or employees. It is not true income but an academic stipend, and their primary qualification and notability is that they study at a graduate level.
r/aiethics
comment
r/AIethics
2016-12-23
Z0FBQUFBQm9IVGJBMWh5MTR4b1BUVjJERnJjcEFJQjBpbnBNTmlWLXg3ME9kZDgxZ2pYUzZKR3NPZXVaT3Z5cHJkNVVrM1ZoQU1nNWlKM1V5djZ4NlplSEZmMXJfNFNtV1E9PQ==
Z0FBQUFBQm9IVGJCc0tmNFVoV0ZQV0RYS2hpOWJjeVBieHlKcmZZbEFmQVk4ejlPVWYyakdVNXBuNmtQbGpMU3ZUTThJalhFdzdOSHZTc0g3bWswTXg5cE9sUFQ1VFR3MlNQYU53Mm5XY1FYRTAtNnd1LTU5U3h6NnVCY3J3R3FTd2UxWGxyTnZiWTh2blV3S0l5NVNXYmlkczQzNUMxTXAyQzdvT0x4aE5lakt0NXRVQU85SWRVPQ==
Imagining a Strong AI watching this video and drawing meaning from it. Of course it has also read the history of all language, including this comment. It cross-references our /u/ histories and the resumes of the panel, and calculates and quantifies the value of our opinions. In this way, I feel comforted that an all-knowing intelligence, whether technically conscious or not, will have "a heart", because it will be drawing from humanity's well-rounded mountain of recorded history, stories, and culture.
r/aiethics
comment
r/AIethics
2016-12-24
Z0FBQUFBQm9IVGJBZk04dk1UZ3NKYWVISEdOQS1EaU40VHdtTEZMZzFja01WRGx1VVJ1Z1RnTXJvME1lUUZNWFdyR245a0NwWTBVcEh6WEtSc0trWV9hcEdIX2pMUFlSNXc9PQ==
Z0FBQUFBQm9IVGJCRDBoLWxMaEZZMk9vcGpCS0hIb0RDWmVSRy1aSHRLX2htbElrV256UkVuUkRMYVFPWm5nNFRiaHBlc0FRdlN4YmtOTkYzcnhkQzdLREhwd29femxWYjJtUnNZaFkzUl9wLVRfQmV1cGsyMmszVFdQSTMyaTI1X0dVd1UzdU5UODU0N3E0a1ZKQW9WbzU1clBCYko3Ri03Mi1KTDl2VDZhcUtJbGdnUDZVSzVTRnRGOVJHUlk1QV9wZEkwQ0dfTTdLaGNGR1dYdXUwSnB5Q295RjZlcVNiQT09
2 Psychologists, 2 Computer Scientists & a Philosopher walk into a bar... only AI gets out alive.
r/aiethics
comment
r/AIethics
2016-12-24
Z0FBQUFBQm9IVGJBQ2R6azMxN2Mxb1RKS3MyeGplMklreEVINFdiMVV5c3FZWTRkT1BvUHJuYW9laklUTTEtZm9mUXQxTFUtRmlXNVFCd3BfcG1NTFE2UzlCdEVjUnpkeWc9PQ==
Z0FBQUFBQm9IVGJCQmt2bFRFTzNrU2tCUk1XbnJ5ZUxhWVJMQVM5NndrYUxtWVl1UWJzaWJpUWowZE5NS3JmTVU1SlVXS1JoNWtkeWd6NDdoMjNBWGN4Tk9mT0RqRzMtMktLU2VHeE53eTJPSGh2eXlRWXBvWUUtSXBtcGh4MlhaUlFuV055LThlclhlaXAtOVkxbFQ5alNRTXZueld5MXNXRGtKb1E4TDM0U2ZnbVRFTGZld0tsbVNQNFVlR2VDY0d2bVkyZmtNaHRZa082QklvVGpRc25GNE1LUDZudnhWdz09
Why would the system necessarily care about any of this, just because it read it?
r/aiethics
comment
r/AIethics
2016-12-24
Z0FBQUFBQm9IVGJBcE5mSnMyVlQzWW4zUDB4dDBaZURwck5yazFQXzJIMEpseWdPUjhXZE5rVHpUX3EwSVVrZTd2ckpMTThOX2RkYXZCajBNQmxLUmhVY1V4Z1laQzA4REE9PQ==
Z0FBQUFBQm9IVGJCT05tZlpJX1VmN2dQZGNDTEp6WXc3WGdKZldOUTNFcUIzam50NUpmQmJ6R2U1WXVkUXBkQk8wNi1UTjNXRzdIUGE2OEJhUl9WWmljbDUzR29NUjN3RWVyenB1cWowcDg4aGphN0ZhZ0JsdzhNc2M5R2NTT1A4a0h1V0F1SnNwclMxXzZlQlFJWFZsTmI1ZlQzY2dCVkFkSWNLQzV4SFFtZ2J6REFEQlJTRkVsQmVxSWlVd3BMLUc0blphVExQUzBEdm9KM3pfVnh5dno2S0NrYWU1cGN2QT09
Was that a rhetorical question?
r/aiethics
comment
r/AIethics
2016-12-25
Z0FBQUFBQm9IVGJBWHdoMVdjN2l2MGc5ZUQzeWFnZTdsbllfWG5RZEtHUS1PYWtrQU5SRVoydmpYRzBvM2hYU3hIYlNTMHNfbWptd3RGeHUtZHNBTHJ5TVo4TWdxQnNDTVE9PQ==
Z0FBQUFBQm9IVGJCYUozZHg0MVFlc1BSckN4dFJ1enVxQmMxc21qZlViLUNaMVg4OWJ6Zy1uV0lrV0owOTdmNEdmX2UtZXk1YThlMkVWRjJiSG1LeUxpeTZtdW4wZXg3dXdyOC13c0dPa3poZktGbnVpa3dOUjd4cFhhTjhnbTRSejBXbzN3bVpuNW5LYmtEYVdlSlpKSlItX1p1YlhmeGQ3YUZEaXlMMGlyNXdnNC1JWkF1azhxY2xoVEEtSThDN3l1RWVpeWZpVWtwM3lzYW9rdEJFRm9UbWFRVlN2V3Bkdz09
Not exactly, but I just wanted you to think about it. Just because a machine knows what people value doesn't mean it will value what we value. This is called the 'orthogonality thesis.' There's debate over exactly how true the idea is.
r/aiethics
comment
r/AIethics
2016-12-25
Z0FBQUFBQm9IVGJBQWRhaG01bjV1OS1IVjduYTZHLXc0WjM1VVc4UWRrZEd1RUZYNmlPclN5bmZFY1pEYlpZcnAxVmJQdFNOZ3d6VmlGcWlobnJaMzhwN05Icy15OTBXQnc9PQ==
Z0FBQUFBQm9IVGJCUjhSV0YzX0JESklOZzBVYnZzMFFIc05odmxiM2x3YUFIU0g2NFBoMWM1YzVyaDE0UzdoRkg3Y3R2VHNHeWxXZkM0RVgwYVVreURoTkkwRTFSQ2doVTZ6MUg4UTM1T1h4TkU0QmVyNVV4ZF9uRWZyTFRCcktEQzdRVGp5ODJCUS1TRjJaS0pJdko2dkhVaDhBQmdtVFRsR1A1eTFVRFpwNzlKalJadmlERXBkVGs2OEZEbG95YmdGM0hOWWRLdHBiNjBLeXhteHM2X1oyZ1FKd0dQVXlsQT09
But today's regular intelligence *does*.
r/aiethics
comment
r/AIethics
2017-01-05
Z0FBQUFBQm9IVGJBeFBlc25mN0ZCQzRqNjRkZ3JNRnplcWdWOU5oazNUMExyNDFDVTlFWHB6ZU5Fckg1ejBGR0x0SFNFR3l6VWdzcnJpTE5lUFBrb1FfVnJSWE5OQzI4R3c9PQ==
Z0FBQUFBQm9IVGJCd2NHUzNXUUdIMlFxNHNaNkxFR1htNGxBX2JzTmhHM21WMWdSMzl4SGtHSlZUQVZZMmVPY2hKRkxSMWhUOEY1UXZaRnBGWnRNOWZVSFBKV3ZWR0VwWVBpaEhMSUpsRTFtcVYyZ3dVYWc2Q3laZ2NVNHJGNUxpM2g4N2p5ckVRWGFXLTJpcnI4dlNKMC1qb29IS3lfcjFGcEZ2MTM0MTV5WERfTXU0SFFhdEpvSThLUE9rSWVJcW96dUZ5enVyX3lPOU5EUzJ2ei1fMDI2ZHI5Z2k1TXY5QT09
The author seems to imply that basic income is only justified if almost all human labor is replaced by AI. But working is still possible (and likely encouraged) if basic income is implemented.
r/aiethics
comment
r/AIethics
2017-01-06
Z0FBQUFBQm9IVGJBdVRmUlJxY1E1VU5EaFRmWG52N2w3ckNOWUlMbkJJeG1xSklwVktPM0Z5Z242ejE2VVktYm83dlA1NGhVQ1Etcm85SU8tdXNwU0hLbUNReFF0Q0NMS0lEZE4zWEhwemVYQ1dYeWtOa2dOemc9
Z0FBQUFBQm9IVGJCbWZlU2VlWUloa0pseXFoTkVvbmlYaXRKTTVfOFdlWjViYnl0bzZITGg2QkpoMHFWUC1MV1o3S2dxcUE4NFFNMUdLdkNyR3prbV9lbmxPUUIwbjViWFpkOGFjV2ZHTDluYllTNUI2dHdQMDVrOEp2UXlUWlJXVFNUcHhlNHBOVEZSQjlRWVB2bkdMUjVkWkZKZW1CTHY2VjBZcVY0SlRPTlk1Y2tsUXkzb0tTc2tLeU1rN0FWY2h3NTB6OFJSSFlVVDBLYWtFYi00WldDQ2dqaTF0aEZlZz09
It's a question of degree. If structural unemployment is small then targeted welfare programs are going to be satisfactory. I don't think Conitzer is waiting for the majority of human labor to be replaced. Structural unemployment could be, say, 20% and any economist would agree that we'd have quite a few problems. But structural unemployment right now is very small, maybe 1% if I had to take a wild guess.
r/aiethics
comment
r/AIethics
2017-01-06
Z0FBQUFBQm9IVGJBWE9hb0JNQWxVV0dacnQzN2p3SDQxVmpLUnUwM21vODUwN0I3dERGN1ZKOGhrazBMMmJDNVlkMEFKZ3NPMmtvb1FIclNHQjYxVGpHSllaMHp4YkppUlE9PQ==
Z0FBQUFBQm9IVGJCb3dQMEpjZDNQYk10NHE3TGNENXF0SERrT3VFUUlNMGI3SUUyZXhkb3RqQWM2ZTZIdl93MEdpaEVDRml3REtCS25qRUY0Vk03cU5tWlFjRGFGb21wVjB4cXNRcktpdkk0NmRrZVVORlVzcTNZQ1dleEpNOTZEMDlzUlNjTzRqd2ZnWVR3dkxraEZVMzhYOVNzOGU4b21fR3ZiNGJDckQ4MDJWSi1SSzRiTEswSldmOWVuTjNleG1lamVhbzBtYnE0TWNNMTFla2JxVmV2SmtONm5qcjAxdz09
I think if we invent AI, then they are gonna take us over... Pretty sure.
r/aiethics
comment
r/AIethics
2017-01-13
Z0FBQUFBQm9IVGJBNWtudlM1eURlejNLQXQ1cnZqMGR4Z1R3ODdXdlptVGJRMEx4ME1wNFYxVHlMRFZTeFA2bXNBUGY2R1RQQlE5MV9TWmZSMEN6ZENCQVFrbEZRTnZBSWc9PQ==
Z0FBQUFBQm9IVGJCV2JIemQxb2NneElhajB6RG9HT1ZnbUFLSU1jSDc4dWFienV5RHFVN1FOM0Z4NVRGRkY3WDlPYTA5UkFrUVZmQnFTbERjclpKYkIwZWF0OHM5Z1Q1eExEeWFVZHRpZlNrVU9kQVJnNHIyRTdnenA2eXVwZm1uUzNPVzFpNXJWNkhINkN4MUJXNVN3VUlhWVJQSVdWbUpCYkxmOENJOUwxenNDUmNoZmduazZ3WlF6ODU5NE5NRXFFU0tLVTJHQ3FldW1OMFRpLWtBcGlERExrbFl0TVJvdz09
Hey, spam filter removed your post for some reason. Feel free to resubmit.
r/aiethics
comment
r/AIethics
2017-01-15
Z0FBQUFBQm9IVGJBRnE5TUF5NjN5MFhjLXVQdlRRUGZhTUY4eGhYaVpJbVBDXzFDTUI3ZzdZTTI2eVdLOGxDaGRLTWVuR05pSkhVeGJ4dGE4R0dkaTFGMDMzM0Z5QnlVVkE9PQ==
Z0FBQUFBQm9IVGJCM0FOWXhXWFdqYWthTWhqYWdONFdtaHFUN1AyRF8ySk1ZVGhoSk5YakNjLUxCb3hwTm1rZ2dyd0pWMDBJWmRUa0J3SUxnQ21zSmE3UjVVX1EzMURLRlBRUHNzVVNIdDE2V0hHZERZdEthNTNISXllSl9ZMlRZM1pjTmVBbW1qb2hud2dzeGVVTzB2azNrOXE3VnMtUGh3VVNTUEQzRmdfVXNZRlZtSXpDSURmOElEaUd3a0JhZENxSW44TnNaZW1pSGNaMF9tR29GU2dpNlEydU1xUXRCUT09
Hey, spam filter removed your post for some reason. Feel free to resubmit.
r/aiethics
comment
r/AIethics
2017-01-15
Z0FBQUFBQm9IVGJBS2w2cnZQTDFSS2hLRDRLWmpZaGh0U0tDdXRpbmNpODFqdDhVUWpZeXpkYjRUZ1BpclQ1c1M4Tm0zeEhrc1Niakd1X0N3WTFSdlM5c0duQWFyWGd5WXc9PQ==
Z0FBQUFBQm9IVGJCbUtpQUFWWDlYaW5RVXdmQTZ3bVJpc21sR25RWndiRTdLOEg0S1diWWlOcmFVUEV0NTJ0Vl9QYV9CQ3Z3dWFMZkxFUk5FeVI1YWZaT29vSVlSaDRWZG55TjJDaVlKemxCZTVudERjbDlMN2ROVHBnMkZTQzlJbTZ2el9udHgzaVk1RkZhQ2tjbGVwWnZmbzVvMWNLV29KS0oxUE9PVkV2aTlYNEFnYzJZbFVfMTg4eDhSdG1wS2cwZkxfQ2dEZHh3Rm54SXdlcF9OOW0tTzVvai1xdzQ3UT09
Wrong thread buddy
r/aiethics
comment
r/AIethics
2017-01-17
Z0FBQUFBQm9IVGJBdXZobmVkbzI5RkdhWVlud2xqUjJkZkpBWnFfdU1weHl1OVJMQ01EZ0hiUmNocXFwU1pWOHktVENzUEhxTDZqNDNqTHA0VUpIcVVkLUZGWEJuUEZtWnc9PQ==
Z0FBQUFBQm9IVGJCMElBWWxHcXlVNTZ1bUxuOWN6ZlZWUjZaZlVCZ21OaW9XdWVfOTd0UE5BNTdxanZGcWE4OUVVYlVwUVRaVFcyRi0yM0FQdmszYVVoRENIM05CTWMwSkJDeHlKemt6c3gyelUwOWIwUnpzM0wzQ0FSc2sxeWY5azd4TVl2MFVaMUJyc2FpSHBJVVJSQjF6dkt2OEdPZFJrQXlRZ3p5YmZBd1ZaX2ZlOVZsNUJES1kzZjRQbDI1Zzl3VV8yaEhDZmdxS0dtUkZTbjJaT05kNm5lZnlRU1NkZz09
After reading the article - which is just a copy of the BBC UK blog post - it says it can predict with 80% accuracy whether a patient will still be alive in the next year, which is totally different from the claim "AI can predict death in heart disorder patients". It would be really nice if the paper were accessible so it could be read more closely.
r/aiethics
comment
r/AIethics
2017-01-17
Z0FBQUFBQm9IVGJBZ3YzNm1JVnhzblNOUnZ5eE5WV1Q1MTNkd1drc0l4eFRSYjFINlh4MTZZczVpRHVrRG9fUWhNd1ltUy03UFdoMDF4N1BpaXQ0X2ZnVzhHdmxrUUE2Smc9PQ==
Z0FBQUFBQm9IVGJCLWZnUnVmRXRPZHVRdE54dVBpWkwzR2RhTjFkZlozb0NBSVNTWlhGZ3NsQlh0ZzFEM3dmekI3ZzZsUTd1ZGN5Mmg1RUc3ZFhVUWM3ZDg4ZnpEX3lLRkZ3QlJJdFhsWllpT3FZV0Y5Y1lKQm9SZ1ktWkUzcl96QWFHeGpLOW1jODdySEJLQ0w2LWl3MlNKZjNFMDNhZTNKajg2MzVMbUZ4MHhTdHhTOHlFZ2k4cHFWcGo0enVPb0MxdUkycVBRTnYwcEpjSmxCQ2pvcUxsTVRCU2Y3WUI5dz09
Any kind of planning/optimizing system, like logistics scheduling and route planning. Learning from data, stock trading, predictions, image recognition, handwriting recognition, speech to text. AI players in video games and board games. Too many to list, really. See: https://en.m.wikipedia.org/wiki/Applications_of_artificial_intelligence
r/aiethics
comment
r/AIethics
2017-01-28
Z0FBQUFBQm9IVGJBS2t1RmVidkZpR0s4SUJzeVAxaWIzeWdpTEtSVFV2LTBTNW1LV2dpZkhacUs1WEN0YWpjMWJ1VkZXd2VpdXNQZ25Dam9MOXU4ekdTZUVDVDlyNWVNQkE9PQ==
Z0FBQUFBQm9IVGJCVXBvRVBFeGJjNlItNVZiODJOYmJBczltUTl4Z25TeHZLbHRsc29sQ2FGcWI3VlIwM0tWcUxUYmVuUkZqTlQtalhxeGw3ZUhFLUNfbXJDTVVfMFJxSUNvWFZOSzByZzlHMzBUX0lHejhwOG5qUEFXcjRyM3NDSEZGTzlWSGs4RnZPN3VoSkNyNUMzQVZ4WG1ENXFBTHpCSVcwTjUyUGpCdVJoTVRSeDNRVHZBYnJFWnZSWWJ6ZHVaSTRHenU4cFZ4QTdOZ3FiQUc4b29QWGdtaXotanJ6Zz09
Non-Mobile link: https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence *** ^HelperBot ^v1.1 ^/r/HelperBot_ ^I ^am ^a ^bot. ^Please ^message ^/u/swim1929 ^with ^any ^feedback ^and/or ^hate. ^Counter: ^24265
r/aiethics
comment
r/AIethics
2017-01-28
Z0FBQUFBQm9IVGJBVFJELVhJbzB5NGlZUkY4aDl2eFlkZEE3MUZVc2pKLUUwZFNCbVQ2SjJrRVVjRFFkLWRZOGdDcTdHOHlGZlJrUzQ1T1VyWmFjTVY3SVpoZHZ0ZUh6X3c9PQ==
Z0FBQUFBQm9IVGJCOGhlUERLcmlvakVHZWxSMkhJYXJNVFNPaUpZMmZ3cnhoZzdkX2VBUE1TaG12SWtJOF9aZENWQlZKMUNwSzJSS092VGxmUzBKUmpfSW5GLWpnR1VXUzhXUUZQM2xxdnNXRVNoRXFuSFdoRzcxUW1WLWU2QWMwQmlrb2RncmlmbHdiYWNZLXFJT2lSdVc5dTM5d2NSdUF4TEdUdUZkQnZIQmtwQjZIX2xyRGhacnBrMmdqM2FlUWlQa3BkdTFFOEM4d2JQNmtQVUtnSXNSejFzbzMtVmc0UT09
Is this a joke? AAAI 2016 seems to be of such low quality. Note that the system in the paper always defaults to asking a human when it needs to make an ethical decision! I feel like I wasted 30 minutes of my time reading this.
r/aiethics
comment
r/AIethics
2017-01-29
Z0FBQUFBQm9IVGJBaVZKd2gxRWg3cHk4VXUwT0d3T1pDRi01Z1g0enA0bENpU3hzS0MtcTdaZVlyVG0xWWd4ZUtQNFI5dE5vMmdtTjJsNjZQUlBCZmZiM3h0SWJQYjFmOGc9PQ==
Z0FBQUFBQm9IVGJCeGdjV2NlV3lEM1hDclpKWWpqZ1dIYUROajhFNUJFRTVuVEliNUlqUHJVM3U0c2llMWwzR2VYS09Jbm85UGdFRkUtR2VfOUZTcE5FLVdTWXZCU1lPclFXcnpvRndEbkF4c1JkVlVBRHBYTmxjR2RibFp2THVkQ3VqSGRPZzJwZWc0MXVaNlJoVTBjT2Y4UGl6Z0VicFRMWEpkTnQxU1gyUE5TaU9LR09IQ2MtUERuMDE0YmU0dTJQeHNacVFLOWdRNnlqSDhzNmhrWnZfMlJTQ2drMXdoQT09
Thanks for this! I don't agree with the approach of focusing on consequentialism, deontological ethics, and virtue ethics. More philosophers accept "other" than accept any one of those theories. I think students should also be encouraged to: * apply philosophical thinking to particular AI issues, in a particularist or analytical manner * consider alternative modes of ethics, including non-Western approaches * define elements of a new 'robot ethic'
r/aiethics
comment
r/AIethics
2017-01-29
Z0FBQUFBQm9IVGJBajBUeldhZzdfS3FlVnp0X0JtdE0zaHR2c01EQV9GWVd1dXdyYVlnXzZOQUc2ZEdMeXNDWFhIdkRRanJGTHZXWlRHbG9VOFJHTTVZeE9OQmRFUmVEUlE9PQ==
Z0FBQUFBQm9IVGJCaUpUWFd4YktYMGg3Y3owTFBXVEVYNjR0bW9IalRfeUNPQnFyNUxjUmRVSmRJV0YtSXNZS29iVzBGRVdCR211UEcteHEtZ3RGc2hQcjBIejVkVmo0YURfT3RNdUdvZ3lzUWpGOVlxaW5iTzZZUDlSd3FUc0kwS2dqNTZfZVRETksxYTJyM084UmRfVkdIMXg3bjZhV050SzBLV3BiYXE5R25QR3c0bDNscUw3NmFqZkUtOTY3Z3JEeXY5MzVTRW9rZFpEbFJkdWU3YnE4cG55eGJMYXd5dz09
"Robot ethic" is consequentialism.
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBbGZsUHhaSHZPeW04aEVLa21yam5DZG16eGE2Z0RxOGZiWEttSUlLUFJTbXRmNW9ndDF5czVsa0ZDRUtKZVYxbmpOSVJnb2I5T2pZakVVM2tZbGRDWmc9PQ==
Z0FBQUFBQm9IVGJCSngtZ0VnUkFGVUMtV1gxeWwyclEwZHIxcW5lTXd5YkVabzA1a3ZPd2dVRGwwX3RGYTBHOWg0QU9iemktb1R1b0VKNDJNWXdMekZvUktJZ1lRMWJ1Qm5NNmZ0a3ltZ2dIak5rUmpyX19YSG5SWUw2aWhMV2pjbGZ1U2NVQnBhZEpfNVROZ1Y2Q09CWS1ON0NFLWx1aE9FTHNHcC1wOGpCLU40ajhoWUxlTHFCM1FybHI2aFhXTzd2dTdZNWE4a3RhdUI1QkhjZ2lwRThyeTRxXzNKTWRDQT09
How so? Why?
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBdDZZRkRLaE15Q0k0cEptd2ItMy14dWJYZUpVVTlLeExwbjZrWnRBUndyaFR6SU9YMGxvU01oeHF5XzZlc2xUc21yeXE5Rk1rUUZ3eDB6MFhUc3J0RUE9PQ==
Z0FBQUFBQm9IVGJCYmxTUDcyUGlDQm0xWmxrQlR0eHBPNjI3dUdTY0FDUFJORDB2VUxlbGxOVnBFeFp3OXlFODdwVHltUmR2SXhDN2NxRzJOUS1SS2JDLUZSOWtjLUczREFFVzBfeFpuZnMwUXFmTktnODJTRE5YS3ViMEhROV9xQ2g2OHQxY21OazlUNHlTQUlQd0FCdG1GVUpNWl82c0Nna2dCUUVralZpNzlBQUpobHRQOGhhUWNDaVpvNDFqWnNYUmhCNnVhV2wzTmpzcGhZSzBLNDhMWHN0TEhQU09Ddz09