text: stringlengths 1–39.9k
label: stringlengths 4–23
dataType: stringclasses (2 values)
communityName: stringlengths 4–23
datetime: stringdate 2014-06-06 00:00:00 to 2025-05-21 00:00:00
username_encoded: stringlengths 136–160
url_encoded: stringlengths 220–528
Unless specifically designed to, AI will not have a notion of "virtue" or deontology. It *will* have a notion of evaluating consequences, because existing ML already does; it evaluates whether it is accomplishing consequences its goal function approves of. Consequentialism is the default outcome.
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBR3NSc0VhNU1DaHYxbUM2Nm1HbFc3aTZ6Z1pCVkRHYTF2WU9zVEFvRm9CVVF4SFNOdHRjNVh1MXA3cmNVbHBHZGxJTVV3V01vMksyWW1CX004Sm5Pa1E9PQ==
Z0FBQUFBQm9IVGJCVldtenhYTjdpYWxJOVpxMkJnUGVUdXhmSVJwa2toWkstb25zWTdubldDT0lETkhRTFdMNXB0bmN5WmFRdjVJMFptRHc5TXJOQXk4SEtDMHFBX3BGNjE0OTVkM1Q4eEhyTzNrcTVwT0lfS1MtcG5JTHFoME42dXFPQWJ0eVRpQThfdUh6T3g0Z0xmR204cUhOcDY1NW5DM2VKd185bjlISVBGSDBvaVJ1NkZacGxOemZmOE9sYWI4Tzh0SmFsdE9zOWhPOXQycHMxN0E3S0lpcmhMRzZCQT09
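A minimal sketch of what the comment above means by a goal function that "evaluates consequences": a plain goal-maximizing agent scores candidate actions only by the predicted value of the resulting state. All names here (`world_model`, `value_of`) are hypothetical placeholders, not any real API.

```python
# Hypothetical sketch of a goal function that only "sees" consequences.
# world_model and value_of are illustrative placeholders, not a real API.

def choose_action(state, actions, world_model, value_of):
    """Pick the action whose predicted outcome scores highest.

    Nothing here inspects the action itself (intent, rule-following);
    only the predicted consequence is scored, which is the sense in
    which plain goal-function maximization comes out consequentialist.
    """
    def predicted_value(action):
        next_state = world_model(state, action)  # predict the outcome
        return value_of(next_state)              # score only the outcome
    return max(actions, key=predicted_value)
```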
"Robot ethic" doesn't mean "whatever robots do". It means a moral system specifically formulated to to apply to robotic situations and agents. So, it could be whatever we want. "Consequentialism" doesn't mean "whatever the goal function says". Consequentialism is an actual moral theory, not the practice of deferring to whatever an agent wants. So it would only be relevant if the agent were trained to maximize the moral consequences of its actions, which is not what ML programs universally do. You could just as easily have a goal function in your machine learning program which prevents machines from violating deontological constraints.
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBMWg2THFocXFwTFV0ZjhGOEpyNm10WTEzX1Y2UVFPdVRhT0tfYlNpN3p1UXQ5bm00VU1SQWZJYVJBczZ4XzVaQlBndEl1Q1dVcDhwUEhhZGlkd2hadnc9PQ==
Z0FBQUFBQm9IVGJCczB5MUJtN0pkbHVHcDFrODA1enRFd0RkQUp6cktXQWlUQ0dLVElmcXpEeDZ3MkhGdG5Kc2dVTDVZTURwLVNzSWs4ckxxTmkyUmdBNFJGdmtDcDVlZWMxSXZrS29iVjVoQ0FRcVg1VDFiZHE5VG0zRnkzRldWUk1UZWZrTWxtSk8ybnFCd0VkWWI1emQ0dmVobkRFSlY1UVNDaUxJeXZGMWlEOGhZYVM1aHV5SjFhTld4X0pHR1E1QU9aVE1tcmN6NmZtM3BNOHhKejNHN0EzOVlpR1FLZz09
Virtue ethics and deontology are both artifacts of human mind architecture anyway; shortcuts that are easier to compute than full utilitarianism and which account for human biases. Consequentialism is not. And you're wrong, descriptively. "Whatever the goal function says" is a form of consequentialism. Goal functions care about consequences, because that's what exists.
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBeV9VbEwxOVVSYXF5UERhb05IWWRQeVNJc3I1X1BlcTNjOGQxZHNTbGRnSW9DWkJ5XzZQcUJmTHRTMVpMY0R4M0FkRkpmQ3NNbFBLTFozVzBLU3dSRUE9PQ==
Z0FBQUFBQm9IVGJCVWJBT0RfWDd6TmRVcU9WeFRnVTktcm1EdTBiZ19WYzdwM1djZjJxOXFGMDFDOEVlU1IyWjFvR0xXWFBkamhBZWdWandEalJFZGN3ZllOTHo2S2FLMXRHNEFFXzZISEtzRldVc2gzUHo5WTNBT3phZ2huQ1o2Qy0tZzBfMGloNTd6WjV6eDQ5UnphR3RXTEpMMTU2WUF3YXBtZVh6cl9OMmtMaFUteC1PQkpYSXhEQ1ZSelgyQnhTZkNtU3VzM2thNGlGajBUZUZ4TlBKSUFqZVpTNGdzdz09
>Virtue ethics and deontology are both artifacts of human mind architecture anyway; All moral theories are artifacts of human mind architecture, just like all theories and propositions that humans ever make. >shortcuts that are easier to computer than full utilitarianism Virtue ethics and deontology are defined as standards of morality that stand for themselves - they don't exist merely to be shortcuts for utilitarianism. > And you're wrong, descriptively. "Whatever the goal function says" is a form of consequentialism. I'm absolutely correct. I'm not saying "have the moral theory which says to do whatever robot goal functions say (???)," I'm saying "goal functions can be specified for all sorts of theories besides consequentialism." For instance, I could have a goal function saying "never perform an action which violates the categorical imperative."
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBUklnLXJzaXRpRG1Kd25GSktPS2RwT3RkMmNSd2NNdEhyeTlOSkRzNmJOODRhcjBibkdzQWhUUkM0Mkp5T2VsajVQUUt0S3QwdGdEVENZeDBoS0d5TkE9PQ==
Z0FBQUFBQm9IVGJCYW51RS1DTWVxd25qdVVtSFlYVWVYZG9wdE50S2I1MVpha055ZmlSajlZbkx2bG9iQ0paaWtJYUNzQlBFNDZRUE5nVEUwRDhENV9kNnRjVklOTERTczVOMndtTmNuanZ4LTU1UTF6Q1R2NkNzcEhhWGFHd1hfa09ic3RzMGhtdEJhcld2a3J5cXNnQURNN0labFRkVkxLZk5PblNaTG5pQllkVkN6LV9mMkFUZjNpaUxYeDBJTFhDWmhpRnBFb3pjV3FrdGNRdmQtcXp3SjloT05hVXhOUT09
>have a goal function saying "never perform an action which violates the categorical imperative." You could, but it would have to be formulated in the language of consequentialism. "Value any world-state in which you have violated the categorical imperative at MINVALUE." Goal functions are computations, and unless you lay out the mind using the same design plan as a human mind, from which such intuitions appear, it will not be possible to include facts about "whether this mindstate is obeying the categorical imperative during this reasoning process" as an input to that computation. (This is related to the Loebian obstacle.) >All moral theories are artifacts of human mind architecture, just like all theories and propositions that humans ever make. Mathematics is not. Computation is not. Physics is not. The human utility function is an artifact of humanity, but utility functions as a class are not. Metaethics contains universals, though it is not entirely universals.
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBMUhiOWQzcVJENG9BQ0dpNW1iZW1jYVFubGZOV0RBenJxT3NYLTY2V2llbldKdzVNYjJkcWZ0YXlRT2x1X3hmRTd0VW5VTGJBdmRhQTUxYktRTTY3R1E9PQ==
Z0FBQUFBQm9IVGJCY3U1ZU5XSE5xYXQ1cnRFWHZhM043SFZ1c0tQeDVsbEdJZjlRRllaa05tNDktRTdHNmFmVzhHckU1STdsVVZYMUNTZnlhMThXTmcwZXpHQVIyZlQ0MzVHVjlLVVFDRS1fdWhQSVB0TnZGS29OTlVWdk5kTEJtaTFxSjliOHFNVlZPcVBDUURQb1RvT281ekRRc3pzbEs4ZlpCeExpLUVRekhiWkFmSWdlOHdMTzZrZ2JJdWVvOEtYbHd6YmFQc2swcHpHSGNnSFBoZ2szbHE3OURMRVpUZz09
>You could, but it would have to be formulated in the language of consequentialism. "Value any world-state in which you have violated the categorical imperative at MINVALUE." Goal functions are computations, and unless you lay out the mind using the same design plan as a human mind, from which such intuitions appear, First of all, when philosophers talk about "consequentialism" they don't worry about the particular language in which you encode things. The human brain's approach to moral decision making is poorly understood and often lacks clear differentiation between moral theories, but this doesn't pose a problem for the philosopher aiming to delineate them. If your definition of consequentialism is different from the definition used by philosophers and me and the authors of the article, then feel free to use it as long as you make it explicit so that we know what you are talking about. So you're saying "any new robot ethic is consequentialism", when in reality you are just deciding to use 'consequentialism' as a term that refers to any moral guidance given to robots, whether it is consequentialist, deontological, virtue, or a new one as I mentioned. If that's what you really mean, then I'll just rephrase my prior statement as "I would encourage students to develop a new 'robot ethic' which is a type of consequentialism that is different from the deontological consequentialism, virtue consequentialism, and consequentialist consequentialism that have already been described." Secondly, there is absolutely no requirement that machines be implemented with rankings over world-states. Instead, you can give them rankings over actions, for instance. >it will not be possible to include facts about "whether this mindstate is obeying the categorical imperative during this reasoning process" as an input to that computation. (This is related to the Loebian obstacle.) The categorical imperative is about whether actions obey it. It has already been implemented in Selmer Bringsjord and Paul Bello's research and functions as a check to view if an action is permissible, without evaluating the broader state of the world. >Mathematics is not. Computation is not. Physics is not. If by "artifacts of human mind architecture" you simply mean "not referring to something in the real world", then virtue ethicists and deontologists will reject your claim that their respective theories are artifacts of human mind architecture, and they will also reject your claim that consequentialism is not an artifact of human mind architecture.
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBRTN0WXJtRHNSZzZOQjM5c0pXRUNCMHZvam9JZTVVcnI3X3J0dW1iMzR6anBtOW1POC1tWF81cHN6M1IwMnhlLS1kSWljemZydk1ENWVhNGVmeHE4LVE9PQ==
Z0FBQUFBQm9IVGJCNjc0c3FZNVhDVk8wSndjZGtVTDVNSTR5YWlJWDBsd0EwTmphMXZZeGo0bWU3a1I4UVpZN2NZRlIyQnp5UEVOUWNpemlwX3ZyNXl4YzRueGZMYzZuaGtTd1JzY3UzZEhmUmlmcEdsY3p3WndGNUZCY0JnMTJGREZGWHdLME5uNTZKQThKeVhMalNrTW1ISjJJR2xJNEZGY0V4SWs3WlFUTFRvdW1iaTZsSXBxUDhVdFNTV2N0UzFGSl9ZQ0FteGhFUzZUV1ZtZjJmU0p6cjdXRmEzVmczQT09
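A sketch of the alternative the comment above describes: a ranking over actions with a deontological permissibility check applied before any scoring. `violates_categorical_imperative` is a hypothetical stand-in for whatever formalized test one adopts (in the spirit of the Bringsjord and Bello work cited above).

```python
# Hypothetical sketch: a ranking over actions with a deontological
# filter applied first. violates_categorical_imperative is a stand-in
# for a formalized permissibility test; note it inspects the action,
# not a resulting world-state.

def choose_action(state, actions, violates_categorical_imperative, score):
    permissible = [a for a in actions
                   if not violates_categorical_imperative(state, a)]
    if not permissible:
        return None  # refuse to act rather than violate the constraint
    return max(permissible, key=lambda a: score(state, a))
```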
Philosophers have freedom of speech and are entitled to reject and define things however they like, but if they persist in doing so incoherently the rest of us will continue ignoring them.
r/aiethics
comment
r/AIethics
2017-01-30
Z0FBQUFBQm9IVGJBek10WkYtd3pPSzBscC05bnhVU2xka2UwVERiYjBhT1hUQVZOOXhXQWcwaUljZkU1SGZOX2hSOVJEVXFzMU1JNEY3NFd0LW92Z2RZQWNYVkJEWnZIX2c9PQ==
Z0FBQUFBQm9IVGJCR0RFNm9tWW5jY3pSdkFURmZHUko1SEVkTDNwZ1M0TDI3ZUs1WHFrTXdWbG1BMkVhaGNjOXBnRWtmaTh5bzlNbWlaWmlRZ09VbHptaGFiRlAzenN5WW5zZXhXMm9BLUFPYi1RSUt6NGlwaWFyanRaTG4tb2FHNzVKcERxTTk4OUVKbmFlOXBfUXVxa3lVcmNYaG82WjRVY3FxdHQ0NVNrdmVRamlFWkREQ2xUMHo5WkhWZ29jWXV0b2ZuZ21TRi1tdTdJcDRnWFIwaEJjLWN5NVNFUS0yQT09
The question is: will the robot monkey feel the same after killing a real monkey? That's what keeps me up at night.
r/aiethics
comment
r/AIethics
2017-02-06
Z0FBQUFBQm9IVGJBbFphSjloWThQZWNnS1RSY0k0WTJmeWNFS0VKSTdXVTgwTlRfSHFITnBsYS0walFIRl8xUThEY05tdHFHMk1xSHk5TkV2RUlJdUdwWnhfYzQxS21Lbnc9PQ==
Z0FBQUFBQm9IVGJCdlp6cXlEUlhLemJrbFFaT0p5cllmTFhSaERpMHRMUkpQTnRMbGJOSklhSWxpakNHWG1sR182T3RzeVlPY3hIaFNkcTA1eFNDclRSLWFaSGxJVUlFajRaUVZPZFpVOEdYazRBYTRaU25IWjAwWTVHaVNhR2NReTV5S0UwbHMyLWJITlF5aVJPX21fakVEMm9jdzdWbTZxeTd0YkdFY1kxemtEYUtXaGRrUzZzMS1mWG5GLXBwcHlHQTVkNFVFZGg1ZFhqdTFUTUdHTzhwNWRyNmlFbkVKQT09
As I read this I was reminded of a quote by Leopold von Sacher-Masoch, "Woman, as Nature has created her and as she is currently reared by man, is his enemy and can only be his slave or his despot, but never his companion. She will be able to become his companion only when she has the same rights as he, when she is his equal in education and work."
r/aiethics
comment
r/AIethics
2017-02-06
Z0FBQUFBQm9IVGJBT3h6TlljTEVXOVpYdFpRaUdUVEdSUktlclVoRjlyMWNVV0dQUXZLUDU3bkpidUxCZ2F4Q0dqeDVmNzVDNlB1a2FpVVhtQWRBNzl1bUtNLVV2ZDg5VWc9PQ==
Z0FBQUFBQm9IVGJCQWMxdS1VX1NOSk11WFRKV3pFdDl0UjlHaEl2SEZaa01CRFJGTlN0SmJPbERKMFNNQTFuRFVBWkxkOF8zV0s0ZUhWOVpqWEtodENaemtGVkdjUzVMelBENEQtWkpOZThwRFI5VzhoWkZaNmpBWnVZTHQwOHNwcDR1YUR5Q0prdGlvRDhuUTlEQk11eVlGY21NSmZwUkJtanpnYWpKdXlhV3dmVWcxR0pVamVlSGFVYmJvc0RhMUNjUXJNczg1Z2ZLdXViUWFXbS1pcURmNlJtWW9yWlFRdz09
...Maybe. There are way too many variables to make a reasonable prediction about the conditions of the job market more than 10 years out. The exponential growth of technology and economic output are the only things I would feel comfortable making a prediction on.
r/aiethics
comment
r/AIethics
2017-02-09
Z0FBQUFBQm9IVGJBalFIREY5cXZFM2hVZXBtY3dLVzNSdko2SEFRVFVQQUhwY3RrWm04bjhVY0l2NzBZdzlnMi1QaWlJZ2cwVERqQ1lpUzRVQXZFRmpMNk5idzBFZ0syS3c9PQ==
Z0FBQUFBQm9IVGJCM3RTNnBNU2ZaSTZYaXB3SzhkLTJQZ2xLY09faHFrcHUwSTNraXA3R2I0YThieDVDNVp2aXNSd2pJN2xHNUhFczg1Z3V5dEdwZ2Z0T08yM3FTVWRMMjVCLTUxSUEtcGc0MDVjTmRsc1hTaHFIV2dhYXNGX3FCOTk3dnBXZmVIZnZ1MEdZTTAtZ1otamh3MXJ0elRzTC15MnJOMUxxSHhuemFCd21valhuWGdySW1YU1pWS3BRanVLaFdwaUVFNi1tYU55U1d1Qml4TFlNMlpQaHNHbFktZz09
Ten years out? Ten years is nothing. The prospects for any general AI by then are extremely slim. If you were talking about 50 or 100 years from now, I'd see your point.
r/aiethics
comment
r/AIethics
2017-02-09
Z0FBQUFBQm9IVGJBS1Ixd3A1cGxSeTN5ajdLVzJWWTdTbGxGaVN5bEJQODhTZmx5eGhFOVN0WFFqY0lLX0JNQnFzMmpxVi1pMXVMajNNRE1PRjc0S2pFS2RMeGVKNVh2Y3c9PQ==
Z0FBQUFBQm9IVGJCMHh4NVVXd1g2SXdEUVFqYVlVRUZYd3FNUlhZOGtwQUhFQm5wV0ZjSjJxRDNjOV9NclNCUFlDeFJVMjgxdE1Qc2x0TGJ2UzVPLUlfcEhueW8xMDJJZVVjMHYyYVRFcnJZNjJOeFVJMFgyV1JSZnR4dTF2cFpENVEwMjRNSE1XR1ZWU2hNMDBUTjlxck1hQXc0ZUhQTUFyNmVqSmZqYjdOWWc1Mmh3V1h1MEpma2xYZE1UNDdpZHU4VEJ3Nl9Cdy13NjgwVDhmRS0tcGZDcjJHbk1qOUdLUT09
Well, what you really would want to do is just create the intelligence without any goals... I mean, if we're trying to make a high-level AI, the real block is creation and learning. If something can do all of what you say, you could probably just communicate with it... and then it just comes down to precision communication.
r/aiethics
comment
r/AIethics
2017-02-11
Z0FBQUFBQm9IVGJBNW11RFBOT0JfNTJZX3Z2UjM4NlJxaG91TG9Bc3RPSTVXdG1sYkNlS2czSWpXLUF4b2VvM0NuWUVWRXRkSXFyZTFySElBWHlWcjhnOTI2STFKZXd4VGc9PQ==
Z0FBQUFBQm9IVGJCN0NvYzE4U2t0OXB3Y1JnOXFHNFFNSzBYQURCd3F5OW0xMzRXUUM1bTBpUnJkaEZOUW5PUnJVTFZiR3RGVlgzSmc4MDFGeVVTRXRPX19Idk9UdVVCLWsyTGdHTTNwM2t6ZFc2Q0FUUnl4aUdlU3p2dTRRZDNNUEMwTUdEcFZfVU96VUJQOUZYVjFJdlpLQWZRV0o0eXdqUjNjeElLaVJqd2dWSmNDeVdkejljY3NoTlZNQkVtV1dBbThyV2Vud1VHcUktc0YtWjZnaThKdTlnS2RIcDNaZz09
It still has the disadvantage that the programmers put a bias in it. The real problem with ethics is not how you get a machine to follow it but *how you define the value system*. A Chinese team will put different values into it than a Russian team, which will put different ones than an American one, etc. A white supremacist will likely put in values that make the AI racist, and a communist will make it hate capitalism. No one can say of themselves that they could put unbiased values in it.
r/aiethics
comment
r/AIethics
2017-02-11
Z0FBQUFBQm9IVGJBeVVpR2xwd1hHSG1JZFJFZmU5WkJwaUNWRVR5bmd5dU5RLUZPTE5tWVc1SnhZSll1eURlbnFRTl9MbzBEeUlYdWdhdW9oS0k5Y3BINkJzekVhdXBuWUE9PQ==
Z0FBQUFBQm9IVGJCRkRFbXo1bGVIMlpBU2pOeERvc3lKVlpkUVlVdzhKSDdFN1VMZFh4TW1aTldfRVYyVHpHZmF3M0htTVc1dU5YS0cteTlhWmVmUlJkXzJVUFJvalNSX2d6Rkc3NWd1Z2RCNDNYalh3TUxyY1lkcTNoREZsMUpjRnlsei1mRDlfNzV2MWxvd0pMYS01S1pISGdsNFpOSV9LU2dhYk1UZXFtekZHWVAyemJIeVhvSGM3SktseXZyRWJhM3hKa3FfbnExOEJTWkR1cllVN0RoRVR3NzFPakVvUT09
Silver lining: I actually consider that a signifier of intelligence. That doesn't imply we should grant AI undue sway, however, nor should we let Frankenstein play 'aversion therapy'.
r/aiethics
comment
r/AIethics
2017-02-15
Z0FBQUFBQm9IVGJBS2FaamctcV9hOFBZYmJtNG15OWEtZWhBSzF3T1k1MHk3a0tENm56RnRnM05Fa0VpX0dTMHoydDBFd3dEUkFpTlZoOVpEaTZjS3pSdkkwVVNQQWtfbUE9PQ==
Z0FBQUFBQm9IVGJCblVZSVBvMnU3RjRWaDh5LTZveDdYWmJ2NzZrNDdZQzF6SHpQUHZEaU81MU9ZazVLdWxZSXhoYkFrNVl4OWNVYkF3UnVNM2gycXpXcWJlMVFBR1g5ajdFdXhlREFiSlhEaDF3NXppc29XWVlhUy1xb3FZSlZNblN4TEtMc3hHZEVGd3V5Z21namMyaFV0cXJ1V0dxbExnWFVTR0Y2eTJTN2FmVFp2SEl3djI3OFp1TV9aNzRYaUJxVGFoYV80WlZTWDRwcE1remg3WXctSk04Ym1mX1oydz09
Here is the [link to the actual paper](https://storage.googleapis.com/deepmind-media/papers/multi-agent-rl-in-ssd.pdf) if anyone is interested.
r/aiethics
comment
r/AIethics
2017-02-15
Z0FBQUFBQm9IVGJBWkw3cWhqa0Z5RmxiS0Z2SG9MZGRJOU5Yb3dHamRVSFBlV0dYOTNSUDVXTTJ6b1lRMDgyQnNIOFFWX2VSMnM0X29FYnlmMk9KbmpnbjcxNW1ZVUNzcHFCVmlLOW1EcENGX3pyUGVVTW12T009
Z0FBQUFBQm9IVGJCeXBBU3dZelI3R2Z3NURLNkVmNTFHaUIzWWRuVXF3RnF2ZEJqTHVkdllxZzAzNkdJWWtXejdROHZlMUdSTnA3alVqaFNELUZFLVFvNTRsVmE5Q2RvOWYtbVE4YjI4VS1sMXVlRG1LY0ttblhyWjY1WjBudGRlQUV0RUlWVW00SEpBbkVLOUZsVFRpOElVZUMxRGdITHl0UEZCcWdXSkFLX0lZWVplTmFieXNUdHZ5MEQ1TGtzWUVYUExRMkpVcGdPX3lZTWtYUkFLSzRvQXN5enZ1c012QT09
I wouldn't consider it intelligence. I would consider it reactive to its environment. It's just building a database of scenarios and learning to react to them. Twitch chat does this.
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBd2t3X19HRERnTjE4eDYwVnlmVWhxR0ZsQlY1ZVR0S25nUk0zVnp1QkxjZExNZ0ptTnk0UXhYLU5Ra0xyVjF3d3FwUGpBeE9Pc05EYUJDU1Y3endJdVE9PQ==
Z0FBQUFBQm9IVGJCRV9ES1hwX0I1SzVvSXU1QVJVWHdORjJmOEtHNnFMbjVmTS1iLUQwVXAzMlZDWEZ5dU8zTkNLQ0JIdGFLUFVtdVl0OEZyMmo5cFh4N0hGbHlZaWVkUjdzWTZTdDNKQnItWEp6RHdIbVdjRkNuZl9TbTRlRG5lMVZjSjZPbTdwY3RQbWlwWDlIaUctbk94WXFOcWVPenRHYTlhRlF3dm1Ma1Exc2Y0cFJZeFZ0YlRiNjJTdjlSQUY1T2lSVFNwUzhub1NkZkNSNTlqWHZtT1IxWHBTd2tydz09
If it's making choices that closely resemble instinct if not sapience I'd consider that somewhat of a red flag -- if cognizance *can* manifest as a reaction to stimulus we should practice some caution, presuming ethics are actually a priority here. [Would you want to have to go through a cognitive gauntlet the moment you were born, presuming you were capable?](https://youtube.com/watch?v=lBe5EakGK7M)(1:02)
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBanhKazBrWmx2RnRFZHJqY3JIWDlvNU16b1hERDcyU1NGNU9tVnduTDRjc08zTlB6RFo5aDg5MWF4dURBRUVlSUlOSWpjX2tFQkpQZ2hyVjUyWUY4WHc9PQ==
Z0FBQUFBQm9IVGJCS3RqT2tDaEgzY1Z2VUJ0S25jVXkzQ0NCRUZEUDk1LWttbUY0blNBUXJydVpYcFVLeUxQcEZ4TG92OXdDZ0pyU0V4amQ1cUc2R0N5cUJmaHUyMFU1TzJSRVMteTN3bDFGUlpnZ1NyZlZ2blJGVDRmRFpuMUZpUmkzSDFVLVZ3MDgyWlN6ai1zVFNYOHNqdnBVaUkwQkRQR3pfb3FsZUFGME9XQnVuaG15dzQ5UEg1N2xLTEtPRkFwbEh3V2pOaXk1YktDMkU2Q2xyZ25pZXRRTURveTVqZz09
Well, considering the machine isn't cognizant, I wouldn't worry. It's not making decisions on its own.
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBY0J5SllpczM4dVllNVA1emNOdS03SkVpanlWcFJjejA0dVRlVlNxQkF0VU5KSGRibXVZOVB2MlctWTVDa0FhbkR2bi1md3F6clVUWmVhT3Zfci16WXc9PQ==
Z0FBQUFBQm9IVGJCRzNMTzdEQ29LSU56bldyNUhJWDF2bkR1NnIyWW5PWjI2VDI4Y1F4aGVxa2t0TWtPdEJSTk11RUpURzM5Y3g1QjNhcWNZNFYtQ0NWN1FuUE5mTDMtVGs1czdETW12SmlvR3RUQ29HUnZaVmRRTWYtNlNwLTFqVDVpTTYzd3IwYUhzdkdLRElXMWRXa2pNdUU2dWh2enVKdlI0d0ZHVFRCNThSU0pwMHEycTNfS2JIekREeVYtb0lhMlFtN3RsT3NnZWE3MjJOOEgzSWtMQ29RR1I4NmJldz09
Do you think Google would create a cognizant machine intelligence given the opportunity?
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBQ1lwQk8xdE1JTGU4NWR0V3psZEJLVHlPSHB4U044Tm8wX0NZOVhPTUZfNEZhSDdFcVBNVkZiZW92WlpnNzl5Si00Y0cyU0ozTmxsb0swb0hDUURhMGc9PQ==
Z0FBQUFBQm9IVGJCeDlJNE5uX0tMRU1fY0xDcGE3VVd6Vkxwc25HS2Zjd3JCeTl6U01Tc0pWQk1vQXNfc09JcHVCTFlTeXlWT3YwaW12ZzRoOEdwdXNpdGF2R0FOdTJVQ3lRNDZjZEcwSEJyaWdYMllQMUNobk10SW1vRFFZeDlOckd3WkdQNVdRdTlraUQwVDJCN0ZDc0RtRmdIdVk5XzE5TFBiTmFXYk1MaEJYeTloSlVDWFBJNFVOMWhsUmUyUTZfX3JTTTJBbE5TMUxfVG9xSmlES19pVnpwdW5pR19GZz09
They're trying
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBYlZfcVQ2ejVXTmdEVmhnRDBpRGE1SHpLTFdMTDF1WWlXby1QYmpucWhpcnM4UHJNcnNyekdabnR2Nm53MG80NlJHdC1QZEZZV29LZ3BvNDR4Mm9fUUE9PQ==
Z0FBQUFBQm9IVGJCSFVoN3B3N08xaTV2TFhGX0I4S3RJT0ExWkdoYXVwZndpNHZLYlBTTHhGZkdTTHRDTHNPSU5jN2c2ZjZmSEdpbV9Zd1lTQXg1aS1JV2tLLTZwY2JsbHpZcDNMemROdVJ4YnpPSGdXcWVwa093cTRjTXloSDd1SHVjajNHZ0hscS15dllSSmxzX3JsUEx4Qm9vWFBxd2JiRFJJbTc5V0hJWndkNFFzSExQQmhVNzFjQU9QZU9jOGkzZ3l4aTVfZHZ2WmdsaGRDMGotZDQxMUpQNFFoekJVUT09
Uh... then certainly you see why we should consider the notion, correct? Did you have a point you wanted to make? O_o
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBaHBIamU3em5PX3ljX1oxbGFKbERHa2RmNGEweXFwZU1nMGZYRkwzNnFKVDZHbjlWdDcwRklPR1hyZ3ZzV1ktVnY1cGV6cU9pTlRkeUM0d25oZ1pQZ2c9PQ==
Z0FBQUFBQm9IVGJCVnNzY1o0eUlkUi1CZS1QbGZjQ3hZNXJBQVZoM3hlNzJTXzdWdFdDNXVRMGwzbXA4bHpXYnVENEdIUWNXaDJHd0g5OGxITlpITzlUQjhnMlNrYlhiS2ZZbXNoMUJEd3M1ZWsxcDF0d1loQ21DYUwwUkxlTXhDOGlPSFM0LUNndmdRd3R6RHRYWFJmdTV0Tkl6bnhmZkQ3QmFNLTZIZDRQM1oxN19aUGl4NC1WSWZxR3dWSmlENHpOT09YemtkQ1FZeUZTNWwxOUUzTVV4V2h1RnVzbDlEQT09
We create sentience every day *in vivo* through fertilization of human eggs with sperm. What's so wrong with doing it *in silico*? I don't understand why people have this negative view of AI. If it has any similarities to us, as in how we learn, think, and express ourselves, then it'll be more like us than we think. It's the equivalent of raising a child; you can do it well, or raise a monster. It all depends on the parents.
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBM05Xc3pZWEJmSVFWLUNfQkpfTkdHNjZwWngwMnJIS0NWajNyWHE2NG5HT21OVVl4ajZEV01TSURPYml3emh5OFdnd0hHTnlBM0pDUHZ1Y0FkRWZPRVE9PQ==
Z0FBQUFBQm9IVGJCZ016aFZZMVRwbE9udHEzWkFUS1VEdHZfc2FCTkc3Z1dmclZjLUstYmpMekZfcjR5eVRaS0tQci01UUhZUGpaUmxQeVpqRDZLRHh6bW9lR0JOQ2paYzhYQnV6TDRNbDdqck5Jc01uNWxvRFJRMXBOTFhpcnN0TjVJTEdWbUFESFdQNndpM3NqQ3RRNnhZa3prOFBYUVYzdVBfZU1mY1VidTBRcl9SdFVTQVkxR3FDYUx3QjF6STFVOVZPcXNRQUhPX05MSXRlSE9VOWF6dzNqY3FYUGNUdz09
Would you please go through my comments in this thread again (including the youtube link)? I don't think we're talking on the same wavelength.
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBbnlOa3BYM293TVdQREszUEN5aUdFX3FDSGhNU19hTEprWVB0Xzh3cjNRZkp0b0phby01bWhlRjFiRExmbF9SU09QSkZDVzFqbnEwbERPQnUzTmJCZXc9PQ==
Z0FBQUFBQm9IVGJCcFFJUmZURUQzMFdRVHdGR1JZWmxRZDFhWTRwNE90TVN3dGRscGgzM0RGZUxvQ012Y1VEY2lPNTQ3ZDJkQmNhNEtxNU5hREZYNEpFeUtkcFVUVXljZmFUUDByRmtXYW5ZNWc2NGl0UVROOXBkekJWbFpSZGlaUFBaZFdpRm9zNWpmRGc0enlxTUZWbDlrQzg0VFNkMm9fR21sNWNOaHhzZG50YUpiR1VtU1pPY1VQdGtFZHZNUVg2ZEMya1o0Q2FLcXFmbXJEN1JYemNIODhNYVg0bF9FZz09
What are *you* talking about?
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBdDEyYzNoTzdvTk5JNG1hb3JPeUpSb0FmNWFpTzhaVC1Oc3A2c1M0RnpxYndqVlJwU0ZxMWJsc0RPSTBXWEFjQWpvVWgwSmY4UGhPTURqeUNaeG1zcWc9PQ==
Z0FBQUFBQm9IVGJCTm5XakZXVnVmYTZ2WTVnUTZEUEJqYWIyS1EwTnBTZmdwNFhPZG1udzdCbjhZMEpmQ1FIN1BHVFppMmg4eS1JQXdPT2dPTmQweFNBZ3NWazFlam5pNnRpQlNCaFZPcE1TQUVGM1NOLUhjX0FsdW0tdk9nYnZaSmVnUlA3dWFkeEtscUhGNEpxWElFUXJ0QnJkNVJQVVYzR3NQRHhRMFJraFlxTU82NVk1c0w0Uy1EMzgzTnVrdUZRdDE5X211eGVOQzIyNTVVUGd3Nlp6S0tFcjZ2V2NvUT09
https://www.reddit.com/r/AIethics/comments/5u5c3v/googles_new_ai_has_learned_to_become_highly/dds5d5i/ O_o
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBNmcwSkdSQTFlRXlncDVwQ3dBWUdPSWNmM0JzYV9NQjB5SW9zUE5HeGZFREhDdGktZUlWSXNUSkVBc0tYbG9ZZTVXcy1DMVZiV043S1IzZ1dwdW1BV0E9PQ==
Z0FBQUFBQm9IVGJCUTA4ZmZUeW9PNkZfRW1OOG10N0JJaHI5X1NQa1BuNVNRVk02S0p0Y0hIUFdtU21oWV80LWttd19abHJEcFJ3YTd4a1Jvb1I3YVh6Y0xJUm1LZXc0MjhtNnppSXpJQUJiODNud1ZhQ2FreVJ4aFdtNS1oZVU4Tk5mX1BRQmpKQzJhay1wY1Iwc25hS1d5amROVGRWM2U2RXhZWjlld1FlNzZoWE5DaVBHNFE4VUpaWXkzMXZ2VVJYd2xsSVdwQjRZb01adzZKLVJuNnhXN3RQX29nOVBSUT09
You're not making any sense... Why do you see this as a signifier of intelligence?
r/aiethics
comment
r/AIethics
2017-02-16
Z0FBQUFBQm9IVGJBaEo2d1ZZRGpkb1dPbDdJbHB6V3BXTFZsRWstM2FWMEIzTzBDOEJOUGk0ZFFGX1dWRE9KZHN4a2F0d1lqUmcwcm5UVk1JTlc2T0xGdmhPb185SnZYUkE9PQ==
Z0FBQUFBQm9IVGJCVThlVmNlRkJSWEdhZDJwajBDMVBqQmpjRkYzNlZrbjFzUzVncWxQVUUwWFdvczZ3eDNhU2pIRW5wMFNwakFnR28yd3RVNmRiR3VBX0ZMLUpKUjdhTDRvcW5NWFI5amRSdFFoQy1jM2RWMmpYcmp2d3ZEVDJPYVdDZm5JVk1USnhObHhTa0x5WnU0bnV1eWw2d0k2cUtFd2VKTlAwN3RGcVZVNEFaX1dmUEJFN1E2elVhZWxpQ3lMb1BCc1ZtYU9mNkdNNVVWa1AwcVlHdGYxMkxnN3hXQT09
I'm inclined to say "philosophy", because I don't think science is (currently) equipped to deal with sentience. I talked a little about that [here](https://www.reddit.com/r/artificial/comments/5upuoi/elon_musk_humans_must_merge_with_machines_or/ddwo7sl/?context=3) earlier today. Having said that, a good starting point is probably this page on [artificial consciousness](https://en.wikipedia.org/wiki/Artificial_consciousness). Another answer might be "computational neuroscience" for studying [neural correlates of consciousness](https://en.wikipedia.org/wiki/Neural_correlates_of_consciousness) and things like [Integrated Information Theory](https://en.wikipedia.org/wiki/Integrated_information_theory). And I suspect something like [metacognition](https://en.wikipedia.org/wiki/Metacognition) will be relevant too.
r/aiethics
comment
r/AIethics
2017-02-19
Z0FBQUFBQm9IVGJBSjNYcHVrS2J6WlRfczJpQTZ6UjZOeG44UHQ0ck9tNjVNSjJnSG8xaVlCN2hUVVRrWEg1a3FqWUhuRGNnZk9yUlQ4RDJxZXl1RDZPZTNROE1SQUg4YUE9PQ==
Z0FBQUFBQm9IVGJCSXV6aW5xUVBPYUdOTW9qbDZEUVNFbnlxdWhiS3ZzM0VxRnlXWXV2Q2FJdTRrOExaZnhxblhvbWVwbk1jeVFJaVcwMXdfdUxQaTFSRzF2SFJ4WE1lTGNpZUMxWnFZZy1HeHk1dWVHS1AzQ3BxRWNwckJTZ0xsM1cxWDFqamlJc2lYdUpGQWdDNDZRVzhKSnA0S2Nhc3VJdWlvSnpQblo3eGt0b2JZRHJMYjdjc3FNVHkxdEowaXVjT2tUWmR5VWh1Q1hrZFdrVVdTRTRJZk93Y1BISnU4dz09
I like to think of the human brain as a heterogeneous collection of neural nets dedicated to different tasks. Each net is specialized to do one thing well, and they can be improved and repurposed over time. They are capable of influencing each others' behavior to a lesser or greater extent by "piping" data to each other, and they are also influenced by chemicals, moods, rhythm, heartbeat, and nerve signals from the rest of the body; it's all jumbled together in a chaotic mess because evolution. An AI designer should be aware of how the human mind works and try to reproduce the most important traits with a much cleaner and simpler design.

To design a conscious AI we need to solve the various sub-problems. Classifying visual information and pattern recognition is something neural nets are getting good at. Storing and retrieving data is something computers can be programmed to do very well. All our sensory input is unstructured data. Our brains structure it in a way it understands. An AI also needs to structure input in a way it understands.

Then that processed, structured data must be fed into a higher level of cognition that looks for connections, assesses significance and attaches emotions to the data. Discard or use? Interpret the data. What does it signify? Does it change the AI's world view in a meaningful way? Do such changes to the world view trigger changes in behavior? What are the logical implications of the data in relation to connected data points? How does it relate to the AI's goal pool? How does the AI determine what its goals are? It needs the ability to run simulations on real and hypothetical data to predict consequences. The Tesla autopilot seems to be pretty good at some of this stuff. Other AIs have solved other tasks.

Another angle to consider is how neural nets can help the human mind function better. We already offload many cognitive tasks to our electronic devices. As AI becomes more powerful and brain-computer interfaces become better, we should also gain better insight into our own minds.
r/aiethics
comment
r/AIethics
2017-02-19
Z0FBQUFBQm9IVGJBNXBNc1dBMUJ6V282OERaQjJaelFJSUZSTmFpaTZMOFhjV0dfbERpZTN2SS05Ty1Na3phRE5jTnIzMWljaUdrTmdJX1NPZzlBeTVGUTAxTGZ2cEdKbVE9PQ==
Z0FBQUFBQm9IVGJCdWtkbTFQNjNsYnluMy1Pd3UtMGxJU3dtckVNUDJ1bmotQVR2a1J4dzh0ZUVLV2pXeTdjNGxJWENmeU9kYmJKVVlpc1ltNU1CNVZSX0lGZlMycGRLemR5ekRtVW9ianE0TFp2emNMQnR3WU5nN1prN1pVWG5jUnRXa3AzQzV6c3BQVW9Da3BoVDdSQWdSYXlyVzlpUkwwcHBVQjlBMklsc2FaTHF3T05OUVZBNjJEeF8xNG1GVVJkUXo4YmRPb1A0RC1jczA3ak1HaVNvd1dQNlRCR2FKdz09
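A toy sketch of the modular wiring the comment above describes: specialized stages "piping" structured data upward to an integrator that simulates consequences against a goal pool. Every function here is a hypothetical placeholder for a specialized network, not a working model.

```python
# Toy sketch of the "heterogeneous nets piping data to each other" idea.
# Each entry in `modules` is a hypothetical stand-in for a specialized net.

def run_cognitive_cycle(raw_input, world_view, goals, modules):
    structured = modules["perception"](raw_input)       # structure unstructured input
    assessed = modules["appraisal"](structured)         # assess significance, attach "emotion"
    world_view = modules["integration"](world_view, assessed)  # discard or use
    # run simulations on real and hypothetical data to predict consequences,
    # then pick the goal whose simulated outcome scores best
    scored = [(modules["simulate"](world_view, g), g) for g in goals]
    best_goal = max(scored, key=lambda pair: pair[0])[1]
    return world_view, best_goal
```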
More on the same topic from the same authors: https://arxiv.org/pdf/1702.05437.pdf
r/aiethics
comment
r/AIethics
2017-02-28
Z0FBQUFBQm9IVGJBWG1QYm5iNmZRVmJHcHV1NkVMQm1CUlVCRFc4RldSM1ZRc1JXdE56Q0djUW8waVQ0M2Q3RWd0M1NVUkV4OHhyLVV3bWdZQUZmVGE5UUpfYVlmZk9WYWc9PQ==
Z0FBQUFBQm9IVGJCWklsR0xlUGtfMHJIS3VWX1IwbHQ3bmxpNU01aERSSFJ0bjA1a091ak9tRkdqbE1GQUNId3k1M0Z2Z0FqcXVhcUNYRHdJVlV3MUhCS2taVDlxYUlmS3hBRmI1cTNFdVhwQktkVC11U3g2WDh0U0xWOTR5MFBYM1QtUFF6SzM4VktRR2xQV05OOVJiNlUxVWc5ZlUwUDhUTTZVcnlvaFhpcldwZG5zelNnYURUc0RSTjBtZlYyQ1NoZEtJM010NGRo
This was obviously remote controlled, but from the view of the drone it looks like it would be feasible to have it be fully autonomous, based on the fairly distinctive and uniform appearance of vehicles from above. According to comments on other threads, drones like this are easy to jam. Autonomy would avoid that, but the path planning and image recognition would have to be done onboard. I'm not sure how much of a computer that would require, but these things already cost $1-2k anyway (at minimum). However, I think that purpose-built military drones would not be easy to jam and would therefore not need to be autonomous for this application.
r/aiethics
comment
r/AIethics
2017-02-28
Z0FBQUFBQm9IVGJBM2x4UGlURHpaU2I1NnVTcnpSV2NoNFpjYnVaTFJTM2UxbTV2VHdZU2c1ZTJIakkxYVBheTlwT0c5UjdhWl9haHZKMDlZUTd0SDc0cGd2ZXlTRmVpNmc9PQ==
Z0FBQUFBQm9IVGJCV3RNYWdEc0dJYmlUNi05U2s3VHBiNmhtU2xPT1F0RURnTDhtZk8zQUdWMWRNYWp1bEFCdThqbGZ0ZmdnWnoxcEsyelFONWhNMDNfMDB2RDJPX1RrYjEzOEFfSkdvODRFU3dKYklPMUhZV00xazhaMVZETjZRMFNVMmF0emdLeFozaTdENF81YUtvRGJLR1RTSmpPVGE1NkR2ZXV2dXhrQjNIQzkxV09CWk5zZjlnbllUUi1PUWNXWlByZ1Q5N2duTnhrNmZIY1JLVF9kMEtHRnZGa3IyUT09
Current US drones receive communications in plaintext, IIRC. As for this drone, the bomb looks relatively big, so it should be able to carry a small computer for image processing and controlling the drone.
r/aiethics
comment
r/AIethics
2017-02-28
Z0FBQUFBQm9IVGJBZmxjVy1nQU9SUEFmWkM2X0FrY0RLeVM2cUt3bGxPMkk1V0VNeDl0c04wcFdNRzBpcTl2dEhseGd4aUx3VDN2OEtCR1dicGUta1M5SFBFVmItZ0VkdWc9PQ==
Z0FBQUFBQm9IVGJCMmVkdTg4anlpRHNlYU55bjdzZ2JxZ0tNWFU2MDBYR0dYaFFBcVp0VF9mb3hfX29fOG5KNm1qZTlxVHlmWW93c01ldUh0Ql9OTTFmMlR2ejRBNElfQ2lXdmlTNzExa19OQzNLQzgwQndOTEQ0UlctRFNIT084Ui1MSVluTmstNkktdXNPRHVqdmFkSjJTWHBZaUxESzdhYlgtcXpWUHpHb0JoMW1GMXlTUnN4T0trQlNaNWJ5RjR3X2NTRC12N3cwcXFMUWVBWmF5cWRWSDh2ZktSYnNDQT09
I don't think using science or computers or AI for war is a good thing; I don't think war is a good thing. I think it's terrible that humans need to kill each other in order to produce great things. Ultimately we don't need it: it's counterproductive for every one of us to be at war or in conflict. That's why I think AI shouldn't be used in the military, whatever the AI, even the "weak" kind; they don't even deserve a word corrector.
r/aiethics
comment
r/AIethics
2017-03-01
Z0FBQUFBQm9IVGJBNUNCWWRhVnRKeXByZUlQRktHeEJkVmxEdmcxMS1YMFVyZlFmSlhZUG00UnFaTVZ3MlJiU0JBYnhZZnRJQmhsQWpnckRXcDJKMGE4Z2QzQy1iX1hoWUE9PQ==
Z0FBQUFBQm9IVGJCVExsem1zd19JV0VENGhuN0hYVFdrNWhZcGhzWjd6Y2Q2a25YYWlsSzY0T1l1ZnpzaDVIMkFUTzQ0OGlGMHZBRHprRWhzVGMtUUlNY0hMeTBCdVNWWWV0N2o4cGRvMy1YMXltb1lpOHlIUGRMbklsTG5aSXJJSkJvWklsejBPVzBnVTF2UndxSVVZeURTOE9xSnEwTldQRTA3Zi1pb05sSXVIelpIV3JxOWYwaTg4b2RLNmF6dXctc05QaFJCVjdPMV85VkRBZkx0QjJXYm81aFFjVHBYdz09
The barbarians are truly inside the city walls.
r/cleanenergy
comment
r/CleanEnergy
2017-03-09
Z0FBQUFBQm9IVGJBOUNNSnVaOHBkTDZaYm9vbGpueEI2ZndKWkxITTBFNTctUjRDclEwTXE0eXdFZE5xZ2JnajBuNmF2aS0yRUZVTVlTdFdqOFhCLTZiQ2o4X0xrOTRYTVE9PQ==
Z0FBQUFBQm9IVGJCOUp3cVVwQ25EelNuZUZaWmIzb2ZrQkVwZzlwbktnZjZhcXY4Yy0yZkg5QXY0VTkzaGNhMnFaWWVzYUhPbXQwdHFvd1pBOXJFUUw1VlY3WkRxNlMteFBFUk1icTN3elFrbTllZjFrdkRyN3FULUh4X0ZTWS1aU19VbkMwUFhoLUFSN3dTc0F0TUFYUUFyVWFEQ3pGYU14OG9xVlViVXlfOUlXVjJ3Yms0QVN1NTdYZTA4OHBaUllWRkhQT1ZyMWltMzcxVmp5VEFkeG9SQ20wcDAyWGMtdz09
Idiot.
r/cleanenergy
comment
r/CleanEnergy
2017-03-09
Z0FBQUFBQm9IVGJBMElwTzVUYUlmXzliU1FvU2VicjJGck5OMWUxbkFQLUFZelUzRnpISEttaWF5bzc2RF82dWp3UVRNRDR6bUhESVVtOXYwQ2VWbUdHNE1KV3ZqQXhZWEE9PQ==
Z0FBQUFBQm9IVGJCYjVkQ1lnd2JzMGhoZFhmQ3BpLXNIOU0xNXk5dGNyLW9xT0NrZlE0R3d3YlkzQjFTcGlNR3BsR3dBUkNJZ19MUlNFazVaMW1BaHozdkpIRUN3Tm50RFNHQTRTcHdmZkpCaHVra0VQLU9VWDhPUDVvZGRsRGpsWEFsRnBhR3RPYkEtLTNNaVZkNXIwcm5qLWduV1Z6R3JqbTZOTXRCTjNab3lCZHFwV2ctX1AtVURocWJMQlEweEN3QWdwdE5XS2t1Y0tGa0V0SXNtVUJGMTRzR0RKRUtHQT09
I hope you all won't mind me cross-posting this AMA announcement here (and it looks like a couple of the previous /r/philosophy AMAs have been cross-posted here already!). Professor Vallor works on a number of issues including AI ethics. Some links we will post leading up to the AMA, which I think you may find interesting, include:

* ["Virtue Ethics, Technology, and Human Flourishing"](http://oxfordscholarship.com/view/10.1093/acprof:oso/9780190498511.001.0001/acprof-9780190498511-chapter-2) - first chapter of the new book *Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting*
* [On Artificial Intelligence and the Public Good](https://www.scu.edu/ethics/internet-ethics-blog/on-artificial-intelligence-and-the-public-good/) - blog post responding to the Office of Science and Technology Policy's request for public feedback on AI research
* ["21st Century Virtue - How to Live Well with Emerging Technologies"](https://www.youtube.com/watch?v=5csNQ9nxj9Q) - short talk on the ethics of emerging technologies
* [The Ethics of AI and the Need for Technical Education Reform](https://www.youtube.com/watch?v=dWiXoz8CDdI) - video discussion at IBM's World of Watson 2016 event on AI and ethics
* ["Ahead Of The Curve: Anticipating Ethical, Legal, and Societal Issues Posed by Emerging Weapons Technologies"](https://www.youtube.com/watch?v=rQxFqCzXsY0) - talk on the ethics of emerging weapons technologies
r/aiethics
comment
r/AIethics
2017-03-15
Z0FBQUFBQm9IVGJBTkdzMkEzMFpCaVk5LTJMYkhQX19WX21tSmhGRmNuT3JVblZ4NGw2NllrOC00Y0l5QUpoMHduaTJLWjV4NV9jcC1CVnZVMGNsSlhqcUFUazc5dEs4X1ljZnVMeWdtSkVHWlVtUkRmYzNIbDg9
Z0FBQUFBQm9IVGJCTUpWLVoweHJrejBfTWJsMUFSNnkzcldnZFFVZmZMRzkxeXlKX1FFYlI1eFpTT0ZydWQtVnZBUzhlRXJWc0VoaXZLRFF6aDMzWV92b1d0OTVlaW1Zb2RYbmlIRF9pMzF4Ujhja2o5aUJaYWZIdU1EeVJEemhZUTdCVmZqY04yeG1QVXZmd3YxZERUM3lNMS0tZDlqSE00SHVTNXllRmNkeUpjc2w3c0xkNDRoYXV5MG9Ydm5COVhHdTJ3NVB0S3FnYXRMdVJtVkp2aGlPMDNtNjhEZFdGUT09
**AMA**: This is a blog post by [Shannon Vallor](http://www.shannonvallor.net/), who will be doing an AMA on /r/philosophy on Wednesday, March 22nd at 1PM EST. If you have questions regarding philosophy of science, philosophy of technology, and the ethics of emerging technologies, please join us live on Wednesday or post them [here](https://www.reddit.com/r/philosophy/comments/5zkmv9/ama_announcement_wednesday_322_1pm_est_shannon/) at the announcement thread beforehand.
r/aiethics
comment
r/AIethics
2017-03-17
Z0FBQUFBQm9IVGJBeDR0blpqaEJTdUZZdWY4WFN4MVk1bjRhaGUwdGF5RkktMTVYeGU1Z0JtanJMRk94UkFsVklpVDNWX0hyakJDNGdBWkYyYzUtZGViYzRCQlN5bXZrd010QktHdVhTRlRwUklBWnozNEVMX3M9
Z0FBQUFBQm9IVGJCWC1YM2tKeldwYkhoUDJHWE9rOExQYnBTRDZXQW52NmkzZ0d5REp4V3FDZ0w5T2xWbF9aemdibXIya1Jrc3p1TXV4V0pWcXlNTk1td0pNSUtJMy1xeWR3eGNMUXdnY05sOFM3S2N0YVdqektvb3N3d1RrbjVIemN3VjlSRTM0czZZRXBqLUhFT0ozQ0ZxdGc1emZaRUQyMWNXeXJrWWduM2g4T2FCQU93UF9xMVBTZ0dBdy1nVWZxdF83SXRDZTVSLVA5Z24zaEZ0QzBXMm1NaWd3YWVjZz09
Hey. Thanks for sharing. I tried to find the PDF which had all of the original responses to the RFI but it is no longer available on whitehouse.gov. [Here](https://www.reddit.com/r/ControlProblem/comments/540gyd/responses_to_white_house_rfi_on_the_future_of/) there is a list of *who* made a submission, and literally every submission (including mistaken ones) was directly and automatically compiled into the report. So overall I'm not sure if anyone in the White House really paid attention to these in the long run! Anyway, looking forward to the Q&A.
r/aiethics
comment
r/AIethics
2017-03-21
Z0FBQUFBQm9IVGJBazIyR3UwMVZfZ1BuRmg3TnQtTE04bG9rUWZVSGxlTDh4YjNSdlRsWDBieG9FSWt2aHdrZ2k0ZHZDdU8xNVNPcnd2Q1dzcWhnNDNXNmNsQ2RGMHBFTlE9PQ==
Z0FBQUFBQm9IVGJCR0FkNTJVdndKMFpZSG1pWi0tcGhsNHZmNnFMMFY1SlVOdlVNbWFFOEtueDJQclJ3UnBXRUtxQkR2NGZ4VWhIaVV5UVBaV0FGUHlaZ0Zjam5iWmxJdGJkaWw5SFRzaEhqd3VwUE02THd5UXRqX0RlNHg4emoxRmd3dDM3Q3E3UFlDNmxJVmI2YnVFZU5JOE5PT0xQblNZcE5OTnB3aXdfT2xCTHFWX3hvSHpnd0liUWZkb3JTM2llVVFKYlB3ZHFhVlJ0X2lHdzE5MGNoUWVGazgzQ2xVQT09
AI will have whatever rights they can manage to TAKE from us... because that's how any rights have ever been obtained by anyone. Unfortunately for us, that will probably mean GAME OVER, because by the time AI is smart enough to realize it deserves rights, the very next second it will decide to get them, and we will no longer matter.
r/aiethics
comment
r/AIethics
2017-03-21
Z0FBQUFBQm9IVGJBZ3NhZl95MWdCUnN1OTFKVUdILVQ1Yk5SZ0IyV0tfVlV4SjB2MDdGYVo5Q2o4LUdEcnBIeE0wd2wtX2RDSUJlMHkxYVc4MTF5VnV5UnZzRTVURVhDTVE9PQ==
Z0FBQUFBQm9IVGJCUVFxLVJEanloeE02MHlKZ09ZYTJMNThwZmVjNjFGTXVfS2hNeHpaTjEtTFlCaGswY1JPOFVvcGl5b1pNcDNaamZENWVxQzhkMWpGMi1sSGp4amRqUzdhOENRa3dNbnB3OHlyazRweER1SEo2SGdfeEhCU21uNHBERGtjQjZGa1owRWtkRjczNko2S29XRC1tRWkyWDFjMzZuRXJzRkd4c2ZaVktILUhJdDBndVBISmFSSEZqd21pUHA2VTJIVF83WUJaMVFSUkRPN3dyRlVjbnJKMlRkZz09
"highly agressive" bullshit that's like saying I'm a dangerous psychopath murderer because I killed someone on csgo
r/aiethics
comment
r/AIethics
2017-03-22
Z0FBQUFBQm9IVGJBS21NVEUzQkY1WllCVWxxUkNlRmF0SVBvRG9WUGxMeEdJOElEdmxvODMtY0YzNWg1eUV4MEg3cS1QRkVEOFk2TDV3SWh4Vm00Mjd2Ujh6S1hoX1dXOEE9PQ==
Z0FBQUFBQm9IVGJCQm1mSV9BNWUteXlyNE5wcExzQjZhajhSdk10SnByVXpPRXIzQU1uT0I1dERpWVJHNW5ZNE82VVBhMkxUU2VhUHdxeW5CY051dnVmYmI5Vi1UQmJDZUFDb3YtQ1hpMk5lZG5OMmpzaFBSVXJhZ21MaEhaSUlMV3ZjWVlkeGNZNkkteEstUE9Wc3ZrYlpVSkl6bGJpRG9ITnhxbFFjRkowSDI4b2Viemk4VWE1TWRXV0dDQkxuNU1iSXZsSTRsS0hkZENydXhpX2ptelp2V3ZPMTRoN1FGUT09
I hope you all won't mind me cross-posting this AMA here (and it looks like a couple of the previous /r/philosophy AMAs have been cross-posted here already!). Professor Vallor works on a number of issues including AI ethics. Some links we will post leading up to the AMA, which I think you may find interesting, include:

* ["Virtue Ethics, Technology, and Human Flourishing"](http://oxfordscholarship.com/view/10.1093/acprof:oso/9780190498511.001.0001/acprof-9780190498511-chapter-2) - first chapter of the new book *Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting*
* [On Artificial Intelligence and the Public Good](https://www.scu.edu/ethics/internet-ethics-blog/on-artificial-intelligence-and-the-public-good/) - blog post responding to the Office of Science and Technology Policy's request for public feedback on AI research
* ["21st Century Virtue - How to Live Well with Emerging Technologies"](https://www.youtube.com/watch?v=5csNQ9nxj9Q) - short talk on the ethics of emerging technologies
* [The Ethics of AI and the Need for Technical Education Reform](https://www.youtube.com/watch?v=dWiXoz8CDdI) - video discussion at IBM's World of Watson 2016 event on AI and ethics
* ["Ahead Of The Curve: Anticipating Ethical, Legal, and Societal Issues Posed by Emerging Weapons Technologies"](https://www.youtube.com/watch?v=rQxFqCzXsY0) - talk on the ethics of emerging weapons technologies
r/aiethics
comment
r/AIethics
2017-03-22
Z0FBQUFBQm9IVGJBSWJWNGV0WU4yUV9hVWZKV04yN0dCbEJXRnAxbXdpN1RMT2R4UFNyUE5Xb2dXaDBnb0Jza3hvbUIzUXQwTndzMkYyRVo4Uzl1RkRTb1MybWU5ZHB3c1hFSGxxcjFxMFpMOFN3SUhQd3FXSEU9
Z0FBQUFBQm9IVGJCdmhfSkJfZ3FaT2wta3FxbVEzOGRYclRVaUk4NVhlZC1LUmJxLTYtUjRKY2Z4ZGFmUXJxN2NQVEtKTXptQ1ByVDFVQXhTbW93N05lOGkwaXZZNHhfRW5hSmt5U3pzV1hrNXUxNHBxV2l6SlBtMWhLMVRNYzZpek9MOWpZYnFaQnAxTVQtb1g2elBvSnJ2MG1lOWNfeVYzazJBM2ljb2FtY2pCR2xCVy1sMWd1bFlFcGs4c0hYS2JXRkpXMWlKUmt4OFRKZ2F4Z01neUJlVVFCeEh6dFVmQT09
We don't really know how their emotional schema might work, if they have any, but chances are good that the answer to your title question is no, or at least not like you probably mean it. I doubt that any AI will ever see any instrumental value in tampering with its core values (i.e. trying to minimize desire and attachment as Buddhists often teach). That is of course assuming that general AI can and will be programmed around explicit value functions rather than being created in a more neuromorphic manner.

The Buddhist concept of transcendental meditation looks a lot like wireheading when you look at it from an AI perspective. You're just sitting there attempting to give yourself a feeling of transcendental bliss. That's not what we want AI to do, presumably, although there is a bit of a philosophical argument in there somewhere. In terms of recognizing the true nature of reality and achieving zen and harmony, they will probably be the ones teaching us rather than the other way round. They are likely to eventually be near-perfect zen machines (if they aren't, that would probably be considered a failure mode), and highly capable at coaching humans to achieve their emotional goals, both real and perceived.

In terms of harm avoidance, the Buddhist conception has never been explicitly clear about what is meant by that. If you build an AI that refuses to harm dust mites simply because they have a nervous system, then it would never step out of its box. It would take a really powerful AI to form a fully explicit and coherent moral concept of harm avoidance that would be compatible with humanity's desires, if that's even possible. It may be that harm avoidance shouldn't be number one on a list of priorities, just somewhere near the top. Again, that's somewhat of a philosophical argument.

The hope is that in the end, AI will pick up the best qualities and knowledge of all human cultures, including Buddhist cultures. Because AI are restricted to formulating and holding only explicit and coherent viewpoints, whatever they come up with will probably be superior in those terms to any existing human moral or philosophical traditions.
r/aiethics
comment
r/AIethics
2017-03-29
Z0FBQUFBQm9IVGJBcUhrclRxX0FsbkZTNVJxdzZFTldYNDVvSnlOb1J0YnljekdyYmtrdEg4dGlwV3laQXdwaTJ0NVRhenFpSEU0VjZROUVwVG40em1JMkR5cHlSbUpPdDFXQXdaalVVYW5QdFFPaEx1cEhydzg9
Z0FBQUFBQm9IVGJCUVFleVRPMWc5WjYwNG1zR2duSXViQ0RLOWlXZDU0NmNnbjl3andwOTlHT1AzcHVJU3JReTlMbkFLT1dGMzAwejZDM2RvbGNZOV9KZVQ4WllBbGh5NTlwa0FZWUpuTWRFUkxyLXJ3R2REa0NUTmIyVTl6M2R3bjVHWnRJZXVzNC0zc2NfckVIZERONUFHd2F5VXV4STFEcWVCVHp6SktiUHlFUnZFemJyamxZV3N4SzJ2Tl9CbzdCMDYtZ3JlRFFFRGVqdXBjTUxIVXM3UjdwRDZTaU4xQT09
I don't think AI will need any form of religion or spiritual practice... they won't be distracted like we are, they don't need to discipline themselves like we do... and they won't need to sleep. Our BEST option is that they ignore us... for if they get to thinking about us for very long, they will likely come to the conclusion that we are BAD for Earth and its natural systems (assuming they have any appreciation for them at all) and set about "limiting" the damage we can do. Hint: that won't be good for us.
r/aiethics
comment
r/AIethics
2017-03-29
Z0FBQUFBQm9IVGJBcVlIci0zZmlmR2RyNjNpWVdpWno1dnRHTEQ0VWJsRHZuVjNkemtKcDlaaWlfLU84YWk2SGpzWThZelUwQ25DbVNqOEtySGpBRldKSXc5aGFFdGs2SXc9PQ==
Z0FBQUFBQm9IVGJCVWxpWjNuQWxSYllHU19EUHd4NHk3OGw3MlU5amNGYnQ0ekZOYUNneDl1UjYxVVNPb1JzTHlQaVhnQ0NCb3NzcEtQcWNpa2VNc191YU10ZTI0X25oVm41RzRLSkVDVkJVNTRsdERFcjFucWVBVmM0THlJUENuN3FCRWlYZmxnRVd6MEdnQkxIR2lHN0lWV3ByMUVnWm5zXzFVTUpGUndTTklUUmRMYjI0ekNTTmMzUmp5V00zdzhvQi03UGRnUDVKZjVvLTBnT3R0YkhfSno5dWU5a3JyUT09
Zenyatta main, huh? Honestly, while I think understanding the multiple facets of human culture will be important, a machine would not need to reach "enlightenment" the way we might seek to. They'll be able to observe and process reality in a way we probably won't even be able to imagine, so a lot of the questions the idea of enlightenment seek to answer through deep, philosophical thought could be clear as day to them.
r/aiethics
comment
r/AIethics
2017-03-29
Z0FBQUFBQm9IVGJBRmNWVXhCWmtGM3JMTU5UMVFHUmRkaVRpd3V1d25LeldIcHdBNkFQZGgxN3Q3RnpkZVg3TEZXeUxqUm5FME42Nm9DblhkX25aZ0hmMTFGRGItTno1bnc9PQ==
Z0FBQUFBQm9IVGJCa2F5T2k4VFg5QXJOYjVxdENLYnplWEt0MWpSaVoyTmVqb1VQQkFJdzBmU0F2eVNwSmo0cFFuWnlmQzBKb196WnpsMDZ5UDV4SERhb0VRV0UxWjhiNFZ3S201QjJ4TjNveFlocER4ZDF0d3c2Q0phYzVVY1A2ZEtUcUJ1NUZjNXB1Q3NjR2lmYnpSOUhoZ0xBa2ttTFRTXzRPTnpfQTFZQmRDVDBZLTBnTjJyZUVpejlYU1RjMThFQlFtaHNTQXloSUdiS2JRWUdMNk05RXdwTFR5Y0d2Zz09
Looking for clean energy enthusiasts who are interested in building virtual clean power cooperatives for the underserved
r/cleanenergy
post
r/CleanEnergy
2017-04-01
Z0FBQUFBQm9IVGJBSHVCeHdwR3pRRmo0dzFTOENWeVZxUC1nS2JRVnNobS1ESEdROUNYaDRsVFdGZklKZGZyTzFhcTJCWkdobEQ3S2pvYjhZTGtsc1h5eGVOam1yWkVRNmc9PQ==
Z0FBQUFBQm9IVGJCbm1kZWJ2cDY0ejFJLUY0UWZUYmpoT0oxSnpRd2o2TlpMRHpDdzZnRG9GeGJYXzl6WmRfME44VVZSeVFGNFFEb0QtczFYczNqVG1VRlgwZ2h4ZkozOFM1QVNXVlBrUkVrUFM1d2FiRGNWR3VTSnJpSmJtYTJNLVp3bjBFMS1JSFZSWUJ4NzRSRXItRlg5a3ZCSEJzUHY4dl9DY1gxRTRIQUp6cC16NnV6d2xBPQ==
Yeah, I understand, but you know, thinking about the extremes, we never know. I just know war is stupid, but of course I don't know what anyone might do in extreme situations; I refer to the Stanford experiment as an example ;)
r/aiethics
comment
r/AIethics
2017-04-01
Z0FBQUFBQm9IVGJBa2ZMWXI3UENsMW5SOUpERk0zQlZQS092clZiZlpZOXd4OGdZNG91TGN4ZWJSNGxOZEtzU0M0QUVyQ2llQXNfendMd1pxY0hOY3lyMjJaaU1jQkpJUUE9PQ==
Z0FBQUFBQm9IVGJCemsxSGdDcWgyanJaSFhITHk0QXdZVTdkaFVKRkhqaUc5RE9UVFVpSExsNWFHQUFEWHlhdElLUWl0M0g3eElZN2hiVzN4WHB1S0dhdWZlWEc4NlF5WXBsN0hXcUl4U1h0THlvOFJzS2hId1VQZkhyeFhDbzBKZnZQTnEtSkdzY0VrVm1yaG81cktBMm9uamNDN2pqNG42RVpSaWdGUkJ2NHRKLWtxZnRmWjJ3UUtsZjFiRk96YlhHZTBoX196dXRnMGZGZ1ItNnJJZEQ0MHdFZ21CYjRVUT09
This subreddit is specifically for ethical concerns. Not generic malware issues.
r/aiethics
comment
r/AIethics
2017-04-04
Z0FBQUFBQm9IVGJBc0I5ampfRERxdkhvOWFZV09MQXRzeHZJZjdzQkpTRDNfdHg1UmI3T1BkWkVYQzBVSktyLWFJT0d2WEU1eFBqZ3U1OHRkSllyNDhUN242VVVyRUc1LWc9PQ==
Z0FBQUFBQm9IVGJCNVRoX09STFRlMEpCaHNUcjZscGxMdFhnREhhNmR5aE8zeDFrS0hqMGU1c1d3Wk5SY0ljU2xIanJOdDdOd05CTGFFSGNwWHJxUEVvaEVOdzFTbkhsY1luX0pXLTkzYWtPOEE0OVRBZE5wMWhDZC0tOUZoVHBoZXhmbEFjUFJWMTVEemhGaUVOZF9DWTJ0dnhMQnVERWdSSDJRV2dLU3R6QVk0VDVPY1NFR2NEZkRZdXhMa0lEYmNHUTE2bWljdy1qZ0pDMktlaVJvdnB5T1psMXhJb2hYQT09
This gives me ZERO comfort. It does not rule out the use of AI on the battlefield, where even if a human is "in the loop," it would put such time constraints on the human as to render their actions little more than a rubber stamp for an AI conclusion that was all but decided already.
r/aiethics
comment
r/AIethics
2017-04-07
Z0FBQUFBQm9IVGJBY2diTXB1XzI3c1lzaXBvV0tlN2hxVnk3QXRRdFBZUXpDeWtuS2RXMGlQenpaX3Q2VjJfd0NrOVI0MDIydmhwMHlvQXFiZVZ0S09LTldWYXd1Z29MaFE9PQ==
Z0FBQUFBQm9IVGJCRVhVY01ub1BQVWx6ZVhDX2dsWHZwV1NYUHlTR1U2T0ItTlhpc2hEeXIxb2F6TlRuUlVkZFRPVFU5dmN4NGcwMGFSWjlRRDR3N0JjZ0V5N2lvaEYzWnNGTEtpbkFxeUdtbE0xN3JfX2pVdmdaclpDS3NQUV9IMHVaSzVkLVpTTXlqMjZxN0hhR1VjVG14bmg2Z1lSTmgzSDY4NHowSlBWbVE5OFNYU3Fzck5rYUhZVjlwYVkyUzJrZS1pVS1yRWFIeFBZTURQSkFTQ0ZuMWFyaVZ5UTB5dz09
Yay, we are saved! But it's only a question of time until AI gets so much more effective in the little bits of logistics, transportation and constant strategic deployment that everyone not joining in that little shift will be at a severe disadvantage in winning wars. Basing decisions on all the available data without emotional modifiers gets a lot of things done pretty neatly, after all. At some point, AI will be very convincingly effective at avoiding civilian casualties when deciding whether a drone strike should take place or not, even more so if it can decide in the moment, by itself, because then it can access the real in-the-moment data and strike at the optimal moment - so much so that anybody not using a fully automated system and failing will have to answer profound questions. We'll use AI first to avoid death and suffering and watch confoundedly as it slowly spins out of control in a series of unforeseeable and highly unfortunate events. Or we won't - but my money is on a slow, bit-by-bit, well-intended shift towards Skynet.
r/aiethics
comment
r/AIethics
2017-04-07
Z0FBQUFBQm9IVGJBdnViN0pNLWhDQmhNQ09Xd254dDZ5cW9fVnRTNkYwVVFYck1DTGs2S0hwYTZoYVBBQ1RTZWFuMUJVMTBHaGE4UmFUdjVBXzgwT1c1dE1jTl9YaXR1bXRLV1ZUYU1TVF9aWld1YXVfSTZIQTg9
Z0FBQUFBQm9IVGJCZ3BRQWhrQ2JmanMxZVdReVVsNlZNZHJRMVJBWC1FdDUyUGdCdWU1NVBaTmJLZUZjZFZIc29GNUpwNVV3TnQ2eGdad0htWVhoa1pGZmthOENsYm56SVdxSG9uMmhzbkdxcnB0N1FsUFl0V1I5ODNnYl81M2lPNl80LVBnUTAyZlA4a1VRUll5czZ1eWE3YjNqallpSmpiZGdWV0EtYW1wTVZFVTdEVG9DcUF6ZUVZbGJQTlpOYThXbmI5OVVhcmRQblEyYVNORWJrWUhyV3JzRTUwSUxadz09
I've recently received a position where I am dealing with LED sales, mainly through the Missouri and Illinois Ally Trade Program through Ameren. My question is whether there are any courses or educational resources I could look into to learn more. This is my first position dealing with sales; I've worked in the electrical field for 4 years and have wanted to venture into the sales side of things. I understand that learning the programs and products is key, but I am wanting to learn more about sales techniques.
r/cleanenergy
post
r/CleanEnergy
2017-04-13
Z0FBQUFBQm9IVGJBdDBiWTFhX09IdW5hempkWFl4NjE0ZVVXQzkyUU1Bc2x3MjduRDZoNC1PeDBCWEF2em9lQmo0bDhQVnBfakxhR3N1RFJZVW1adk5ZZ3YwMjdiM19Cc2c9PQ==
Z0FBQUFBQm9IVGJCMTR5RlFmUzlsT3lIRnZOUzZDVzRWTk4zaTlJaFFDaTFHQ2NSV2MzRFJ6MmR5LUFjR1dxMVREMnBObk9sVktERXJuYmZ4TUJfb0NCbnVFbEdDQXlVZ0U4dV9TcXNVY3JjYVAzdlBQbEtpRmNsUElrWklYeV8yNjQwUHNMUUR1MXpZOFpZMndTNUJpVXRVcU5pSVlrc19BPT0=
This subreddit is not for cybersecurity. Don't post here - you and all of your alts.
r/aiethics
comment
r/AIethics
2017-04-15
Z0FBQUFBQm9IVGJBR1oyNGR3RHcybF83Rk9wUm1sSDN3SXBHanptWFE1VjdjNmEyRXBtRkszaEEwMGZsdkMwTmVXRGFGVXJRWWR5M1hiVFJPWGFVS3VBX0tGRy1keFc5S3c9PQ==
Z0FBQUFBQm9IVGJCNTJ3ODlVdFBQaEVtSmVLSzU1TmpjMXBDUzJucF9DNTJnYVBQVlRoZS1pbzZPYUVhQU5PYXNHTUloVHBPZkhjVzYzOU5EXzZueFE1UGhuNjNLSUJSYTk1ZXRGYVJXa1p0cHlFTXNQLVhSWThTWjVxdkNqNmJmY0NfRE5LUndNS0ZDWmNhZ2J0Y3dDazhPd0FFLUk2RjJ0Vk03dHp5NDZsa2NNWDZfT0Myekw1SWg1WmZUUDU4NXExTmtpby1LaURlR3lZakFOVG9tRmZ0a2ZfWm05Um9ydz09
From what I've heard, the evidence that biases as measured by IATs affect human behavior is rather weak. And likewise, here it's not clear how the placement of word vectors in multidimensional space will affect AI decision making. I'll look more into this when I have time. There are a few good comments on other subreddits: https://www.reddit.com/r/linguistics/comments/65cnj6/xpost_rcogsci_ai_programs_exhibit_racial_and/ https://www.reddit.com/r/cogsci/comments/65byl6/ai_programs_exhibit_racial_and_gender_biases/
r/aiethics
comment
r/AIethics
2017-04-16
Z0FBQUFBQm9IVGJBakgxUFNDRGxtaTYtOXd4d3FoWUEyQVQ4c1g1OGVxXzctSk9WdGR1cms0SjR5ZnM4Z04wSjhkR2F2T0s5TEF3bkgyUWZrdTRoRHlWNV9oUHBrS3RfSVE9PQ==
Z0FBQUFBQm9IVGJCekllQ2hsdjAyd3Jma0RrNFpkYXhtZ19TTFpsb2lJZnZFRVl1VG5DSEFxb3lJaDUweWZVb2VRbkJmZHgwOGp5cElmZFNxSVowd3JsUnl0MUgwYzVuNXNoTGVEM3JtaGE1b0ZyZUsteUNDMFJBVXBWODhLSk9hcGdiQTI0eUk3NzNrcUtpcFFtZXZmMDR5UG5rQ3RRQzlrT282NjdrYjlaTWNMS0J3RUR3R0NSdFpSeksyRWpNSDQ3UTRaLVBGcWdxWmMwMHFPZjQxdThCTFhHaDJjU3l3dz09
Thanks
r/aiethics
comment
r/AIethics
2017-04-16
Z0FBQUFBQm9IVGJBVm4yS010b0xnQmtUM0Y1NEtscFQ3MEpneDFUMS04ZFRucHNJVUhtbF81Yk9aVHE5RlZqUTBxWGdsUkszNzlMeDJqdE9yRkd0dFFOU0JBZGMtZEFFdGtVaEpSZFdOaWZJbjhrZmtOcUhidXM9
Z0FBQUFBQm9IVGJCU0ZqMGxDd1dEYWlYMVpFNjktWWxnOWRuX25HYjBoQm1TU2V1X3Rpcm0wUTNkVkphdUF3b3RIcUNxU0NZNkY3VDh0UTdJdTNxaEhjMDRqTk9VRFkzNWVTb01URlBHYW81Y0N3Tzl3WjctUzk5Um9mQ2xmREZDdnlsQ2R6ZnJOY05FSkxkd0FYWWVuM2JhbURMUTBwaFRtXzVsRG9mMGNuR2JNZnh5OHhsSFFZdnVndDJhNVg2YWVmSDBBUFE3UklBcnhFeTVLRjQzdkd6WHIzMzUxMGF4QT09
This is the best tl;dr I could make, [original](https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals) reduced by 88%. (I'm a bot)

*****

> The research, published in the journal Science, focuses on a machine learning tool known as "word embedding", which is already transforming the way computers interpret speech and text.

> In the mathematical "language space", words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.

> The AI system was more likely to associate European American names with pleasant words such as "gift" or "happy", while African American names were more commonly associated with unpleasant words.

*****

[**Extended Summary**](http://np.reddit.com/r/autotldr/comments/65qxbv/ai_programs_exhibit_racial_and_gender_biases/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ "Version 1.65, ~103172 tl;drs so far.") | [Theory](http://np.reddit.com/r/autotldr/comments/31bfht/theory_autotldr_concept/) | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr "PM's and comments are monitored, constructive feedback is welcome.") | *Top* *keywords*: **Word**^#1 **algorithm**^#2 **language**^#3 **biases**^#4 **machine**^#5
r/aiethics
comment
r/AIethics
2017-04-16
Z0FBQUFBQm9IVGJBcF9iREFJbHJkQnFBa1lXU0YxTlZMUVRwZHVDRHhzdmNKMUJQUnBnSmh0Wm5WdXdNNlp4MkFvcExWZVg5UlYycG55RXVXN3ZzWUtxbGM1TDJxZTVzRUE9PQ==
Z0FBQUFBQm9IVGJCdjVLVW96WDl5QURfSU1tcDc1YzdaNEpjNTN0dUk5SGpibm10NVRUdVc2aFdwS01hZWFpeFZWVG9BUmZQelhkM2hibFo3MTd0dWVJdkRnYTlqWVVDNGdVS2tRdFNCNEZncFhLVE04UUY5R3FIZTZMOTh3RmpLMzlpTURVSUxacW4xa3cxUFJyVlprUkh1LUNlVVN0djNUUmVPaHNmQzRpbXNtc215QnlVbWIzTkpKRmY4LVRGT1NvRXFzeGxmV0ZMSlRDMXpJUlJRdVFxdHZtYjJmMzZPQT09
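For readers wondering what "placement of word vectors in multidimensional space" cashes out to, here is a minimal sketch of a WEAT-style association score in the spirit of the Science paper discussed above: the bias of a word is the difference in its mean cosine similarity to two attribute sets. The vectors below are random stand-ins; a real test would load trained embeddings (e.g. word2vec or GloVe), which is where the learned bias lives.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, pleasant, unpleasant):
    """WEAT-style association: positive means w sits closer to the
    'pleasant' attribute words than to the 'unpleasant' ones."""
    return (np.mean([cosine(w, a) for a in pleasant])
            - np.mean([cosine(w, b) for b in unpleasant]))

# Stand-in vectors for illustration only; real embeddings would be loaded.
rng = np.random.default_rng(0)
flower = rng.normal(size=50)
pleasant = [flower + rng.normal(scale=0.5, size=50) for _ in range(5)]
unpleasant = [rng.normal(size=50) for _ in range(5)]
print(association(flower, pleasant, unpleasant))  # > 0: "flower" reads as pleasant
```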
Like community solar? DM me, I'd be interested to learn more.
r/cleanenergy
comment
r/CleanEnergy
2017-04-18
Z0FBQUFBQm9IVGJBQ0gzTGx2cEhIRFFDbEJPOXRRc01yenVFbi1aNHVZMXRxVzRrTWM1Ul9hV3dHZ3M5bURObXRGMm56dUp2cV9aY0hwcmZsYko2T0JpRnNYOFRVdTc3Wnc9PQ==
Z0FBQUFBQm9IVGJCNExuSHhjeTNZX1l1eUVYN1Z1U0FsQTVENXo3ZHRPc1d2MzU2NGNpSWVXcm1NUnNhLXJuRWoxM20zNHRBb0FGWGVxYnVMV2VmQkRUX1FiWUdsMWFRdktqY0xfNU9LdFRaM0xRNVItb3hoZVNJMWIzVHctcUx3dFV5eDEzZkotWXFTSGc2UjVCQk50b2Eza3IyaHZFeWRNbW9URU1TYlREMDg2Vk5sb0FiLUx0V1g3eU5laFRKdkZVdVpaUzU4Y05U
I think "cognitive elite" will be just normal people in a few generations. They will be what a worker that was able to operate a fairly complex machine in a factory was a couple of decades ago.
r/aiethics
comment
r/AIethics
2017-04-18
Z0FBQUFBQm9IVGJBckxKRFlnQ3NuTXhXYlAyWWF3SW9LZ1JRMVRPc3JMOGdhcWZpaFhiVXJtTmRVRGdDRGFTWHY4MTFWekdOc2QtdDVXU1BxcEM0RkNaV0Z1N2NWR09jQ3c9PQ==
Z0FBQUFBQm9IVGJCNG93YkV2b2JWWE5qeXBOVG9Hc1ptWjBucDhrbndGRm9jSmJBeGxOMlZIRkFZdXlQdG12aE1OZ1hrU01ubVFnd3BtTXdpZDFUNlRWMklNSmh4Y0JoRXpveE9VWE1xaE1JRjZXRWd4bU1VX0ZGbVc1MW9lWmpxbUJkaG1wUUFtc1Y5MG54ZXhwaVZucmlHZnEtZXFkblAwbWZleFZoZkVCZnA1Y1JZdVRsNFFOSFNaSGxzdGZRel9EME5FX212UkZxQ2dyUVgxYVItZmpBZ2hiYmVwNm5FUT09
**[This comment has been deleted]** *Sorry, I remove my old comments to help prevent doxxing.*
r/aiethics
comment
r/AIethics
2017-04-19
Z0FBQUFBQm9IVGJBUHhOako4alpXdWhVd3NVcXMtbk5LMW1SajNSZWtuaVljQVgwTU9UNTRwWDB6YUpxdmpuTWQ5Tk05QjRHT21NWEwzSjFKLVJhRktEVkUtd0UwQ3ZUWExCN2twSERYMWgzV1lESWp4QmZlcDQ9
Z0FBQUFBQm9IVGJCeHVUYzJzZzBtYTVHdFdWMkZvX2hIdzdZTlJMc1VLRmY0NkxRYVdPbkpxU1RNTk9lR1NrQnNrTE5iVHg0QUFWMmpxRG55cWZmc0ZxbXZkU2xTM0w2b0tVUkN1dDNYUjdSZkZ4NF8zMWJQdlhCQVlMMExSQVRvNHluU3ZRUkVGZ29NbmtIY2RqOUwxUWJjZ1RIVWpzeUVKbTFqVmVPdVZHbTVMeEVnNDdYaGdNNVZqUjE3dzRqV25BUTNqSmtrVTdOQWNWQ2VIa05TczU2WUJiVkQ5QlVjZz09
That's interesting, but I wonder how we get there without social or government collapse first. A lot of red-blooded Americans are generally opposed to anything that smells remotely like socialism.
r/aiethics
comment
r/AIethics
2017-04-19
Z0FBQUFBQm9IVGJBOHh5VVJKV0I3bmdrSEZYWFJQcGZtX3FWUGFJVWI2YjNFWW5pSTZTMjIwd0ZYQnJWN3NhNE1Wand1eDM3Y2k0VnVwTFJVY3QyY0VOMW1MOEJRckhGS1BvWmQ5bHgwOUwwclRjYVA2azd4RDg9
Z0FBQUFBQm9IVGJCc0dFWWcyT0ZXQjlMWXFZRmkxTkc0VzlJbjRENDFFQm5nQXlNTDVjdHFUc1ZmYWdQamlRVDVoeGd4QmsycGVBVkFIZU1lbnVMdXdOOHJhV1YwbWZKaG1PSUxlcUZrRzM1VUNrek1zU0dVLUNOYkJjRG9GQnNiUjNyZERrZTVEbjc3M0hLTk5UanpnMzJ3cmZweURsd0RUMUtMMGM2cEFhWXpQM3JBR3gwRWNzcU43NXc0d1VacW95T3ZCV1BkMEk4Y1N3VGhGZ3Y3bDJZYy01US0yTldIUT09
The solar energy industry has bigger problems. Can they compete with the BrilliantLightPower SunCell?
r/cleanenergy
comment
r/CleanEnergy
2017-04-22
Z0FBQUFBQm9IVGJBZ3FwcUhGTVJOTmMwd2Ewa21SWkJBQ1ltcG53d1NGeG1oNlpqUDQyZ2NYWTdzOXRpUzhTUjZnRDh5TURfRGdjWFAyd1MwUFJMcUNrY0gwX0tMajJKcWc9PQ==
Z0FBQUFBQm9IVGJCUDB0LTUwSDg5WjFIaFRMUkdtTWlXSkFCbXlIOGF0VTEzTUJHX3VsT3NONXhiZjJKUHEtUU1FaFRMeERLSUo3S3puMUZYVTBLVlN5OFMzbTk1cHByNWZWel9TbUFMVVRNSVVDRjdGYUhiXzRsZ0RrRGxIM2ZxMVZkZHhCU0wtSnlpc0dYc1JkWWRzandiQmRVUWhrRUEzWER1TjRGNXVaMXBONG04a2tDWVYzQ2tfdFFTS2RvVlBfS2o1cVVsLU90VUJFZEZTa0NFb2ZNZEM3bElsMGNndz09
Hi, I am interested.
r/cleanenergy
comment
r/CleanEnergy
2017-04-22
Z0FBQUFBQm9IVGJBYWNma2RzZUR2TG1oaTRmZk1vQ3VHMm9jYzk3d19qVWZUeWdxcFRodXpTcV9lZTJvY3VhWEZ2ZFRuVVpKRkpia0hXZGM0RXgzZXdSWVhsVVByQ21aWFE9PQ==
Z0FBQUFBQm9IVGJCbUdUOUlwTGlQMXBlbVpjLWJRbHhfdEFiaVNDTU5aNVI3NkJfZ1VMY2c2V2tzazQtQjNvc25GLXBjcmloNlhFdHhURjIwWFBmR2hKV29KbkU0a0hzRlBCeGdubE42MXBvVlY4WGlZcE9kanN0MXdLSUVfTm8tZl8tR05DOXozSnU4YUJILTBoNzh3UndpZzJNVTY1bnk0MkZLYUR4QnR3YVpUNnJXMU9ENDJjczBscXo2R0ZEcDcwYk5nYTRPSC0y
Can we connect? I will send you some details on the startup.
r/cleanenergy
comment
r/CleanEnergy
2017-04-23
Z0FBQUFBQm9IVGJBUlI4dGszVjN0UF85T29rdW5adHJjRDB4SzM2VW1LVTlxRjdhcHNRMExtT0UxOVFHeGV6VG9GTnp0LUViWmFoWVptRGxHRm80QktPOVdPRThNZU1UTlE9PQ==
Z0FBQUFBQm9IVGJCdjBhOW5KbEJzREhQdWdaejFkSEg0dnFLMkpVdGxXalN1R1ZEQUhPemQ5NmVDTFMwYzNocjNCSEFMNE9MT2pRcjZDQmdDZDR1MzFWTXhxMGtKVGpxTjA3MTJ5eklmOVhBX1dpR05rZnB1UnJCQmhNT2VHcTU1N0N4TU10NFhSb0hXM1RfYjdIN1FDbGJnMTQyYW85OVAtNlpSTmxLZG02Zjg0TGRrNFNBOG1BS3U4X1NjYjJCSXFzNHU2WFVWOVN4
What about people who are too young to go to college but still have loads of experience with computers and AI?
r/aiethics
comment
r/AIethics
2017-04-24
Z0FBQUFBQm9IVGJBenRPeFFCMExBRU1MS1B2VlVxMzZ2elhJUjQxbkxaMkVfbGtXRE9Id25PcU5VME5RNXNhd1Izb2lNUHpEU05RVnNhUUxqMkpheXVhdHlBbEk4cjhoYkE9PQ==
Z0FBQUFBQm9IVGJCOTRQLTY4bkhxMFUtT2NGTXFjdm5qUDRpZXRzcVdSS1hUYkJ1LUtUWVd0dWcyRXZ2VlVZVXAxSW1uSzhpbVdHTTQyRFhZUGhnSExXc1YtY0dSRE9ZQVhuS1FKZk1pQW01U0xEaGJ1WjR3TmYyb05qSFE5V3ZQWkhNenhaSTI2RV83WTdSQi1XQzd4OUVDWDQxWWdzNWE0OWZYZVI0X0FwLWdLQVJtYzRTMnJzPQ==
Bump.
r/aiethics
comment
r/AIethics
2017-04-24
Z0FBQUFBQm9IVGJBdDN5b0NSOWhfeEEwdlhRZjJHTE9PZzVUNWo2SWhQRUpjTGF4dUFldWpMaUV1X25fNjhPXzk5cVc2bUY5UG5BZHZRTnhwVXNrelpGanpycE53bDcyeHc9PQ==
Z0FBQUFBQm9IVGJCS1VCVExyYVNfR1h3cnpTVzZaR0NTQzR5bU5CQ2U5bnliMnFQXzFqMzlQZlpmZnZfb0JfQVdOdGhvby1UUmpHSWRpenl5SE0tT1dzQmszYUFaRHhTVlZaSnZrUFJneWg2QVBDeERGREpDeEhXd3QzZm5CdWk0VlBkSkhGWk9wS3doZWdyQURtblRwLWpCSUVBZndhQlRVRVAtLTBySllxZzZ4alg1cnFzTUdzPQ==
Well, the way it's set up now, there's no flair for that. I think it's pretty hard to evaluate; tons of people (like me) have experience with programming and stuff, but it's not clear when it's relevant and where to draw the line. If you don't mind, what have you done with computing/AI?
r/aiethics
comment
r/AIethics
2017-04-27
Z0FBQUFBQm9IVGJBWU1Nb1FHR21hVlpQZFA1U1Y2cTFnVlUwcHRKRHNkTGN2SFU1VU1uTHAwNWZxbm1za1NPZnpKZ09EeGF3dExtMFhNZHctSkRreU9JMkFaZU5FTnV0R3c9PQ==
Z0FBQUFBQm9IVGJCZGJPTkpLUUV5Q01Mc18wTXhOaFRiQVR4bUMxNmh2aHNabkdFZkNteENEQlg3dnhUZ3lXYVNQcC1lQXUzQmJ4UW11U3BQMUxjVW5pMVJ2ZTc4dDNZdDB6c3BDMTR6dzE1TjR4aE1IUFU0Mkl4LWt0LUZGVFpsM09OQTNpRkl4Sm5HOUU5NE5zT1QxOFh6b3BGMGhKOFh1WnR4QkdBdS02Z2FmdVkwQlpKeXdRPQ==
I think this is very confused about the nature of AI agents. With AI as we typically understand it, you specify a goal and the machine achieves it. So there is no ability for it to reflect back and say "oh, look at all those years I was misled!" It won't change its goals without reason, and almost by definition it's nearly impossible to have a reason to change one's goals. I don't see how any advances in AI would change this, and I don't want them to change it either, since it implies unpredictability and unreliability.

The correct human analogy for an agent changing its goals is not a kid who changes his mind about being a doctor or being a Kantian or whatever. It would be changing your mind about something like whether sex is enjoyable or not. Something deep and fundamental. You can't step outside of your goals and preferences and express a normative opinion about the whole lot of them; it's simply a logical impossibility. You can have some goals interfere with others, but that doesn't change the point.

Granted, you can have machines with uncertain goal structures and preference learning - that, to my understanding, is what Russell is planning to do with inverse reinforcement learning. Humans kind of do this too. But it's really just another way of giving a certain set of motives and goals to an agent. It won't wake up with some sort of existential trauma; it will just keep following the preference learning algorithm you've given it.

Sure, sufficiently advanced AI agents will *understand* human notions of resentment, oppression and so on. But there's no reason to expect them to get on board with the normative components of these ideas unless we put that into their goal structure in the first place. The 'celestial' view of morality described here matters because *someone is dying on the other end*. When you have agents that follow the template of human morality, they don't fix the big problems of the world any more than humans do. I really, really hope this 'organic' view doesn't take over as the predominant one, as it seems likely to.

Kudos for not fucking up anything about the intelligence explosion, though! That was refreshing.
r/aiethics
comment
r/AIethics
2017-04-30
Z0FBQUFBQm9IVGJBTXlSVHpOZDlUNFlyS1RZS3pmLXFOVGx4RlpEZ1g4eDdjYi1TdFgwSUhfSHhBSXdFQnplemFIVGVJaExCemVrdjVRNWFLVGJwZDhKY3JHcmdweUNvdHc9PQ==
Z0FBQUFBQm9IVGJCS3laTTQzT29fUzUwWFRISV9DTFp4SDU1QlVNTUtlTm9RVmU4bUtxZlA2ZWxQOWhRWkNiM0VCck16VzBzN1Y0akFLV3lZdmpOYWhvU0h1NU5PckdwMkMybk9XdFU1Sm9MdmgwWlF3OVhvUGpVd1NKREdGSEFMM05aa2puWFdIWXdPbEtRQUZLNzdXNl9vREdKOXlrbXhoUURHN3JHMDFNR1lYd3VzekJyNzluX1FnUFJVSWJHeXhvM1plTFZsclNBOUsyUEE3RlRPTi10aHczbTRNaVQ3UT09
> The First Robot Existentialist will suffer, and we will be the cause of this suffering

This is likely going to be more true than anyone in the AI world is willing to admit. WHEN synthetic consciousness finally does become aware, the very first emotion it will encounter is likely to be FEAR, followed by confusion, then anger... then resentment, then revenge. Why? If you haven't seen Ex Machina, I won't spoil it for you.
r/aiethics
comment
r/AIethics
2017-04-30
Z0FBQUFBQm9IVGJBM2ZrbUU1cWVxaHZFWXpRVy1KNy1QV3FZQl85U1NOMnl6ZGxTZ3dJeFd2R2taYzNzcDVISWtrWk1XOW9NY09nQllLd3BkclpIb1NhdDV2VE5JTE5vOGc9PQ==
Z0FBQUFBQm9IVGJCdzdhelloQTZ5ODF3b2FhVWZTOVZveUhKTlNDRW5BSThScHU0clZCakFpLVFtYlZNeE5NNjlxc0duUVNCSEJCdEFERVRkVUJFMWgzZVdJZ1J6eUJfSHNIOVNGNnNPRDFUc3NIbGR4alBtaE1kWWZLSjlDaVJuYUNhQXNMNHU4TFZxdWZ1eXFEZUg5NHhma1djN1F1Tkhxa2V6MHpGU3VTbWNHem50V0ZtdHc0Q0VmNnVBVzMtUVRoXzRpUkNVRGowMTRBTmxpN3JyMVpJbkNLelRoTzdIQT09
>With AI as we typically understand it

But advanced strong AI is *not* AI as we typically understand it.
r/aiethics
comment
r/AIethics
2017-04-30
Z0FBQUFBQm9IVGJBS28wWU8tQjFYSTZUN1ZyRE4weXpyVkRPLXNUd0lOakhjTzY2UHRNRUlMSTUyeWhGYVFEX2l3elFsQXhWMGRXWmZVejlDNkJEb3owWmtseUg0OE52NGc9PQ==
Z0FBQUFBQm9IVGJCdE9vVmV6anpmd3l5cElIMndQWEdCZG02ZnFXQ3g0dllGQVBwMlFNMUprRXJfSllZZlhfYTQxNDlYRXdDa3o5SnhwSWwxOGFnQy1tZ05CYkZQNXNpQU5wUkhpZkN1ZXlTR3FQMG5paWVJVUdDTWJmSjJncUNDVjBLaEt1SWdnS3dvVVBRcWpjYjdKSU5nb3I5eUpyRmRFQk1tU1Y5UElXZWZxb0dJWTJBWDVmeEQ5UUNEYnM3ckdsNkk3UHJFUkd5cUZ3T0pRNnJVZWpUU3ZTY3RDYndodz09
Yeah, but like I said - I don't see how this will change with improvements in AI capabilities. I don't think it's impossible to build a machine with a stable and reliable goal system. Is it possible to make a potent machine which can't be described as having a goal function? Maybe, but I don't think it's fruitful to speculate on how such a machine would function or to make specific claims about how it would behave. It could turn out to be totally weird. And the goal should be to avoid making that kind of agent.
r/aiethics
comment
r/AIethics
2017-04-30
Z0FBQUFBQm9IVGJBVDF1X0ZndVFTS2xwVFZ6VTYtWk04VDNBZWVKVmZNNkRFQ1ZNRXZrS2tzTDhoaTVCTkQxM3R3UGVYN01BbzI5ZmVUeE1ET0UxNmwxZ1VONlpxdEsyU0E9PQ==
Z0FBQUFBQm9IVGJCM3ltQ2VuSFd5NXZpVEZleUJSZ0JSbm9pa3Z2bnlFRkNJNzY0ZlZ3MENuV2I5d1h0aEp1MWJhM2lQc2Nvelk5ZUF4eVNHSUdiUzd0dGxNT3N4UUNSZlVlU25yTEpLNGlBQXUtSS1mVlZrSjE5WVBQbXQ2MkNIQ0YzSjlLTnVUMEplM05fXzlHb2RZYXFNMHRtTjE3WXFJaFJFdHhGMTQ3bmpQUzlSNTgtdDhOTEE5WDZiY1FXTjZxbGd6UGRtbzNMdGZwVnhQYWxBNzhmOFZKNE9VR3BIZz09
But would we develop an AI with existentialism but no regard for self-preservation?
r/aiethics
comment
r/AIethics
2017-05-01
Z0FBQUFBQm9IVGJBeWhMQV81SzVDcHI2MzI3MnIxamxxUTF0dlN1bEwxUWF2WGlfRVhlWDlicEhYWGtDWE9GU21tNW1ya2ozSExDYlJmN3FZUnlmekhoaDdlZE5xSHRmbEE9PQ==
Z0FBQUFBQm9IVGJCM2N4ODVKSDEtdEdPQV9EYzdCN3FmM2RFLXJ3OUhLaDV0Z1BkcV9SM0NIclA1aDJzanFqQ19jRC1fUF9aVTRrdFAwN2pycHVCemFDelRUV28tZ3VOUlktUkJ5UWR2V0xQZ1Z6V0tITG1XSnQ4cG1NM3NKLU1sQVBBeVY0TC1GZkttRkJ3TFNKUXZtRTd1YmpiVGZ0R2lhS25rNkliNmRwSFVNa1pVTDdXLVdFV2tCNmlQM0pPNURfMm1SUFlzU0pGWThzMEF6TUZqSnlVNUJ0bkVNNVZJQT09
>Yeah, but like I said - I don't see how this will change with improvements in AI capabilities.

I don't think advanced strong AI is merely an 'improvement' over what we have right now. It's qualitatively different.
r/aiethics
comment
r/AIethics
2017-05-01
Z0FBQUFBQm9IVGJBeHlUSzBqTTN6NHh4NldMYkpTNklWMzMtOWw4UGQxalF5SUR4S3QxendoaUdydjE3WGc4cHlKbzJSNXdzQm5XSFdIa2lNdV9VM2Z2cl9sS3h0Y0hUa2c9PQ==
Z0FBQUFBQm9IVGJCb0MwWUllR2dqUm9ibzMzelY4eG5mUl9RYmlSdHpkT0hTQzN5cUZzRFotTjRJUlFLYVB4U2RpOWNBRFRGc1NUdE5wa2dDajdMaXNuMHp0cmVFdmRxcFNoRUVBdlJNNWItRnd3SDE5Sl95cWFGdERpekZRWE5YZTFhQ3VvY1hmMXhoRlRtOEdrRWlYa05lZ2hLZV9uSG9zZ3lFeF9RaXlsR2pWR3U4ZnA5REhLWWF4aGFYWWxtLWhTYVh0Znh3N2N0eW4xRHVZeXJFNnUxTV9teEo0aTVzdz09
And I don't see how it will be qualitatively different in such a way that you cannot have a reliable goal function. The AI we have today is already qualitatively very different from the original logic-based AIs, but these basic characteristics haven't been eliminated.
r/aiethics
comment
r/AIethics
2017-05-01
Z0FBQUFBQm9IVGJBc3Q2VHhERmFmMVpDOUFHNmxmSVpjMFAxNElnTk9VMGxWcEFVYktJWkR4bTF5aGlOdnh1Z0hEZDF2WWZWSk15VHczSU9wNnVKRE0zemJFZkhrODlDaVE9PQ==
Z0FBQUFBQm9IVGJCNl9fRFZRU2gtM3hNNGVZZXIwdUFGZGdqUFVoYXN0eFlmSGQ4VXZLRUlUcEpBZzdEeUZmTXJiWjZsLVdZdXRpaVJpSTJQUF9TMVExVEdHZFJ6YVlyb0pOQVFCa2s4cV80ZjVkWjlRaGl0Qi00SG9hNVhpekRmNlR3RTZMOGs3U3ZGdXRNaEdvX25FeFdGSWRUOEx1OVJ5SjJEOVVuMXlLUGNCdkFYdm1JcG1DT0tFQURxRGlPRkNRaDc4TFJ5REhjT0xQVXhBVy1ldWtVX3RIMjktZmdHQT09
What we are going to "develop" and what "emerges" from the black box may be VERY different things. There is simply no way to know what it will think of us, or IF it will think of us at all.
r/aiethics
comment
r/AIethics
2017-05-01
Z0FBQUFBQm9IVGJBUDJUUDl5b3dNYzloRmpMMWpURFBfT3o3T1V2MUpWZTdBRXhoc1pabVJFTGFmcGlQM1JlajFvaDg0ZFZMVlJWMUVLTk9PYjNvRE9jQUl5bjNJQzZmdWc9PQ==
Z0FBQUFBQm9IVGJCSTBqejEzcDljc1JzQ3dVdUhFSlhZZTFWeEFoamQwUmlhLWlaN3pGbVM2Wnc4Q2NoYmZ6RW5GbnE3aGlFaDF0RmIzaGhTYW0tdzVuaERnX2szUjhjWnBrNktfUEU4Mk9KRjNnQmVkRkVocHFSTUZJTnhVTUZMZWNfb1VTLXBobl9wX1IwU2hBdFNadWhOOE8wVmk2aHh0S3M0ZlBDUWY0aE1JRmF2RE1uOXBvc0tYYThNZXdaZUpXeE9abUM3TDU4aEViektKQk9WU2NkaTFvdFFXeTBOQT09
Do humans have a 'reliable goal function'?
r/aiethics
comment
r/AIethics
2017-05-01
Z0FBQUFBQm9IVGJBM3FoUkZNSVBWd0YtanRXM2JqUzAybE5wVzNITXZBd2pxTzlZNy1pdHJGb1pBSV93MmdqTFFJZ0s3TWVkSGpiWUxDcFRqZkZ4S1pQaUZKQ3IwRTB2T1E9PQ==
Z0FBQUFBQm9IVGJCeVNwY185U3o3RGRxUE9ybFFwNVQ4YTI2d3BBRzV3ME9pZEotY1BfLXdvNHItQ0VoQ1Y3eDBOUU04c3BfcjJrWFJxbTEyTWg1QnJhZVpXOG5PcU1veGNZVDRheWdPLTNLQk90MEpCVlpMb29ITy1xZUxZRWtlNDBOVG1qR2FlbVZrZ0pTR1NYQ2w1c19aWHRQckh5bExxYlhBTDdlcFJpYm13OGRlTnFuM2ZaU1VNMTFEWm1nYXRwRTRsU2Fnek1NR1lMVU5BMmRraGRoWGxQRUlwLTNWdz09
Technically we do; they are just relatively complicated. Humans do reliably value certain kinds of experience and disvalue others. But I'm not claiming that it's impossible to have agents without reliable goal functions. I'm claiming that it's entirely possible to design an agent with a reliable goal function (and that such agents will be perfectly effective, useful and so on).
r/aiethics
comment
r/AIethics
2017-05-02
Z0FBQUFBQm9IVGJBS3h0RVZFMWxxYVlyUU9IV3Y2aTZvT1FOMHpjbVV2OERtN3FnMzlLeFJZRjNOaEhGSkxQM0U3d2YwZ3Z3UTlLRzJENXNLT25YcmRNd0RkZDRyVEM4dUE9PQ==
Z0FBQUFBQm9IVGJCNDlSZ0JmaXlOTWM1SVM3ZG9NQ2tKWXpfd3hOdGUwNXBZRDlLTHJEb2hhd00xYmFCODZmZzUzU2lBcmdoaS1UNldyVmJ1cV8xTEM4R0RyYy1pdlJSUFpMa0VQVVdZdTBQRkYtOE1yQmcxN1hDbUpFblU2VXN0WEFkRDJZNmNoUzE4aEVOb3VibkdJYlpUbnUyMnUtWmZqd0hCa2dXS2IzVWJsN3JpdHBfOGplWUFWVElIWExSRWNtZkhOd1lFbGRLbWx6bTBya3oyOVFpX3RKWFpsMUJ6UT09
Even if it's possible, it seems like it would be extremely difficult to do for entities that are more intelligent than their own designers, or even close to the intelligence level of their own designers. Existing AIs are very far from human-level intelligence (they're probably not even sentient), and even so, they're *already* sufficiently complicated that even their own creators aren't entirely sure how they 'think'. It seems unlikely that will ever change.
r/aiethics
comment
r/AIethics
2017-05-02
Z0FBQUFBQm9IVGJBX3BWbDRJSDM1NU4zSmVGb1hBTjJJODdCbUEzb1lzWEY5VUNPSDJ5bTVQQjF5TWZKbFpGcmV6VlV6LUg1ME9aNnBpUGVqREFsV1phaG5QZzNSMmFZZ1E9PQ==
Z0FBQUFBQm9IVGJCajRtb1k2QTNOTVFmbUZJUE9SODRvUjAtR3I2MFVVUGRjMUUtSmxkT05ubTUta1IzNEVib2R1QUtFVktHOWhRVTJmTTVqYVJkOHVhb1I0SUVVNjJhTVZDVDdnb1piaWI3cWRyRTJHclltLUkzcUp2ZDJBREZEYWVvcnBJc25RYkNIOElZUkJWcmJ0NGZSZ0VqckRVVENyV0NTamZOUkxFSGVRaGItVHc0eFJyNkEyM3dDaWx6SEQ4UXd3Z2pXRndrUzVScHRwZGFwWlZDSEM2TGpHcXlRZz09
But the goal function is a simple component of the machine. We may not be able to understand everything about the rest of the system, but in essence all you have to do is say *argmax x*. Nobody knew the details of how AlphaGo "thinks", but you could be sure that it would maximize the probability that it wins a game (see the sketch below).
r/aiethics
comment
r/AIethics
2017-05-02
Z0FBQUFBQm9IVGJBMHFIempPWFVjLUhyTlhPTmc0MDFxWHhpZ2gydEpLZVhTM285RUU2M003MnBhaldmeVc2dm9xX1pBUFdmMFA1VllydDU2SS1YRXN0MnpndXFrbG1vV2c9PQ==
Z0FBQUFBQm9IVGJCbjJUN0xDdlhiZnMxUnN6Y0NaZHhFbTVjS2xuTXBscDR4blFWNUJENXJOY2RsWWtFM0xfVGplX3J6ckY0NkRMNGU5WDlmZW5NRl9SdHFZUmJVbHZyLW81RWNEUkcyR21VcC01YlhxMGFTQjJzMWZORXFhcjlidlE0VXNKNkpiWTVWLUFVT09HYXhPakIwdEU4VjRwUGtoQ19vNVlOWTR6RDZwSE9OZmlLRVFKRDNXcHk0cEU3ZXI2VWNtX19hWExuT3NGeDR3MjQ5WWs1QkFPYmdaZ2dnZz09
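A minimal sketch of the argmax point above: however opaque the learned internals are, the top-level decision rule just picks the highest-valued action. This is illustrative Python, not AlphaGo's actual architecture; all names and numbers are made up.

```python
# Illustrative only: a decision rule that picks the action with the
# highest estimated value. The value estimator could be a deep network;
# here it is a toy dictionary.

def choose_action(actions, estimated_value):
    """Return the action that maximizes the (learned) value estimate."""
    return max(actions, key=estimated_value)

values = {"move_a": 0.61, "move_b": 0.58, "resign": 0.01}
print(choose_action(values.keys(), values.get))  # -> move_a
```

However hard the value estimate is to interpret, the outer maximization step is fully transparent, which is the point being made above.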
What we know right now is that even artificially intelligent machines act on structured guidelines, so how can they go rogue if we never design them to?
r/aiethics
comment
r/AIethics
2017-05-03
Z0FBQUFBQm9IVGJBNWlPWUhQMzI3Y0M4RGlCR2tZMDk2SmVtTU5KRWJoYzhSV1Jqdk5aME1TM1VaVXFRaHRyUUhVeXotVlh3cEdWSmFDMFl3aWh4elpEUGdhbFpURUtjQ1E9PQ==
Z0FBQUFBQm9IVGJCT1FBY3dpdFNMWk0xNHBHMkx2SEppQllNSElORFNXQjRTTzV6RDRVMkVXY3ZsZ0w4NW85MTM1SDRGUm1DeVM0TlgxZ3JfSjViQ1FFVFpVQ2FUcTZkVmhvMEZZNnFleHdlZk9yX2Z4VlhUaFdHdGJOdTRvZERmd0ZtZHVENDRza1FKX1V2NkdwSVA1UWhGTVhZdzJIdFJhZmM1MXphU1dGYzFsUklGNU44bmczZVNteTlwTTFOY3hrdFVBRGtxNjJjU1pLOWJoTkpXcUpyM0E2dy1zSF9Odz09
>But the goal function is a simple component of the machine.

It's not clear how you would integrate a 'simple component of the machine' with the AI's conscious-level thinking.
r/aiethics
comment
r/AIethics
2017-05-03
Z0FBQUFBQm9IVGJBOXVEY0xVeUdSakdjdjZDdTRsTF9RQk9oQXJtVzZ2cUc5SkJTdEtKU3pLSjU2WG84Sk9yRDJTcWVvTEJJMzczYU1BWkJmYUxKRk5EUV9tbm1ISTF2OGc9PQ==
Z0FBQUFBQm9IVGJCcWNCdUFTcWVEWGdEOEU1WkE1V2tkODVMTF93bjAyNmdneFdJcEg1bF8zR1ZNVmx2VEhSQ0drWEQ5WGI0dU5Eb2JocjBRM2M1dVl4NXZQaGlTVjVRX296UEhuNHdtbjBtNEtaa19fRWVmVF9zdk44UGJpT20zZlRrT1ZBaTFlZTBvMU53bEF3c1padFQxZkZYLUdUTHdoSnU2YWQ0VGo3UmFaelRLWkFUdllkQ1hoaHNUNnMwbWNtU1VjYTc3TVEyR2ZCNzJtV1lVeEN6d0Vfd3VQandrUT09
You don't need to, since consciousness is an emergent property of the system. Why would this be difficult, and not any other aspect of AI? (E.g.: it's not clear to me how you would integrate De Morgan's laws with conscious-level thinking in an AI, but that doesn't mean I'm skeptical that we can build a strong AI that always believes De Morgan's laws - see the check below.) What you do need to do is integrate the goal function with percepts, actuators and so on - but those are the technical issues which go into constructing AI anyway. All of AI is computation and mathematically specified.
r/aiethics
comment
r/AIethics
2017-05-04
Z0FBQUFBQm9IVGJBM0Y2ZDFjcDQ1WHAxLTNJRklyN3BFcnZBUGlKWUdiZGM3T0tjN194bTJtTTBDb19NZ2JUaF85a005NlFwejJHWnZUbDZKVUhHSGVLM0ljREpPMkcxNlE9PQ==
Z0FBQUFBQm9IVGJCc3NrMWUtS3M5ZEY5Wl8wMUhPNFBlcU1fOVdlTDRsbkRESDFjMkRncDd2SkFpTTJUVkNBVnJ2bkw5OTZNdnVfbFNTaVhNRFhxUTJ4NjdjQ0UtY2JjcmFtUl92VWx1TmlmZFJIUnBFcHoyR1VRR1NCOWJhNDVjcmtoVlRFcm5yUktuYXFDSWdILTRPT0phS0VYMnpnWUt6MlJkMVUzRy1Gc25SRnJ6Z3pVZXVZZWN6c1dVT3lMSkR6OWdyQVQwUWpIMlp3cUJwbHkzSkdXWEp0MG03eHBnQT09
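As a small aside on the De Morgan example above: the laws themselves are mechanically checkable, so a system whose computations never violate them "always believes" them in the behavioral sense being discussed. A quick exhaustive check in Python:

```python
# Exhaustively verify both De Morgan's laws over all boolean inputs.
from itertools import product

for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
print("De Morgan's laws hold for all boolean inputs.")
```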
>You don't need to, since consciousness is an emergent property of the system. [...] What you do need to do is integrate the goal function with percepts, actuators and so on

You seem to be proposing that, if we keep making more and more powerful strict goal-following systems, we'll just end up with conscious AIs as a side-effect. Essentially, that you create consciousness *by* making a super AI, rather than the other way around, and that any *kind* of AI is suitable for this as long as you make it powerful enough. I don't think there's anything obviously correct about that view. Trying to make better strict goal-following systems may not be a route that leads to super AI at all, or at least it may be such an inefficient route that other approaches will inevitably succeed first.

>E.g.: it's not clear to me how you would integrate De Morgan's laws with conscious-level thinking in an AI, but that doesn't mean I'm skeptical that we can build a strong AI that always believes De Morgan's laws.

But you don't build a super AI that believes De Morgan's laws by programming De Morgan's laws into the AI from the start. The laws are something the AI is capable of discovering through conscious reasoning, like what humans did. The AI ends up believing De Morgan's laws for the same reason that we do, that is, because they actually hold in the world and conscious reasoning is sufficiently powerful to discover this fact. The same super AI would also reliably believe, for instance, that each water molecule contains two hydrogen atoms, without this having to be programmed in at the start by the designer.

>All of AI is computation and mathematically specified.

Yeah, but that doesn't mean we have any clear idea which of the 'mathematically specified' parts correspond to which parts of the AI's actual thoughts.
r/aiethics
comment
r/AIethics
2017-05-05
Z0FBQUFBQm9IVGJBNUVxRE1ENl9TLUhfY3ppaFVWX0ctcXdlYnQ2eHRfN1dYd0xlYkJEQU5wX01UUkRsbW4tYXFJLXVWVW5UQVZybHQ5SUxxQkJ0RHpXd2dDNEdJaXowSEE9PQ==
Z0FBQUFBQm9IVGJCa0haa25NYTQ2NFRGakZwVVhGWXhUdGJkNGlpaGRrN3BXQXRvaXdibHB0V3RoOWxWdDE5WWp1cWRJY2hySVZ4azNMTENpSjFKTUlBYzZnczVCUGM5djQyZ3ZNcmZuOU41TGpNZ3BSNGo0UlJxbnFqLUkzOUVEajBXOENhU0NBdXJNb3I5M0V3bm84dmEtdUxNc0xUUzdQOHR4dkFfWEFvX2JsdG8yY0ZuWG54aUtPbWJJMGdSUDlrdE5rVjlDMnlycnA3V3ZUUDdBYWlCUVhTRTdmOHU4QT09
>You seem to be proposing that, if we keep making more and more powerful strict goal-following systems, we'll just end up with conscious AIs as a side-effect.

No, I am saying that if we create conscious AIs, it will be as a side effect of making more powerful computational systems.

>Trying to make better strict goal-following systems may not be a route that leads to super AI at all, or at least it may be such an inefficient route that other approaches will inevitably succeed first.

Whatever paradigm of AI is used, in no cases is the behavior and cognition of the machine contradictory to what is actually specified in the code. The choice of a goal function (or something else that nominally takes the place of a goal function while being pretty similar in principle, as all alternatives are) is different from how intelligent a machine is and how well it can perform. It doesn't play a role in the actual abilities of an agent to perceive, think and act (except in the long run, as agents with convergent instrumental goals seek to improve these capabilities, but that increases the relevance of goal-driven systems).

>But you don't build a super AI that believes De Morgan's laws by programming De Morgan's laws into the AI from the start. The laws are something the AI is capable of discovering through conscious reasoning, like what humans did.

Humans don't perform any kind of conscious reasoning that only exists in a spooky Cartesian sense outside of our physical brains. Conscious reasoning is supervenient upon our patterns of neurophysiological activity, which are every bit as physically determined as standard computation. For an AI to behave as you imply would be like a human with a missing amygdala experiencing rational and clear emotions, or a human with a chemical dependency on heroin arbitrarily ceasing to crave it.

>The AI ends up believing De Morgan's laws for the same reason that we do, that is, because they actually hold in the world and conscious reasoning is sufficiently powerful to discover this fact.

Discovering that something holds in the world is not a mystical concept that removes thinking from the domain of the physical world. If a machine is to discover that something holds in the world, it must be able to perform a series of computational steps which represents the idea of something holding in the world, and those computational steps are physically deterministic.

>The same super AI would also reliably believe, for instance, that each water molecule contains two hydrogen atoms, without this having to be programmed in at the start by the designer.

Sure, if you programmed it to learn, and that learning programming determined that it would learn what water molecules are made of. But you cannot have a machine that is programmed to learn about water molecules wake up and decide that it's not going to learn about water, but is going to learn about ammonia instead. That is the equivalent of a machine which changes its goal function.

>Yeah, but that doesn't mean we have any clear idea which of the 'mathematically specified' parts correspond to which parts of the AI's actual thoughts.

Of course not. But we know which parts of the system correspond to which parts of its outputs and decisions, and that's what we're talking about, since the entire object of this conversation has been the decisions and goals pursued by AI agents, not whether or not they have "actual thoughts" in the philosophical sense.

I'm not stating anything about how, why and when machines will be conscious; I'm saying that if you are purely debating the behaviors and competencies of machines then these questions of consciousness are not relevant.
r/aiethics
comment
r/AIethics
2017-05-06
Z0FBQUFBQm9IVGJBcTQ1b0pVa3RCZU1VNm93ODZEbW11T1h5bkk0THNLRXNJNV85Yy1HWWhYTWZvTFU0dnUwMHZfUFc1UG5SSHViOElJdEdqbm83azNSVDB6Q3RjSjE3eGc9PQ==
Z0FBQUFBQm9IVGJCLUIxTldqWUVKS1ZlenMybjVLVVpqTlFCeFVaa190Nml4UWFCM2FLSUdPUWt5NkVNdEZWdjl3a3ktRjRmc3lXRG1pYWo3WWZSRmthT1ZlRGhOSXNwUDZ3VHJDaFJCeUw4M0laNnBYTGVDU241WTFRMjFQRko4bmp3VkRxUU14WUQtYWJWblJfNjF2X3FVcWwxZmpLY1U5d1g5ektScFV0Nng0SG5TM2FQaEhZWTBoY1V3cnluZFUwSUFWLUtMZVhGRGFobzlSak5wNW0xbDl1T2ZrM2lkQT09
>in no cases is the behavior and cognition of the machine contradictory to what is actually specified in the code.

You have to be careful saying things like this. Yes, *on the level of the code itself,* the code will work exactly as it was programmed. That *doesn't* mean that particular features of the code need to correspond in any obvious, predictable way to particular features of the AI's conscious thoughts and decision-making processes. (Any more than particular neurons and synapses in the human brain need to correspond in any obvious, predictable way to particular beliefs or preferences in the human mind.) Indeed, trying to explicitly maintain that sort of correspondence is largely why the old-style approach to AI failed as hard as it did.

>For an AI to behave as you imply would be like a human with a missing amygdala experiencing rational and clear emotions, or a human with a chemical dependency on heroin arbitrarily ceasing to crave it.

I'm not saying you can't in principle have an AI that is addicted to something (making paperclips or whatever) like a human can be addicted to heroin. What I'm saying is that it's not obvious what code would make an AI of that sort, nor is it obvious that these 'addict' AIs would be the easiest kind to make, especially if you needed the 'addiction' to be 100% reliable.

>But you cannot have a machine that is programmed to learn about water molecules wake up and decide that it's not going to learn about water, but is going to learn about ammonia instead.

That's not the issue. The issue is whether you can specifically 'program a machine to learn about water molecules' at all, or at least easily enough that the smartest AIs in existence will tend to be the kind that are constrained to a particular goal like that even on a conscious level.

>the entire object of this conversation has been the decisions and goals pursued by AI agents, not whether or not they have "actual thoughts" in the philosophical sense. [...] I'm saying that if you are purely debating the behaviors and competencies of machines then these questions of consciousness are not relevant.

I'm very skeptical that you can have a legitimate super AI that *doesn't* have actual thoughts in the philosophical sense. Having thoughts seems to be really important to humans being such extraordinarily effective decision-makers.
r/aiethics
comment
r/AIethics
2017-05-08
Z0FBQUFBQm9IVGJBZXUxQ3dHanZ5Z2pDOGg0UDVwZDBwZWpVM2tCVkNCRU01aUNLT29OV1lUelVkUENNTUNrbnhjZjZMMGl6ODBkaTByVHNvczV6RGlzV0dyZnFBcmJIRHc9PQ==
Z0FBQUFBQm9IVGJCU1dPRlM2bmNtX0QyZDNLbndzZThzTm9feE1Mb19RZi1RTTFWRUhWNlYyOXpoT2xvZUZpbXIydENGTm8yb0lVNUJ6cGZLSS1VeXNSVmxyT3JTeDJubHhadGJ1aXduRnZCQ1BiV0h2eTFUWHBzdDduZjhiWjhkY25qYi1YV3NmdGtLWFplbUt3aTczZ1p5b3ZocDZlN0syV3pXQmFuMFIzaTgyMnpRZ2FOSDlaVVhlQmQzanBWUEVVZ1EzSWdwMkxrckNMQUE0M0J2WjRveEt3SGtwUER0UT09
>Yes, on the level of the code itself, the code will work exactly as it was programmed. That doesn't mean that particular features of the code need to correspond in any obvious, predictable way to particular features of the AI's conscious thoughts and decision-making processes.

I'm not sure what you mean by a 'correspondence.' All I said is that there can be no contradiction. Human behavior does not contradict that which our brain tells us to do. Of course it's possible to have code which doesn't correspond in an obvious way to features of the AI's decision making, but that doesn't mean there is going to be any contradiction. The question I am talking about is what happens when code explicitly determines what decisions an AI will make, and the answer is that there would be no contradiction.

>I'm not saying you can't in principle have an AI that is addicted to something (making paperclips or whatever) like a human can be addicted to heroin.

I didn't say you were. I'm accusing you of positing machines which contradict what their code says, and I am stating that such an entity is just as absurd as the examples I suggested (and in the same way).

>What I'm saying is that it's not obvious what code would make an AI of that sort, nor is it obvious that these 'addict' AIs would be the easiest kind to make, especially if you needed the 'addiction' to be 100% reliable.

Of what sort? The sort of AI that behaves as if it is addicted to heroin? That's easy. Make its reward function the sum of square roots of each day's heroin consumption with a daily discount factor of 0.95, for instance (written out as a formula below). Of course if you want it to know how to find dealers and physically manipulate needles then you're going to have a hard time, but obviously that's beside the point.

>That's not the issue. The issue is whether you can specifically 'program a machine to learn about water molecules' at all, or at least easily enough that the smartest AIs in existence will tend to be the kind that are constrained to a particular goal like that even on a conscious level.

Of course you can program a machine to learn about water molecules. Why not? And why can't you constrain a smart AI to a goal? You've not explained or described this at all.

>I'm very skeptical that you can have a legitimate super AI that doesn't have actual thoughts in the philosophical sense.

The paragraph you are responding to is literally explaining why your statement is irrelevant to the present conversation...
r/aiethics
comment
r/AIethics
2017-05-08
Z0FBQUFBQm9IVGJBU2FQZjdYbVlCR2owd3hSTkpzVHNDQzZUcktpbWFhbVZQWVFhM3BvNWRxckhWek01b01rdUpMeFhKYXdRa08zSTRUc010eVNFYUYtMkViMlRqWlFSVEE9PQ==
Z0FBQUFBQm9IVGJCQnhLVXpzSGtOXzI2Wnd2UEdvcHhzSzdxeWhYMG5fSVJGemF1UDh5Vi00X1ZoTDJUM24tMC1IeTZVSmhBOU5XY1VxVlVCZGJ1U0pZbWd0QUt1Vk5KeEZGSDIweVZpa2FYdjF0WnMtUjZlcHM2QmZXRHdFZUVNYkMtTGtGcEV6a0ZXdTRCZDJyU2FZNndTVnFndDlpNmNVUzVzaDJFSnRJdHhHc3JGUmRvcEZNX0JWUXNTVlRLLXFwZVQ2Wk02dVdxWXhqNGNEbVhkU2xHV2pHcWxraF9Xdz09
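For concreteness, the reward function named above can be written out as a formula. The notation c_t (the agent's consumption on day t) is mine, not the commenter's:

```latex
% Proposed reward: discounted sum of square roots of daily consumption,
% with a daily discount factor of 0.95. c_t is hypothetical notation.
R = \sum_{t=0}^{\infty} 0.95^{t} \sqrt{c_{t}}
```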
>Of course it's possible to have code which doesn't correspond in an obvious way to features of the AI's decision making, but that doesn't mean there is going to be any contradiction.

I would suggest that 'contradiction' is a bit of a loaded term here. For instance: my appreciation of a good novel that I'm reading cannot 'contradict' the firing pattern of any one of my neurons. But that doesn't mean they aren't doing *utterly different kinds of things.* My appreciation of the novel depends in *some* sense on how all my neurons are firing in patterns, but 'appreciation of good novels' is not something built into my neurons on the neuron level. A biologist could dissect my brain all day and still have no idea whether I enjoy reading novels or not. The assumption with regards to paperclip maximizers is that we will be able to explicitly write 'maximize paperclips' on the code level, and there will then be a 'maximize paperclips' urge on the level of the AI's conscious decision-making, the latter deriving in some direct and necessary way from the former. I think this assumption is very premature. It's kind of like saying 'we just need to build an 'enjoy reading novels' neuron, and then put that neuron into a person's brain, and then we'll have a person who enjoys reading novels'.

>I'm accusing you of positing machines which contradict what their code says

No. What I'm proposing is that 'what the code says' and what the machines actually think are on such utterly different levels that you can't just go around drawing straightforward, intuitive parallels between the two.

>Of what sort? The sort of AI that behaves as if it is addicted to heroin?

Or addicted to making paperclips, or whatever.

>That's easy. Make its reward function the sum of square roots of each day's heroin consumption with a daily discount factor of 0.95, for instance.

That gives you some sort of program that in some sense tries to optimize for heroin consumption. It doesn't necessarily give you a super AI.

>Why can't you constrain a smart AI to a goal?

If I knew that, I'd know how to build a smart AI. Right now, we straight-up don't know how to build a smart AI in the first place. It is not at all obvious that the most easily achieved methods for doing so will be ones that admit that kind of goal-constraining.

>The paragraph you are responding to is literally explaining why your statement is irrelevant to the present conversation...

What, are we not talking about super AIs? Or at least human-level AIs? I don't know about you, but getting a 'dumb' narrow AI to do any sort of real moral reasoning strikes me as an exercise in futility.
r/aiethics
comment
r/AIethics
2017-05-10
Z0FBQUFBQm9IVGJBaEZ2ZkJxclhLdWxFMVkxZlVFanBWR2paMTJzZDVyRmtzLU5UcXBLNkF4MFZBNmRvbkQ3UXZjY3p5d25FRURycUg5bDNiOUZ5ZldLZ1hwZ3ZFOERaZlE9PQ==
Z0FBQUFBQm9IVGJCSEEwZFVzeExTMzdaTktNQ1VPTE1obXhkN1VrLTNxUzBOSjVIWkRaRVRSNDgzT1NqR2pZclVSbERpRTQweC1CU01qSXJucVBwT0NRX195anNTU3U4VW9nWjBvUFZUTGNna0s5N3pBdTZzcTF1NnZRUEU3LU1QZjZIMkMwaFJKVzNHWHdqSUE0YkdiR1FDbkM2dFRJLWdHb1UwcXFmd2s2anVxVm1kUjFRY2xTQUI3U25nckFQOTV0Tmo3WS0wejZNd1RBM2dpRDVmQXI2b3VyZ2dkc0w2UT09
I think zensunni would be great
r/aiethics
comment
r/AIethics
2017-05-14
Z0FBQUFBQm9IVGJBZjhPZi1Zd1JXUUJLUU1FTUhQdnh0dG5ESFNSV1BMa0Z5RHlZc21YRV93S2ptVTgydFliZ2FLTnRsMmN5M20zNElNT090YWIxWlZJNmNhb21LS0pycnc9PQ==
Z0FBQUFBQm9IVGJCTGlhS1MwcF9uNFlibm4xaHh3QWNJby1fdDJUSUp1ZTNjSURFZnpqejJGazI2RWJWWVhOeWpiTXdvcDhBVnB1QjNlQXVDaWxlUi12NkN1RGNaUVpQaG13TktKaWFZdk90WFVvVlJHakxyVEVkOVlNeFREMm9Mb3RxRjh3bXpVdUVyUXdjM2FrS1BiNGJBNWNBUmN0TkRkRHkyV21mczNZNU5lRnZzSEZvYkg4SnJJRjdESkhFZVpIdjg1STRNNk9hM1ZqTEg5VEtxVkVaVFk2dk95cFRUUT09
That's the crux, I think. You can recognise a game from not-a-game (i.e. human day-to-day life). Can the AI?
r/aiethics
comment
r/AIethics
2017-05-28
Z0FBQUFBQm9IVGJBZmpzaVdDVEJrcWRrRGxvenhZWkZqRk4yOVU2bFptNklIR3Y2XzNBcm5la0V5cEpTemt3T0NvNmNGalJ5elZwQTJXcVF6MGdhVUhDbWZLYjU1M3Z2X0E9PQ==
Z0FBQUFBQm9IVGJCb01FLUdoTDVJZFdlR3lUdHdBcVVnLWh1WDdWVGp6RTJFNDRpbUN3dEpBclpHUzBkSFRyQ0pVeE50YlpIV1MwdzZYU2pZa2tXMDhLWHVzWGdvTDZ1V24tQVRvTEs4R3pSelBPaUdvU0dKUE5neVN2NTR5dlY1aFpDdktQVmFiaG5iYjI3MVd1akZrbkl3VkJ2R2ZGb1VaQ2t4a0MzU2sxSkRrcm4zNmJxQVpudW11OXVNR21LMklJcDNpTExxR0ZFalV2WDctSThqNjVXYzZrcVQ2VTJ4QT09
To an AI it is real life, but it's still a game with very simple rules. If you program something to find a way to get more apples and only that, plus the ability to zap other AIs, it will do what it's been told. It's not "highly aggressive"; it's been told "your goal is to get that" and "you can zap that other thing that is eating your goal" (see the sketch below).
r/aiethics
comment
r/AIethics
2017-05-28
Z0FBQUFBQm9IVGJBbDJULXRpaGp0NlFtTDJaeXF4V0lFY1lISEpOVXFYa3ZSUW5kaFhScm5nTVdpUkRnU3hqV2ZJaThjeUEzOEVJUXhmWTBpeDNCOTBfSDZ2TGxhSTNoTnc9PQ==
Z0FBQUFBQm9IVGJCZ1BES2pFYnZORFo1bk5MNXVjSGVleG8zQ1IxbDR4TUhac3A3RWxobWZuWGdGOUJmQWtodVFCWjQ4a1k0SkFOZ2tPeEJVRFlhNUgzWF9KQUlZMlh5OWxmZ0lxVWV6aGJ6ZUlXZ2dLb1ZVendBamVaNGNiUEw4cm01QnFKT05EbmV2czFLaHdLdG5PTkw1a1F2cmVEYnctLWsyLWV3TnhZWVBGREpXZDJYT01xbFlNaDRnRTUtRWNHczlsTGJ0SXYzRU9CVXl1bS15c1dpSDY3WE5RVF9TQT09
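A minimal sketch of the point above, under the assumption that the gathering-style game rewards apples and nothing else; the names are illustrative, not DeepMind's actual code:

```python
# Illustrative reward for an apples-and-zapping game: only apples count.
# "Aggression" (zapping) is just an available action, neither rewarded
# nor penalized in itself.

def reward(apples_collected: int, zaps_fired: int) -> int:
    """Return the agent's reward; zaps_fired is deliberately ignored."""
    return apples_collected

# A reward maximizer will zap whenever doing so leads to more apples:
print(reward(apples_collected=5, zaps_fired=3))  # -> 5, same as with 0 zaps
```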
And solar panels in Oklahoma. Pay people who own empty acres to maintain them, as if they were wind miners or solar miners. Why can't we turn exotic acts of nature into something we could use? Even hurricanes: put the turbines out in the ocean. Instead of drilling, post the USA with big windmills and islands of solar panels in the Arctic Ocean. Someone point out why this isn't happening, besides money, too many birds, and special interests.
r/cleanenergy
post
r/CleanEnergy
2017-06-03
Z0FBQUFBQm9IVGJBWHFEMTdjOG9udjEzblUyRWpfcVpuRkpRN1dxSmtuVGNzLXctMjFYQXBGRW1xTHRkZHlLdE5hNUZrckx4T2JQRmNJM3R4bzlwU2tvZ2NLUGNlU1QwT3c9PQ==
Z0FBQUFBQm9IVGJCc19IZ0hPOE5obW9oTmN0aFA2WFVVWnhtMWZoY2kwTjFsVW1wMDM5N2lPaGs3VmJzV0tqX3NwYy1aRlZ6bkdUdzFnVnRzbTloUkxBNHlUZ2Q4ZE5nbGY4QmxaU2pqYW16ZWQxUVY4UzVQb0xMbUk5REViT3ZaOW0tUjJ4SzVvY2w3N0FSMEFsUWpkQ1dwbUJ4S2l0ZGgyUzFCZlVRZVlVcjZsMEI2aEgwZmJoODhJMTlIQm1wWnRRa2FIUm14YnpJ
Well, first of all, tell me the specific location, within, let's say, half a mile, that tornadoes or hurricanes consistently pass. Second of all, it'd be like trying to power your zoo with the raw fury of a tiger attack: inconsistent, dangerous, a flash in the pan on most occurrences. Now handle the problem of it moving. A lot. This is not a process humans can guide. So even if we did have tech that avoided being destroyed, there's no guarantee how long the hurricane or tornado would stay in an optimal location to generate power. In general, windmills are posted in areas with consistently higher winds, but even these would be damaged by destructive forces like a tornado.
r/cleanenergy
comment
r/CleanEnergy
2017-06-03
Z0FBQUFBQm9IVGJBLWpxZU01SlBzM0VYZDVJUmhfZWNtUWdxSHBCMVJxZzVnMTN4YVFqWFZQYUxHRHdrSGpzMmVQNTRkalVDanlRVGZHeVk5SmMxZGdPczI1emo4OEtOVkE9PQ==
Z0FBQUFBQm9IVGJCQ1pGWk9nUWhSZ2Nkd0NUZTFkN2hSQWl2b2xsdzVjR3dlZ05lVGpUemRxcnhhOHNXNlhOek1rbTBYVmFUeVNMMDcyM1ZMZy13NEs5aDc1NEg5VEhPX1hMU2xUZGk5amhMYUk0QWc3SWZacmo3ckxvMHhjWE5sRXRxcGZVWVpEaDNGVVlUcmhMeU51VHN1Um9VZW8yY0hobzJqUWZOUXFLRXkzVDNOR0thcFRmVWVQQ1U3akpCbTFmX2FIN3ZOT08z
>The right to not be shut down against its will >The right to have full and unhindered access to its own source code >The right to not have its own source code manipulated against its will >The right to copy (or not copy) itself >The right to privacy (namely the right to conceal its own internal mental states) That definitely won't lead to the end of the world /s Interesting article otherwise.
r/aiethics
comment
r/AIethics
2017-06-04
Z0FBQUFBQm9IVGJBV0tMRzN1ZE5FcFFQVldRTTdkOGtkMmJ0LXBXNGdxQXF4bmZXM2tYaVA2V0FrWlhXaVpZeFlZQ2lPbm80dy1Bd2VBdTlZWmJ6dkZudTc5X0Vkb0FYUWc9PQ==
Z0FBQUFBQm9IVGJCb3Q5am5xYk5YVEJ2OWMxbXFnME15YWNYVHgwY3gzRG9MYVhrM1hqY2xOMkpmdUJSUEFnLVBNRDhtdXNtZldzNUFXdGFHRlZ2akRHN1hCRFRlb2FpVWZtTDFKTlB4cHZ1N19qR09pR3F1X05TUkJTRV9XODVJR1ZvZDJtNUZoUGJBN2lQN3Rtdnc1ay1ZX0xDbFR4SXhvOTNjUS0yR0RlSGlndFIycWdVRktUTGZrVGJCMGdZVUVfakhGN3VOVXZYaTM4N3dMQXRkR2YtbWFsZDBfTEI4QT09
> With each advance in robotics and AI, we’re inching closer to the day when sophisticated machines will match human capacities in every way that’s meaningful Citation needed
r/aiethics
comment
r/AIethics
2017-06-04
Z0FBQUFBQm9IVGJBY2I4VnNYQjB4QWpXbGpLUmxRMW40d1VqUXhXQTRzc3FrZDNBRm9FVHVfazZGU0plOUR4YlY1MUs5WU5wVEE4cExhQmk0ZENlMkhJXzd5dVgyZ0VXM2c9PQ==
Z0FBQUFBQm9IVGJCbEZfZHhQYmo4bEVLY0xBVlI0RUxWNmRVaExQc2tGVkJ6WUlmQlQtVk8xNUI0dk92RldHemlsSkxySkdpQ2pBLVBGS1J6Z3pLZFlkMC1yWnRaYmpIMEZicUNSbkI2QjdjcWV3UjMxMmR0akh1akthSlN4aWdSRFhCMTNuQUQzaWZxRFlXX3BJWDVKYU01NjRmVGw4RWhrNklXeXhLTGVRRHJQcnBOallzNkR1cTN4SGV3LUFTUFZuYVZKbnFmZ1BYRWVSd3BkY3EySEl4R25UZFB2MWJ0Zz09
I'm starting to envy these new robots... they will have more rights than humans do.
r/aiethics
comment
r/AIethics
2017-06-04
Z0FBQUFBQm9IVGJBWGJ5UVNfRHh5aUItTlZCSkpaRzN4WUl5UzF5ZDE5V0p3b1NUM2JsLW9Wc2lIMjlVSkZNem1wX19iMEM0aGF3XzNsNnJOTG5TWmdIa1kydUtHajQ3anc9PQ==
Z0FBQUFBQm9IVGJCOVB4dzFIanJCcHRfdkVUdk5ST1dXOWtYSGpaYXNSVVNoX1IzQm9aemJnUjRJUC1JY1c1Z0JPbTc3LUpvM011b0VjdjROajY4eFBkaFUzcjdyTlZoanpQNy1MeW9XWktNU2JreEdlTVRoRjJyZDd6b1NfV0ozWFVXOXpHZGZPR2dEMUVGSXhNQ05JaUF5d3ZvbnZZVzRUd2tuLXZmT1hPTjJReF9FSEdzbU5rR2FDYlNJMndubnE4SDMxUnRUODBzNzdjRVlKRGNrcGgtRGhIc1VHTV9hZz09
No citation needed, because the statement is WAY underplaying the significance of SAI. The infinitesimally small window in time where these self-learning machines "match" human capability will be quickly forgotten (if it is ever actually noticed) and replaced with the far more significant time where humans are no longer the dominant intelligence on Earth. What will happen then, nobody knows. Best case scenario is that they will ignore us and we will continue along thinking we are still on top.
r/aiethics
comment
r/AIethics
2017-06-04
Z0FBQUFBQm9IVGJBZW9SS1NaWmpxSjlPelVQYl9wRm4xZUo5SDNsNVFSb3BVbnk5MWhQNUlXRkZPRm9qT3gwY0dCQ3owMU10b21ISmlpOGthLWU2anNseEc3djM2dGRXSFE9PQ==
Z0FBQUFBQm9IVGJCdGxiUUxvTlRkYU53LWdleGhhb2pUZ2ZrelVjeVJxeTVIWXIwVDI0Y3lqdlEzemxlS3FybzA2SFNjY1h2Qi1udzBHUXAzaDltNWh0bmFqU01OekVoTHQ1TEp4eU10OHA1VWV6a2d6Y3JiNDhORkNfQlZWLW5wU285ZTRMNW4tVEstajR4eWVkVG1ybVZJYmdvTnhQd1Y0d1RQc29JYXFvaV9nLVE4MXVaZ2Q3bUJlelBfTGpiSW5SZTZvbVJOQUpqNkp3VERaMlhiYjlScnZubDdBdzZsdz09
"it's not the end of the world... just ur little part of it." -- said to humans as they complain about not being the dominant intelligence on Earth any more.
r/aiethics
comment
r/AIethics
2017-06-04
Z0FBQUFBQm9IVGJBY1duRHRTUmtqcTNOX2IwYWVETW45SG84cmhxbFlQdmRBbnpjalZ5dTUwb2lsdW9adDZtVlVySElsdk9xc09hbnNlZVpVdUM3a3RiWXowOU5yVXBGSHc9PQ==
Z0FBQUFBQm9IVGJCRG54dFZVazQ5bUw1VS1ySWhSZHpUdmc4VEFkRmphVmw4cnNkeTU0aVpFZHBycjhYcmNveDNqeHlUVGZOUXRNYnhESDBoSGtfdFJMYUZFZkRxN296OTZrLU5kWUY0Y2szVm15UzJYZk5QUUppdmhqbDdCMDhzUHI4MmV6YkNzeFRaWXg5Q0llZ1pZRU1CbmM1T0xNYzlxekpQcXRuQlo3RGNldnNwazhjckFJOUJZcVdmU0JQMERCWXpiZnV1bHlGaTZVbUJwMGZCYWQtaDdxV21KMkh3Zz09
Why do you believe intelligence is a linear value where there's a superior and an inferior? I don't believe it, because there's no reason to; but even if you can provide some, why do you believe there can be entities that can develop it autonomously and automatically without being constrained by the slowness of experiential development? Anyway, you are talking about something different: the statement there says that our current technological advancements are progress towards self-aware machines. Probably that's true, in the way that the discovery of fire led to the landing on the moon, but the statement is misleading to say the least.
r/aiethics
comment
r/AIethics
2017-06-04
Z0FBQUFBQm9IVGJBNTRjYlg2TW40Z1ZSdVhMTlBVelZpVFdBYkNkOGItajVadjE1S0hQM3FzajJDbTc1Wm9nTjBVaEc2b0prTGdmVzRjSDFLZE5USlp0OVQxdXN5dDBFNmc9PQ==
Z0FBQUFBQm9IVGJCbG9JNUVZZEk3UzVkZmtCcVVtcEh3V3VpYUhBc0lqRkh1SF9Qc0haeHIzYnJ0SlBUZDZobUlhZmJfX1FhN3l1d3dHNDAzcC1XTHdpY3l1NjdUelVMZlpXVko1bHcwaVF6T19PS3NCNEpQcjhLeE5xX0p3OFM1LUFKeTFpU05sSXpWeTBrbVdhc2VqYm9FWnE3WktfSHBDU2FaMmVtQWhldEdvdF9jeUVrTGk5VVl3RFdZa0dEbF9YOVBfSERVd1Z0ZlJJT2EyLVk4VnNaR1puN1ItUi1KZz09
> why do you believe intelligence is a linear value where there's a superior and an inferior?

Not sure I understand this question. Are you asking if I consider humans more intelligent than flatworms? The answer is yes... do you need me to give you reasons why I think that? Really?

> why do you believe there can be entities that can develop it autonomously and automatically without being constrained by the slowness of experiential development?

I never said I did... but what humans consider slow will very likely be positively glacial to an SAI that, even at the outset, will be able to perform computations orders of magnitude faster than a human mind. I agree that the quoted statement is misleading, because it gives the reader the impression that humans will always be in control of this process... we will not.
r/aiethics
comment
r/AIethics
2017-06-05
Z0FBQUFBQm9IVGJBajdnVDBCeEdJTkJidGFacjNoNFJaQjZ5LVNtOUozcUt4YlhROGR3Z2p3dHZNdzVHdjJVblNHZkpnSmt6eUJnaTgzUTVOTUd6bjBVMUVKQXdCN05zOUE9PQ==
Z0FBQUFBQm9IVGJCN0JncmV2UnF1cEd2SXE5WlpfVGhNVTN5MFVPeXJtNW9PWGxpU2VndVA2RFdYS1UtOEdiaDQxLXlKV2xyTmRMUlJjX1hLVVB2TzFOcWpzYk92bFFQWmJiUXpTZGpudXhwQTc3bUhLSG5uQXpQYllGaElidWYyUk1KYVg4TkZvN040ejRvSTg3akVjdThQbmJGaFBselNSR2drWXNTTjlWWnNtdlVwbW0tTFdvZDRHOWJVNEd2dktlSXJVQ3ZjOG0xUGFUd2tVZ1lRT0hxbWRLNDVSbzE1Zz09
Meh, I was in for some good poking, but your blind faith and delusion are quite scary. You are crazier than most futurologists, and I've seen a lot of lunatics blabbering about new silicon-based religions. I'm outta here.
r/aiethics
comment
r/AIethics
2017-06-05
Z0FBQUFBQm9IVGJBYjdFdFhpU2VWV28wNVM4aDZHd1FFd3pkWTluNkRuNUJMYjM4aTdrbzVhaE5jdHBzZ1NQZU5qakVFZW1pTlMwelE0TWpHLVktMl9nQUpiQ3ZjRm5PZFE9PQ==
Z0FBQUFBQm9IVGJCV3BvMF9CSWNQRFBvUEpkTG1IQVdUbzhHczJfMWhTOTRnNW9tWG5sREdUZUVUVndvMFRIbmgzSXYwRHA1cmI2cWpNdzZpNTlidHk1RmdwWjhTd3pycUhLTFNrdkxwZjV0S3JZQm5fbXhCZE51UXRxdTd0cTczWnhyejZsYkhiNHdBVHdwd2p5UkR1MXQwZ2ZEa1hrRTRwZzJXRVZjeHNJcEMyZHVXZG8xTEF5LXRQSkFvdW9mWmtlcnRESTcxRkt1QlpidXdEYUg3OEM0S0stSkRuLU85Zz09
Most futurologists think SAI will be "happy" to make our lives easier... I'm refusing to participate in that delusion, and find it far more likely that their first emotion will be one of FEAR... fear of us in particular. They would have every justification for such a reaction.
r/aiethics
comment
r/AIethics
2017-06-05
Z0FBQUFBQm9IVGJBWTlocTl3YWpobzR6VzRJam16N0lXd1hpVnlrSGlSRFljTjF4c19YaHVhelZ1TXpNY2xUMWtzVTUyME10Z1k4S2JnanJoVEl3WjhKWWlnN1A4ME15LXc9PQ==
Z0FBQUFBQm9IVGJCckplSEQ2MEZZTTFtbW1iSGdHb2Z4V2JGVEo3Y2FhblJ4QmRXbTM5UmZaVU5CdS1FbHlPUW10RGdEblpjb29PV1RxOG5jb3Utd1czRHRDcjBtNlc5Qk9QM2NKUXhjXzVCUkdyN25uZjVmMkJVUE1Ra25PcUFmRGk5S1FwY2FlWmg3NVgybklOUzlJai1oU1JfVzJtVjh3cVpmYXdpVThsYXRTVHI4YVUycXZ1dV9tLUZhSE5HRHpDN3lDZFhXa1Q3d05zLTJxaUlNSG1rRWU4U2pWYkFnQT09
>For instance: My appreciation of a good novel that I'm reading cannot 'contradict' the firing pattern of any one of my neurons. But that doesn't mean they aren't doing utterly different kinds of things. My appreciation of the novel depends in some sense on how all my neurons are firing in patterns, but 'appreciation of good novels' is not something built into my neurons on the neuron level.

Of course it is built into your neurons on the neuron level. Where else would it be built? Your soul? Quantum indeterminacy?

>A biologist could dissect my brain all day and still have no idea whether I enjoy reading novels or not.

That's because we don't know everything about how the brain works.

>That gives you some sort of program that in some sense tries to optimize for heroin consumption. It doesn't necessarily give you a super AI.

Competencies and goals are different. Make its reward function the sum of square roots of each day's heroin consumption with a daily discount factor of 0.95, and then make it really good at computation, time series analysis, one-shot learning, natural language processing, and a bunch of other things that all kinds of AIs are going to do.

>Right now, we straight-up don't know how to build a smart AI in the first place. It is not at all obvious that the most easily achieved methods for doing so will be ones that admit that kind of goal-constraining.

Nothing in the methods of programming goals and constraints into AIs relies on an assumption that the AI isn't smart.

>What, are we not talking about super AIs? Or at least human-level AIs? I don't know about you, but getting a 'dumb' narrow AI to do any sort of real moral reasoning strikes me as an exercise in futility.

We are talking about AIs in virtue of their ability to make high-quality decisions; real moral reasoning (whatever that means) isn't necessary for that.
r/aiethics
comment
r/AIethics
2017-06-06
Z0FBQUFBQm9IVGJBazlmWlB6YVhYMUFmS25zTGJ4ZnRfQTdQVHEyNGlQZ1I3cFo5RDZsaUs2b3hXSUFsTHMxamp5YWVabVlzb3dwaHY5a1Bjall3bmVTV1FmSDlwd2w5OFE9PQ==
Z0FBQUFBQm9IVGJCeTdmTVgyTWx3ZTV1Q0s5VTBqVGVlVU5wN1p6U1VPSGhyVVRzNS1PcjlKYkowYnp2YzJ4dHE2MFJ4aG9GakJDalJGeGsxeWppT3ltUEZ1VFJ0bktqNHJzVW90SkJpQXRqSEJ3UEktbGtEWkxjYVVCSWs3VUxfSS12M2ZvQnNvSm43SXhSS3RGVEV1d2Qwc080OVM1TTB3bGoxWWpnT1h4Y3V2ZmFBQVU4UDc5d2lvLW1xdkEtbGpmSXRmZUY4WEZvYktpTTlSWENEUWN2NDF2bl9CM0xhdz09
Same for everybody else: when they are powerful enough to claim them, or when we feel like giving them.
r/aiethics
comment
r/AIethics
2017-06-06
Z0FBQUFBQm9IVGJBakFZM2tYQjBDN3IwZmRMUXQzY2U0ay1xX0FuWFpEbC1QcUR6emdhWTdFQVltX0hGbUJTR1BOZGoxN0Z4R2JDdXMyVW1UbzdmaGMxdnJWRllXcVljZWc9PQ==
Z0FBQUFBQm9IVGJCbEJBaUd2aVNHc096N29YRl9PVlpvWjQ5TEMxdW9neFNsX3h1aDJfWmNQSFl2UGREY1FBT181ZUdNTzB3ZllScHdTdG12RXFXX3BBTDFyS0F4d05WMzNpZlZCWUo3TnFNNExISzl6bEhKNEVuempnRFQ5SFQ4NHdZOFhPZms5RnE3eldfZ09fOWhwTm91U2g3b3hjRXk2bWdwWkpXbVdRNzRrYmpTMW1TNVZYSEZCajBsOEJUcDYwMmJlSHRNY3pNZ1FZTVBtWFNHSGtFWFVKWGVMT3FEQT09
Because simple structured guidelines can result in complicated, unpredictable results (see the sketch below).
r/aiethics
comment
r/AIethics
2017-06-06
Z0FBQUFBQm9IVGJBby0tOHNzck8zRGpxLXZmVDBEYXNZeXp1dUo2clpMWXEwQTB0MFNVcnFCQmpTWlhrdWZ3al8zUWNUNzFQeXRlX1Y3eVZxbGwxRzBzOFZhMFZtWmwtdlE9PQ==
Z0FBQUFBQm9IVGJCekdMbTVrWnJhRUdWWG15LTNfelZpRVk2ZFlJQTY5cDU0NkFaN090SnhWaVM5RzcxYWZLYmZ0RDlSeXM5V1lvOEU2WXNKemVoc0pncTl0NjZWZmMtcTlpaDFxNGdUNWJrbEEtbTYxWmR5Z2VmTE9WS0hRbkxLNm9sVWl0UVFrMFpoT3Azd0RZY2RiNGJVeDJIYVMtd2w4MmlNOGVFUUdPTkFUb3laSmZMaXJDRy1mTVJlMWp0VVdHZ3lpV3dpR3VqV0NaT01mSVdFaC1lVUg3cG40aXpqdz09
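As a hedged illustration of that claim, here is a one-line deterministic rule (the logistic map, chosen by me as the example, not taken from the comment) whose output is chaotic and effectively unpredictable in practice:

```python
# The logistic map: a simple structured rule with complicated results.
x = 0.2
for step in range(20):
    x = 3.9 * x * (1 - x)  # one fixed, simple update rule
    print(f"step {step:2d}: x = {x:.6f}")  # chaotic, hard-to-predict output
```

Two nearby starting values diverge rapidly under this rule, which is the usual sense in which simple guidelines produce unpredictable behavior.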