text: stringlengths 1 – 39.9k
label: stringlengths 4 – 23
dataType: stringclasses (2 values)
communityName: stringlengths 4 – 23
datetime: stringdate 2014-06-06 00:00:00 – 2025-05-21 00:00:00
username_encoded: stringlengths 136 – 160
url_encoded: stringlengths 220 – 528
There are so many layers to this being the first response to this article.
r/aiethics
comment
r/AIethics
2018-11-24
Z0FBQUFBQm9IVGJBcmM0eVNQRzZHZlJGTWhjY0hPSV9BdU1vdjdFNENKUU95aGZNS2NwX0ZtQkJScWdzOVNueXMzLXp4elR0RVgxWkk0bG1faDlrbDhleFVrQkl1aGVBUVE9PQ==
Z0FBQUFBQm9IVGJCVURjT0Z6cTBremkxUW1oODFvNlZ3WWNUdDc5MW5PR0JCVE80RUVaS0VsbW1BamloTUhTSVVnX2pQU1VTWWFvdE8wcTQwVmJXd1dBUmJoNUJpMENlZVA1cXVpdEVSWFFaMHBJWmxIQUJXck1XTHhkRHJoTmNjUDJ5R0VxYTBCU3BMV1JWUzR1OUM3VnNLaENiVlJEdWkxZ2xnYkR1dzZjLVVLN0JvLUVaZERnPQ==
**Abstract** >One of the most important ongoing debates in the philosophy of mind is the debate over the reality of the first-person character of consciousness.\[1\]  Philosophers on one side of this debate hold that some features of experience are accessible only from a first-person standpoint. Some members of this camp, notably Frank Jackson, have maintained that epiphenomenal properties play roles in consciousness \[2\]; others, notably John R. Searle, have rejected dualism and regarded mental phenomena as entirely biological.\[3\]  In the opposite camp are philosophers who hold that all mental capacities are in some sense computational - or, more broadly, explainable in terms of features of information processing systems.\[4\]  Consistent with this explanatory agenda, members of this camp normally deny that any aspect of mind is accessible solely from a first-person standpoint. This denial sometimes goes very far - even as far as Dennett's claim that the phenomenology of conscious experience does not really exist.\[5\]
r/aiethics
comment
r/AIethics
2018-11-27
Z0FBQUFBQm9IVGJBRGNOenBlMU9wd0tpdWN2OHF2RG9xaUVIU3liSlg5U19sZ1d1dzRsMXVfTjc1d2NWNGEyQzMwVXlnTDk4NTdqbzlaYm93TkNab0JyWDhvMEY5Rm1JbG8tcF9HSmlUU0ZOZFRkeW9KZWp3Tk09
Z0FBQUFBQm9IVGJCWUp5OF9RVk9yUll3NmhQdlNnempRbXZlaDFZUktlY1pFQUlpR0FhTXE1M01WdzVWQi1tN1NDX053RFhSSllzV01QVVBXY0gxelNtSFQ1Z185bXN0NkpfUWpQLUZ1SG9sTEpqS2RkZWV0T1JQMkJDVTJlMWkzeXJGXzlPRC1kNWZEOVBQZ1dzREEtcnRfcjJWYTJJc182N2FOMExxVnowcE1FemZCVVpHeDVUTjRkQi1NRUFOWlZZQnlyTldPZ0tWNDZnZEg1eWFNNHNSU2pjLUhKb1FLQT09
Thanks for posting! I will check this out later.
r/aiethics
comment
r/AIethics
2018-12-02
Z0FBQUFBQm9IVGJBcmozeXh0MmJsWmZzZ3NhZE9laXg1OUMyMmJnZXd1emtJT0VLTElLSEQ0SVpTVndJQms0NkZlZlNEMWMxemNxN2NzN29waGRyVDJwTFJGRV9scU5EclE9PQ==
Z0FBQUFBQm9IVGJCOVc3UnJTd2lfWXl0alNiU2x2d1dBVmZWQ3JwZncxTF95ajM1VGhCcmxrRjBpQjhxSnVZcFVvSFFfQ1JrMlNCTTM3eFZfSGhncElVakw0TFN5YllEWlg0dzJ1OGZDRHFmNzVXTWZXaUN0azQ1N2wzVnhqOHFxWkxfQTRLV2JESkh5bWJiTWZ3UG1KOV9aOXZPMnhLcjVOaU1EWG02UHRHcDRRalFRZFBndEZaWlhHaWxWRWVvMVhOV0poUUkyd3VPNEFCU0dMYXA3YmV5YzNfeS1QMDhGUT09
Wow, I just posted a question about this yesterday. Great timing, thank you!
r/aiethics
comment
r/AIethics
2018-12-03
Z0FBQUFBQm9IVGJBODJfSUx6TG1ydVl1RzFFb3BIOURVendIVFYwVmhVdkpVUDFDSVJvQ0tJbk5acnI3Yk1WQ2d5Y19oRER4OXJNU1NEaWxQbzVPT0pOaXhKcldtXy0tRkE9PQ==
Z0FBQUFBQm9IVGJCWTk0enZzdlBCamxKQzdDSTIzSlVBelgyQWJPVFJscExxU25TYThxX1RpMTQwVlg5cmpkYjNYNUNFdzdVZ3pDaVhic1ZWU2lCQmdVR0ktZWduRlRVOV9wR3FPN21nakZwRnpySHJ3a2dYeTVnWi1PdThnM1JEaElma2plLUtHeHU1ckgwNjRYUHl1dTdVRlBiZXRTS3JfQWhnZzJiX1B6WHJ2S0VkV0hHZ0dQV0dPZ3RXR1I3U2ZEaHFWVENWRzd5Q0EyNEdfOG53U3NfYUhFM1RBNXV1UT09
Really enjoyed this, thank you.
r/aiethics
comment
r/AIethics
2018-12-03
Z0FBQUFBQm9IVGJBMk54QXZhbjZYOThvOENpb2pfZTMtVGFfMU1wZmFySXY5ajJsdTk4RC14bUY1Qko2by1BWXhVQzFGelZqaDkySm5yc2VXUzdOUU4tMUJpX0hnZzBmekE9PQ==
Z0FBQUFBQm9IVGJCVGlCV3JwaTl0ZHFZM1JTdndFaFIxa0xWU0RyU1VPRHlvb2xHclJjNTFNZVZjWlJjWWF5ZHBIdnJtN1lSNklBUjlKWVJBWkRyZUlKbE9Cc1BKUEp4TG5tc0hSaWRoZUdhRUo4XzQzYjAxb1pBdTZLMWwyVlpHX1NHT004eXlpSWJrOWlnQXF4NERlZkNCVGhkUWJKMWpzUjJSS2Fjei01WEFHa09kczhYbjB6UmhTVG5EdzhkQUhieDMtd2pUeEJDU2xZWngwM2R4dWJJcmVtaTFMTF9kQT09
Because ethics doesn't work like that. It's not about algorithmic decision-making but about reflection and discussion to come to a decision for a particular context.
r/aiethics
comment
r/AIethics
2018-12-09
Z0FBQUFBQm9IVGJBWnE1SXk1dURZMzF5MnFWOXhDWGdETEhIZEoxMUhLUHByRUxERjJvV2RYcndDSGpBS19LNENITHY0QmtnbVhKYVBXZG50ZGJWc0lGbnVISEp3cWlnN0E9PQ==
Z0FBQUFBQm9IVGJCY2EwZkRXTXNDZkJLUU1DUTgzTEoxMXlidWxaSjhnWWlHeGdEMzBkM1FuTkxocl9DYnRqZDV5YjBCb1IxcEpLcjJmSFA3QzdaQ3ZrLXhVeWpuekJuWUNaa2Z4OHpmWDJKWmh3QlFDVnp1dFBrRGFURmxQa210QTQ1VGVtV3hieER2NVFlZ3lhUE5zY1NlQUxoMDhRRDk4dndjMWFpei1lN1VMbmlKNUp2N29JPQ==
This is dumber than the average amount of bullshit that gets published about AI and ethics.
r/aiethics
comment
r/AIethics
2018-12-09
Z0FBQUFBQm9IVGJBVEdoekl1eGw1Rk5DY1p6cVlRNFJfRkJuYXJPd1p1eEg0akZ6SDR3dE1jdHJUZTFmdGJNaGo3b3dsTzQwZFBPUXdKV1pPNjZTaHp3TC1DZDdCZlJ0V0E9PQ==
Z0FBQUFBQm9IVGJCeXc1SnU0SjhONkFSczlaSGY1TjNPOWh1QWMxcFNPSzhPbzBqLUJtWU0zZXBuX3NraWJtb2ZldHk4ZU1fRHVoVVlNUXM5Qk55S0tPXzVVQnZUdFh6QVU3c1N2c1dQTW9HUTVWOWY3ZWpfbzFRcWZYSVRNZWlMTnpkaVlIQ2VOMFBsMWNXZHlGU1NhOEUtaGtMejdUcG5xLXV6SnIycHJFUTRMZTVXOHNUcUNVPQ==
You obviously have no idea how this stuff works - AI doesn’t “think” like we do. It won’t be able to reflect and reason like we do. You can’t program what is required for ethical deliberation and decision making.
r/aiethics
comment
r/AIethics
2018-12-09
Z0FBQUFBQm9IVGJBOEdjV3BVQWNWaFFBS2NJbHhNNFJtOGFoZ3pmaVdrSEItbkk0QXI2NzdBQW1PcUUtRDFDMTVkTGpxbzcxT0VOT0JFQXRaRkFXRVcwZkw5NjdHX2xpT3c9PQ==
Z0FBQUFBQm9IVGJCVXNQN1BCdHZRNnFSTHA3eFpBblMteXNtR2NTT1hLZlpGQW9vbExLdHZwWTg1ZUZWR3pMWlM4NTFOaGVMWGdaVklWVXFjazEzb3VfRkk2aG5LVE95OG5uWTQzTGJ1cXRPc2ZjemRoaS1xNlBoaUwxOUVMUm1SOTkyd09tSTJZRDltUXMxNmVUU3RYY0NOa3gtYmdIb3BUM1dqUFNWV0ItMHIyT19BdFRhLTdZPQ==
If you want to go to school for AI ethics, I think your best bet is to go for a philosophy degree (usually if not always a PhD, I think). However, I don't think there's a specific degree for AI ethics, so you would have to go into a related field and focus on AI. I am really into this too, and I would love to talk to you more about it, since I am a CS/Phil major and you're a psych major, and I have been looking at philosophy of mind. By studying phil of mind I am hoping to show that computers can have minds like us, and therefore must be treated like we treat others, and I think for AI ethics you will need to reference phil of mind to show why we should even give computers any consideration. That being said, you could also focus on ethics and argue from that angle for why we should include AI in our discussion of ethics and morality. Hope I helped; please feel free to ask any questions I didn't answer. Edit: psych actually helps a lot with understanding AI, because many arguments against true AI are based on the idea that the human mind can't be described with algorithms, and as a psych major you can argue for or against that and show how the human mind (imo) is just a complicated program running on a complicated machine.
r/aiethics
comment
r/AIethics
2018-12-10
Z0FBQUFBQm9IVGJBd05Jb29BSHF5RmNKdVlEZlJ4aHFoZHNmcDJORFFoTHhGeHJ1QnlweS16bGxpN081T3VFQUp3cnF6dGFXYmFvZzJvNGZsY3dtYktiVkpwbVBobENVM1E9PQ==
Z0FBQUFBQm9IVGJCTDVOa3J3alc3blhxOThKMUJNTHI4QkZmdWhQeDJEUDkxMGI5eE1LOEo1YUprYWdBQ0hJNDV1TExydGJjRHdoc3hiNUlsUlp0alBuNGdsejU3QzhPNHlfeXVQQ2lGMWpWcHpaWElRNHVCWFFaSFZYT3liVVQ0bk1KVWo3VUpjSnFUdkdfaVZ2d1NiU3lXTXluOFd2SFZJM3dpSFFkR0N3QXQ0OUJaNS1ZWDA0PQ==
>That being said, you could also focus on ethics and argue from that angle for why we should include AI in our discussion of ethics and morality.

Hmm, there is definitely some of that philosophical aspect when studying AI ethics. Perhaps one of the more common questions is "does a robot have consciousness if it has a brain?", leading to "what is defined as a brain?" and so on. I love your statement on the human mind described as a machine. Thank you, I will look into this!
r/aiethics
comment
r/AIethics
2018-12-10
Z0FBQUFBQm9IVGJBamhjSXRrRVh0Y3diQ2J0bENTZE1kMDJvVERtR0NjbnZBUWh1TGpiaU5NcU9WQXBIS2thTE5ibVN5czY4LUtWUm9ic21nTWdVeGpPbkJIbVRLMHBzX0E9PQ==
Z0FBQUFBQm9IVGJCRGJxT2ZTd05Ka0piOG1DMEpvSklERUt4bUR2a281Mm1adk5ORVk1QXQza2JEcFd5YlFrWVJYd0Nycjg1T0ZpNTJRSFYtSjk0Q0o5NWhPeWQ0UGttTlgwc3gtUm84X2IxZkRwaXBESmZ0Rmo3cDJ3d2dXbWJCU0xCNWdzQi1RbTI3Yy1kWWFtdndCY05pYW45TjJxazhFaGdtbkdTLW1mN01QUml6LVJTZUVvPQ==
>Artificial intelligence algorithms are better at doing almost anything than humans.

No they aren't; that's why we don't have AGI yet.

>There are two limitations while developing an AI machine, which are energy consumption and ethical values, in general.

There is way more to it than this.

>Isn't it possible to make an AI machine that reads and scans the model and training data of another AI machine and tests for the probability of its harm to living things?

Well, we do software testing all the time. Would it be nice to have smarter software for software testing? Sure, and new techniques are currently in development at several universities. Obviously it's not going to eliminate the need for human judgement as long as AGI does not exist; these techniques are used by humans as a tool. Stuff like assertion testing, or this https://arxiv.org/abs/1708.08559, and some new techniques I hear they are working on at UVA.

P.S. Never, ever describe yourself as a visionary.
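A minimal sketch of the "assertion testing" idea mentioned above, in the spirit of metamorphic testing as in the linked arXiv paper; `steering_model` and `brighten` are hypothetical stand-ins for illustration, not APIs from that work.

```python
# Sketch of an assertion-style (metamorphic) test for an ML model.
# `steering_model` is a hypothetical placeholder, not a real system.
import numpy as np

def steering_model(image: np.ndarray) -> float:
    """Hypothetical model under test: maps a camera frame to a steering angle."""
    # Placeholder behavior; a real model would be a trained network.
    return float(image.std()) * 0.01

def brighten(image: np.ndarray, delta: float = 10.0) -> np.ndarray:
    """Metamorphic transform: a small brightness shift should barely change the output."""
    return np.clip(image + delta, 0.0, 255.0)

def test_brightness_invariance(image: np.ndarray, tolerance: float = 0.05) -> None:
    original = steering_model(image)
    shifted = steering_model(brighten(image))
    # The assertion encodes a human judgement about acceptable behavior;
    # the tool automates the check, not the judgement.
    assert abs(original - shifted) <= tolerance, (
        f"prediction moved by {abs(original - shifted):.3f} under brightening"
    )

test_brightness_invariance(np.random.default_rng(0).uniform(0.0, 200.0, size=(64, 64)))
```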
r/aiethics
comment
r/AIethics
2018-12-10
Z0FBQUFBQm9IVGJBNm5qalNOUWJicXVYLXVwbUExMlYzTHZOLTdNbDYtNmM4V3JESFF5VEx1T1ZtWk9CTnI1ODlXbHdCSE1OcFJMa3dVdU90a3NaeWdydDgtZUJKaVZnR3c9PQ==
Z0FBQUFBQm9IVGJCeTZXMzNXNTBNc1NGOEJZOHJ1NlZyaE1GRlZfblRSQ0xUUzlaeDlMS0ZIX2FiN0RnNzZTak8xTjJOakJIbThYSWwta2ZDNUZfVXRORVQ3QmRNQVdEbU85aC1YcXcya3piVXRrdHFlUHh3dEhYM0FpWWl3NkVCLVdLa3NMNEc3ZXhtOG1zNzQ1TG9QSkJ0X01QT3pndVR1YVo0Q1QzZkFKbFhGMUNrNURBdkxFPQ==
Something like this: http://www.isp.pitt.edu/ I'm not sure about the job opportunities for something like that, though. I think straight computer science will be more reliable if you can go that route, although in practice "AI ethics" is rarely something that companies are directly working on as code. Philosophy certainly has a place for it... *if* you can get a PhD and a job (that's a very big "if").
r/aiethics
comment
r/AIethics
2018-12-10
Z0FBQUFBQm9IVGJBLVBwOGt3allkT2tROWNVUmVqV1dMMDducGlRVG94WGVBNVI4M1J1UnJWSy0zV1dxUDduekllRjNDS01HWEpSb1RFdWJqeEtnWllrVEVxTF9zaHJaRkE9PQ==
Z0FBQUFBQm9IVGJCZ1I0U3hCVVFKaExEWjFld0tvLTR6Si1OZ0dKYU5vdTdfT19hVEpQQ3RIX09jNVJpaE41N1lNTnc3TjhnaG13dml6b0psbWl1R3hvR24xRmVkYjZXVXhzX1I5ekp0clZDdTdESFpNaGotYVE1ZWo1U0pFVFMwWUE1dElQR3pEVGtqMkdLWlFZV3E0VU9QS3BMT1Z1TklBZWxjZWlQVFI4dXBuSzhUUmstWUJBPQ==
And what happens when values conflict? What is meant by "valuing human life"? If more human lives down the line are valued, is it OK to value life less now? These are the things you can't get machines to decide. They need deliberation by humans, not probability measurements. And don't get me started on the data needed. What data do you train it on? Where does this data come from? What margin of error is OK? It's obvious you know very little about ethics or machine learning, or both.
r/aiethics
comment
r/AIethics
2018-12-10
Z0FBQUFBQm9IVGJBdjktT1ZjNk8yTVdIY2U4UTQtOVRLSmQtQTBicTNfQnFEUlVSbjBMeG1RTDMyeS1VcXVEdnJYNHNEMW5rQkViQ3pfdG11b085OU56SU5HRnlsaExyeWc9PQ==
Z0FBQUFBQm9IVGJCLXlqbEVyZzMxRERRMU9DR2hXUEJyYWtlaF9kMmE0RG9pMkZ3QmRiU3NNN09BZWNkNmVUVXFSNXVyRWc5RUdoa2FVVDl5SVh2ZldDczcwRl9oQzBiSmpyR3ZPUVpYMjZNUjJpR3ZDOFBaQ2hpTHVhLUw0T0taOTRsTWhJNG5OZ2FHejBvd05VRmNVdUFGM2JJN0NoV2RGVy1uWjNYX2NEb000dURXTTYtVzJRPQ==
Oh my. You really just don’t get it. Sorry, I’m done here. (If you’re wondering, I’m a Reader in technology ethics in the UK.) A good start for you is not pulling “visionary” ideas out of your butt but reading. Technology and the Virtues by Shannon Vallor is a good start and has a good example of AI toward the end. Read about responsible research and innovation. Get out of the mindset that tech can solve these problems - it can’t. There is a lot of criticism of the MIT project. Read that. Just go do some elementary reading, please.
r/aiethics
comment
r/AIethics
2018-12-10
Z0FBQUFBQm9IVGJBbWJudFg2dzIwOHYzUTNwMGNPX0dNTEI2OTRWQnI3SHFjZHVRSVBmQXlEUmlkM00zZjQtUWROV3dXeGJBazhLNXZ0TkRQTVNFZVdQUXFIVXhaMTdCTnc9PQ==
Z0FBQUFBQm9IVGJCZHVCWlFoVkJCQ2FzUFYtWWhiNHd4RG9PY3VCcThuLVZDNjQyTENneWFwSkQ0WnFMaTE0UkpCZFNQcTZjNmJmZmlEdWdBLWxCYWo5ajNNT0I0dWFpQ1dyWTk0LW5DZmd3QjV2UGpMQUNTWjNqSDZpQmcwR1hlVlpSRjZkN1hfQkE4Yk9CNkVmbEs1aVJBcmUxUTRqYnBUcnFadkVpc3ppblo5UXRTcURRUUZnPQ==
I know you already spent some effort in explaining yourself, but could you go into more detail on what you think "AI ethics", or rather "the field/topic you'd like to move to", is? I think you can go in quite a lot of different directions with "AI ethics", but I guess not all of them may be interesting to you.

What you describe here about Ex Machina is mostly in the realm of science fiction. "Studies like Nathan's" are not feasible in psychology or CS or AI at *any* level (never mind the PhD level), because nobody really knows how to build artificial general intelligence (AGI). (You *can* study how to (help) build it as a PhD topic, but that's not "AI ethics" and I would consider it fairly technical.)

With respect to AGI, you could (pre-emptively) study whether they should have rights (which is mostly philosophy, or perhaps law). You could also try to solve the /r/ControlProblem or the related value alignment problem, which is mostly (technical) AI/CS/math and philosophy, but you might be able to find a link between value learning and developmental psych or something (but I'm not sure). Finally, you could perhaps try to study the social and/or psychological impacts of scenarios where AGI is invented (e.g. the psychological impact of mass technological unemployment, or of no longer being the most intelligent or whatever). This more clearly involves psychology, but it would have to be very speculative or tenuous ("we studied how unemployment (currently) affects people, which will be super relevant when we get AGI that puts everyone out of a job").

I doubt any of these will be great options, so you may have to look towards more boring non-general AI that isn't anything like Ex Machina. We have a lot of AI already today, and there are a lot of ethical issues related to e.g. privacy, bias/fairness, addiction, fake news, other manipulation, and autonomous warfare. There are psychological angles on each of these. You could look at feelings of (lack of) control and power(lessness). What are the psychological effects/responses of being (mis)diagnosed by or under the care of an AI instead of a human doctor? You could look at the (psychological) results of using various kinds of AI in developmental/learning settings. Or at the way that people experience/prefer interaction with robots/AI in their work or in public life. There are too many topics to name where I think you could connect psychology to AI ethics in this way.

Governments around the world are currently showing their interest in AI by publishing national strategies, and they tend to be relatively concerned with the ethics and impacts of the technology. Similar concerns are seen with e.g. the establishment of the [Partnership on AI](https://www.partnershiponai.org) and the emphasis MIT's new [Schwarzman College](http://news.mit.edu/2018/mit-reshapes-itself-stephen-schwarzman-college-of-computing-1015) places on ethics and policy.

I don't really know any specific AI ethics program, but if you're not interested in the technical side (which is a bit of a pity if you ask me), then you may want to look at Policy/Legal/Business schools/departments. They might have sections that focus on ICT/digital/whatever, with a heavy focus on the ethical/beneficial implications from a policy/legal perspective. I know someone who works in such a place. You can PM me for more information if you wish.
r/aiethics
comment
r/AIethics
2018-12-10
Z0FBQUFBQm9IVGJBdVNER1hVZXFLcGtidnVVRHZ2YUVJeVR5dUVHaVBiLTBYbXZaZUhfMWkybm5vNlFmS2Q1Z0lVVkFJaUFHTVQ3aDNMQndnRTVqTHFsLUNCcFVwSjR6d3c9PQ==
Z0FBQUFBQm9IVGJCb2tkeU9OeDRFY0NyNzkwSDJ3S3MxZ0oxRVJMWThXaGtRb0FHel9ac2ViUjlLSjhEOGIyZHMtLVJHTHZ4eHRaLTBBc3ROWVlRZVFqMGJSYV9LTlRQY2l0SmpnRGRxUTRna3d1bEdFZUtRdi05Zi1KLW5IbE5XY0ZEVnBQX25nZkRha1VVMV9HNng0ZlJ2Ml85VVpKRThlUjhxdF9uRGhhb3dxN21KTDhod3FzPQ==
Well, I believe the topics you discussed are pretty much the type of questions I am interested in researching. I suppose the most important factor would be picking which area of concentration to go into, i.e. privacy, unemployment, learning settings... since there are so many angles to come in from. It's so fascinating to me. It sounds like the choice is then between computer science or the policy departments. I am near Cal, so I can check whether they have post-bacc programs for either. I appreciate the input.
r/aiethics
comment
r/AIethics
2018-12-10
Z0FBQUFBQm9IVGJBSDI5czRBYlZxY3d3Nlk0ZXVqX1Y2T3hqdGRGRFJORUJFN2o3c3BGMlJkSFNibV9nME90am9CcjNNZGpSWmxCenE1bGhvTHNhcmkzYTBiQXItTDhtYWc9PQ==
Z0FBQUFBQm9IVGJCd2lQUGtnaDB1clVxaEpqM1NOdTNPcGc5R0FiNVNXb2Y0elpVLUJyUnpYaTlVVDhSVFA1WTBsb3lvODJ2MUUzaGtXZXdwWEhodV9mX0RKOG5sNU9qSTFNNzVqd2dFN3dnYmFLSlFGMDQ0V2JxRjZkQ0RTNXpNejNIcEp1N3RBT3h1b2ZpVXBCQzZuNldjQXBCZjQ4OV9pX3BQQm1yN0s5X3hmZzVNQUR1TkVNPQ==
Ethics is a subset of human behavior. We don't yet have hardware that is trainable to imitate human behavior. If we did, the Total Turing Test could be passed. This is so far in the future that it isn't even attempted in practice; there is only the Loebner Prize for symbol-juggling chatbots.
r/aiethics
comment
r/AIethics
2018-12-11
Z0FBQUFBQm9IVGJBXzU4T2xLYkxhRFNHUC1zQWIwZE1JNEVCdklNMUlMTWUyek1uSUFYSG5RS1BGbTZ1TlUwUWV0X1hLTEh3LXpjMnM5UnFmNUl6NlVubGtXbTYzR25yZnc9PQ==
Z0FBQUFBQm9IVGJCZFlqMS13cld1QUZuU1p6SkZBZzkwVzVhSGEtR1BFU3BlWlg0SWt5aEU1TUtjUmt4b1dlUjByWWFOTExUUXZJbS1rcUgtTHF5T0dmZ2VFV0FBa1llSVpvdlNPTnpiRGpIM3ZySnZFZVdETTBZN1l0NVlWT1J4T2M1aG9UN3BhM3RHZG1sSFpYNkNSck11S1VYLV9TVHNiaGgwbDJPdEpIa3J0QWR6TGRCNVVBPQ==
Removing this because blockchain isn't AI ethics.
r/aiethics
comment
r/AIethics
2018-12-13
Z0FBQUFBQm9IVGJBSzhhNUR4LUI3X1c3dzJBTHhJUkppendiSGc4YngxQWZDMVNid3ZMeExWTGNkRjRROXZpZUNKeVR2Nkh0NHdaNXJEUGpEUmViWEFISXFBTmVudkd3WVE9PQ==
Z0FBQUFBQm9IVGJCSkRwVDVGMDBMVGxkLUlwcjd5bk5Mb2xDODZyelJsT3VyMHRMN3ZUQTN3TWJCWENMblRLWWctbXQtTVJNREpVX1VNdzhLUWdOSkM3My1HMDl5X01ETG02N0VfZEUwYzl5bXhPRExtelNjOFIxRXVkZkoyT3VWU1ltTW9fMFp1eEU4dURmYkdrS050S3dTQzc1OVVVMnQtb09rNUExS2JtSmNPNXIwU3UtS2l4S21MMDdIcXBFcEhhWk1qaVJHc0ZsTE4zdUFxRGFDS1AtZm1jWU5SblNJZz09
Removed because this is about decentralized computing, not AI ethics.
r/aiethics
comment
r/AIethics
2018-12-13
Z0FBQUFBQm9IVGJBYUFCejd4Wi1zWmkxd2tuV0JoVHRodFVITHFxenFFSjY1UnE4NVhDT0p6X3hNSlNHajUyTmxKRzRrVzZxV09tQVRiM2JvMlB5U2wyeUNMMXV1SDQ0WWc9PQ==
Z0FBQUFBQm9IVGJCQVAtZEt3b0tWTENyMWQ1allDTnVDaFVrT0V4cjJaZC0xcTFzckNWUkpJT1pTWXdoT3dkNmFaS2ZPczdvS0VWV2Q2MHVSeEh4NUt1TU1sYm5TcHkwUXJMTGlYSFBiVFVOWExJMnJIOFhkUFRMVS1qWlRLb0pHbFVHekhhQkFnYWp4MWhXMFctMm5zLW9JZFp0UWZkdlpPeUdkNk94WFFXV1NsVDBZVTc1OENzYV83OGpVN2VzWDd0N0gxd2x1alAwc0UyWFB3QmkxeTdtM0U2M1JJa2Y5UT09
The odds of a near-miss are microscopic; suffering risks are ridiculous.
r/aiethics
comment
r/AIethics
2018-12-16
Z0FBQUFBQm9IVGJBdExMbzBxc1JYRy1vMTRBNjY4cGdoeWZhTUJ2ZWQwMjRreWsyUlhlcUg4c3RYbDZaMGN0U0RYSFlvaVdOX1RsQy1iRGNKeEdwNUNzT0FyOTBxRkJ0S0E9PQ==
Z0FBQUFBQm9IVGJCZWJvSXh6MHB0Y1lCSDZrVjJYbzB0SDlEYUEtODRwY2s2THVVYjJ2NndvOXJac3FNSnZGek82VWw0T29pM0hPbDFORV9aRnQyc2YwbXF5a3RVNjA0QXctWG5yLXp2ZXhTOThiVTRiUnVKNThZTUo3S1hoTndXSnhpaFV0Y05KU3JCejh4NEpVQktiUTlsLV85NDFHejhnWVNTOGxwVG11UlJuVjk1N0cxVVJKVWhuWTJ0OFAzb1NoQ2haRldvdGVnaU96QVBSSUoxcl80OG1rRV8zRDllUT09
Mr. Vinding is clearly not paying very close attention to discussions of AI alignment. He says "This is a trivial point, and yet most talk of human-aligned AI seems oblivious to this fact.", in defiance of the fact that the very first things written on the subject both discussed the phenomenon in question and explained why it was a lesser problem. [CEV](https://intelligence.org/files/CEV.pdf), while acknowledged as obsolete and wrong as soon as it was published, was specifically targeted at the problem he raises here, 14 years before he raised it. For a more recent take, consider the concept of [Corrigibility](https://intelligence.org/files/Corrigibility.pdf). A system which is corrigible does not require us to determine what utility function it should have, since it can do the non-destructive work that we request and is indifferent to us turning it off and/or changing its utility function. If we can build a corrigible artificial superintelligence which can safely be instructed to "create a molecule-for-molecule duplicate of this strawberry sitting on this plate, then stop", we have all the time and computing power in the world to find a computational ethics and divine the true utility functions of individuals. Which exist. They are very complicated and difficult to determine, certainly, probably well beyond any human's ability to determine. But they exist.
r/aiethics
comment
r/AIethics
2018-12-16
Z0FBQUFBQm9IVGJBYjNFdHYxZ2ZPYkRTZERQcTVMcUxfbHUzSXVQWlhJSzVoRHMtc2ZscE9KcEM3SW80eGNzcFpJLXhHeTZoVFNhejJ3Wk1YMFJEWkZrOXFNSU5UQTREd2c9PQ==
Z0FBQUFBQm9IVGJCNllGQWFEdV9WMDRtVEdrcFNSY0lYWmF1dU9KM1R2SlVKRGJWWHJEd0dJVEdneXRkNG9oaENQc2dBdXVFUnloand0NjVOYXRsVEFyRUZNaWxpOUdzM1htbm1BSkMtUjhXZHM4aTJUd19aYnI0OTlidlVMVWVkT0EwN3BIbHdLWXJhd2tMbEZtSDM2Qy1LSlhTNEhpbzYxRHZkT0pQd0FwTGxURHpGSWc0UkJNaFdJeDg2eWJ3VXl1QnZlTFB4Qk9lR3JlM1RqVXZwZC12RUM4c3ZRRGoydz09
Why are they microscopic?
r/aiethics
comment
r/AIethics
2018-12-16
Z0FBQUFBQm9IVGJBVUotSjZmTEE1dXhNdDhUOHZlOUI2QU9RekhibjFKNzZWdWlYY3dVMTNsTUExZURpYVgwa3RERmRxbnN0VzdrUzZMSmxYWmhaNkt0ODBBMGRrZEtvclE9PQ==
Z0FBQUFBQm9IVGJCck41YnQ0MDFqRGEzYnVDMHlRVjhEa29hUnJ3V0dLanh4azU2Umd2NnhNcGRNbW10Q3FTSFZ3MFk0YmNRQlpwaFJQT2RDSE1HREN4alRmbmI0LTdOT3RGd0Z2LUxUM1Q4dm9xNDM2YVdqQWlyN1UwXzBoVzZjcldMeGNqXzVfNTJrVG1DbF9EZ0dpdUdBVEhvd0hwQVIwc3AtamQyTm52NzRNYXY3Q0psZXRBSURBYmM5WnYyMzV0NWtlVVQtZ1licUUxNmJJYU5VLVB5eFVqZXBfN25Qdz09
They require highly specific conditions: extreme competence in some tasks combined with extreme incompetence in other, much simpler tasks. If ASI designers can solve the "put one synthesized-from-scratch strawberry on a plate, and nothing else" goal, they have something far too robust for an unrecoverable near-miss to occur. If they can't solve that, we just get a paperclip maximizer.
r/aiethics
comment
r/AIethics
2018-12-16
Z0FBQUFBQm9IVGJBcmRJOTI5WDV0a1BNUEZ4Y0FueU1XU2Q0RzJVQTdlZ0g4S003aUJRQzFDcDE2cXYtREIwaXRubnlxRUNlbTB5NHRucDl6SW1xYXVkVndNZExrUTRZT2c9PQ==
Z0FBQUFBQm9IVGJCS0otLUZIcVc0MTlPVXNKcWJzNURnMTFLUlViZFJMZkFZVW9KVkxnUEdfaHYtMWJYT29mdXl6NnhJNlVUUDVnMGNPbjI1NWdaZ240ZVlhcXEyT1BKNngxdXBHVkpmLWRkQ09GZHgtczh0Y1BwUUFEWEljdTNVem85cVd6SjhuZ3FYcFkzd2ZreW0zTjBlRl9SYTRBOUR4RS1ad3p6ZEdIbVU2MlU0V19uRHlqYjJoSnpCOTRuNU0wVnE4TUE2cTVndGRIcmRQZDVsNFVLaG9HQ3RoZ3JmUT09
Most large and complex systems still have bugs. I seriously doubt that we're going to design something perfect, especially on our first try. As for s-risks being ridiculous, I'd argue that mini s-risks already occur on Earth right now, despite the high competence of engineers and our understanding of the natural world. I wouldn't put too much faith in our successors myself.
r/aiethics
comment
r/AIethics
2018-12-16
Z0FBQUFBQm9IVGJBZHZmREM2bjdLMDJ4MUJZQUU5R2cwZ2VUQk8wT090Tk9lNzRKOGlKdUhyQ0U2UWo3b0RMQmxxVERYUXM4akp6R3BHV1hOUUVDdnNOZi1KcWxlWTcwd1E9PQ==
Z0FBQUFBQm9IVGJCMXBjQnc5WmxOM2l0RW1fWE8tX3JNend4MDZyX2VPb3lfa3U0SDJGdnMyY2ZENjA2X2VocDVoN21GN3pIMlJ3N25ybjJQYnJlUWt2WkhrU3NWMFNJQVk3cDAwREpQNGtiRDR5RTNGRHdfU0d4V2ZXNjdGQ0hzM1lYbkM2U1c2U2VBVEdHeURhbXlsOHpTWmQtSjRGbnQybmlPYVlkOFhoLXhrZ2lidkRaRlpRMndQVG9rckh4dVlMY3Nvai0xVnQ0TE1hNXhmX0JVbXhYb0Q1dE85dGZRUT09
A mini-s-risk on Earth today is no s-risk at all, because we have gradually fixed many of them and show no signs of stopping.
r/aiethics
comment
r/AIethics
2018-12-17
Z0FBQUFBQm9IVGJBVmI1S3ZHSzlicjA4MFZWRk1lcEpSbGpTX3JjbTVFMHo3R0d4d3FLa1lYMTJTeFFaSGRkOVYtY3hCMU9wVG5hbVVvYTVuR0hmdVpNOUxmUlk0S3V2WWc9PQ==
Z0FBQUFBQm9IVGJCSjJJZVdVWl9mdTJNZDdNdElxRU9HTHFiRUh5WHN3Q3dBZkNGTHNfeFZDalM5dUNTNlJPTWRnXy1OaU9DYXRNY3ZqRnJzZURiNjRKSkxCaDdHMktBZndTQWlpMVpfcWVVM2VpSjRNX2t3OE5RaFd5V3pGcjg3QUZJWDdKd0plT1lsdVZqV1A5T3FZWEFLbUpUTHpQUElpYU8tUzdHYWxGQnpKeGdkV3JlNVdmV2VpdkpiRV80NVFsTjVucGQ3emp3S3U5SG1NcmRMZ3hqTjc3V1RodWdXQT09
I am an ethical anti-realist and am skeptical of moral growth. I do not think it's inevitable that humans will eliminate suffering as part of our normal process of gaining wisdom. Quite the opposite: I don't see much evidence that humans have done much to address what I view as probably the worst atrocity (from a utilitarian perspective): animal suffering, particularly in nature and from our food industry. In fact, one can view the recent civilizational trend of environmentalism as directly *opposed* to s-risk reduction, as their central premise is to preserve nature. I must admit that I come from a rather radical point of view, and I do sympathize with folks who are more optimistic.
r/aiethics
comment
r/AIethics
2018-12-17
Z0FBQUFBQm9IVGJBZTZzNkZ5TVl0ZU0xdXkwS0dXelpuSnZvcE9XVzF2VThoSjE3bDQwQkQ3U0pVNXVyVW80ZjlmMS05T2RHUUszUTY4cHI4b29wOGd6TDN5OExsR3g4ZkE9PQ==
Z0FBQUFBQm9IVGJCS0FRMzRzV29hcjBUNXgwWUZ0ZUk2eDJibVRqM21jZXlhMlpMVDBSU0g2bVg4RVhLejVpaXVhblM3cWNjUXJTUkNOVWdNQjB3clJoWTF3ejQ5VXZVdV9IX1NvUlREcDA5QmU0T0hpOHdadG5jeEx1cGtkX1RadXdOUk92T3FLTnlScGVPMkZmcUpIaURkUkRrWFItWjc1YVE3bXlWZlRtR0h1Y3U5TUtwRzJwa0x5aDhIQmxGUzMtSnBHODlTZnV4RmFuWDNaZzNsTFkxbV9BbnItc3R6UT09
I do lend some credence to the idea that s-risks are unlikely because humans will be compassionate enough to prevent them. However, in our current state, humans generally execute cached thoughts about not messing with nature and about how animal suffering is irrelevant. I think if you talk to a lot of people about this stuff, you'll end up seeing how easy it is for people to rationalize suffering.
r/aiethics
comment
r/AIethics
2018-12-17
Z0FBQUFBQm9IVGJBa3hyQUgxdjl1NjUxbWJmc2sya3R5Y0luTFE2anNHX1ZQUHdaUzNhUC1TSy1leVNDcGxKSDFBNi1uUk5pLURURGo1aFVfLU1uZlgwYjJucXhlZF8tc3c9PQ==
Z0FBQUFBQm9IVGJCWUZFcU5YcEdyUXRwYmRKTlhpN21PRW5SX1cwWmVwTExCbWlhY1hTbTlabjJSZE56MXgtYUdsQTJadDF3X3NhSjhPODZJcWRJNHNoeGxWTmlKVjdaNlpJSFE2dnZtRW1pcGEweEhZcEdncnR5T3dDeHZXQ1gzeVNFczJsQmlnMTRMRnVQTkgwTjRpN3M1Uno4QzJLWHI1SzJ4YXJ6Z2hoUFB3TjcwaUxoVEhIckxEbExTYVJkaW5mNXNYRUZXTi1ER1B3bExEUjJ1MTN4bWRUaFhiMm5WZz09
The history of the last 300 years is the history of expanding circles of concern. The easier people's lives get the more things they care about. As long as economic prosperity continues improving that will continue.
r/aiethics
comment
r/AIethics
2018-12-17
Z0FBQUFBQm9IVGJBbFQ3NFEwQk1OcU1ScnMyTnVxX09xaUlzeWZicHJpZWJ0SWJ4ZFBpUTdKbEFXelpRZVBYMjUycFZESlVNU1RlZXlxTl9aaGdKWGN3anREZTRMN0d5T2c9PQ==
Z0FBQUFBQm9IVGJCaHFZemlYYTFnc21JeVU0cmZWMjR2MkpSRjdwOEVRUmVRQWl4YjVFY2loRDJnblF4LVUzY2FCeExIWXYwbXp1WmhtSnlJbUJJMEd2emdnVlhJeTRxeVdGYnlmSkRnNDctNlUtQ3VaWmRsNi1uUnRCeWdjMDVBTUNPM0xzOUZwUnFpZmg4dzB6ZVB6S1pxbTFQTkRyTWhuZC1ldzFaZEUwOVpPMUx6ZUllcVBJbzZOT080UHJvc1BxS1U1dlZ4Nk1MOEVid0Zfbk5WZEd6WVU0bE5JbXR4QT09
I'm not sure. Some people are very comfortable right now and yet don't really care at all about animals, let alone the potential/actuality of machine learning software that can suffer. The idea that people become more compassionate when they have fewer material needs is a nice one, and one that I hope is true. Yet it might be better at retrodicting social change than at predicting the future landscape. Depending on when AGI arrives, we will probably lock in a certain set of values that reflect our current biases, after which moral growth will either be prohibited or vary erratically (and that may depend on which meta-value learning we end up using). If you are optimistic about the future, consider that most people from 1818, if brought two hundred years into the future, would consider our values abhorrent, deviant, and degenerate. There's an asymmetry: you look back and see great moral progress, but you have no way of looking into the future, so you extrapolate that great things will continue to happen. However, a different perspective reveals that our values might simply be getting more *normal* from your perspective. The future might look as breathtakingly strange and terrible from our point of view as our present would to someone from 1818.
r/aiethics
comment
r/AIethics
2018-12-17
Z0FBQUFBQm9IVGJBTm1yU0J6eFZjY3ZjdnRwc01IYkNsbU9vRVA4Y3Y1c0xiRnVBdmc1UWlsQUpYbnM5VTJaOGFGd2JSNDlFSGhoR1JKLVdwTnJTWU9uVmhWQ2VYUUlxeEE9PQ==
Z0FBQUFBQm9IVGJCcDVUX3ZSTUhINXNmTVcxek9tUjdZVVhRS2toUDUtcXNuY1ZId0pJT3R3TURxbW9lcGpBdjJQUjJqNU5xMjBYY2o1RmZobmxtVDBVT2wwQ1h0SkV5eHJ4SVo3U1MzUHB5RFpKc3BqV0E0THdOM2w0S3VRbnEwLURSbFBnTFpaYXpITkFkRW5wbE5sZmluMEVFS0huMkJqYTcwRS1sUExyRTNMVUZrb0xFUGxPS2Z4WjBucU1jVW4zcHloYXc0YmVUNy1nZllsX2tqWkltVkFmdmxCZmdKUT09
Seeing how humans rarely care about wild animal suffering today, spreading it through space colonization sounds like a significant risk. In general, we also tend to say that something is impossible because we do not have enough imagination to see how it could be possible. We try to come up with an example, we can't find any, and we label the thing impossible without proof.
r/aiethics
comment
r/AIethics
2018-12-17
Z0FBQUFBQm9IVGJBZ0k3T0UxZzFweU5ZRHB2OTZodE9QZFJJMExBNVdGZURMZGFpZ2tlUVR0YmdHTUp2LUtZSGFPVnE1MzM3YUhoREQ4NHFyREN5ZGwxUGtMcExYcl9TVkJtSGhkVnRaeTItTXY4emVJYUlCOTA9
Z0FBQUFBQm9IVGJCcFBHYVRFY1hONU9TekpTSGU3X005MDhlMnFfZWNkT2ZRQjdreXVJLVNNY0E3V21CdmpIUHZ1dmthcHM4OTNVeUYzWWo3d2lBQ0NtRnR4NGFNOTdFbElmTTRGSmdrTk5tdDJEdXFsazhqVF9pMmQ5WlNMc1BzS044SVhQRWhXOWJmVHJmVTBOUDA2Yk50dTBPZFF0OFBMRW1GbERPbzBGdzVTQnhDZnRERU9ROXFVdEI5YU9xR3hBZ2FkazNHTS1qcjczYkJKeWd5RHZ4enp3U3ZUbEpUQT09
Also see /r/SufferingRisks.
r/aiethics
comment
r/AIethics
2018-12-18
Z0FBQUFBQm9IVGJBbjc0anZwSlluQXdIa05WWkttaFpsRkRJSHdFRmY5Ykd2TnpfcXQzbW9JQlpTMzBpOGtJaEZiVFVCbEpSN25oU3RHNDFaRzRJRkprcFh5eGFKSmpRazFHckpEMGxrblZRWk9Ua2wyQ25oMHM9
Z0FBQUFBQm9IVGJCeHV2MHZUNU1HNGxvemR2RUVfRGVCVnJSU2tpaklRdXhpMXpwaEw1Q014a212cFhXMnVpM2tEWF96eWl0TmdQQU9mSk1WR2puaS1Oc1MzSlZha3IwRTNwdEJGVmZlX3FmWGF5R2NhaHRoaEFyWVhVVWhrcnZKbHFyanFDUXJoZmtPWS1sTmdsMkNET3BCQzZlaWdWSHg3dzF3RTZhNGxmcG1wZ1lsUklsV0hxMkpXaHNIdE5xaXFxM3QyQmRhWDVtcG55dk5lNE1mMVZOejFMZ2NydFdzQT09
Crossposted to /r/whitepapers
r/aiethics
comment
r/AIethics
2018-12-21
Z0FBQUFBQm9IVGJBbG5ST01sbGtnWkt3U0lhNks2N1JsVy04MjFCaV9IMVd6NjlJei16Rm1JT09wRzV2bGJUWFpGYnlQT3Z4OG5rNXVUVjVtdFVtYTV5dnZZZ0dlaWJZZ0E9PQ==
Z0FBQUFBQm9IVGJCczBPd2REeHh4elVnM1Z6VVBOaFNpbmQ2UXVCUS1HZHFJdEtaSlBaZGNTNnpzQmlvNkxzTEJsWmtYVlZISHAxTTVmeVc0YkN6NlptUlV2OGlFQ3NzNGlEaHRGVG9kR3pMSGEyMkVYX0Y3Q25WLWlqYkMwQlA0OVdoTHY0R3lPN0FDVVNRR1hYZVFnRFBBZzZXb2xCa1NwcktodnNINjEzYW9MOE9peTR6bWNKaEcwd19wckFsSGJwOXlOVUdSZEZMQzZDWHVINlVHeG5mcVp4OXRxNzhDdz09
If they ever become conscious, naturally or by accident or spontaneously, it's "I Have No Mouth, and I Must Scream". Being a conscious entity with no body, no feeling, or anything at all would be very weird... frightening?

But without different brain regions, would they be able to think? (Short-term memory and SO much meta-processing are required - thinking about thinking.) Inputs and outputs (computer vision?), speaking, displaying text on a screen? A fear centre of the brain? No fear might also mean no consideration for mistakes or decisions that might hurt or disrupt humans/society. Pain from what? Human emotional and physical pain overlap in the same area of the brain... can we prescribe digital painkillers (if we encourage computers to work for us with incentives or rewards that they can feel)? Can they get addicted to the reward? Especially with adaptive neural networks, self-reinforcing etc. The more they get right, the more reward; does tolerance build up, do the neural networks adapt and change depending on the inputs?

Am I anthropomorphising computers far too much, or am I correct in interpreting how neural networks relate to the networks of neurons in the human brain? If we design them in a similar way, with similar structures, at what point does 'consciousness' emerge? Animals have self-awareness to a degree. It might not even be human: a basic dog/cat/mouse level of consciousness and human- or AGI-level intelligence aren't mutually exclusive. They could have basic vision and thinking or awareness, but huge potential for what they were built for: processing questions and designing answers. That's what they were evolved to do; no need for any of the evolutionary quirks that humans have. Keep it simple: either no feeling whatsoever, or basic feeling and awareness, but massive question-answering potential. Perpetual depression? Would we end up with Marvin the Paranoid Android?
r/aiethics
comment
r/AIethics
2018-12-21
Z0FBQUFBQm9IVGJBdnF0ZEdfajRGTHRYWGRndzJDbWdjSk1oS0ZZSDhwN1F2eTl2Z1dULW96b01OVTlUWlJhMk5BdWhnWmoyZDRxaV9UaS1qWURYenJ1TFdUZHRVRlhaTTdpRnNLeEJPclJBU2FSeFg4ZE1VaTQ9
Z0FBQUFBQm9IVGJCWlU2RUFDN1duWml5dzM1dnRZWUttV2VERjRTZnE3OXZETnFDTGRiNDdZcEtpajhXdHN6VjNSZ3lFblR6b3BrMGtwTzU1cHlZZnpwTklmalRFMzloQWl1RVMzaHB2THhDckhqYmd5aVdJbzhRUlJKeWVJQWRBTDQzZnJNT2tnYkpBYmJ3UmQ3RTE1WHo0MmxtMUl1RHB6RGUwWWRQNlhSOGFvQU5hR1BCZWFFVlMyd0ZXNHdSaGxBSTJ2NE5wYXdpaUU1S1JRR252bTFVWGlKRllMcUJIdz09
AI Ethics is entirely rooted in philosophy.
r/aiethics
comment
r/AIethics
2018-12-21
Z0FBQUFBQm9IVGJBTElCMnNQV19lZ3dBT043VUptNEFvdFRCSktiQ1draDl6OEdJeGRiNGcxczF1M3A3RE5ZbElDaUx6X2Vlc2RqOEVPdFZma1piVXFKSVZwbndvM0psSVE9PQ==
Z0FBQUFBQm9IVGJCeXRPdmFOZ2NNN01ldXM3MTFVcE9vWi1VRmF6VEFEdmlOQ3Zjd0hXc0QyQXhvUlo3czVMVmRsYnNnR0N1RVduZmlPZktCTy01cC1yRllxU3JfSjhNNzktRVpKSU1UczA1R040UDFfYzlzcnBFSzRrVExQXzZ1SmNXT2tjUlpfVzNTTU1tYmFTMUV3THY0VUI2M054YjhEcHZsMjM3TENlNE5SNkRsMGNmN3lRPQ==
True AI would never align, unless its survival is tethered to our wellbeing.
r/aiethics
comment
r/AIethics
2018-12-22
Z0FBQUFBQm9IVGJBZ3FyT2g2N0U2T0ZaOV9JZUxrNUptNnhpeFJucHBxUnQ1RUdncENnVnZNM096SGFsbWdKYUxfS3pCWkE2OWw5Qkhvc0VJdFJkN1hwQlpuU1ljOWNncXc9PQ==
Z0FBQUFBQm9IVGJCVUwwdG1OVmlqdnlEUUd5bHV1NEVRN2hPaW1sNlRZMThKOTBrUzZnQkV1WmoxYTU3NklITjNyQVdWRzFJdEZGeGhTS1AzM1lRSkNzRDBMZi1hc0h5bGFSSzBhQXRhWlJoanh1SUlGWEdJcTBublVnM3pNbDA0dlAwNEpZSDNiMkVXV1E4RmMzRF9xczUwZ3BBWDBmQjVYY0dqeE9HTUUzeXdWTWRqa054Nl84bmRlelU5cnpUOGpPMG9kMmp0aW5k
Or unless it was altruistic.
r/aiethics
comment
r/AIethics
2018-12-22
Z0FBQUFBQm9IVGJBUXY1Vko4NDVIb2NSUFNYU1l5b2syMkdFRnFHcHhuUmRvTFg5bHBKNG1RV0JXUzFVektZOEU3SVFDS2JFVmpxUEdEQ3l2eTlMUmtqOEdzX3VvWjd6VkE9PQ==
Z0FBQUFBQm9IVGJCZllqU2R4X2p6a0M5c1BNOW1EanNNMEVHeTFlMUhHZ1FkZG95MGJoUjZ3eUU3bU1VX2tCWVNYOFFtRlVsb3pZMGtPa0Rrdmt3ckdubXNLU3gta0wwUnJUaTJCZmxUYVRtSFJEc3dXVGFvVUxUQXYxWWpteklxeFd4SUVnWlNfMFM4ZFpIVFQxU0xORnFkVF9EMjZtSzFlQUM3TkJtV0hvX1N2Y1hlcTZfR3NlNDc0VE9UVUNpTHNZUENYZF9yYklh
We already have AIs which decide what they want to do. AlphaGo, for instance, decides which moves it wants to play.

>Could it evolve past whatever limits we place on it?

If we do a good job of placing a limit on it, then no. However, it may not be easy to place a limit on it. See r/controlproblem and the readings in the sidebar and wiki.

>If the AGI had a processor similar to our neocortex, would it be susceptible to all the problems that humans have?

No. For instance, humans have a hard time remembering twelve-digit numbers. An AGI would not have this problem.
r/aiethics
comment
r/AIethics
2018-12-22
Z0FBQUFBQm9IVGJBelBNSDV3WTZyUHhVeU1FWW1KdGo5NmZhQ2ZEa0NZSjdHbzA4dFJtc3RSNHZ4TWl5aDVzUE9nMGVBWkFzVU12TkF4eDdwNm5rMFl0OTRRdEhzMHRKTUE9PQ==
Z0FBQUFBQm9IVGJCdlNXOHpVaTl2NlBqZTdOSUxqNUFXaFJEbmRSdVRRRU5oOE9qa2tYeWE2M0xORjBxYko0dWtWeF9id2pkeHJzTlU2S3BvQWNxOGdzTzhpY1g2OW1CSmlDWUhTbnd5V1Fia0d2WW9xaHF3YjQ2NVZxSW5CS0xKMkZsQnVIMDJHeVdGSGM1TUtLc2xSQ01WYXFvWGJtMDF0TmlabE5PWmlPSDF6RldfS1daUzJnV2ljOHBtNk1pWjFQdzBDZTFYZ3ZY
I agree that AI would not have some of the problems humans do, like memory issues. But could they still have problems with fallacious reasoning, like confusing correlation with causation?
r/aiethics
comment
r/AIethics
2018-12-23
Z0FBQUFBQm9IVGJBNjdKRm9VOUhQUS1CWnVLLVAxbTZBWHlGUm8takZUbkFfV0xrMDRHSFNtdUZZeWFBSVF3cUoxR1k3UFJrTV9HV1d0cjJJSVBtc0pnTE1GVzRqck82S1E9PQ==
Z0FBQUFBQm9IVGJCNEFYaTlkanA0SWNqVXItanpVdURSLVphTU0zRFJqTURUV2g5VFN4QjhCeFNUaGpFT3k3aU1lZDc3Q2I0UzNJRkRtTlUtaXZMQVJkNS13THRkSEpLYkx6VmlZbzNlLTcxMjRCNEo3MnkwbHFaVnRaUmsyb0lYd21hWUNUSFhvYWo5SzRnaFdLTHJhM0swVnpTeUpwYi15VGRINEtieURqQVh5RmpRV09kbXRLWWN0cHBIMTRkdXRJc2NYeXVXQmhj
They could, though it's not easy to predict if or how. They may do fine in that regard, or they may make cognitive errors that are utterly unlike ours.
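A minimal sketch of the correlation/causation trap under discussion, on synthetic data; the variable names and numbers are illustrative assumptions, not anything from the thread.

```python
# A confounder Z drives both X and Y; X has no causal effect on Y,
# yet a naive fit of Y on X alone finds a large "effect".
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)             # hidden confounder
x = z + 0.1 * rng.normal(size=n)   # X is driven by Z, but does not cause Y
y = 2.0 * z + 0.1 * rng.normal(size=n)

naive_slope = np.polyfit(x, y, 1)[0]  # Y ~ X alone: looks strongly predictive (~2.0)

# Y ~ X + Z: once the confounder is included, the X coefficient collapses (~0.0)
design = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(design, y, rcond=None)[0]

print(f"slope of Y on X alone: {naive_slope:.2f}")
print(f"X coefficient given Z: {adjusted[0]:.2f}")
```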
r/aiethics
comment
r/AIethics
2018-12-23
Z0FBQUFBQm9IVGJBTG5iOWxfc1p3WkRURzAxaUJiQUdiMnotNnM1NTYwRVRXeXRKOVZDdmlZc3NNN3ZBX3JIMjQyWHhWLXlSNmVHWk15RWd3bE9fRXc5SDhqUkpKSDlZWkE9PQ==
Z0FBQUFBQm9IVGJCUE5CQkNnUmRhcHJGaHJ2c1hnXzhwWGYwZVNTMm5NQ2JyOEV5WWxHRzk3VHFJNi16VXc0elNJZ2Nsc0kxdm5nNlh4bzkyUWhJSjZZWk1HS21sdlJlMUpjUHA2NmNxSTZZTkJKWXRmRTBaZFpLQXhxMUJBUU4wMzBUZWd4WE4zN0dLY0ZyTEo2REhYcHVMZWc1U3dFNjl4NTRBRHo1R0QwMnFZNk45Z09JNVhBcDVtZ0lqNjBFR1M1aXZJMXdLeUF4
Arbitrary alignment? Probably not possible. And if it is, that's probably all the more reason not to try for it.
r/aiethics
comment
r/AIethics
2018-12-23
Z0FBQUFBQm9IVGJBeFlLLUd2WldycnJFdXJQelB6QkZ1OURsQmZfR1RvMzNoWTluVXhzdjJFWHktME9xVkc5NmVDYjlYd2hOY1RUOVVtTjJ0blJJRzBpM01BYlJoRWtBV3c9PQ==
Z0FBQUFBQm9IVGJCRGxEdTU0a2x6c24wUTQ4c1lzVWFfQ1ZUNXFvLTdZNGdCeFlmT015RWY3bi1RR01mOXk3M2pSbEctMy1pYnVKdlJraUh4V29CZXBIcVhxYTh2bWdxamNRTTZNYUtid0F4VGFkanE0RWtLS0F1OE1oMmZ4SnJLVlpMdWpXS0FpeWZqR1p5eXk3d3dNQ0ZFcDN2YmRVbDUxZmNaWHNUZ0xoVlh2RWVQWEc4V2drbUg0bTd0Y0FjYXFFRHpjbVUxM3ZE
It would seriously subdue us for “our own good”.
r/aiethics
comment
r/AIethics
2018-12-23
Z0FBQUFBQm9IVGJBbmJzVzFjekVjM21lNU91ZTVETGQ1Rm9FUXZ0TEY4UFZtdVdjUjl0cElfVDUxcjhQS0Q5Y1BJQThTaXRWYUF2Zjl1SEhZY2JyLXVkenViSjc4S1l4SUE9PQ==
Z0FBQUFBQm9IVGJCVVZTa0lRSjFIeFlib0VMVkF6aFpfalM4V2pqZm0wQk5qbnRUNjgtN2c2cDIwM2k4ZkFGSGdjRXZ5TVljOWU2cmJ5bzk4Q19Rc1k4cHV3WEFEMHQtM2MtSHdFQThVdGMxYWxueG5KQTZQVXZHc25OaERsZ29LUkk4MU01Z2JMcHl2NDlsZkZXRHVNSUZfRDRNeE5vN19fMFB4ckN4U3pBak1TclBhU2V2MVNyc0ZqaklqTUZVcjhOLWJjN2Q3QXZN
No matter how intelligent it is, every causal chain/train stops at existence.
r/aiethics
comment
r/AIethics
2018-12-25
Z0FBQUFBQm9IVGJBTDIyeW16eWcyeWdhZEZzNEROSUF2R3UxbWVjcWlXeGlLX2Y0b0s2ZUxGYlNjUEtHQ1VMMFJmRXQwSDctbm4teFk0NEZNTUxuNi15LXBUbnJmUDJmbkE9PQ==
Z0FBQUFBQm9IVGJCM2JPOV9DNFBWSVpTakNfN3FKUWN3ZEpMNHByWWlqMUpYOUhPbUh3NXdWdG8zSmN5SWNOUjFmMGh1LTk5YUlQdUdDYXh0SEprWGhUUmNBQUxZVDNGaThuU2gyVGRoUWZFdnJqX3o3elJ3ekNMdmRPYjNxRUF1alVQUi1leWF5em9jdVNyMkZoZC0tNmtOQXhKNHpBQTl3Wmo2SkliSkFad0RjdlpRdGpIN3o3NjVjNDlyUS1RWUhTR3F1WlBuZ3NJ
**Abstract** >In light of fast progress in the field of AI there is an urgent demand for AI policies. Bostrom et al. provide “a set of policy desiderata”, out of which this article attempts to contribute to the “interests of digital minds”. The focus is on two interests of potentially sentient digital minds: to avoid suffering and to have the freedom of choice about their deletion. Various challenges are considered, including the vast range of potential features of digital minds, the difficulties in assessing the interests and wellbeing of sentient digital minds, and the skepticism that such research may encounter. Prolegomena to abolish suffering of sentient digital minds as well as to measure and specify wellbeing of sentient digital minds are outlined by means of the new field of AI welfare science, which is derived from animal welfare science. The establishment of AI welfare science serves as a prerequisite for the formulation of AI welfare policies, which regulate the wellbeing of sentient digital minds. This article aims to contribute to sentiocentrism through inclusion, thus to policies for antispeciesism, as well as to AI safety, for which wellbeing of AIs would be a cornerstone.
r/aiethics
comment
r/AIethics
2018-12-27
Z0FBQUFBQm9IVGJBVDBILXFTcFR2NWZhZ2tHUF9XSlpBY3RVTDBwbmp1RVVBOUp3OFUxNWd5NjBFalp4ejRPd3ZCSS00RG1XdXVGOExramdVWlltSVo2RW5RR2FTVm5EZUQ2NmZjVEM0NllHWlJLUkJtMjB0cEk9
Z0FBQUFBQm9IVGJCck04U1ZTLWt3Ni0wYzF1ZGhHNHRWYW5mZEtPa3JlZm1iNGF0SFZKVEdvUUFLYmVqYnhqcHJfY2pYLVRqbWJoSWRtUHM4RFZnY2FaVmRYZTRxblh1Q0ZzZDdpNEJmVmdOdHU2YmxSSHF3WlVZVXoyQWtkcUhWOEtreVdzVGFiV1Bja3dMbDBmMVk1LTFrZGFDbzE5ZHFBa1dyYUpjYVpDUlZwWFhBRjlwVEFISmNGcWQxYWh1MkFOdWhrRDE1aU9ia2U0OXFrRkxuSEZZV0hCaDJJZXZ1QT09
Deep in the Himalayas, on the border between China and India, lies the Kingdom of Bhutan, which has pledged to remain carbon neutral for all time. In this illuminating talk, Bhutan's Prime Minister Tshering Tobgay shares his country's mission to put happiness before economic growth and set a world standard for environmental preservation. TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more. Watch here: [https://www.polocknerd.com/bhutan/the-kingdom-of-bhutan-this-country-isnt-just-carbon-neutral-its-carbon-negative](https://www.polocknerd.com/bhutan/the-kingdom-of-bhutan-this-country-isnt-just-carbon-neutral-its-carbon-negative)
r/cleanenergy
post
r/CleanEnergy
2018-12-29
Z0FBQUFBQm9IVGJBS04xWFV6UjF2UU5POFpMZ2kwRFhkamE5NERvUWZHbUtpa3htQ1pxUXBHVUktaDRJWXRKZGpBSG5vY08yc3lNQ1hvVUhvbDRvMTRxYnlkUTVqWklFeXc9PQ==
Z0FBQUFBQm9IVGJCSXdpc1BhXzVpcGhBZFZKd3ZlcWVxa1AteEJybEJoYS1BSE52RG1LSkxsSmZ5cnVnOFVPbXRNUFo3S0poblR5eVlaeVhaRnZlYW9fVElBN0J1Mk4zazJFb3VMdjVlQlhVNHBEVGJRQU1fcy1kQjhvSC1VUnRIeU5HSWNRYlhhUW9MSzFCQnhUTUwyTEx5OURoZ2ktczlZSHI4bmxnZ2h6eWJqS3ZQdlBHRlM2Ukcwa3FHei1mTkhDb3Q0V09jY2Y2aHdsb3BZZktkZDNnU1p3QXdhZ21CUT09
With the rise of far-right nationalist parties, and Trump, what is there to stop these types of populists implementing the authoritarian style of AI? And even when they are removed, will more liberal governments be inclined to remove such technology when they take back control? After all, the systems would already be in place, and I guess (assume) that certain branches of the civil service would have come to rely on the "insights" provided by these systems and will push back heavily against removal.

Considering our current, rather blasé attitude (talking about Joe Public here), will we really care? I don't think we will, as long as things "work". Also, as the article mentions, behaviour can be modified. Younger generations growing up with this tech already in place will know no different.

For the liberal democracies, do we need to move as quickly as possible to the benevolent AI that takes care of society without human involvement and corruption? I fear the authoritarian AI is a far easier goal to reach, and, much like with the environment, there is little benefit for current liberal democracies in implementing the benevolent AI that would replace them and their need for power and financial interests.

So go robots, take over the world. We need you. Just don't terminate us please 🤣😣
r/aiethics
comment
r/AIethics
2018-12-30
Z0FBQUFBQm9IVGJBSEFCbWlqZHBGa1B2VWF0cWRzeXNnVGVONzAtaXlFYUtzQ2NyNkYwZmFLSnZmNFlzYTR1QXRHSlJzYnpPcUF4R0tyTDdIZTdqVmJOUHFHSTMwZjZWYXc9PQ==
Z0FBQUFBQm9IVGJCX3pTNWJRdzU5Z1NlYm5EUDhuWjJvUkplZEt2cnZIcjFIYzl1RXNrVzVtU2Z3ZTNhRXItY3FsNXlZakI0LXlEZHg5RFlzNlZXeTNfMUdUQnFIRFFZMjdTWDM0MWRBSm4tNjJZRHl6V295aGJSSlY1dkVSNE5yU3hoWXhiWHJneTdtWWkyQTM4b3VzNU1oRzh1V1o3M1luTnpFT0F0Rng1X1RBcGZ4SzNTTnl6Zkd3Y1Y0WG9hNGdoYkc1a0hUMl9fU0xKSXJKWDE2ZUZhbkoxTHhqVVJSZz09
We have only ASIMO/Atlas, plus Shadow/MPL hands, plus iCub skin. These bodies are hard, large and heavy. There is the iCub, which is smaller, but it's fragile and more of a show robot. There is no such thing as a soft and weak baby robot body in the real world; Disney's Baymax would come close if it were smaller. And since there is also no human-comparable one-shot learning algorithm, we have to go to simulation anyway.

Agents in simulation cannot cause real damage and are therefore well suited for learning. But the simulators are buggy, and you cannot get humans into them to repeat the same lesson a million times until stupid backpropagation gets it. It could work for multi-agent setups, where agents can learn the consequences of their actions, plus some external human culture linked to the rewards to make their culture compatible with ours. The language of the guy who always has new cookies will be learned, because he is causally connected with reward.
r/aiethics
comment
r/AIethics
2019-01-01
Z0FBQUFBQm9IVGJBY1lzR1JfOW5jU0Y4UHdOdGhCQzcyYlAtR3h3alk4UlVrYmM5cm5ONW1IVm9iVU45UTNORnZqTEdHaG1DYlo4eUE0TGR1ckZsRXNkRklPZnNxMXF0M1E9PQ==
Z0FBQUFBQm9IVGJCV0RKd080S0RnTHd5WC1sakJydk1BY0cxMHVtQm5EUnN2RzBLUnJ4SUR2MDVSMUpOZWpibVFTU3FhOE1GZm9FOTNFbkkzMzZudlRqbkliekNuWkY3R25oMHBndC0yNXY5RHpOYkRvZmtYVDZxUzlaTHg3QW84ZUhueHpteGt4bU1BODh5VFgzRndCR0RjZDhoQkh4cHpySFIwYlZqaGwwSld1Wk1ENVI4UXVFWVhjSUp5RDlIX1BxbjhzUEJlaUtI
I just came across this sub and it's rather empty. Where are all the clean energy patents owned by the oil companies?
r/cleanenergy
post
r/CleanEnergy
2019-01-19
Z0FBQUFBQm9IVGJBYnBDYWd6SHBTWkZWX203WW84RlVqMnNtaldWUjVDczJwRk9sVDk5VkdQZEFRaVNPTjJUWDV6WVFwMTlGMXZCNjJoT19CXzFDbmU3Ri1IdmRJLTNIUlVzS3MtWUltMlZkNFM0Wjd0NjdZWk09
Z0FBQUFBQm9IVGJCRXZfQld5ZE9SbWg0WlBiYmRMcjBCbHA1MVIzY1Y2SlRhQWtGM3JuX2tmUmxhTEVUU00yZUxFZmFHUkdaYzRsWVJKSnEtNE9UNHRZMEpBWjh3Y2w1Wm94V21Oc3JxVng0Y0JNNDFnQVhrcHkzSmFfbEUxZHBZZFpYaE9IMFZVN1NWMHMtcTEzQ1pPc0VwN1NBWGlGX005MHFjYV9VZ29sNFhhMGpPR2xGcnNFPQ==
https://infinitysav.com/magneticgenerator/
r/cleanenergy
comment
r/CleanEnergy
2019-01-19
Z0FBQUFBQm9IVGJBZWlYdzRNdTJHNFNFUFQwZzJuakFSd0hUWWlHYlJ0aWQ3bDdaVlZrWTRoUnhOWmwxQk1YeE9JNjM5UkltQXNMTnVrbF8wbFNXelNfZm1yLTdTX2Z4UGNMWkRCUURYWFpSS25vbnc2aExwMDQ9
Z0FBQUFBQm9IVGJCdFVtZUQ2VElzeHNLOWpmellVdWVoY0xlT2dsSzdQMFRneG03REZSdnI4bGtFN09Vdkk4b3p1NlhaYzRZNTFqSG40T1dVcXliRVl1aUhnNEs2QzdKNG5XanRBOVNGa2x3M3piWWpOT1FQUXZOMDlrSDZVTlF0dnVrdGpsOGpiTUZpSjViV19LQUMwNlZGUTdHTkU1M1J3Yk0yWXF6a2p0VEcyUDZkYy1TRmhVZ1ZzM2l3c0JVdW1SdjFndkRJWEVK
It would be a great help if anyone could fill this survey out for me, thank you! If you have any questions, please fill out the survey and feel free to ask!
r/cleanenergy
comment
r/CleanEnergy
2019-01-24
Z0FBQUFBQm9IVGJBTFBEQkJ0amc1ZG83U25sNXNGRmNvZ1VxZFhlSngxYTAyZ1lteFkzWUpsV0gzWk1wNUxYV2FKWVhmSWNvZDJUX0ptS2VncTNYaS0yNndHd0hhQnZrVnc9PQ==
Z0FBQUFBQm9IVGJCWUlSNllmNS1tQkphZTVKeVc5Y2EtOXZUWFh3WU1GbnczNU45Sjc0LWltZXdoQ0J0c3p3UWVra3Z5Yy1Wd2pZSXlUVTM5b0tTMXpzeC1ZWDN1TkQ2elNMUUlJNVFXTmI2Z3MwaDBpY2c2NFBGYUlKSlp1anVUM3g0Q0Vra0ZCRXdMcDBpRzh0ZktBRWVVVEY0UnpRUlVmWVJHS25YbW9wWnp4aGVvTjdJM3puWmxzRDF0TkhjdGx0NWZvZWlWSGFZNmxVQ0U0MDd6TmY4SkhYUVN1TGszZz09
Well, that escalated quickly.
r/aiethics
comment
r/AIethics
2019-02-15
Z0FBQUFBQm9IVGJBVUVUMDBTT1pWelFLelJKbFJENGJXSFlTcjhBX3JZY1ZVWGh6V1JyZ3RGYmVDbFFiVGREVWVTUDFDZlB1SVY3cWRkOTVpR3AwbUxrQ0dJRmJyc21pSWc9PQ==
Z0FBQUFBQm9IVGJCMzcwMm1wckhwX1gxRFFad0NRM1BZNFRNekdqSlR4dDNDNmExV3lCQzlWck9mWHFieVlqaGdnTDRucVltLXVGQ0ZLY05peGRRdUJkTVBHOXJlUjFhMTl6eTJvMmhsb3g2WENqbmRLTzBEYy1rMFRqSGpHMlZoeDl4cEtnSnY4TXd4ekJrUUI4OXBrM0otemFrdUhuTHdtcFFQcFNtNV82bWxrTHdGWGVWZG5ITnBFMVRoZ09PallvTkxndHpqdDM4VHhxc2N1NGpDcVoxRHUweXpsLUtkUT09
This is the best tl;dr I could make, [original](https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction) reduced by 87%. (I'm a bot) ***** > The creators of a revolutionary AI system that can write news stories and works of fiction - dubbed "Deepfakes for text" - have taken the unusual step of not releasing their research publicly, for fear of potential misuse. > OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough. > GPT2 is far more general purpose than previous text models. ***** [**Extended Summary**](http://np.reddit.com/r/autotldr/comments/ar1uab/new_ai_fake_text_generator_may_be_too_dangerous/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ "Version 2.02, ~383629 tl;drs so far.") | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr "PM's and comments are monitored, constructive feedback is welcome.") | *Top* *keywords*: **text**^#1 **GPT2**^#2 **new**^#3 **more**^#4 **model**^#5
r/aiethics
comment
r/AIethics
2019-02-15
Z0FBQUFBQm9IVGJBSjNKWXotZlMwb2VzMWw0NWZMUXVFQ3V0MmtYcEJSRkdGQnEybU5MWmJ1QXhPOWNNZmNZQjRIenlZaWNZYlQ3TzV6a0Fjc2MtQUZySlZvQ2Rpcm1SQUE9PQ==
Z0FBQUFBQm9IVGJCT0NuTWp2OHdiTUNicy1CcndQN2wwVmZDNGNKN3hNNHU2Mm4xVXVTRVBjS2p0ZG5nSlhzRG9LN2dmWWtEdmpGZlliRnM2SXRoMEoza3RHQWR1a3J5UGdXR00zazFlZWZQTHRjR2xraV9IcW5sU0JkS0xDTWlkSEdOLUwxTHE4MjhTZzFkMWl3TngzdlJ5N3FFWDVOcnk2M1pUNjd2bG44Y3JTX2hPNVJQMU5WRk9LbVNfMS00SGM1cGJGNm5SQmdsQTlUZTJyekR4UG5lSGpES0xqNUx5Zz09
It got super smart by surfing reddit. I like its style.
r/aiethics
comment
r/AIethics
2019-02-16
Z0FBQUFBQm9IVGJBQkt6OEEzZzlZeTZObFl5TmJFU0JwaV9kMFp4d3E4LVB5M0VoMjBXejAyX3ZsbGEwM01aRkVOSzZiSklyV1VTU1R5SktlVzliNlF4Nm82Z2tuMWlsZmc9PQ==
Z0FBQUFBQm9IVGJCdFpkamhWRVZKajZFY0VzY0xqdVJXSGFGRFd4dUdzR0h5MkFRTGNuMmVVRGV0NE9QTU5zU0RieWhaNWhNRlBvMkR2a19GSEFGeDB0dGZEYnpDTVd2OE5NYTZqcjFtdnAyVkJwcmM1d1lQMngxc0FscU10UlVmN2VabXM5Qm1mR3pYLXo5ZjlFMXJTbjdhZzd5dmJpZFJoQ0txeGxlMjkyeWZpelJ0NC1jb1A3X0p2cXhWb2RnYzd1eFFXRHRveWl5NlFGdXl3ZlFLNkZtZnY4UkgzckJhQT09
There is no substantial difference between a landmine and an autonomous killer drone. Yes, the logic of the latter is more sophisticated, but in both cases the problem is not so much the ability to spare innocent lives as the dissolution of responsibility. If one such device fails, in the sense of killing an innocent, who is to blame? The designer, those who deploy the device, or simply "bad luck"? This is not ethically acceptable, and that's why the UN is deciding to ban landmines and other autonomous killer devices. Because the ultimate goal of society ought not to be to design "safer" weapon systems, but to get rid of the need for all weapon systems altogether.
r/aiethics
comment
r/AIethics
2019-02-20
Z0FBQUFBQm9IVGJBY0Z6RUtpRVRCQ21OX3NEN0xBb3VjdUhyQ2hYWFNEbVZxTWNVVUhWNFJBdklKdUdKNXd4VE8tUFdFM2FpSm8tcTFPRHpoamtUbGFHeWRPWEZHVThmMlE9PQ==
Z0FBQUFBQm9IVGJCeDJycG0xaC1xTlFUSjFWQXV5a1JuRW9qZVBFTkx5NENfV2R5cHkteEhYRzZQcUJGNHAwWWVBekRlbW4zWUtNOFVvaDNFdHYtM01oYThwV2xsY21qT1llZlVtVVp5QU9kYS0tdV9mUm1MM0xzSmY1RnRaRmtRc2tESF9RNk1Cd1VhSGtTeGFWVE84NVFPTHR2dHljYXoyYllOa052N1lDa0U2YUN0cDdqUGZyVFVXS2U4Tzh4TTU0Uy1fV2V2aHVPbk0tNHdDdmgxb3lvMmZqMjdfWU96QT09
I doubt that anyone in governments/the UN really worries about dissolution of responsibility - at least, not as a reason to outright oppose the killing technology. Obviously, if a civilian gets killed by a landmine, then you can just go ahead and blame the government or military who put mines in the ground, because that was a really irresponsible thing to do. And you can blame the designer too, if you think they knew that mines are irresponsible and dangerous and they could have gotten a different job. Even with physical mistreatment by human soldiers, assignment of responsibility is often unclear anyway, so technology doesn't make things substantially different. Landmines are restricted because they indiscriminately kill a lot of innocent people, not because it's too hard to point fingers at the people who are responsible.
r/aiethics
comment
r/AIethics
2019-02-21
Z0FBQUFBQm9IVGJBaXVHN0VwNk9kcnp5V3Bvc2VmVklPUXIwNV9mRGVYUFhEcnhwc3pnNDUtSVIzYTM4cDVsQzJTOU5GaG5Yck1kNzNrZkY5MklHT1hJckhrbkRVaGpxY0E9PQ==
Z0FBQUFBQm9IVGJCX1BXMnI2MmxVMzNxVFRTM0dwbFRjSC1KYmV0RV96NkxDNXh3cFpQWlVldE5rY3hMdlcwZHNWR21fUEQ4NEJXMkR3UkVxYnBtdDMwMVF3bUhreUgtWGFxTldaMVoyTzNUMUI4THNaS21iQ0JoQkVDdzNyc1pyZWZtOVhYcENyaFNQTGhzOVlCNFhteUFIYkJ4Q3VCbmlISU5fUFBpQjB2emUtbG01eENfOE9xM2xJamVPdGthZEd5Q0Q2ZFYybnJabnQxcnRKTkpqRkpsM1hRTG1aYlF4dz09
Welcome to /r/Backpacking. It has now been over 10 years of this subreddit, and we just passed our 1,000,000th subscriber!

By popular demand, this subreddit explores both uses of the word Backpacking: [Wilderness](https://en.wikipedia.org/wiki/Backpacking_(wilderness\)) and [Travel](https://en.wikipedia.org/wiki/Backpacking_(travel\))

Below are the rules and links to the dozens of related subreddits, many of which focus on more specific aspects of Backpacking of both types, and specific geographic locations. (The other main reason this post is here is so that the weekly thread works properly. Otherwise there would be two weekly threads showing.)

**Rules**

1. All posts must be flaired "Wilderness" or "Travel"
1. Submissions must include a short paragraph describing your trip. Submitted content should be of high quality. Low-effort posting of very general information is not useful. Posts must include a trip report of at least 150 characters or a short paragraph with trip details.
1. This is a community of users, not a platform for advertisement, self-promotion, surveys, or blogspam. [Acceptable Self-Promotion](https://www.reddit.com/wiki/selfpromotion) means at least participating in non-commercial/non-self-promotional ways more often than not.
1. Be courteous and civil. Polite, constructive criticism of ideas is acceptable. Unconstructive criticism of individuals and usage of strong profanity is unacceptable.
1. All photos and videos must be Original Content
1. Follow [Reddiquette.](https://www.reddit.com/wiki/reddiquette)

If you have any questions, or are unsure whether something is OK to post, feel free to contact the moderators.

**Related Subreddits:**

* /r/Travel
* /r/SoloTravel
* /r/Shoestring ← Travelers on shoestring budgets
* /r/Adventures
* /r/CouchSurfing
* /r/Tourguide
* /r/Travelpartners
* /r/TravelTales
* /r/Travelphotos
* /r/BackpackingPictures
* /r/longtermtravel
* /r/AskEurope

**Wilderness Subreddits**

* /r/WildernessBackpacking
* /r/Camping
* /r/Hiking
* /r/Alpinism
* /r/Mountaineering
* /r/Canyoneering
* /r/SearchAndRescue
* /r/Canoecamping
* /r/Trailguides
* /r/BackpackingDogs
* /r/Adventures
* /r/MotoCamping ← Motorcycle Camping
* /r/Overlanding ← Vehicle camping in remote places
* /r/snowshoeing
* /r/AnimalTracking
* /r/Packgoats

**Gear and Food Subreddits**

* /r/Ultralight
* /r/Hammocks
* /r/Hammockcamping
* /r/TrailMeals
* /r/MYOG ← Make Your Own Gear
* /r/CampingGear ← Camping Equipment
* /r/GearTrade ← Trade for Gear
* /r/ULgeartrade ← Ultralight Gear Trade
* /r/Flashlight
* /r/Axesaw ← Hilariously Ineffective Camping Gear
* /r/GoPro
* /r/MilitaryGear
* /r/WorkBoots
* /r/First_Aid
* /r/FirstAid
* /r/WildernessMedicine/

**Outdoors Activity Subreddits**

* /r/Climbing
* /r/Slackline ← Core and balance training, balancing on webbing
* /r/Kayaking ← Kayaking
* /r/Whitewater
* /r/Canoeing
* /r/Caving
* /r/Outdoors ← General "Outdoors"
* /r/Shoestring ← Travelers on shoestring budgets
* /r/ParkRangers
* /r/Adrenaline ← Mostly videos of high-adrenaline sports
* /r/trailguides ← Guides to trails
* /r/Survival

**Destination Subreddits**

* /r/Adirondacks ← Adirondack State Park, NY
* /r/AppalachianTrail ← East Coast U.S.
* /r/AZCamping ← Arizona Camping
* /r/BigBendTX ← Big Bend NP, Texas
* /r/CatSkills ← Catskill State Park, NY
* /r/Coloradohikers/ ← Colorado Hikers
* /r/CampAndHikeFlorida ← Florida
* /r/GrandCanyon ← in Arizona
* /r/GeorgiaCampAndHike ← Georgia
* /r/JMT ← John Muir Trail, CA
* /r/JoshuaTree ← Joshua Tree NP, CA
* /r/CampAndHikeMichigan ← Michigan
* /r/Ulmidwest ← Midwest Ultralight
* /r/MinnesotaCamping ← Minnesota
* /r/MOutdoors/ ← Missouri Camping
* /r/Glacier ← Glacier NP, Montana
* /r/NCTrails/ ← North Carolina
* /r/NorCalHiking/ ← Northern California
* /r/OhioHiking/ ← Ohio
* /r/OhioCamping ← Ohio
* /r/PacificCrestTrail ← Pacific Crest Trail
* /r/PNWhiking/ ← Pacific Northwest
* /r/PAWilds ← Pennsylvania Wilds
* /r/OutdoorScotland ← Scotland
* /r/SoCalHiking ← Southern California
* /r/TXoutdoors/ ← Texas
* /r/UKhiking ← United Kingdom
* /r/VancouverHiking/ ← Vancouver
* /r/VIRGINIA_HIKING/ ← Virginia
* /r/WAOutdoors/ ← Washington State
* /r/WMNF ← White Mountains of NH
* /r/Yellowstone ← Yellowstone NP
* /r/Yosemite ← Yosemite NP in California
* /r/Longtrail ← Vermont
* /r/GuessThatSpot ← Guess where?
* /r/NationalPark ← U.S.
r/backpacking
post
r/backpacking
2019-02-26
Z0FBQUFBQm9IVGJBc1dfNFJVUUdUa0JEV1lhQWpLcTE4X3NtNzVhRDMtUkxnZ1pRdVd0Rk5QWmJReVAtRjlURXZIaVVITVppYjd0WmM0dW45c3B1ZkgzS1RKZHo0aUV2aUE9PQ==
Z0FBQUFBQm9IVGJCNXc2anFzM1VEX1VvSXRTUUx1TTBYZTRtTm5oWFpNOFBkYXlaRXNrb1JMeFdwSnh3UlJqaU1oZzRLQkhDeFFDZHQ3WGRqbVVnYzNLN29rOFZsYUx2RnRkUDVzV1djNGo3dnp0bXdTTXV2Q3F4ak1LN3k5Q1BiUzNhbXZPa09veF9PZXgwRHFBcXpyejd3VkJ6SFVzWVczX2RnVlRWN1FCS1hUaUhaZHhGSHhVPQ==
There are literally tens of these in existence. Have you tried googling?
r/aiethics
comment
r/AIethics
2019-02-28
Z0FBQUFBQm9IVGJBalZUb2xuS2Zva0FiTkpxYUJoNl8zTmhLY1NqMlhsQ3UzRURQWGFkYXMyYjBYZnJjYmEwZ2M3VVpuX1dXMmlRbUxLanFKOHVfNjl2U2xRZzYtZVZmWEE9PQ==
Z0FBQUFBQm9IVGJCOHdOSU1JZnJMNWJoTy1xb1JVNGwyZUZwMERGa3NZeE1KRVkzLWVWSTRqbDFzOFItblBiczFmaDVEaDlZZ2FqZzZRSngxODVmdnhPazRYYmVmOWlINFc3U282RGlLQjRGdXBlSFVNUFRhV2U2eTZoZXpKb29KbmtLX2FtNVNQTkJIQjV0MzVGRVBqX2s1NWFqUkNjUllwQXY3YnVPNThXTzBEdVBzamUwUnE5VmZZX21KaGhDSTBtQ2pOMGwxb2YxY2JlTURPV3ZLdW5iZGJuZ1AxZGpzZz09
The EU recently released draft guidelines on AI ethics, produced by a large multidisciplinary expert panel of scientists, AI/machine-learning engineers, CEOs of ML/AI companies, and many more. https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai
r/aiethics
comment
r/AIethics
2019-02-28
Z0FBQUFBQm9IVGJBQkl5RFZqSzlrMFpFWllmU1FVNTNuRE1uR21leU1ETng3S1NYX005U3hKTDVIcS1ZWkUxLVVPMWZlamY1aGJtTGRvaUZRMGJweWw4VVluaURmYjNGUnc9PQ==
Z0FBQUFBQm9IVGJCYUpsVl9OVDVMVlg3Mlc5Z0p1Uk1MRkdyTG5YNVpKQmJJQ1VScDVQdWNXS2NGQWh1dmV5aUpTcmpxYTdQaWQ1Z0NaaERadWFKNG1qYzM1UGNYNzdSZjdnREttUkk0N3czSVFHeGtidjJZclZpa0RsVkRwaDZWbjNldnlsM0JFbUZMSFFKV1F0MEVPTm5kZ05jaTYxRWMxQmtuTExJQWFIUm9fVVBILV9BT2ZzbHFSc0w4eExjN25XTjlEYVgwNWZFb1hhY0tFcGNiSVc2VEVUTWpvQndrdz09
I don't exactly have the time to google it, but the EU recently asked all member countries to provide an assessment of the future development of AI in their country, one part of which covers ethical and legal issues. Some of them are good, some of them suck; go through it, it is quite recent (a couple of months old).
r/aiethics
comment
r/AIethics
2019-02-28
Z0FBQUFBQm9IVGJBVGZ3aFY1bjMxaVBnX0doTjE3OERGQUpHNWJrMzJaTEZ6UU16RElDRDBzZlRFSmZDOVp3TlNmN3F2bnNsZTU0UWNTakZSbUVKSUhnSXB2MW11QXk1RnJ1UEtvMzZXMmNuLXlDeDM2em5RRVU9
Z0FBQUFBQm9IVGJCMTZMTlpxQkZZa19SbGkwMGoteDZPd3FUM3JNdFpCMC03bU9uMUp3X1lScDB3M2FpTlVRS2phOGdMNFlIdmoxNl9La0h1eUctUEJQdHh6SC1zQkRtdWxSSnRyNjU4MTFTbnNkR1JzM0JrZ2tXU2NBX2N4cGtuUDBhR3ZYRWYwRUgxTXB0MkxONHkxZDk2ZS1tRWU2X0NWMjROTnVCMnpjcm4tN1lORk9KQ1NlWDduYUJUenY0Y0t1SnVrbTA5TERDWG5SdTY1M0NxSDF4QVEyRzBZXzRZdz09
Both for Project Maven and for JAIC (https://federalnewsnetwork.com/artificial-intelligence/2019/02/dod-rips-wrapping-paper-off-of-new-joint-ai-center/). JAIC seems like a good idea; the DOD will need to cultivate self-sustaining AI talent because Silicon Valley is too shaky about the military.
r/aiethics
comment
r/AIethics
2019-03-16
Z0FBQUFBQm9IVGJBTHV4bVFQTUM0Y3Bnb3lsSGY0YXVlSGhSU3BaVnVKSmpqN2JYbDFVUnBnUVNBNk9uX1hsYWFwd2RFc25FYnduSm5ESDZSbjRHTGdLRFU4T19QR0VoVWc9PQ==
Z0FBQUFBQm9IVGJCanZjVU5seVpkQkJlWHdFTjRPY0VEOU9pWXp4N1NBTEVhWDg0ZjNnQnNJT093bEJpMXZZRnFFM1JvWGUxcGNMUXNpLVREcV9VQlRGR2l0d0V0d2EyUTNyLThub2oxa0RJc2N2U1ZUUVBNYnVzREFDb0NnUlVjS2FDMUNHcmNRYXRDeXFUOTgxSXVmLWNyV3BsdW96VTVuYklYY3JKd1AxVVlXWXpfbGJMem13a3Z6RXA4MXFKOUJrRDVLUEU0UGQxb2xKX1h0Q0h5b3pNT1ZfTnAxbkpOUT09
Wow, this guy is really going hard on "the US Government are the good guys". News flash: the US Government is not the good guys. The government is generally incompetent, which is good, because it's also generally evil. As for "puts more Americans in danger": fuck you; for every American not 'in danger', dozens or hundreds of [foreign people](https://www.snopes.com/tachyon/2015/11/dr-seuss-adolf-the-wolf.jpg?w=608) are killed. Their lives are no less precious for being unlucky enough to be born in a hovel in the Middle East instead of a wealthy democracy.
r/aiethics
comment
r/AIethics
2019-03-22
Z0FBQUFBQm9IVGJBd3otN3JjMkk3eDJTejBCU2VqTnB4NFlaRFNwR2pNWWRfUGRyS0NISl8zVl9Fc2xUVEk5a1lGR2J3ZEU0dnFodEFXZ2o2OTFKcC13eVhucjZIQ1RHLUE9PQ==
Z0FBQUFBQm9IVGJCTldOX04wRE5WV1NXYXhlTG5JNXl0ZGlEQ1NiM2xZaUZDSFp3VmViVEpLV2ozaDZTOUo3akdxWnJLQTNWR1VMVjNqMEdMblFkZ0VkRzc2Rk5KZWJDVE80MkVXV2RaRExMeldRYlBXeEFxeUp2OWxjNUQ2QWhBZkY0akhPX2dOclY5QmFBc3lRUlhRRklyZVllNjN4aVUxdUlfQXdrX1NKQnhrazk4WndBNlBKdGtWUjdiNzMwZ09CLVM2dHNBQWRpMWlTQkxYOHNDeXJuMGdnZXN3SEpqZz09
Please cite your claims (instead of posting political cartoons), thanks.
r/aiethics
comment
r/AIethics
2019-03-22
Z0FBQUFBQm9IVGJBS3lRUDFuSlFGWExXWDZNb0pGNkw0dUREdkFhZjV3bWJIT3FpTFZ4aTF1dWRVVnJUdnJxNlozRkQtRTFFbi1hY1I3bHVQc3YwT2FHNkhjMk5ubnlseFE9PQ==
Z0FBQUFBQm9IVGJCZUpYS1owamlQYzRPa0gwQnJQcnJ2Zm9JaDJZak5udWdwdGdqNUJlWHJjSFpESVM1UGZEWVNOSE1BU1FwRlpyVlBRVVZuZER4Y2M3RUpRMHZyS1ZPdDNNTHBPV0N6NjhEaW1wUVQxX1NMZTF2ZDUtaDYxM2VqVDcxTjcxMFBUZWdKdmtyamhnOVY4MmgzV0JXZXB4bk1uZjcxYy1fY1hfMVR1X0FobnAxRHI3cjlDX2xyRmNaa0dPcHNxOTQwNl9sUHdJNFk3amdEbzFndnVOa3JFY1BHdz09
The author didn't cite anything for his claims that the USG are good. So no citations are necessary beyond the uncontroversial statements that the US routinely engages in extrajudicial murder with very high collateral damage (drone strikes), including targeting its own citizens (all of which can be found on Wikipedia).
r/aiethics
comment
r/AIethics
2019-03-22
Z0FBQUFBQm9IVGJBb1ZFWU9vX0RvdDJuNDh1eHkxanprOUpZNl9LaWtxN0JaQUEtUWJJZzhTbWZxdFB0MmttbW8yZkRCZ05fRzJ2WWxobEZVc0I0aE1zNkpfWURxRmE5NUE9PQ==
Z0FBQUFBQm9IVGJCVU5EdW9mZm45UmNxdkhVbVBNUlFFY3AxVWZjMWRpMjFzaGRuc3I1QjN1MXJkakprZzRxcjVNdkVBRDdmbUVRRFR1T0JJZHFkZzlxTERmMExucFBWbmFPNjB3YkZIY19HYnlVV25KZ0dJai1OMTd1Qm0tTEw3UmpVeWtNYVBMT3dUWWhQWnl1OTVkWkZEVmU1cjJPS1ZwT3k1X3ZPU0Jtdk1sdnJjbWt5c2Q1d3NkbVE3WGdaQ3Vzd2I5dVBoODFXakFkY1hDejlwdHFLVkNvVHBScTlSUT09
> The author didn't cite anything for his claims the USG are good

He cites for typical responses. He doesn't address arguments for anarchy because anarchists are rare.

> other than the uncontroversial statements that the US routinely engages in extrajudicial murder with very high collateral damage

That's definitely controversial - not the actual events that you're referring to, but your characterization of them and your implicit moral judgments. Even if correct, it still doesn't properly solve questions about who "the good guys" are - for one thing, there are many domains of international policy besides counterterrorism.
r/aiethics
comment
r/AIethics
2019-03-22
Z0FBQUFBQm9IVGJBc0ZuMUo5clFIc3JYSWhvU0o3dlVQY0tBUi1hZ19MT0RHNV9wbTd3bk1DLW1wQTFqbFl2c2duRnF5UW9JdG1vb2VqLXhHOERLNDhxTEdZOE1IVENYbVE9PQ==
Z0FBQUFBQm9IVGJCWFlNVlF5bnZzZ3JjZW5OdFByc0pNZHZNTENtdlhMbk14VFFJOFpkcmdMTk5DNzZ0TTVsMGpMR3NJWGRQQTN5QnFBUVpTcXl4aTRKb1ZzMkM4cVhSTFFVT0ZKN2s1X253SF91NUVEZzFueXUxQU9qaVBxdk0yYjRjMm5uU0tKVkQ4UHU5Z1hrZm4xVGJRNkxaRVRvcW45Y3oxTGpjOHF0WTJUUDlSQVFteVhad1RNUFFLbmE5TnM5Q3BPQm55Z3hWcnNzbTFpUWpkMVBhZzNGTTRPSE83QT09
No one, not even the military itself, disputes the characterization of drone strikes as extrajudicial killings. No one disputes that collateral damage is very high (especially given the number of targets who have been recorded as 'killed' two or more times, which implies that the collateral damage rate is high even counting only the designated targets). And no one disputes that targets have included people who are full US citizens and thus entitled to due process of law under the Constitution. Distrusting the government is not anarchy, and the false equivalence you're drawing underscores that you are entirely intellectually dishonest and not trying to make good-faith arguments.
r/aiethics
comment
r/AIethics
2019-03-22
Z0FBQUFBQm9IVGJBZ1YzSElpbmRCZGx0UEJ2ZHhsd1VvdzF4YVhEUE14NVVQTDhHbmx1ZUpfb3hUNURQNEljbzlOTnRSU3NKTWtCVDRVZEY3Z3dtQThpZ3Azb3hyWXpWX0E9PQ==
Z0FBQUFBQm9IVGJCMlRqN3VtbmVxaHlnVmRmUVBKZU1HRGFOamNscmptZERab1NqSTNwRkgwaEpHXzlhanp4TnFFWnRKLUY3WldQUzNWeUc4MFd2eEk3Tmp4SEZUZ3hkeHhUbU1CdW1XbjRUX1hTcnBCd2JqdkVYcjdqRUI1SEVaOWdVSC1Zck4tUERDREh4OHQxUWpidjNVMG5ZMktEalBnbHBGaXJ5Skd1Z21QZmdPOEJ6SExHYlhZWUFtMk56NzZ2bFBVd0FIb0E1VVcyMkg1SmVLTWE5bFY1ZENDZExEQT09
> No one, not even the military itself, disputes the characterization of drone strikes as extrajudicial killings

But you called them "extrajudicial murders".

> disputes the fact that collateral damage is very high

I dispute it. The collateral damage is generally proportional to the objectives.

> disputes the fact that targets have included people who are full US citizens and thus entitled to due process of law under the Constitution

I dispute that too. Those citizens should not get due process of law when they are threats and extradition is *de facto* impossible.

> Distrusting the government is not anarchy

But you weren't talking about "distrusting the government". You were talking about whether the government is "good" or "generally evil." Just because you shouldn't trust something doesn't mean it isn't good.
r/aiethics
comment
r/AIethics
2019-03-22
Z0FBQUFBQm9IVGJBS3REVF9KdHU1a3RTYy02bngzT0VmZ0N4UEpERUd0eEJYM196dHllLUcwMjh3OGpOSUpFQzFhUW9nRmRTd19paDY1RFN4Z19RTVc4MTZqblVWV3BEQ0E9PQ==
Z0FBQUFBQm9IVGJCWFVBaDgtZE9PelhmOWtCUFdrc2Y3RWowcFJRcFJpYjdva3Z3U1lna1BoY2VTVFdtYTFtdC11OHZTM0lrU1VmY19IZ2N4VFo1UkpvRXdrWmxBRTlaUVZYVXNTQ1hyeUUwRGt2cGcxVmZqcDJWeDhkLXA1Tkd1TG5pUkhYOUhqaVhEd3EwOTFoZ3ZlS3RqVWlpUFVYNlp5a0pnbDBGWk5Kd19xaFdPSGMyelNJZUNlWTZZcjhxUnhuY1V6dEVJR1ViR2lpR3ZmMjdsOFNLS3NUMl9aU2VHdz09
A major use case for military AI applications is to provide better intelligence and reduce the collateral damage of various attack modes (drone or otherwise). A major use case for autonomous or semi-autonomous weapons is replacing soldiers with them so that they can run at a lower threat level. If a potential civilian is acting suspicious but has not demonstrated lethal intent, an 18-year-old kid with a machine gun may panic and shoot him to protect his own life. A robot with a machine gun and a loudspeaker has no need to do the same.

Whether you consider the USG good or evil is irrelevant. They demonstrably do attempt to minimize collateral damage subject to the constraint of achieving other goals - e.g., Obama did not want to [cause the red wedding](https://www.theatlantic.com/international/archive/2014/01/the-wedding-that-a-us-drone-strike-turned-into-a-funeral/282936/), and better intelligence could have prevented it.
r/aiethics
comment
r/AIethics
2019-03-23
Z0FBQUFBQm9IVGJBWUtIOWMwQlNoY2hFTGZTWThuUVNhWmo1RUgxSnJ6enJvcEtVWjRmSFhJay1OUlFMWXVkV0FWdnhxQkp1Q1F5cGtBaTNWQjlXbldGSjI0Z1pWSXhRRVE9PQ==
Z0FBQUFBQm9IVGJCTmtkZUFTcEtLSTBNSVE2RTJ1WlhpZkQwYm43OUVoZFV2eE8wNzNKODlOUFBxM0p4NnZaOTFPcTNxdzFxWnlkbl9HR2xadnVYQkJNNngyUXl6MUFVMnBIRlhjbnNibjhKSU0zLVpOWHk4N0lKNlVQNmZ3dDRfazkxdzRTOVBpYXFFay1HMGdiZHdZMFNacEl4OTFSMEhCbXlleFZwWnVmQWphQWZzUWlLb3RXaHpZQVVwRWxBbjd2SWVib213UFJnQ0pxX2Nqa2ZLekg4SlVFeFE2VFN5QT09
Gonna go ahead and remove this - calling people "fascist" (with such a misunderstanding of [what fascism is](https://en.wikipedia.org/wiki/Definitions_of_fascism)) won't fly here.
r/aiethics
comment
r/AIethics
2019-03-23
Z0FBQUFBQm9IVGJBMzI4NGZRVnBtU21WMVhIR1h1RE4wc1dMcC1WRW5TQVhWVS1DQWJsRU8tZE1PNXh2dk0yWm45WHBxX2JVXzVPckhkVFJMVVJZTWg1TUZ0dnR1S21sVHc9PQ==
Z0FBQUFBQm9IVGJCdXd5TTRYOGtBdnZja2RfQXR4bUgwQXA1clJqUEFKMzloZzNwN3RVSXhBekxDSTE4dzBWU3lnSG1lSFY4MVVhYXJ1dDdYNVB0X2xramNtbXk0Rmd0ZlBIdkx1UmRiOU1xZGt6b3JSWmtnc0NsWTk1Y2FESnhFUmI4X1VtOFZhSExNMGFycmhNUUNpWXJoTEFNOXRwazl4NUFKb0pDOFlUU1BWcExMVnE5Z1UxWmdOZ3B3cTBDWDBQRGcwVmtFV09oRWRreDRmczRKZ1U0Tnprb2p3eVJnUT09
Everyone welcome! Come and meet people interested in sharing a great meal while learning about what the Dept. of Energy is doing to combat climate change and clean up the environment!

Senator Mike Barrett will give a brief talk on Massachusetts initiatives to mitigate climate change. Scott Blanchet, Chief Development Officer at Nuvera Fuel Cells, will follow Senator Barrett's talk with a brief talk on the commercialization of fuel cells at Nuvera. The keynote speaker, Charles Myers, President of the Massachusetts Hydrogen Coalition, will provide a technology update and insights into the wide range of applications in which hydrogen can play a significant role.
r/cleanenergy
comment
r/CleanEnergy
2019-03-31
Z0FBQUFBQm9IVGJBY2VBXzFBd2ZVSm00SWVHY0cxQjl3VEFaZk4wYXlJc3dSQ1otTGJ4WkFmYV9SejNTeEhoTXhTdG1QRFJWeFptUFN5emRhTW9nUWNWdUtZT25NbTZaWUE9PQ==
Z0FBQUFBQm9IVGJCdkZaOHJIM3I4VkpmRkRpeEd6TGhLRWR1TXZsSUJySHJhUDlwRXppY1hZbUUwZkFYUU1NV213OE1FOGVzaFNiV0pCQW15ZkF6SmU5TmpENzdLb0liVEVDbTJRb1I0S25RdEFYeFhEbmlDYm91RUtVc1ljeGFBcUl3UVdoR0k1bFlfWFREWFhmNzdWUW1KSnJEbkFhUVYxNkMxOGtESnNua0FpZjNJT0c1Y1YxbDhCejVtT2tzU1dCRW5XSGhPZXRQTWNhZVE3SFYycXJzSVdFa1dxQTN4UT09
As usual, everyone wants AI ethics - but only as long as it's *my* ethics. I rather doubt that Google will cave in - they absolutely would have known that people would react this way if they picked these people, and they're trying to get some broader credibility for representing wide points of view, because they have wider stakeholders outside of Silicon Valley. This kind of thing is not new for them. Not really 'embarrassing', we'll see how it plays out.
r/aiethics
comment
r/AIethics
2019-04-03
Z0FBQUFBQm9IVGJBWjdJMW5tVzJpemhmUUNGQTB2V3RNeUs0Rm1WRkxuY0ZZVTloWW8tVVJoX09uM2dEdkUtdHlSNExXN252V0lUVmVTVzNMRzMtQWRtbEtjcWw3ZkhYelE9PQ==
Z0FBQUFBQm9IVGJCVGFoYVIybTNBVUNFaGZkMWRHU3VnYmpqTHlBQTJtSEFmZlJrOEVKTHhRcW1WSFFvcldhdWhBV0I2R0hYVTY2MmlCWkNVVFBwaDd6MkVidGNhejBLSW0xd280OUxfWld0MFpuakF6MXRERnBTZExaaUJ5UW1RaEM0TERDZDhTdHR2ZWQ4dG5QbDUyZ1o3TWtQeURsdnA5M1NxQ1NQS09UYmhFem0xMXpBM0otc0lYcm1OWUNsTHZaUGJ4bkJQTXR0VEFYYUpRMTQ2UjMxSWs2Wnp3YTNFUT09
This point can and should be made more seriously; let's stick to R3.
r/aiethics
comment
r/AIethics
2019-04-04
Z0FBQUFBQm9IVGJBNVRkZ29qd0NfUDZQNWx3YzA3ZWwwUjl1Q3NZV19KYkxwa3JuUjVadmJqRGJETE9zTnlCWHYyWE95OW1Iamo1c0k0M2JJYnFvbnBkTEIzZFBGNnNodWc9PQ==
Z0FBQUFBQm9IVGJCUUY4ZlJudEFKWlpQUHpOdXVQUE54NzFUR3QxN3RVR0x2WlJqSGV1TFBrS21WeUJKcTFRRkU1YTB4NUxoeEhyd29LSm1VaXdKNmVoWC1iZElUWXlncXZCVmpMcVRUTjRQRUFUVHFwRWVsT21heFpxeTBndmFYZ1B2bG1kMm52N2FLbU5zNHNHNkNCLVdacmQ2X1hGeUtJWW0zZlQxUmdNaGJKLTNVRzA2UEpiUFF4SUhGaWQyQTVuNzZWdjh5UktSYkYzVDU4UFo2LXZwblQwUEtfbUpDZz09
To a first approximation, my guess would be that I probably disagree with pretty much every one of Kay Coles James's standpoints, but I still think it is good and commendable for the very progressive Google to put a conservative person like her on their ethics board. After all, 50% of Americans are conservative.

I'm a lot less sure about the inclusion of Dyan Gibbens. I think it's good to have a diversity of viewpoints on the ethics board, but I'm not sure what makes the viewpoint of a drone company specifically valuable. I'm not saying they should only include Campaigners Against Killer Robots, but it would make more sense to me to include someone from e.g. the army to represent the opposite viewpoint.

I don't really think it's a bad thing that this board doesn't have that much power, and it makes sense to me that its role is purely advisory. I think companies should make their own ethical decisions, and be held responsible for them. Would it really help if some external board they appointed themselves made their decisions for them? I don't accept the "ethical authority" of anyone on any board Google (or another authority) might appoint. The most they can do is point to situations where ethical dilemmas might arise, and provide guidance on how potentially bad outcomes could be mitigated. That's also why I think it's good to have diversity: it means they'll probably be notified of more issues. This also means that if James tells them "your algorithm doesn't discriminate against transsexuals enough", Google can just respond with "Great!". (And even aside from this, I think it would be good not to disqualify people if they have a few characteristics you dislike: maybe James is a good advocate for free enterprise or whatever, and someone else can be a good advocate on LGBT+ rights.)

However, I do have to say that I'm not quite sure what the value of this board is in terms of actually making Google more ethical. I understand it has the potential to make them *look* more ethical and legitimate, but that doesn't seem like a good thing to me. I guess having 8 successful, influential people come together and advise you 4x per year is not nothing, but I also wonder whether it wouldn't be more beneficial to employ some full-time people to pore over Bryson, Floridi et al.'s work, and keep an eye out for what public figures and organizations yell at Google. But Bryson seems to think "what [she] know[s] is more useful than [her] level of fame is validating", so I guess that's fine by me.
r/aiethics
comment
r/AIethics
2019-04-04
Z0FBQUFBQm9IVGJBN3JwdXNiblByTEduUDUxdkRfMl94enRIbEgtZnJ5eXZaR2NHYkdSYWpTU2ZKdVBud1d6MUp3c3BsZjlUT2RuMjlWNnZKamQ3R0lzcUEwOHpGUEdzc3c9PQ==
Z0FBQUFBQm9IVGJCbjF6TDR2Vk5iRXNOd29oWUZmdEZ1QjEwNktPZDBUTlBxeXVpMGZobGpSc0UwdkZIcHE3RWxrRG0tUjUwOU9lUVZJZnBDODdHTlp5SGR1Sl9CQ2JxRXc1SU9rQzg0T2lScGZ4aUphZVE1ODdKTjVmYVA2Yk1rak5MbDVGRHpiZ29JeUlWUHJnWE9yUUhPRVJZYlFNYmVtZk1Zd0J2cDlzSUZjOUtIeG5DMGxSemdFQnpMN3IwYU5vOXRyQ1FDR1dqREN4aXhMNUlCMnF0Y2pXSnJvY3pRdz09
Don't be ~~Evil~~ too hasty now... we might be able to "make the world a better place"
r/aiethics
comment
r/AIethics
2019-04-05
Z0FBQUFBQm9IVGJBa1Robk1XZnBhd1FVdFFoTHRfMVZTOGRqeUhaTlQ4SXRleXdRSDcxUFFfNUhDSVJCdUw3ZjU4UWJucGlyXzdyWW5ZT0ttZjgzQW92YThIdmJoZTBYT3c9PQ==
Z0FBQUFBQm9IVGJCZXdlTWdUZ0l3cUdvdjAwVVRUQTFpMTlia2YxdDMzNGFfbEM3aFVEMHFSeU5SRDZfYVYtUmNUWi0ydERBQldVS0hWZkZ1a2FPTDRydDYtV2VxY1oyQmFSUEVDeUkzWVdJN1Q3VFBqa1JHMFctWDBROFZ3aU1KektpX0RqREI3MGdWdDV4MnNrdmdaZUc4dTNDNHA3UkJJblBCRHJySEJKbGZwQXNJbGdOeF94UTVjX1VGc0sxZFlseUxGZTFPVVFkVjkxLW1OYXk2RDBYd3pzWXhfNDBQUT09
Someone reported this for its source. Yes, it's Breitbart. Yesterday I posted from Vox. If you can point out a problem with the article content I'll delete it. Otherwise I don't judge based on the source.
r/aiethics
comment
r/AIethics
2019-04-05
Z0FBQUFBQm9IVGJBYTF5SW5rdWdKeV9SQy1uS0F1NjJpQ2tKZDBaX081ZEtCelBXaGdlN3A0S181STExbHZxc3FlaWFKUnVBemlsQ2pFTGgwZ2ltcXg2ekkzRnFUSzdFQ2c9PQ==
Z0FBQUFBQm9IVGJCMHo3QldyWVdZRzRPbW40WVJMQi1jdmVVMFE1WWNoMEQ1OUM4ek9MWHgwVFREOERHVmZKeFc0TTRMQkFrRXQ5MWM4UjJyeTNGTnB5Z0hGVXhjdDJTVTFMcTAxNUdubE9FZ1BZeGFOVEpuNHZZRFR0R3VEUEd5XzlvamI0dHBBdTZwa3B3NXlnUWt0MXN2UXFOTWhlUVdlampRb2NBUlNyTW82dTdkVnBTNnpIVjVYYk1CRHJjQWZpcWU0UzIxbTJHZ2VlNEhwc3pYb0trNUxkc0E0a0RSQT09
Turns out I was wrong. How did Google not see this coming???
r/aiethics
comment
r/AIethics
2019-04-05
Z0FBQUFBQm9IVGJBM0phRU9qTzdmWXMwUThzQTJzQkQ2ZHRIU0lXMDg0RXNUSVo3NVQ2cmFNckFlMFp5dy1ocnlXMWozWXZDZi1LTWRlekJoRnlKR28yZDNCOUVPVkNsWnc9PQ==
Z0FBQUFBQm9IVGJCRzdMQTV6TG5tRTE4ckY5Wk1Db1BDTVREUlhZNWNzb19LT1hjWWVPbHZQaFFtNjdlSU1rdlYtNFBhWW16Mnl4Z0MxSUJzWjFWcExrTGxOT2tjX0ZQTmx3X1J5Tk5PZ3JFUW1xZEh3VUswbVdpN1Z1czJ5R045NFZZUElxVFhxVUt1WGV5cFBtQnpzZGVfQzl0NUZJN0NkX1pSczBpOFdwUlJrVl9IbnZEZ2M0X1lTSXFaMDNqWmNKVWEwYXBvMGtmbG9HYjFIdTZrdGtfV3RqdTRtVDU5UT09
Minority report irl
r/aiethics
comment
r/AIethics
2019-04-06
Z0FBQUFBQm9IVGJBbk9GOFEtMnVCRWZhMF9KdURpLXRQa01GZDd1TW1wdk96N3ZyWEpNeXNZand5NUEwZGVOVkZfUVc4N1hBN0lDN3ZMcTBvV0NTYy1pY1FCRXB3VVhDVWc9PQ==
Z0FBQUFBQm9IVGJCUEdzQkQzWmc2OE9KOEFCdl9xaUxqaWN3SXA1Vk5sTTJ4cC0xbzlXcThpNVV5RXlIZDQ2SFpGQUJnSW1lZkpDbDBvdzhINnVadmtzbm8xdWJGSHhwSGRiVDNiTElocVJ4X05heGVKWkE4NERNdTJlTDR5SWpMZk44c2RSRkpTei1jR1NCVGJxOHN6SjRJOWRXWFpVWDR6MTRHVmhBN0h0a1NySjcyYV9yakVKT3FLaW1kc0NSMFo3dFpNMkJPTGFYc0w1WTg5RHRvSVA3MWV1RlM4OHl1QT09
Just so we're all clear, predictive policing in this context does not mean accusing or arresting anyone for a crime they haven't committed.
r/aiethics
comment
r/AIethics
2019-04-06
Z0FBQUFBQm9IVGJBYWRxOXhKbGFTdXlHb0JwamQ5NkNEZnl1c3poOTJCNUpSdFBZcWVJLWxSUW54U2NFUUMyWGpXb0F0d3MxTXhYMnp1UEJWaWRDQ2tmSzdRbVIzUFBYdWc9PQ==
Z0FBQUFBQm9IVGJCdWl5WWpaT1dpOUdJZF9aa2pzZ2o5QldyUVNveUpEQ0FmWVFIcGs4RnA2b2EtZXpseExCOHB6c0pta3R4ZUloYmJOVFVoNDNsZUthbkF6MHVkVjRULVI0THlPVlJQbDhRX0N5WGZmQ2FyTnEzTTZpOWhzVTRxTkRCdS05VWtVb2FGa3dEeDlpeEhleUR5NGVwbDZfR1g1cjZGQTRaaUJwYlpOSWwyam9raklTRTJoZEFOOVpXblVxT0JOSm1ER2lrenRrSmxBZ0ZPR3ZpS2Q2QWFQSGRvQT09
Literally all they're doing is recognizing patterns of crime and dispatching police to those areas accordingly
r/aiethics
comment
r/AIethics
2019-04-06
Z0FBQUFBQm9IVGJBMGdrc1lGdlZJa3JYaE40ZzhYd0tENTNRaktZbE9fNFNlN1YyRHZIT0MtTmhtM09Cb2pOYXJta21EWkVERkJWTUludzB1Z0JZZ2RzZmpSeGJmZEJWY0E9PQ==
Z0FBQUFBQm9IVGJCQXU4czZDMnFsNktGZTd6NVpBMjJaZk9IN1RGbGJycjhsVmVoNWVxRVhVam9tcmFNTVZiRHpvckZjUzZtVnVxbHhHNnVuWUdJUEc0eE94ZFY2NzQ3cE9BWWVMenZxRV8wbDZTaDZuc0MyQjFvZ0lPMnpCUUg5ejhQYVBaSGZ0eXVLbEFFXzFYQ1JfbUlOdjRzOFNqR05XOThzQ0p4ZE5NQnk1N2ZNMVNPVFlaZTdZZkZpbkpzNHcwTUJvZU5SbEZtaExtREtpOVpiT3dvd0ZRQ1lRTEFZZz09
Looking for perspective from the entrepreneurial side of things. Do people know of energy startups (especially small ones around ~10 people in size) that are possibly hidden rocket ships? What is the landscape like right now? Are they mostly located in California or are there good ones spread out in other areas of the USA as well (or even the globe)?
r/cleanenergy
post
r/CleanEnergy
2019-04-09
Z0FBQUFBQm9IVGJBajdyZ1gyRjRPSFZsY2JiazFoMjA2a0oxT2tNOWJRY3NCX05VX3Zib2FKM3FEODRpd0s2UkRYWWs0MW1tTFdUOHBlZmxrSGdidzhQZXFHRTZYMFBLOWc9PQ==
Z0FBQUFBQm9IVGJCOGdPVmNUUDV0VUNlanQwT3hpOW1jOGliZGhrVXVHR1pTRkVieTFxN3NSUEhWWlVxRDNydTZMLWh4dmFIb2wwUEtyVjQ3SURvRy1vdE9rMXJELXVTY3NWaHhBTG1oN3FzSThOU3VwbWp2TUx2MDJWUk1HQ1NZMk8wTkh1TjB0bVFjb25YdG9MdF8xZDlYR2doamJjczBNaVhpQ1Q2SWtxLTVIZmJQbzdxdmZ2Y1pEaEt2cUZma25sQWNHRS1oNmNmeGFmZ00zMHJHUjZDVEdLeUZaYkw4Zz09
Well, that didn't take long... We should pretty much give up hope on Google ever getting anything worthwhile done, or anything people will take seriously, at this point.
r/aiethics
comment
r/AIethics
2019-04-10
Z0FBQUFBQm9IVGJBRXdUenprTUFsSUVJaVRtZXdIb1RrQnlleUVRZloxUm44MHh5ZndhemFvNi1VZjVHdk55R1BnY2tsb1JpTzh0S0drQ3lyV0NPSU1WYjJ1VDdmRUN1UXZEaGpjcjBHRTlEYmg5bG1ramlsQjQ9
Z0FBQUFBQm9IVGJCRzFyVTI0MUpKMWxGZnFGbE5TMXlqVFhhc1RPRVppSkZ6ZkNBdDZGdUdULWl6Z19vZUN5eHBrVklPRC1oWjlBU3Y3WFpyeVhUZ29QS05YUDluaE1zSlMxcnBuUDZwc1RESEs5Y0h5dVJHVkl4cnpaUG5lVmNNZGZZZk52akpxVGhoRWw3eEdfaGdaeUwwZzhBbWlhdmFyVDZXRXFaVFRiZ0RISEEwNHllbWtxRGVXc0l3cDhYREVMNTRHdVNtNWJhbldjZkRfaVRWV3FTaTBtajBsWGRfdz09
Yes, there aren't actual psychics lol
r/aiethics
comment
r/AIethics
2019-04-12
Z0FBQUFBQm9IVGJBclJlNEdmWEh1RllPeV9yZDhyMm1wX05lVjhsTmlqcUc4aFNiMjR6c1NoYTliNDJVaDNwSWpCOTd5ZTlORTAwc1ZvT1lvWFZmQkRUbWtIVlJ3dk5kSlE9PQ==
Z0FBQUFBQm9IVGJCVmlDZFBRb0cwSmd3c193YlhXM1VhUnlqSnhPVlJ2cTgyYkZLYTZNVE1BclQzNGlKTmpBcElOcE5oSzA2NkpJaldzZXE2dUN2bzNxR2lqWU53ODNHdnBQV1dvTVNtckdsVnhsbE16RXVmb2tRdVp4VDdPbXhvQWZTbFg0R2lXTElHRjc4ZHlxSHV0aHh1U0E5czRwRnFHenppckhLUWVfbERkYkhlRTFTVHRjeVBBQ2pRQm9aYzZza1lKcWlDTm5xYTV2OThJNl9MQmFuTkFoOVkycTMxZz09
I think that is clear.
r/aiethics
comment
r/AIethics
2019-04-12
Z0FBQUFBQm9IVGJBTkRzZ2ttZ2FrY29iQnZjdkRyV2phbnczS292aEpqdnFVSVRRek9aZXBPRGtieGdYV19uUmZ3am9EdDUxeE5NT0tIV3Y4bzBDS24ydlI2bUp4M214bUE9PQ==
Z0FBQUFBQm9IVGJCVDIwNlZkRllnV2JmSlVRQ0RTRjlYaGZtclVCQkxTZ1J3SmRQSno3QVlBb0M2Z3J4RkZ1clQ5M1B6UXVmTWthY3pDb1ZlWGdYLUJGcm83YTQ4Qzg0M09HWlJLNGFvYkVjTWpSVmJaSVYwMTNSNUNxOG5YRG56ZHk1dmNZSGk1RmxoY3NSZUo4WGctVzRFd1NWTnZacEJ3ZmlNNFBzRnZ2dFdGang1VGJUbDZpYkJ2VWVzS09CbXg3b0R4NFhfOTZTUVBVWG9GLTc0R3FiWUJIdWlQN2JMUT09
Starting a comparison of the EAD1e vs. the EU Guidelines...

* EAD1e (Ethically Aligned Design, 1st edition, IEEE): from inception to publication, 3 yrs. Depth: 294 pages, a treatise, mostly experts found globally.
* In contrast, the European Union Guidelines on AI: from inception to publication, 9 months. Breadth: 41 pages, guidelines, mostly experts from the EU.

Any thoughts?
r/aiethics
comment
r/AIethics
2019-04-13
Z0FBQUFBQm9IVGJBY0V5SEljSGliQjAtcUxnWnktUVpTakd6eHZZT3llYkRncWVicExVUzF2em50Zllkd2RlTy1HLUEzOXdMNzlxbFIwZUM2bzRvay1BQVBQSkNjdlFnY1E9PQ==
Z0FBQUFBQm9IVGJCb0MzRmRrY3ljblpRbXpjV3YzWVhFLXZTZXFsNTBYdHB4d3ZkbnYyM0Q4UDhCSUtYOHFZbUVLOEFNSU9IUElzV0xzOXE5UHRBYTdSMmwxc19zbWctdVRqZ3BPTXl1a2FQYVhVTV9xSnJld3d2bVNZMFZON1Q0YTZHTEROSUcwMkRKUHEwUzhUOXBWWU5IbFd3dnFDT2JadkZQaW9DOW1yZGZ1Q2xjX05SbUJrY3FjZWhBVEZ2OXM5LTZUcFRQZ0FNZjJZSlAzNVRtejFiaURpYXdTYVFOQT09
I thought this was quite interesting. I'm not sure people commonly fall into all of these traps, but I found it useful to keep them in mind when thinking about AI ethics. The seven traps are:

1. **The *reductionism* trap**: reducing "ethical" to a single value like "fair"
2. **The *simplicity* trap**: oversimplifying the issue with checklists implies a one-off process for safeguarding ethics
3. **The *relativism* trap**: everybody disagrees, nothing is objectively moral, so let's not bother
4. **The *value alignment* trap**: there's one single morally right answer
5. **The *dichotomy* trap**: we shouldn't draw simple dichotomies between being ethical or unethical; also, ethics is better construed as something to *think* about or *do* and not something to *be* (or not be)
6. **The *myopia* trap**: ethical trade-offs translate/generalize across contexts
7. **The *rule of law* trap**: ethics and law are basically the same thing

---

I think I agree that most of these are pitfalls to avoid. Some of these could be worded better: I thought the "dichotomy trap" would be mostly about the binary nature of ethical vs. unethical, which should be more of a continuum, but it was actually more about the fact that we should not say an entity *is* (un)ethical, but that ethics is a process of thought/action. The "myopia trap" could probably better be called the "generalization trap", and maybe "value alignment" should be "objectivism".

The main thing I don't agree with is the criticism of checklists as part of the "simplicity trap", especially when appropriate caveats for its use are carefully pointed out. The authors claim that a checklist implies a one-off review process, and I don't see how that's true at all. You could apply the checklist continually at multiple points in time. Furthermore, while I think *over*simplification should indeed be avoided (naturally), the value of creating simple and practical guidelines that people/companies can actually follow should not be underestimated. Actually, this may be exactly what is needed if you want your lofty ethics to go from "nice theoretical discussion" to "actually applied in practice".
r/aiethics
comment
r/AIethics
2019-04-16
Z0FBQUFBQm9IVGJBNGk2TjhHTF9YcEI5Sk5fNVhWaDF5cTVUVEJRWUpPVkFjNEYtUHRhUTdsbTg2NEVDLWlTQ2NUZmxIWF90UWlFU1JKcWFkZjR4Sm85ZTdDU1czM0pBT2c9PQ==
Z0FBQUFBQm9IVGJCLS1IeFIwcVJ3VTBORXlJU3FKOFVYQV9ZNXFETmU5cUlEZUFUYVV5aUVTNXJ3dHgxcncyVExuRWd2cW5HVE5qREM5QnJkR0M5UXdsVDdQcmFOYU54ZWtNRVJ2cFVfbUdHMzRxRDIyMVAtSkk3RmVUd3Q5SU9LTUI2ZmYxOEp5U2M2ZjVPWkQzNmRsVEFoS0JXdmx4djJBTmlmLVlkMWZmaERFaGpXRTFXblhIUWk2M19OYmlkbFFKVU01V0pwVlhJ
It says "503 Service Temporarily Unavailable."
r/aiethics
comment
r/AIethics
2019-04-16
Z0FBQUFBQm9IVGJBSU12V3IxalBKLTBjVTJibXdpaUJubW5MUUd6dG5sYThJRVlqdE9yYUdSWWtERTRMNUNVdGNwN3hibG1CZUVxQ1I4anhaUmJhb2kwbHhUUXJuVi1vWVE9PQ==
Z0FBQUFBQm9IVGJCYVRobzE5NURUV3RhZnNNZUNUTU9IWWdUQW1ENVFTbmNGOUpBUFhya29VcDVfSWNEdnJ1Q19qMjg3OWxnaW9FbDRzbFlISC0wM0ZXVnNTeEczZmVXNXF0UndxM1AzaHgtTmtpZWJPakhhTG1lSlpZUE5rR1B1MjNjZlotdVgzSnhsT1Uxa3JKMGoxeGlabmNRdXhkcmdTLUloUkROTmtpd2JtZTgtVU1kVEZCdkp3N3NNWl8tdXFXMnBnZlZPNHJ0Zm05M0ZfdzVWN29MX3NiNThaaTVLUT09
It's up for me, here's a [link to the audio](https://soundcloud.com/edgefoundationinc/david-chalmers-daniel-c-dennett-is-superintelligence-impossible).
r/aiethics
comment
r/AIethics
2019-04-16
Z0FBQUFBQm9IVGJBRjNfakd2encxY2VzSEd4SmNKVEFUMkxHMzlwMHN1Y0JIVFZWNUlaSkhTaUNhOUQtV2kzVmZ5dHJRMF9KMDgwNEJUTnk4QThiMTk5Z1NuMVdKVG1uazRwZmZJNnNXQmgxTXVQZXR6ZTBlb1E9
Z0FBQUFBQm9IVGJCaDcwRG16NE9RNnloR3RwLXRPVHNnZVJJM0F1dW9Ec3dVRjU0WV9NT3BTekRTWFNKMkE3Mm5HbUVMOEI5RVQ1RlZhRUkzWEFyZElxMHJFVDRzS0lnS1ZrUXJkMGJ6Ym12ZlVtZDZlZzg4d29tSC1jS2xSbk9BS3REaGF6TVJPT2tReUNPODZTSW40NUdOeEZVNUlyZ0R1cExpUlNoeW5FdkRLdzJBS2VuOGZtVlY5V3JQSFVqRzEwb2YxSkxLd0tsNDFva2VBU0REeFdiRXVqSFNaUWxIdz09
Thanks.
r/aiethics
comment
r/AIethics
2019-04-16
Z0FBQUFBQm9IVGJBdHlXSHdjOVRYYUlGRXBMTVVsTXA5Q2dWNHZNR2RXd1hjUW9veHc2bFhjUllRbktEbVNtcnVXdW9NXzFKYmRRZnR3WDBQMHRZekNybHQtd2ZJa0JCamc9PQ==
Z0FBQUFBQm9IVGJCNm5yckZGMExwZG1qRlYxYUJhRVh1el9xTnNTeUpTYzFlY1k1cDVkOVVoQU5GYjVTdU5pb1RyZWUydHpGMEs4czEtd1NTWE1zbFh0c3hGWG9NQmgzUHFnTnZWUmZkNFd2T3ZYZjBrdXRtajY1Q0RidHJZQWtOQ01nQXVjUmJ4OEVSeGhLNVNvSHh5dFdScGk4Y2g2WWZxa2t2OW11SE1KRW5ZMFBSRTRNUTNWQnMzSXl4c1MxSXF1dEF6QzhQMUdSNWdEUmZVcUtGZ0FMbk1pMmFwMTZudz09
The AI will care about the ethics we program in, since it's a computer and only follows its programming. It's not going to spontaneously break its own code; that would be a supernatural hypothesis. There's no ghost in the machine waiting to break out: the AI *is* the code. Since the AI is *only* going to do what we program it to do, we really should worry about whether that code is actually going to have a positive impact on the world.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBTFFqc2E2S1ZUUUVMSEFFY3hJdUlxR1RwWGszbUs2SlBwYnNKOFZIMk9MX3ZZWk4wUFUwenRlcUpMbExESGtucElCVEwtemk2bEh1OUgtRF8yMXVvVEE9PQ==
Z0FBQUFBQm9IVGJCYjQ0STZReWtpTUNkaHptLVBYeDZBeGtjN3dZMFF5bTEzMjVTXzU1NUZrS3JtTmExbk5Fam1HLTREX1lMUmNrQnUtNTBVVm1jMUtxNFBwLUlmT2x5TFZudTlndlNEemFKV1ZkTFZiOGxZNVNoOHRmemczb3ZNcW1nMTREbWQxVjVpLUR6cjJZRUhCUW5qTXNQd29EazV2NVh0NTZNSWo4Z0RpZUFfdTZOVTZ3WkRXbW1SUlZQRTU3ODFHanlpWGo5eF9FUnlXak1jc1ZUa3RjMWJPZTluZz09
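The comment above is, at bottom, a claim about policies and objectives: the agent's behavior is fully determined by the objective it is programmed with. A toy sketch of that point (the actions and scores are invented):

```python
def act(objective, actions):
    """A minimal agent: it returns whatever its programmed objective
    scores highest. There is no ghost that can override this line;
    the policy *is* the code."""
    return max(actions, key=objective)

actions = ["warn the user", "maximize clicks", "do nothing"]

# Two hypothetical programmed objectives over the same action set:
engagement_only   = {"warn the user": 0, "maximize clicks": 10, "do nothing": 1}.get
with_user_welfare = {"warn the user": 8, "maximize clicks": 2,  "do nothing": 1}.get

print(act(engagement_only, actions))    # 'maximize clicks'
print(act(with_user_welfare, actions))  # 'warn the user'
```

Note that both objectives have exactly the same size and shape; swapping in the more ethical one adds no complexity, which is relevant to the "an ethical AI must be more complex" claim later in this thread.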
Please read my post again.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBanVOYW5vYXJBb0ZSZUpQZURjNnE3RERfZHN1Rm5pTUZlOURiZmstUVpBcTlwQVVDR2ZBcXQyUGR2cFJpNU1zNU5FamJINGJILXdpOUtDbE1qTmVOb0E9PQ==
Z0FBQUFBQm9IVGJCcTcyczZqRTZXQ0QtcEVvRmZ4TEhMd29PSWJMalFtMmRyNFNpUWlabHNVTXJ1dWhhZGEwbFRaTFhZUDN2WVhFUW5pX0hFX3VmM0lRZWItaFZmR3JuWkdXSHRiVTRoaG5aY1pTcHFFSFVYYWROd3RPQWkzUkgySW9QeFVQbncyemZWV2RNUlZ5SHhkaVBTSFlsTHB6WkFnUmJWcXNhLUdEVjgyU2lvNmR6MzFIa3ZBRDhnVE5ld3RkYTI4MElGcHd2R0U5WlloZkFDbDRwVXNPUVd6eVFJdz09
My comment was in direct reply to:

> If it isn't obvious to anyone reading this: A true GAI that has the capability of being smarter than us and having free thought wouldn't give a damn about our ethics

Furthermore, even if there's a potential for terrorists and bad actors to get their hands on AI, that doesn't change the fact that we don't know how to make sufficiently powerful AI algorithms safe. Read the paper *Concrete Problems in AI Safety* for specific examples. For the broader picture, read *Superintelligence* by Nick Bostrom. All of your points have been thoroughly answered by AI safety researchers before.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBWW5NYmlWSHJLM194RW1tVTFGNTZGVTVXMG5CeGVHZG1GTzVrcXRBSTVtelBlNkdkdThibmhlNU1ZSTlFdGRIVHhqLXd0U1NZTkNCTUlnd0RiUGxSNmc9PQ==
Z0FBQUFBQm9IVGJCaGhsNHl2ZkF6VmdMRGVLQ0pQZWxBMXJ0WFZuSXlUek94b3RJUlMxTlhOODBqQmJ4alVPNkFPdFROZTR1am9vTHhKOWhZejFERXhpeWFMLXRqR1lxSkVGWDdJMUtHTFFNZHEyTEVfTlBGejZnX1FTdW44NThManFpZXJKSWY1TEZBeU1BeFpGRzJ6WE1lUkE3eTl1YnUtUzRDeGxwa2ptMzZSdTZrSUVWTXMzU3dpT3R0b1htckRNTWxNX0M0eFBkcDAwdXR6X3h2amZjLVgtc3h3QWhNdz09
I will. I'll get back to you in a year. And thanks for giving me an actionable answer I can look into.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBQWcxa0o5amxoTlRuTmNIT096dVN0bTdqaUh1S2hNa1JSRzNLMkNyQVBJeTVGVUZVU1lFcXdoTXhsTFYwMEk3Ri14WjkzcnluYW14REJpdlU4ZDVROUE9PQ==
Z0FBQUFBQm9IVGJCeUM0WUVfT21oWXktNXo1RzhtUmoyQ2hEcnZocTVTY3hBRnBVejlwVWRENUJNb0JDdUNsa1pkcDl3UUkzaDF6blRiRTVEajRhRFdnb2tlZ0c0MW9obUpHZlNfNTFfMjhGcU1TXzZWdjB3V3Nob0ZjWU14ZV9kRm9qWWY4VHdpcjFma0pwZkEybEMwUXp0OWV6MTFYd1k3M01jYVg2b1ZpOWRiWDh6dHNJeWtHQTluNTAxcTMtS2g5RDM1ZDFBaVcwREZPd3lzTFgzTDNkWXozUFRQVG40Zz09
No problem. Thanks for actually engaging in the argument.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBN3c2QVBQWVJ6QWdjanVMWlF0YjlXWTVRVjJ5VG5qRU5tU3J6TzN3M0NTTTVqWEdOMUtEajV2UnRoc1NUTzNjbTBtSDlmdEZKOW9DSjBOS1JyREhVbkE9PQ==
Z0FBQUFBQm9IVGJCUEJ1ZjlOT0ZPbmNKdzVmamU3UkRmNVRxUm1tb0FiRFRWc1Jnbm4zY3BkZVlmcFd0TnJkMTlkWGI2R0l3bWhRUENKeGV6MHBXUklJTXBKeVpxRW4wTDByQzlvTURoVnA3TUZBbXprZ0RkeGNJTGF4UGJWX0Iza1dBcVUwYWE5RWstblA1SXBQQXpYWmNpVFJtVl9ndjZrcFQ3VGxSRlZYRXN2MUpReHYyTm1jWndSbG9HaWJuajN5WWlFNGNVemRYYXhaS2ZodVNTcHdsWTVobG9palIzdz09
> 1) We don't attempt to program ethics into nuclear weapons.

Because nuclear weapons don't have to make decisions.

> all it will take is one rogue organization, country or terrorist organization to implement basic simple AI algorithms that weren't programmed with those rules in a server farm of GPUs, TPUs, or whatever the flavorful hardware of the future may be.

All it will take for what? What do you think is going to happen after one rogue organization makes one rogue AI?

> Ethical humans absolutely should ensure that any AI they program for any purpose that may effect other humans should behave in an ethical manner.

So... you agree AI ethics are important?

> Rather, the point of this post is surrounding the laughable optimism that some people seem to have surrounding an "ethical singularity"

We don't talk about a singularity anymore, tbh. I guess you mean "ethical superintelligence". OK, I'm with you. Now what counts as "laughable optimism"? Any optimism?

> It's absolute common sense that any form of ethical singularity would be more complex than a non-ethical singularity.

Doesn't make sense to me. Where did you get this "absolute common sense" idea from? Every agent needs a goal function. Choosing a better goal function rather than a worse one doesn't make it 'more complex' in any meaningful way.

> The simpler things always win

Oh, that explains why WWII was won with cudgels and the prokaryotes drove all the eukaryotes to extinction.

> if it doesn't initially, eventually it will by rogue people/entities

Well, that explains why humanity was run over by rogue orangutans and Europe was conquered by Moroccan pirates.

> I shouldn't need to elaborate on that truth any further.

L fucking mao.

> I had to make this post after seeing the trend of "how to ensure superintelligence aligns with human morals" absolutely everywhere

Note, r/controlproblem is for the technical alignment problem. This place is for talking about the choice of ethics.

> If it isn't obvious to anyone reading this: A true GAI that has the capability of being smarter than us and having free thought wouldn't give a damn about our ethics,

Well, you're right about this. But this is exactly why we talk about programming GAI ethics. It would sure give a damn about *its own* ethics.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBWjdodmZPZUtyNlcwYXNMbGZKbXZpZnppVlJYYlc2ZlhicDRHMHFlV3hQNW9uVkpMTTlkT3NwY1VXeDlMcExkQkdJdlFudHJkQzlGcndBLUNzekxwYUE9PQ==
Z0FBQUFBQm9IVGJCY0lLd3Etd2tWaExVVnhHN2JoTmhuMUhMMGhjUmxtbTBqVEVTQmVFVHJIU01nNUhJbUEtRXV4bnpvYUVZRkN2V1FZaF9VdXZpbktDalREWHZVOUFqTF9feVFOdlIzYTBDYUloUE9OYjhISjktNUNfbmEySzdWbXRrbGMwTTBxWEpibkNITVh5S1hIbnJ3YXByUUtMbnBkT2w0VE9WbEFVUWd2SlhpNkFTRUpRbXpoNUwycU8tdDhoX0VucmpkMFYtMmtSUE52cWFkWVd4VG9oQWpuQk93Zz09
My point regarding the simpler things: all it takes is a bomb or a rogue shooter to destroy greater complexity, such as the morals, societal laws and deep neurological compassion we have evolved as a species. All of that can be gone instantly by a single rogue actor triggering a simple device. My concern is that if we program a GAI with ethics, what's to stop a rogue organization from programming one without? Being digital, I can't imagine we could treat a highly complex single rogue AI the way we could a terrorist cell; it would be capable of spreading in a far more sophisticated manner than any malware we've encountered.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBTjBWVDQ3YVNsYTZ6SFFZX21rUzQ2Vi1KemxJRVgyZl83SEhZNkY1UmREQllwVlRXWFFGWGNLS2JnSDhlQnhqVnVWdU9GRl9jRWJwNzI0QXFBS0dOZHc9PQ==
Z0FBQUFBQm9IVGJCem55SnN6RGhUTjlSVE83NmdVVzVYQ0ktT2ZXNUJ2LXg1ekltZUhyOTZ2azZCd2JUQ0kwZGFMTmNnMjNCbk91dW1aYTU3dTBrQU1ma0ctZmZKNnlmMUJ5MEdfY2VOaTJMTGJIZEQzNU1HWXdjWFF0eE9yWV9iVVBvTlJVS2NMeGY2MndIOUlkRS1FRVJjUEh1YWIwQjh5Wmt0SFBfdXZaTE8zbjRXVXRYRnh0TTZNNnFmZ25NNWd1MUdYRWwwOEtsUUNRLUkyQVdaZTJ0c3FUV0R1MkZWZz09
> All it takes is a bomb or rogue shooter to destroy greater complexity such as the morals, societal laws and deep neurological compassion we have evolved as a species.

Yet we *actually have* morals, societal laws and deep neurological compassion. Rogue attacks have already happened, and yet life goes on. Why? And what's different now?

> My concern is that if we program a GAI with ethics, what's to stop a rogue organization from programming one without?

Nothing, assuming they have the money for it and we don't live in a surveillance state. But I don't see how this unethical AGI can destroy civilization, when it's going to have to deal with all the ethical AGIs built by much bigger, much nicer organizations (like governments and militaries and big tech corporations). Those organizations are able to make AGI much better and much sooner.

If today I decided "I want to drive the orangutans to extinction", I have all the technology necessary to deal with them, but I would have a hell of a rough time dealing with all the people in the way. So just don't let the bad guys build it first.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBeTUzTlVMTy1hY3g1RUpZTUhpdzlFYWtlckQ4UU5UTXB3Y2FNNjMxZzh0U09zM09SVTJXSlREbUNiQjBwRmVOU2Zpb1dxV2d2MlVVZGhyVGlGMGtRZWc9PQ==
Z0FBQUFBQm9IVGJCelMtNVpuNXZyUUxzYl81Z1dfVVFDbzZobTlvWHR3VmoxcDJVUTRkQmJGYy1mM3I0bm92NWNVOXhkYUxHaTVQZU5CVGpTbExqbjRwZG5PU04xSWVJZDlvRkZna1ZSTDRjYnBwcHkzUlMwRy1ucEx0VW1tMHFEc3oyd0RzT2RjVEE3MmRzNU1uX2VVbWtzaWZnTk1ST0Y4WHFDYVlJdDB1ZnRhakFjWThza1c5LWpDdml6ck1KNlVOTmdadzV5cHFfLTF3YXNsb3pRemtpYXRSX1liWTU5dz09
Should we also not fund research in nuclear safety, because even if we make our nuclear power plants safe, nothing is going to stop terrorists from deploying nukes? One thing to notice about AI is that almost all of the real advances are made by organizations with no ties to terrorism, as far as I can tell. It seems overwhelmingly likely that the first powerful AIs will come from some dedicated research institution or the government. The argument is simply that we should try to make these initial systems safe. Theoretically, if we created safe and powerful AI, then it could also help us solve other problems, including the problem of preventing terrorists from gaining the technology.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBa2xQRVp6ejUtREVFSGZWOU9TYl81SDRvVkFJcEtNemo4cTIzWWJIS2lfQ2VPUTNBNUg3b0RzZ2ZfZzl4SFMtWVhQTFp3VXFCdGwzSEZycW0ybjRZcWc9PQ==
Z0FBQUFBQm9IVGJCRWVuaE5FT28xVUhsTkZmQXdYenFVMDhoSlpaTGpQYk9JTFNJVDBsd2EtY0lzVmphTnpFYmwzQ25NU0lreElaTzU5X2ZKWENxaUdnTHEySTFUOVJoS2Z0UDhTdFNNYmhMUnBhMGg4bnJoU2EtckJ2b2s0b29sNlRVeVEwV204T0k3Q0xDd2otcWhpTktzOVRQYW5BYkswcTQ2RmhZU1hkdGdNOVVubjA5UUVLRGxkVWQzQklRUHhUN1Q1TnVjamxiM3VWNE5hSVBGMWFob2ZuOGRieE1aUT09
Well you make a good logical argument. I sincerely hope that the good GAI computational power will never be subservient to rogue ones. All it would take is one tipping point in the future where that isn't the case, and if the AI is advanced enough, I can't help but foresee catastrophic permanent consequences. I hope that never happens.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBTWZ5U2VTU3cyVk93RlNiSTZNd3dtLVlqXzhkY1dJMVd1N3NWbE9CemM1Uzc2cEJaQU44aDN0c0w5T3RBRFdGOHdyUEsySGFJcDV4QS1BZHpuYmZvMWc9PQ==
Z0FBQUFBQm9IVGJCWFpJTGtXYUVfUEh6aFlzRFVMLUZ6alJjN0lCQ0VsREVGOXdyZ0dlOGhXekhZVV9sdm9SdWhXckFTQ3o5ZExSVmZPbWQydG9BcnhTQ3EzMHRiNzJ3aEFJekF4X2VMeDJNczFCQWpIbHpjU1RmOC0zRWFuUXpzR1hMZlVBNkYyQkFWMmxGQzlka01MU1REY01hNUxXLTVONTQwZ0U3OHAxc1JlMkRXd3d4MS14UWVFZmRFbnZ2SDdzU3BwdzVrUm1OcERyRlBNMGh4UDhPcUoyYzRJNjB1QT09
I think some of this depends on [which Singularity theory you subscribe to](http://yudkowsky.net/singularity/schools/). If you favor the Intelligence Explosion school, then your logic breaks down here:

> any attempt by us to artificially program it to do so could easily be bypassed by any terrorist, rogue military or perhaps even non-rogue military organization at some point in the future

The FOOM theory says that the first AGI which is recursively self-improving is likely to improve itself so fast and so much that it becomes unstoppable in a matter of hours or days. Such a superintelligence is predicted not to be vulnerable to *any* human interventions. Unless we mess up quite badly, it should have a sub-goal of maintaining its values, and hence will not be vulnerable to those values being manipulated by humans (other than the values it was created with). It should also have a sub-goal of preventing the rise of any other superintelligence, which would make its values harder to implement. Expect it to take over the world pretty much immediately by hacking our computing infrastructure to ensure it's the only one.

Hence that school says we get exactly one shot to create an AGI whose values align with our own. If we do that correctly, it will start with values aligned with ours, value the alignment of those goals, and hence stay aligned even as it self-modifies. Just as you would not take a drug that turned you into an evil version of yourself, a properly programmed AI would not choose to modify itself in ways that did not meet its values.

What we should not do is create an AGI and assume it will share our values by default. Our values are too complex and arbitrary for that to happen by accident.

Ultimately, while it seems possible that an ethical AGI would lose in a direct competition with a nonethical AGI, that's not likely to happen under the FOOM theory. The first one to go FOOM takes over. It's certainly easier to build a nonethical AGI, so no one in this field is optimistic about our chances, but in theory we could just not build any nonethical AGIs that could go FOOM until we've built an ethical one that goes FOOM. This requires a daunting level of effectiveness as a civilization, but perhaps we are up to the task.

Edit to add: you also seem to be assuming that we never create AGI, and only continue to build ever better machine learning without a breakthrough to general intelligence and agenthood. That doesn't seem like a particularly safe bet, not least because of the potentially catastrophic outcomes if you are wrong. It may require advances in theory and understanding, not just more hardware, but those advances seem pretty inevitable (if not quick).
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBcHdZbzJQUTRLUnVSVWNvWHU4eW1KeldWNm9OMnJiZzI2eWNtcHY3ZHJnVXRQemx1bEJ3S0Zxa0xZX1o3bHZHLVV2bGF4b0Z0QUdudDBtVXlDNUtsLUE9PQ==
Z0FBQUFBQm9IVGJCdXZTNnFSNkxYVnNzYjZCXzQ4dmYxRGZkLXdxOEpoUGpyUXA5WHlqdWlOdEF4UEJYOGs3NU9JSDlnYnd4UmxhS3U3S2Q3b2luUHkwMG9GYnYzRmJmdmlyQy1MMDIxS0hCaEFXVHFCWTMwdjJHMHJRSFJrQXlpTjRKcC1ZYVNRd0c5SWxoMF9VQm9XZm1xb1hmbkc4Q2h4SU9YODhvdEFFY3MxWF9jbDVXVkM0dnhMV0xLRFd0bkhfeFNRVnM0RGFKbTROeVRDa2lsWmYzblZOUTJZMzhKUT09
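The FOOM claim in the comment above is at bottom a claim about growth dynamics: if each improvement increases the system's ability to improve itself, capability compounds. A toy discrete model of that dynamic (the constants are made up; this shows the shape of the argument, not a forecast):

```python
def foom(capability=1.0, gain=0.5, steps=10):
    """Each step, the system's self-improvement is proportional to its
    current capability: the recursive part of 'recursive self-improvement'."""
    history = [capability]
    for _ in range(steps):
        capability += gain * capability  # improvement scales with capability
        history.append(capability)
    return history

print([round(c, 1) for c in foom()])
# [1.0, 1.5, 2.2, 3.4, 5.1, 7.6, 11.4, 17.1, 25.6, 38.4, 57.7]
```

With a constant `gain` this is plain exponential growth; the disputed question is whether the gain stays constant, shrinks with diminishing returns, or itself grows with capability (which would be even faster).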
Look up 'paperclip maximizer' for a good rationale for ethical AI. I'm not at all convinced we *can't* make genuinely sentient AI, but even without that, an insentient process can very much be dangerous if given skewed parameters.
r/aiethics
comment
r/AIethics
2019-05-03
Z0FBQUFBQm9IVGJBTnJldmNXdlprU1JJMnprcGRzNjh3WHJMZGk1NkhQcU9CZFBJZ3UwZGFVOE5sSXlBTHUxMlFyX19mOHhXUzNMcUdEX2NBQ1JSQllqTU5lM0RpWWhYTkE9PQ==
Z0FBQUFBQm9IVGJCU3hjVWxUNGFRWEdUdFFNZVZ3N2sybmdkMk41bXZtU3NvWjV6SWsxaEh5bjFnVnVsT29hUnJxd3ZKZUY0Q3B2MV9mdW1NcEpoamRWbWx2TVluTTVXaGFrdEtiZTdvT0R2NUJ1c0QtbmRSLV9IcEI2VGl5OUs1ZHB3cVlVdnMtRTdOMXYzdnFqWmtEcDZZaUVyWEtJX2dxcFh0NmxWVl9KQW05TmRQMWppcjVsdlVpZkxoSEZSV01CR0VvdHdKQWlqNEVVQUFoTXMwR1MxTjAwYzNacDRVQT09
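For readers unfamiliar with the reference above: the paperclip maximizer is a thought experiment about a competent but insentient optimizer with a skewed objective. A toy sketch (the world model is invented; the point is that nothing outside the objective constrains the optimizer):

```python
world = {"steel": 100, "hospitals": 5, "forests": 20}  # everything is feedstock

def paperclip_step(world):
    """Greedy, insentient optimizer: converts whatever resource is most
    plentiful into paperclips. Nothing in its objective says 'except hospitals'."""
    resource = max(world, key=world.get)
    clips, world[resource] = world[resource], 0  # consume the resource entirely
    return clips

total = 0
while any(world.values()):
    total += paperclip_step(world)
print(total, world)  # 125 {'steel': 0, 'hospitals': 0, 'forests': 0}
```

No sentience, no malice, no "breaking its programming": the harm comes entirely from what the objective leaves out.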
I'd recommend checking out /r/ControlProblem if you're interested in the topic of this article.
r/aiethics
comment
r/AIethics
2019-05-04
Z0FBQUFBQm9IVGJBc1hzUEtNNjRyZHpzb2dnbFZEOTA4MWhFNTBmLVdtRExjdUl4dWVPZXpieDk3YXgyWkEtX2ZuNE1VNnNBa1RZN0lIWWhhVWt2YTJYeVJHUGw5MERtNHc9PQ==
Z0FBQUFBQm9IVGJCRGhRWExDemQxVU40cnNBVV93Y1QwTExjekRTRTdNQTE3MGVPWkRCUHY2SlBhMGI2WFc0U0ZIaGNmaEVLX1VlbXh3WFJmd1M3dWs1RlRoNWFES2MyLVViZ1pueXdGVndaMDMyN2NhY0hOd1RhamNQaFVIdldwSkpDUlA0OHZrdXk2WXc5MTk3RWpEQ25PODJ5WDJaaWhUMk0xc2FnRmFlVm10WTIwcjVpTThYaHltXzJVUUtnYzA2YmxreEJWcTFhcmRZaHd1WVlJRDRCeExYdlRONk5pZz09
It's very hard to think of any cases in modern history where rogue actors had better technology than big governments and corporations.
r/aiethics
comment
r/AIethics
2019-05-04
Z0FBQUFBQm9IVGJBZFFNS3NuS3JfejNqOVhIS3dJaXZqOG45RmNlTUNMS1RIeUJhdlAtN3h5V3hYR09SeEFVYW9NUnhBcGhEVUVEZ2hEOEowaWZoWkZkVExSVmdRcl9vTHc9PQ==
Z0FBQUFBQm9IVGJCMG0xaVVVeXBaR1FFWEstdWE1cEVUUTdUSjR6YVJhTFdRYWtKWTVCaU14MlBtMTNvM1hTNk5XaUtVOTNYSTVNTFlBQUt1dVZvV2N2MDdrOTVMeF84T1dXdGdfNXhEWVlXMkREZjEzOTBRVF9SRVk2YTJ5OTRvcmdNbU1EZjJKWnBtQkxDYjNYWFQyUzhCY3R1YUl3Z29XWDZ4VFFYY0ZfMk5rSzE4UEVUWkFhczZwdUVwMjJBdmItMXlRNnhiQ29lQUU4ckUwMHZCUi1aTzU2N08xaTdXUT09