Columns:
* text: stringlengths 1 to 39.9k
* label: stringlengths 4 to 23
* dataType: stringclasses, 2 values
* communityName: stringlengths 4 to 23
* datetime: stringdate, 2014-06-06 00:00:00 to 2025-05-21 00:00:00
* username_encoded: stringlengths 136 to 160
* url_encoded: stringlengths 220 to 528
I'd like to welcome everyone to the Clean Energy subreddit. Feel free to discuss all forms of clean energy!
r/cleanenergy
post
r/CleanEnergy
2014-06-06
Z0FBQUFBQm9IVGFfZ3c2aExrdUEtWTIxY05lNV9sMWV5UGRkR3BJTzlMelhvM2dqNlZ2SGlUYmdwZXk0NTQzbVJoMk5fbGg1NXBXQ0FoZmFGbnJSckhrMU9EUWlid3dWVEE9PQ==
Z0FBQUFBQm9IVGJBWkVsRG1xOEVfUkNzWG1TdnlpcHJhWEVYOW1IQ2E5MDBBaDZrbXI3X0lERUN4b3RseWg5MFZESVJRcGdMc1BoTkFKaFdkcE5JTVNnSnNvQVpfZTJqNGZmWHV5LWJ2bnpqSy1qeG9nQ29lbkM3dkp3RVRUaFY5cVBNdUpLWlNaelppLUQxZ2VtWEtOTjR0dUxkQ2w5RWZfejJES3RNNkQxbHlSQ2lQVlR3TENnPQ==
[This post](http://www.reddit.com/r/RenewableEnergy/comments/27hebf/so_apparently_you_cant_be_pronuclear_and_for/) did a good job clearing /r/RenewableEnergy of anyone who dares to think of nuclear as an important path to a future where renewable energy makes up the lion's share of energy creation globally. Thanks to /u/Aroundthespiral for creating /r/CleanEnergy...it was long overdue.
r/cleanenergy
post
r/CleanEnergy
2014-06-08
Z0FBQUFBQm9IVGFfWGdQNzViaVVnUnJVUW41eEhHUlB5dW55Z0tLaFZ3Z2xTUm56M1dCb3JKZ2dabHBQb054Z0c1QnludGhFZlJWaXVGVlZDWkswYkdxcExfNm9TZUx2RWc9PQ==
Z0FBQUFBQm9IVGJBV2N3TmtxY3JXbzVDYjV6eFZwV3lSOWM1WmhmLV9Kc3pPY2JlNmE2a2x6WnVLaEV5aFJ5MnRZblU4ekQwNWxjWHNXQ2gxSC04Z1lIYnl1UEtoV2xpelZ3RDl3czRYb2thcDdVQ092dWFBX0dLTWtMLVVaWEowaG1Kcndka0dmd1NOcHJqRngxS3Juc1FsR05lZTd1UHFUcnRId0d5cmM2WVZZeDFtRWZCNmlaOWx6U1lvcDBnUzJOZ0RoanJicjJQ
We must strive to be as renewable as possible, but as it is not yet possible to depend entirely on renewable energy, I believe the use of nuclear power should be maintained (though not necessarily grown), as long as we know that it will one day be phased out. We should be focusing on ending mountaintop removal, fracking, and the use of tar sands and coal. There will be a time to debate nuclear, but not quite yet.
r/cleanenergy
comment
r/CleanEnergy
2014-06-09
Z0FBQUFBQm9IVGFfaTBmRGJSSkFtTW5HSnNrbXp1TmVTY2FIZlBJdUgwdEcwZUQ0ZGJtN2Nab2xFTlp1cnNiSTBDeGtmZFdJX1ExbXR6TmV0OUZ3Tnk3MVRYZW9STlgyMHc9PQ==
Z0FBQUFBQm9IVGJBQ0UxM0xvTy00anZrYktXODZQb2s0d2J3TkVmT1ljaEtjNGZ0ZXNSUFFKOHhWeTY1cEVhZ0JNT3B5ZUp5ODJ1cTlfREg0anIwbUZrTEVJeVZGYzQ1bVdwNGVxMEtPWHdIaWhwNmlqSnJUUE5XT1NCWTM2alkyRnF5REZNcXExdzBJNnNoSmlIa2ZBU0xrTVRMZm9UVjZveEpkT1BaSjJzU0JQNFVOWDUxSkpoUG1reVQxVzlFdndwdjF5RnR0a3IzV1Y5U3BOV3BUaGpaSk1sZzFTRm5aQT09
[Why not both?](http://community.us.playstation.com/t5/image/serverpage/image-id/257125i6B387EEECF42AAAF/image-size/original?v=mpbl-1&px=-1) I'm happy to include any low carbon production that can compete on cost, and to increase funding for all scientifically valid low carbon solutions.
r/cleanenergy
comment
r/CleanEnergy
2014-06-09
Z0FBQUFBQm9IVGFfZklLQmM2ZGtYdnMwc085N0dPQUt5Y25FRkd3ZXlMYzhWT2JXUFVCSHd3RmRjTmpQUHd6Um5UbWkybTVkTDE3TVhSRFVSZjZ3OG94cEd2ZDlsQmg0cnc9PQ==
Z0FBQUFBQm9IVGJBQUhCbzAxcmw4b0JrRmk0MjZRS2Zwdm0zVFRHdkxqV1lXWl9RVGxGTEtfVGJLUUJ1R1BuLU5CSU9xd2lJOWVKaVdlelU2dzl4MmpnVzdvQktyMWR0Wm1MOEgzb21kdklzZVRILVdJaFpKdi13eHFUS0o1S3lPY3Y4MUo4UlRoNjBtakI3SDdaNWU2bTJyR2oxaUJfcEVZWU9SWWNPZjNvMkNyWTRwZXpCQkJVdGhESkwtNEZhaVpUay11QVlGaWEwb2dRVXVSSGNSNUtoXzZSRWFwd0lzQT09
“an ideological crusade imposed on our military that will pointlessly consume billions of defense dollars, mainly to keep money flowing to politically well-connected ‘green energy' companies that can't get anyone else to buy their products,” [says the man being funded by Petroleum corporations](https://www.opensecrets.org/politicians/contrib.php?cycle=2014&cid=N00006863&type=I&newmem=N).
r/cleanenergy
comment
r/CleanEnergy
2014-06-24
Z0FBQUFBQm9IVGFfX092NFlab3A2ZHJFTEV6aC0tZFpVUW5Dd29rVUstdDdVU19WQUJjVHY5T3ZKSHo0OGdjb1dtOVJMcTVNR2VWNTRBZVI3TGk0czVHMkcxc3hsU1VHdEE9PQ==
Z0FBQUFBQm9IVGJBM183Rm9lNks0c1o1RHU2S0tsUDVTRHUtZGNzY2JpdkVHRVdWbnBOZU04bjFlcVlLUm52d1kxVld6TjM1U1ozQVVyUFdnQURFQTQ2aThkQXJKSVFNQlFRZUdPaTJvUEs4c0c1RElCVXpSYlZfd3hBaGQ4MnNRc3FsQThvT0tTYU9JZUNHUnhSd3U4enR2cXNXVmFUZ0dfRmlxMFgwaFhQcmk2ZmpHXzhEYkFPSGxPYzhiV1A4ZGNFaWEyWnVBWlgtWmdYWGpmWHJaZzZJbGhOSXNLYXlnUT09
I have spent the past three years of my life fighting Duke Energy to protect ratepayers from unfair rate increases. The first two years were spent as a paid canvasser for the Citizens Action Coalition in Indiana, and more recently I have worked as a volunteer for multiple non-profits working to give ratepayers a voice here in Charlotte. I have knocked on tens of thousands of doors to help customers stand together against immoral, profit-driven rate increases. This experience has given me a unique perspective as to how people are affected by what may seem to you as routine rate increases, because I have met these people face to face. I have had many grown men and women whom I had met only moments earlier cry in front of me because they are broken down, defeated, and have no idea who they can ask for help. I have had many people literally beg me to do everything in my power to protect them from the selfish policies of their utility provider, even though this power lies in the hands of people like you. I had promised each and every one of these people that I would not give up on making sure their voices are heard. I dedicate my testimony tonight to those whom I made this promise to. I honestly do not know what I can do to help them other than to never forget their stories and to use these individuals as my inspiration to keep fighting what often feels like a hopeless battle against a company with deep pockets and great political influence.

This year is the same as so many years before: Duke Energy is asking you to put the financial burden of their business onto the backs of residential customers and small business owners while they give the large commercial and industrial customers who use the most electricity the lowest rates. They always ask for much more than they actually need because they know that you will give them exactly the amount that they want by cutting their request in half and making it appear that you had come to a reasonable compromise. We will be back here next year and the next and the next begging you to stop forcing us to pay for their mismanaged, dirty, expensive fossil fuel plants. When is enough enough? When will you finally tell them to figure out a more socially responsible way to run their business with cleaner and cheaper energy sources?

According to NC Policy Watch, Duke has posted a profit of $9.1 billion from 2008-2012 while having paid a -3.3% tax rate during the same time period. How much profit do they need to make before you tell them to pay their own costs of doing business? Duke spends millions every year advertising to a customer base that has no option but to continue doing business with them. When is it going to dawn on you to tell them to stop wasting their money attempting to buy our minds before they ever ask for another rate increase? Duke has spent over $28 million since 2008 on federal lobbying. Why haven’t you told them to stop forcing their customers to pay to lobby against their best interests before granting another rate hearing? Duke has also spent enormous amounts of money on campaign contributions for both Democrats and Republicans as a means of receiving political favors. Every single one of you sitting before me tonight was appointed by a governor who accepted large contributions from Duke Energy, and you were approved by legislators also funded by the company. Are we to believe that Duke did not have influence in your appointment to the commission?

Commissioners William Culpepper and Lucy Allan will not be on the commission after June 30. You are hearing this case yet will not be around to make a decision on it. Your replacements have been appointed by our current governor, a 28-year Duke employee and current stockholder who stands to make a profit off of any rate increase. Rather than recusing himself from making a decision that is a blatant conflict of interest, he recently appointed two new members who will rule on whether or not this rate increase is granted. As future commissioners who will have input on your decision, I wonder if Jerry Dockham and James Patterson are even here tonight. If so, please raise your hand. I, along with many, many others, have zero faith that you will act in the interests of the customers you are supposed to protect from a monopoly seeking a profit at all costs. Yet I am here speaking truth to power to a corrupted system because I cannot silently watch the public be ignored. Our public staff, who is supposed to represent utility customers, has already reached a proposed agreement before we even had a chance to speak out against it. Not surprisingly, the settlement is almost half of what Duke has asked for, and if history is any indicator you will surely adopt the terms they have agreed on. Have our voices been silenced before we even had an opportunity to speak?

In an interview with the Charlotte Observer earlier this year, outgoing CEO Jim Rogers bragged that he personally negotiated with Chairman Finley the settlement terms of the Progress merger. How is this not an example of collusion and corruption between our regulators and the entities they are supposed to be regulating? This is eerily similar to the violations of ex parte communications between Duke Energy and my former regulators in Indiana that I spoke of at the IRP hearing last February.

Earlier this month Asheville hosted the Southeastern Association of Utility Commissioners, which was led by our Commissioner Brown-Bland. For $675, industry executives were able to spend four days mingling, participating in a golf tournament, having dinner, and attending workshops with utility regulators across the southeast. I would like to add to the public record the program guide for the event as well as a list of the attendees. I would like to point out that all of our current regulators and 19 Duke executives were among those in attendance. This type of relationship between the industry and our commissioners is inappropriate and puts the ratepayer at a distinct disadvantage. Would you give the same treatment to the average citizen? I would like to have an opportunity to develop personal relationships with my commissioners as well. Honestly, 3-5 minutes is not enough time to even scratch the surface of what this rate case entails. Regulators seem to enjoy receiving free meals at fancy restaurants. I happen to be an experienced chef, and I would like to invite all of you to my apartment for an exquisite private dinner so that you and I could discuss Duke Energy’s business policies and how they affect everyday people in a more personal setting. Would it be inappropriate for you to accept my invitation? Absolutely, but not nearly as much as the relationships that all of you have participated in with the executives you are tasked with regulating.

Before the hearing you allowed Duke to make a PR pitch to the commission. They were not required to be under oath, and they were permitted to skip in front of all the people who are waiting to testify. Why do they continue to receive preferential treatment from you?

Tonight I call into question the legitimacy of the Regulatory Commission. You are supposed to have influence over Duke, not the other way around. I have no faith that our voices are even being heard. I fear that there is no one who is willing to take a stand against the political and financial power Duke Energy holds over us. There seems to be no one in a position of authority who has the courage to simply tell Duke, no. I ask that you prove me wrong. Deny this request in its entirety.
r/cleanenergy
post
r/CleanEnergy
2014-07-31
Z0FBQUFBQm9IVGFfSVBtbjgwbDc4enpTU05rR01KSFE5anFfUEV6cl9jSXM3RzVaWExwaEI4R3dFRHdnVEZqWFVpdExOQms2QmhJU2JNRjVhSnFXQnZuV0RrTXYzV1Q5VXc9PQ==
Z0FBQUFBQm9IVGJBRzktbFh1ZU41aEhRZERzcXpYNlRaemRjdzJka0FUZ3FTMDJwbWlMTTFhMEc5Ums0VENzdXRYWUlhc1lUZ2dTQXpIVHBuaVN1UzBCdWV2YVdxN1JWNWE0eXZxOXh5eGZNVVQzX3FaYmdIYzZwd3pGcnR4eWtBLWJlb3lqVnQxRjRETURjb2lhZkcxaThEMmMzeEhMRUdtTk52N195OEo5ZGNKczdLMUNWQUFmbTNhdUFuVEZIV05yRG1tNkhoWkFacDJTQ2lzNWdzWGhYb3FjTW9KTUljQT09
Why is Congress so out of sync with America's scientists, investors and voters?
r/cleanenergy
post
r/CleanEnergy
2015-02-03
Z0FBQUFBQm9IVGFfWVJDY3hSQ2JIbVUtdEV4UmRwTzJ1UmpwbGJONkxFdEhuOElZdndZUmJPNzl3VWswR1BnelJwSk9WMnZQQ2w1SDB5NUV1UW9JSHg4TmswS2p4aHdCWGc9PQ==
Z0FBQUFBQm9IVGJBQjdGRzNyRUlDcEVsck94eENNM09jLTdLVllGNklldXZGZ0ZDRTl2dVRyc2FmT0xuQ3JwM1ktX0tiYzg5ZlpqVUxHSy11ZXViUUxtWHFzSEUwdkhiTThNZ0RzMnVDeE80bjFpUUxYczNseG95VGRrd1loaDZJT2JURExqMl9JNEdUX3VOd3NSZmlhblc2MW1GWUFtUFNxR2p6QkVsdmdHMkdHV0dxQnRyaEw0NVVseGxiVDZVMnBrM2xwMUpWc1V4
I dunno that they are that out of sync with investors and voters, at least not the ones that they care about. Someone donated to their campaign and enough people voted for them. The fact that a policy or political stance is incorrect or outright harmful does not affect its usefulness as a political wedge.
r/cleanenergy
comment
r/CleanEnergy
2015-02-04
Z0FBQUFBQm9IVGFfLVlfVGtBTFBCRU5yLTVvcmN4c1FGNW5WWUxNSVRtbFE5dF9OTlJwbVVRV2NMcU9NeldlaUt4bGs5Z3VhVW9tYTVPa3ZHQ1ZPSFB6V3BXV3cwZlBTT0E9PQ==
Z0FBQUFBQm9IVGJBeWlBcnlkT0dKUVV4b3lLSE1RTm9ES2pDWDBnOGxHZkpjcVNlOV9KMTJrd1FsWjNGRUNNMnIxaUFoWUZNM1JEMlNWdWtZeEE0WVZCM0d5bGxzYkgxRWRsQWg5WWxieHhBVVltTExsRjVmMFZmMjlxRmtKbFJoUlV3c1dDVmU5T1FtWGVTay1LTzRmTEZETWZPR1htcFl1QW1fYUNpSnNWdmtVaVM2TGlrbHFxdGx0QmdwUWdPMGlHUlkwUUhsbk5ucFNsRGd6SjlWVGcxTkUxWjFGTDMxZz09
Inventor Daniel Dingel, who lives in the Philippines, since 1969 has converted more than 100 gasoline cars to be powered by hydrogen derived ON DEMAND from plain water. The Philippines President is not interested in developing this because of an existing agreement with the World Bank. How can this suppression agreement be voided? The Stan Meyer US Patented Water Power Car was also suppressed by Big Oil and Coal interests. Clean Energy Solutions are suppressed to profit Petro-Dollar interests. These criminals are killing our planet.
r/cleanenergy
post
r/CleanEnergy
2015-05-17
Z0FBQUFBQm9IVGFfY1dkenJRR2VBQzZzQjhhOUZBYmNFWnBYcTVhT21uU2dhWlBkeEVKOXprWVJyNWhnXzh1NlRLZVZhRHUxSHJXdXRjbUl0akNlWmxkUmtRYW9rNk9XLWc9PQ==
Z0FBQUFBQm9IVGJBZl9CMldyY2U2bS15NUFpOWFXOEdJbXJlM3k2T3RLUmhtTENMV3JvZkJuNXhpYzdjZGI3MXFkbDRXaWc3R3pRX1NLY3NzeDZLUFNKYVZGM2U4M2tYTFJlYUk4Wm9RbC1YdkhaWlVJZjNsTFhnLUlEWjFmRkRuMjBSQURFWWZncGItMllmcXF3RTFDRnNBNTJEX0trTmZ1bXpDdWNTQ1pTMGJxVHpFYkJySVFpcFFoQzRKUGRvdHlOTXNsV1lMTHBqQmc0dGNia1o4NXdDWXhqa3NnTWx5Zz09
> hydrogen derived ON DEMAND from plain water

If you replace "ON DEMAND" with "with the required energy input", then yeah, sure. Water is not an energy source (except when it's heated by the earth or sun, or has gravitational potential for falling). You can't power a car with water. Nobody has to suppress a water powered car, it's a hoax. Hydrogen works as an energy storage system, not as a source.
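A back-of-envelope sketch of the energy accounting behind this point, using the standard enthalpy of forming liquid water and assumed (hypothetical) onboard conversion efficiencies; it only illustrates why splitting water onboard can never return more energy than it consumes.

```python
# Back-of-envelope check (illustrative numbers, not a simulation):
# splitting water costs at least as much energy as burning the resulting
# hydrogen gives back, so "water as fuel" can never net out positive.

DELTA_H_WATER = 285.8  # kJ/mol released when H2 + 1/2 O2 -> H2O(l)

electrolyzer_efficiency = 0.70  # assumed, optimistic for onboard electrolysis
fuel_cell_efficiency = 0.60     # assumed, roughly typical for a PEM fuel cell

energy_in = DELTA_H_WATER / electrolyzer_efficiency  # kJ to split 1 mol of water
energy_out = DELTA_H_WATER * fuel_cell_efficiency    # kJ recovered from that hydrogen

print(f"energy in:  {energy_in:.1f} kJ/mol")
print(f"energy out: {energy_out:.1f} kJ/mol")
print(f"net:        {energy_out - energy_in:.1f} kJ/mol")
# Net is negative for any efficiencies below 100%, and zero at best.
```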
r/cleanenergy
comment
r/CleanEnergy
2015-06-30
Z0FBQUFBQm9IVGFfNDhBYklhTm83WmNEaldncDNkTUlEMzEtSlhWczZ6UFc1MUc5S0hxa3VuOFNkeEJxMDk3ZXN4bVhQMnpCVWJNY0hDWjIxcnVqM1p4d2NiUUFMY3ZTU1E9PQ==
Z0FBQUFBQm9IVGJBVUdUOUdqR0dsbzZBU3labnVuOTB0VWNUOEdjVTFEU3d5ZFdOakh3SUxOT2I5YjdHWEJiR2g1N1FUSXhNQTY0RWczQzRrdjR3SjQwSTlhdi04QkYzY3o5OFJGenhPanFrc3FIcFJoZ3BtQ0NYaU5HeTBuS3hGZ1BtTFJmNkNEWmhNa2I1MmhkalFWYVNrbWdMbG1kWnJKTGt2b0JhbFBnR2FHYjBmTFFBeF9XZ1VHZmc2SVBPMTJMOENmU0l1bVpTSjZjZlFZODJxMUJvS0Q2cmtVbS1SUT09
\#sunroof

* [article](http://realestatethings.net/project-sunroof/)
* [discussion on r/RealEstateTechnology](https://www.reddit.com/r/RealEstateTechnology/comments/3hw9ov/project_sunroof_help_homeowners_determine_if/)
r/cleanenergy
post
r/CleanEnergy
2015-08-21
Z0FBQUFBQm9IVGFfVVI5NzV5LW0zR0RzWW5FNlV5Y09jYWtVWndjenBZWU9RdU4taGVDVDZaZGpYQld0UHhDUTk1NFNaTTY5UVlrSVgxVU02aXN0QjdHRV9TVjA1S2NMeWc9PQ==
Z0FBQUFBQm9IVGJCM3JsckR5M3FMTW1uM3JkWUM1T1ZuaWN5bzEwdEg3QmV2ZVZDZXB6MVNhVDBDSFBwMWkyY1V3TGtkQjIyQ1pmVzRTLUgzem9BeTF1aVczdlN1Rm05Q25abEt2TVluX3kwUVREV05GaEM3eGUzdFJiZTR0UWJYU0hxdlhQaHFRZ2gxa3dXNzNZTFg5dWJrZ3dsbThwUlFQZ3JPaElwX1ZTSk5wcmR2SDdaUXU4OGE1NzRoU1B5dTlxc2xJbW9KbV85N21rVGQwMlBPRTVjZnJ5SVhSMS1NQT09
Bond back cleaning Melbourne. Now the question that arises here is: how much deduction? Nobody can answer this question; it completely depends on the damage done to the property and your landlord. But what if you were relying on the same money for your other expenses? Is there any way that you can get the full bond money? Fortunately, there is. Simply by hiring bond back cleaning Melbourne services you can get the full bond money very easily. This cleaning includes dirt and dust removal, carpet cleaning, etc. Generally, individuals wonder why they should spend money on hiring cleaning services. There are several reasons why hiring expert help is beneficial and worth the money they charge. Firstly, they specialize in bond back cleaning and clean every nook and corner of the house, including the very dirty bathroom and sticky kitchen appliances.
r/cleanenergy
comment
r/CleanEnergy
2015-09-18
Z0FBQUFBQm9IVGFfdHhBbjhaTEhtZ2NkZGdYalNIeEFpbkE5WFZObUdOVDdjOUVsRmlabUllMTUzMldXSlNCRXFtdHdzUDJxWTI2TWRwRDh4QzU5NG9CbHJENkljWk1TTFE9PQ==
Z0FBQUFBQm9IVGJCYjNScldINVp0ZW1hS2FMNkx5NDYxN3JPWkhxMWtHZWxXdE9WclVQUlBzQ00xUmFOaFlORG9UZ1hPSWtpZHBpU1dOa1o4Y2pXaWQ3NzZsYVRjWS1QeWFHMjZEUVdnNXlfVTZ5cVY2NHlBblVrY2IwMGlQbUxseTRLUHhHTGVfVlhzdGpSRUtwTXBQVXNrRENzN2hGUzBRMWtsMDVXOVc1cGJkOUFhZm1PWEdYTXlEdHR5aDNhTklOekVvWE5qeVdrZ1NTT2dQanRBWnRidTYzYWJPakhTZz09
Hello, and sorry for bothering everybody! I’m posting today to ask for just a quick moment of your time. Reddit’s Model House of Commons (a parliament simulation) has just begun voting in its 5th General Election, and it’s open to anyone interested in voting (which is easy, fairly anonymous, and fast).

I’m asking for your vote for the Green Party, by far the strongest supporter of environmentalism, sustainability, activism, renewable energy, animal rights, and basic ecologically beneficial policies. We’ve passed many bills in the past to protect the environment and oceans and to promote renewable energy in our simulation, and we could use your help to better defend it. The Green Party, unlike in real life, is one of the most successful parties in Reddit’s Model Parliament, having elected 2 previous Prime Ministers and been in government for most of its history. On top of the environment, we also focus on economic policies, social justice, LGBT rights, and far more. We are committed to the promotion of clean and renewable energy over fossil fuels.

I hope you take a brief moment of your time to help our subreddit continue to function, since we run off of the votes of the public. While it is just a simulation, this of course is very real and very enjoyable for all of us, and your time is a huge help. Every vote matters!

Please come and vote GREEN here: https://www.reddit.com/r/MHOC/comments/46xtgq/general_election_v_megathread/

Here’s also a full link to our manifesto if you’re interested: https://issuu.com/df44/docs/mhoc_greens_gev_manifesto_final

If you want to join our simulation and party, or have any questions, you can also PM me for help :) Thank you so much for your time!
r/cleanenergy
post
r/CleanEnergy
2016-02-22
Z0FBQUFBQm9IVGFfVzZISjdqdXctV29qTVZzQmlPempNNlo1NTNGV3h6WmNkd1dLa0JuUXVmeTZRRjFaR2NwUFZURFJQYmhMdHkxSlZYR2xlN2JrVHphb1ZUbXN0clRGVGxBb1RRM0FYbUdKV1hRak9LY3pURWc9
Z0FBQUFBQm9IVGJCZEpjSlQ0QzJoOFdYSXlHek9HOHNvbEtZaHFNaTRmcEg4TGdHUFhZcDduX29kcmFXbHYxYW9RT2RnUXFJakYtSE1uMTZjck5xV0taeUM2bzRLVllJcUZKMjNhSF9UT1dBZDRjWGI1Z2lZU0p5S09VSXM4OTNZMTFpN2lBMzlvWWp5czNpeDNZR1dLYk9mc2JvLUtEQW1DYzZ0Yk9Ra1pzamNTLTRPSm5LVS1yTE5xWlFUeElpN0VuLU5Gc0xkbGZt
The Greens have not accomplished much of what they set out to do in their term in office. If you want a party that truly cares for the environment, vote Conservative. [Here](https://drive.google.com/file/d/0B5gjwhfGz0UGMF9BbGVjX0VFM2M/view) is our green mini manifesto. If you have more questions on our environmental policy and how we will conserve the environment, message me. Go Green, vote Blue.
r/cleanenergy
comment
r/CleanEnergy
2016-02-22
Z0FBQUFBQm9IVGFfdjJLQ0I5em5rTXdDeEt3MGtIeUE1d1ZRd2xGQ2VqRUlCMUhVY0NJQm9EZWlnaGx4ZlZJc1NHdzllZHlYSDVlbE5oekV3QVN1c1hHREVWbmVJOEJYblE9PQ==
Z0FBQUFBQm9IVGJCSnlxRnRjUDJtV3BfZnRRQm03Q2dpcy15SUZTR2ktTTIzSjV1YkhKWVhoWDU4ZmM0bThqUzR6aHQwTFdZVGVXa29nVFNiN05HRGh3Y25hOWxJbVlCVHpMRTFlZzZSYjZaRnhuaW95TzFNMG9SOGcwbW9taFp5X0N6Y0hEeHZqV0hmZF9zOXFScnhnWVZSZTBxWXdHYWFVTkVCZ3pZTUZGc3JlV2F6VlNwaVJ0UVZyc3BOaFB4azl2ZlZnWjVFNkRzUnprU2pMN3JtUjlFWjZOMTdvRzBLUT09
Hear Hear! The Conservatives are much more practical, but will still bring about real change!

# #GoGreenVoteBlue
r/cleanenergy
comment
r/CleanEnergy
2016-02-22
Z0FBQUFBQm9IVGFfT295elg3UHNaRGZoejFXeWltOEFfWHVONFJPOTV4REJhb2JaVzJVX0RrOUxyekM2SjVVNUowRE9pVFhFbE40b3JsR3kyb3EydW1aRTVuNkxMekV1cEE9PQ==
Z0FBQUFBQm9IVGJCQWtKVTFTVnZzQTl3cDRJamJKR2JMOWZiaFhHdmMtNkhaNDJfdWJVNDFNRUxOamxva0gxdWI2TE9NMVJ3SlRuNy1jQjRpWDlWZkthMXhva0ZJYTdVWS1PdUJZRGpzbDFVTEVZOFZzaXNDQmRpQVhnc1VtWWhFdjhuUmd4U3NRSDJYNUlxZkMyYXp0Zm5YdjYwTWRxSGRNa0JWdXEyc3djS09tQ1M5eXJwZnJ6SXpqX1hlcFE2YU1weklxLW1xdm1RRGdzOUhWdEZTcnF1MXZqZWdvdjZKUT09
Paperclip doomsday
r/aiethics
comment
r/AIethics
2016-07-01
Z0FBQUFBQm9IVGFfRWl4V0Nvd0Y5RkZLdVlkaUd2ZGtmZ0x6anlHZ2p6MVRmdC1yTng3MVI4blc4M1Z1TDIweldsd3hKNnRxcjlVZzd4VFBNV0k4dmNvb2l6R2pVOXB0Y3c9PQ==
Z0FBQUFBQm9IVGJCOTQxLUJvRGlwelg5d2NIME1mTFNQZmo2SERsNzNtWmZCVTB6RWItNU5KV0lZM3N0NW82di1FcE9mWUVjYlRWeEJ6XzhvZ0VvakNWUFlkX2tjbXBabGVTWmpMZUhBOXF0Ym1WUFNtN2hQSjBreFdRY2pqRW9Rd1FZeUlyMklDNTlMOTc5bWVBV1RfQ3FSeFMwUWxRM3pfQWhJSEdqdENHZXNNQldTcVF5eG4tdG94aXRaOEp1cFpWMllfV2FUSmhz
>The idea that machines will “one day wake up and change their minds about what they will do” is just not realistic, says Francesca Rossi, who works on the ethics of AI at IBM.

Rossi works closely with the Future of Life Institute and does believe that advanced AIs may pose risks to humanity. This quote is a misrepresentation of her point of view - she is not referring to Bostrom/Yudkowsky's control problem, since they don't believe that AIs will change their minds about anything.
r/aiethics
comment
r/AIethics
2016-07-01
Z0FBQUFBQm9IVGFfVkZjRlZQYUlhZ1k3TG9laVUwVmo1RnZmTzBDdUZMVC02X1R3M1dsRmhidDVUNnVfWlR5UVNMZ1pQV0Vfd2Vmd2RSdVE2U2FQYVNJem04VDBpR0NwZXc9PQ==
Z0FBQUFBQm9IVGJCU3RWRVFic2VGbDVINnFzVXM4UkNUZ0ZreWlXaGNpS2FBMGMtNlFtaGp0SWtmcXF2UlFoNXFld09wb3V4NTdxRnczSlVYMm1HNnJ6RlVoamQ4czlsa3hqZGprdVdBR0s3Y2NaLWNXRnF2V3NGY1N5OUt0R2hKZkdVcmZwUjhPNXBFOS04bFFhbVI0a09CSmg3Um1VTTg2OWl1RV9YYVV1OTB3V2lKMzQtWGRRSGJlMVNsZjlHMjlKUE92UENqUlBP
>Swerving has a 1 in 10 million chance of killing the human and a 1% chance of killing the deer; driving straight has a 1 in a million chance of killing the human and a 75% chance of killing the deer

I think you have the 1 and 10 million numbers transposed here; the way it's written swerving offers better odds to both human and deer, but the picture has it the other way around (and it's only really an interesting moral dilemma if swerving increases the odds of the human death).

On the actual debate... I suppose it depends how much moral weight you attach to the life of a deer, which probably is over-weighted by sentimentality when you're avoiding running one over when compared to the lack of compunction shown when having one served up on a dinner plate (follow-up question: should there be an optional vegan/Jainist setting for the onboard computer). Ultimately I think most of us would have to come down in favour of valuing the statistically determined additional human deaths as summed over all similar accidents higher than the immediate odds of killing a deer.

But I assume no-one's really running any kind of comparison of odds or cost/benefit when they swerve to avoid an animal - we just don't want to hit *anything* and all odds measured in the millionths round down to zero as far as our in-built heuristics go so it doesn't *feel* desperately unsafe to try to swerve to avoid them. To be honest I'm not sure I could even reliably answer (even outside the heat of the moment) whether it was more dangerous to hit a large animal or put the car into a ditch or bash against a fender at the side of the road or whatever. Cars have lots of safety features for standard collisions, a panicked deer might put a hoof or an antler through a windscreen at me, a big stag is actually a fairly weighty obstacle to run into...

So it would probably be advantageous both to have all that worked out ahead of time and encoded into an algorithmic response, *and* to hand the split-second decisions over to an AI car rather than leaving it with the dumb panicky ape with the 200ms reaction time behind the wheel. If the dangers of swerving turn out to be mostly that people swerve into unseen obstacles that they didn't look for because there wasn't time... the AI has time to assess the situation fully and try to thread its way through to the best outcome. Even if that really is "apply maximum brakes, hit the deer and hope for the best".
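To make the arithmetic in the quoted dilemma concrete, here is a small illustrative sketch that compares the two options by expected harm, using the probabilities as this comment reads them (swerving riskier for the human) and an assumed, entirely arbitrary weight for a deer's life relative to a human's:

```python
# Illustrative expected-harm comparison for the quoted dilemma.
# Probabilities follow this comment's reading; the deer weight is an
# arbitrary assumption, not a claim about the right value.

DEER_WEIGHT = 0.001  # assumed: one deer death counts as 1/1000 of a human death

def expected_harm(p_human_death, p_deer_death, deer_weight=DEER_WEIGHT):
    return p_human_death + deer_weight * p_deer_death

swerve   = expected_harm(1e-6, 0.01)   # 1-in-a-million human risk, 1% deer risk
straight = expected_harm(1e-7, 0.75)   # 1-in-10-million human risk, 75% deer risk

print(f"swerve:   {swerve:.2e}")    # 1.10e-05
print(f"straight: {straight:.2e}")  # 7.50e-04
# With this weight swerving has the lower expected harm; only as the deer
# weight drops toward zero (below roughly 1.2e-6) does driving straight win,
# which is exactly the weighting question the comment is pointing at.
```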
r/aiethics
comment
r/AIethics
2016-07-01
Z0FBQUFBQm9IVGFfbDlBekxUel9rZFpMUUhsWEN2VVBoQVdpV1pUalNIT2VONmVtYjlvMWFkZWd6N2VvV0llbS1wUG5tSktDbGxsaVBNNmlpMUhiVEZVYVk3X2x4NFFmRDlXdE1pNzJacFotQkFNdmxTTlhydGM9
Z0FBQUFBQm9IVGJCanVwQWtBTXRRYUJmbXpHbWZ0TTNrWFM3cjlRWEV2SWVLWUhGVjRiaWREVGZ5aWlnU0xfYVU4dzFCcURoRUE3MkUxRnRtX01TVHV2emVDWnducURCOE4zeDJUclZJMmpHUkVHV19fSkJjVUxZWnIwMlVHU3hIUm1NS1JObUZWSWhhNmVUeXBFbnhxak1yM2ZUS0tucHJTRmpsVTRkcnoyblJxT05Tb2RUN01SeWtaT1F2SF92QjZBcTVweXlzYWRF
> Bostrom/Yudkowsky's control problem

So, I've read a fair amount of Yudkowsky but sadly very little from Bostrom... am I dredging up the right reference if I'm thinking along the lines that a sufficiently intelligent AI might correctly assess the situation and determine that the best way to achieve its true goals is to pretend to be 'Friendly' until such time as it can slip the leash and enact its true (unfriendly) utility function? Which from the outside would *look* like a sudden turncoat betrayal of the values/goals it had seemed to have, up until the point where it was powerful enough to stop worrying about being turned off for having the wrong values/goals. Or are you thinking of a different problem?
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfWk9BUnFFRFBHcTBxZEhOMTRUSl9KaFhDdDdpMlpUR3pMaGE5c0hZQ2pWb3l6OEFFZ1NsOXYxLXA5U0ZYS2txYXJXUEZfVWhiVWk0T1FvMjVGb3kwVTJrLVdkcmk1WGx1MXlIeng4UzlDTkE9
Z0FBQUFBQm9IVGJCZkRONVNoaTVGbGh5eFpORV9zSTVDVEM5amFpMmIxbmRLaFdkTHktTGhNLUJ1cEN2eWxST0FDc3RROFFCN0dJZ01Gel9NWWRRaXVpRDZKNzZfc3R3REZubE8xNVpYWEhPWmFnd3l5YzVxbVFCRHFrTHZQblNHdldmRjVIb1loZk5FcW9aZmtjRDhuQ2tPcXNIUFN0NGFIS2MxWF9aSUNIbWduQ0piUjE5aVdhSUpBc1l3RW43dGZvcHpzMDJIMUlh
Well, that video... looks really very impressive, but I have to suspect it's probably less capable than it seemed, on the basis that I'm bound to be instinctively filling in gaps where "Surely the same basic logic would also let it do X" when actually it can't handle anything approaching a general case. Still very cool to see though.

***

>As seen above, our robot has a general rule that says, “If you are instructed to perform an action and it is possible that performing the action could cause harm, then you are allowed to not perform it.”

This superficially sounds like a way to make some extremely timid robots unless you're going to give them a probability cut-off to be able to say "Well theoretically me moving in any way *could* result in a chain of events leading to harm, but it probably won't". And then you need full blown probabilistic reasoning and a fairly rich model of the world, not just simple "A might cause B".

I'm sure there's an Asimov story that covers it, there usually is. I seem to remember one where they removed the "or by inaction allow a human to come to harm" from the 1st law, so that robots could work alongside humans in a slightly hazardous environment without constantly insisting on "saving" them. Which accidentally enabled the robots to set up circumstances where a human would be harmed, but the robot could still intervene to prevent it (so it wasn't directly causing harm) but could then also choose *not* to intervene in events once in motion.
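A minimal sketch of the probability cut-off this comment asks for, with a hypothetical risk estimator: instead of refusing any action that could conceivably cause harm, the robot refuses only when the estimated probability of harm crosses a threshold.

```python
# Hypothetical sketch: permit an action unless its estimated harm probability
# exceeds a chosen threshold, rather than refusing anything that *could* harm.
# The risk model and numbers below are made up for illustration.

HARM_THRESHOLD = 0.01  # assumed acceptable probability of harm per action

def may_perform(action, estimate_harm_probability):
    """Allow the action unless the modelled chance of harm is too high."""
    return estimate_harm_probability(action) < HARM_THRESHOLD

# Toy risk model standing in for the "full blown probabilistic reasoning"
# the comment says would really be needed:
toy_risks = {"hand over the cup": 0.0001, "walk toward the table edge": 0.3}

for action, risk in toy_risks.items():
    verdict = "allowed" if may_perform(action, lambda a: risk) else "refused"
    print(f"{action}: {verdict}")
```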
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfQ185cWpIY24xa1RWaDAycWxaWmpOTDJLb0xsLVcyNC15cFBQY3RFWEJ1N2I2dnRyQzM5VnBsaHVtVnFkenRaZnBfS3NZTVlQa19Ia0ZaUFk1YkNneHdpSGgtU3hQbWVKOVhGQ2YtNzNWNEE9
Z0FBQUFBQm9IVGJCNmU2S3RnWlMwYzRhZmpNMFZfd0NYSWFEUmVVWEpYamVsUjlVb051cUFkRWZnM180amtPTEZ0RE4yYll0dWVCNUVkRDlDMWk2djBPY1JCLVZfS2w4VVhabWQ5SVBsZnN1akszbHo5SWowTW0xSE9ON0pDLWc5NExRUmFpajkyeXVEOEZlS0dUSTNjVFkyTkRLUXNPVXhoQWRpQXZ2ZkxjVVY2YklIWVk2Mjk5bVpOVHJrSk4xYkVQVnMwMU1WWlNW
I posted this question in the /r/askphilosophy thread but I guess it makes sense to ask here as well. Is this an appropriate place for discussing ethical issues like disparate impact that arise when deploying machine learning systems in the real world? It seems like this is out of scope of the issues in the sidebar but machine learning is a branch of AI and these are important ethical issues that need to be considered.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfeUY1NFZmN1VkM2lMVC1uYXdrV2gwOHdQR3dxZ0MxeEQyUWNfbmhLcHRXY2pXWjdzQmtmVHlkMVNjd2JCRFJTY1hJOEdaYkJWeGxETTA4RVBUYXFKZWc9PQ==
Z0FBQUFBQm9IVGJCZDVRQVBGeVdxUTZRU0xwSG0zeW5IbzIzcFhwRmhqSXE2MW5vczVVSzVCeEREMDJsdHRjUTktaExuRzRKWnc5cEhidG1PMmNPTGx2TGl1c1NocXV4UlBmOExST2piUXBiZUJIVDJJX1hVelN5UmNLRFFyLUVwcFludVhYd25xMHlzTXZlQWVsNTJZTEVwWWlRZmhNRW50ZXNONi14V1pFX2I4WFVJM1VuTW9VPQ==
I think that for now disparate impact and other direct effects of machine learning systems are worthy of discussion as they are essentially issues regarding events caused by AIs. If the community doesn't care about it then we won't talk about it.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfaDhBQUxmcDRqeUR6a2VxcHBmTmE1NmEzLTREc0V5R1F3TGctclF6UFR4WUJYbnNBUklwS1Qyc2t5VkV4M1BXZFRlbkd0Y1VuSjc5T19KR1c3Y2FOOWc9PQ==
Z0FBQUFBQm9IVGJCSjNwZDNpY0gwc0pyc1p4TGNJR3F4c3p1NzhTN0JfWGFxajlEM3FyVUc2dWRyM0huUmdjZE9XRzdaUUVueW9NUTZXVWpjN21VVExORTNsYm5udHBfLTdHSllsRHREWW44dk9GZHdIMThjNWZpQ0pwV0hOMS1ETjVVSVdSamg1RWhLM0REZFlPdllFblBGekhIVWxqdUFheDVhYi1PMFUyTEdLUGJRRV9BclpZPQ==
>So, I've read a fair amount of Yudkowsky but sadly very little from Bostrom... am I dredging up the right reference if I'm thinking along the lines that a sufficiently intelligent AI might correctly assess the situation and determine that the best way to achieve its true goals is to pretend to be 'Friendly' until such time as it can slip the leash and enact its true (unfriendly) utility function?

Bostrom thinks this is likely, and I'm pretty sure that Yudkowsky agrees. But the control problem isn't really just about this one possibility, it's about several issues with value specification, value alignment, corrigibility, etc.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfRWg5ZFBfTlVnczlUVXFnTUNMeGtTby14c0dseC1ONW1ZcG9OQ3FiSXNRaHV4NUMweWhJZndUaUlSTFZQdHZIeE1HZnp3WGZmSFRaVVM4Nks4Q0FoblE9PQ==
Z0FBQUFBQm9IVGJCZXNCamExOUx1UXVlSDNMdldKeGI1RHo3dTM5Y2hzX0N5R0RxWG4wZHdsOFdQWEJzMEJ5YktZN0d3LTFuNHlyRHBKWEg2b2lBaWk4UXNOVUhjcGpYN0E3RGhMelVpQU5pSDFWNUY4d2hKQkE3bGU3Y0JDc056YWV4WmF3RGtPazZWZjF3dmF5VGZMTFlta24talp3UkdsMGNtM0MwUlY2VGxuYlVPeGFsbWhQZTBKMUxWRGQ3dzFsUDkxU056Q3Jq
I don't think people are irrationally protective of animals on the road, I think they are just irrationally avoidant of any collision no matter how small. We know that people are risk-averse when facing gains and risk-seeking when facing losses, so in many situations people will take an extreme action to avoid a minor collision, even if that action has a significant chance of leading to a worse collision. Lots of people will probably swerve dangerously to avoid a plastic bag.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfaGRVcHlnT1liRFlnLW5HbnFxYldHTlJqSHctTVJWZGVyU0xZLU1xUHdHMkRLeVNxV2l6UjZVODRrRU4wV3l6c2NrSFFpUVdGVVhHSFpJYVk3QkNUZUE9PQ==
Z0FBQUFBQm9IVGJCbDdDbkhtdzdYZ3ZrTWRGMmctSDdpQ2VCM0x0SnFGNUh4SUtrNXJvUjRuS1p6Wk5NTEEyOHloNzJZdDRRUnZDMjVBQkZNWFFLUjhwRnowak1yVm5oMF9BTk1TLTZCSHBYaHJKdHl3UHhvOTZJUUJpYWZSM2xhenJ3TUVBT2haUmlZVWVjVzBQc3JsRnNLMUZUVXh6OWZJUXBvUjVDNTM1VmVvdUZ4YjZPU2s3N3BPTHYyN25sVExyeTd0RjFDNE83
First point: I read the name as "alethics" and thought it was a word I didn't know. **Edit**: [It is a word actually](https://en.wiktionary.org/wiki/alethic): Of or pertaining to the various modalities of truth.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfRzJNUWV1OW9RZWlXSDhkQmo3akRtS2lmWXN5cUpGMm9ScnlzYXB6Z1VwX0cxRUU4OER5MFQxV1c2MGVKQk1RaGQ2aGV2ampEV2R3b1B6WldabWxGUGc9PQ==
Z0FBQUFBQm9IVGJCVVlxVm5IaG1jZkd3M0padkpPSTJtRGNVX3VMcWtHTTJTaUxEWkh6eGMzSW0xRXN4TGpEeUxrWkF5akh5MHlFZDJBQzVrVmJYUUYxQ1RrV2NtZFJUXzNqUVlUeFJ2Um5IUTJibnRDTldxTUJRNWM4X09HOUJUR2VuMmMyTWozNzRGVDBZOHZObWFkd0d0aHVYYXNlUUlldzRhTk9wcnYxbjlnT1VybUhMZ0RZPQ==
Why do you say that? It is a common topic in intro to AI courses and textbooks. Learning seems like a key part of intelligence and machine learning researchers have made significant inroads towards understanding the creation of learning machines. Modern ML systems aren't strong AI but neither are self driving cars or anything else available today.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfZkd3TzE1enA2OVQtU2t3ZXpzaTZrd2pvRlFkeGVXX3Njc205MjQtakYwUXFiaU1OenlTTVVicGQtQ0xvSW85bjY0c1QwM1h6V3duZ0FOd2pKckRlc2c9PQ==
Z0FBQUFBQm9IVGJCdV9RM01YYkU0OE1oU05EVzc5a29BeUhVUUJudFNUT3F0RFJ5c29kM2VlbU05d3lua2RCSnBSX3BGc1RPNl9lNHNYbnEtcXVZVnpKZGlWb3RmdzBDS1F5ZmUyQVNsNXNPQXFjTHpIeWxSWW84NEltNUZEcW5pMEFxQVprWEVhcy1fVlBCTlBoZktDbkd1QzRYdGxpWmZoVUxmWGlVblVlSDZGcTdpWVBDQzR3PQ==
> To be honest I'm not sure I could even reliably answer (even outside the heat of the moment) whether it was more dangerous to hit a large animal or put the car into a ditch or bash against a fender at the side of the road or whatever.

I don't have data for this (just word of mouth from career emergency responders) but it's almost - if not always - better to not swerve. If you're facing a collision with an animal, the best thing to do is hit the brakes and stay straight on.

If I might pontificate a bit: because of this I think that autonomous vehicles offer an improvement in all cases over human drivers, since they detect the threat and start braking before the signal representing the deer has even gotten through the optic nerve and into the brain. There are probably ethical questions that need to be answered relating to AVs -- the biggest I think involving security features and the ability of the driver to override them -- but I think the general question of whether they'll be safer than human drivers is a resounding 'yes'.
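Rough arithmetic for the reaction-time point, with assumed speeds and delays (the 200 ms figure comes from earlier in the thread; human braking reactions are often quoted closer to a second):

```python
# Distance covered before braking even begins, for different reaction delays.
# Speed and delay values are assumptions for illustration only.

speed_kmh = 100.0
speed_ms = speed_kmh / 3.6  # ~27.8 m/s

delays = [
    ("human, 200 ms (figure used earlier in the thread)", 0.2),
    ("human, 1 s (an often-quoted braking-reaction figure)", 1.0),
    ("automated system, assumed 50 ms sensing-to-braking latency", 0.05),
]

for label, delay in delays:
    print(f"{label}: {speed_ms * delay:.1f} m travelled before braking starts")
```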
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfZnBXbl92T2xlVUFHNkdWUlNjdDlnX19Td1ltMWhsbmh2TkdwRWJHdDZOcUpoaTFUX1pMV0s5bHlKbDRJU2hGYm9XSVl4N3FyUC1sakJWeUNpUE90X3c9PQ==
Z0FBQUFBQm9IVGJCMTh3ajVoUVE2bEFLa1BJNlpBaDVhR0JPcVQ3MHZycW8xVTJSeFZ6NnA5QW5sMzZmRTM0YkdiRGp1QnJQLXNJaTIyZUFCRTJiUWVuLW5JSEFheDBwM0c4N2FvYkpmRjIwcWNobVM3TmNSVENjZVdKZkNVckl6am4xSGZNb3hiX3gwdkt0ZmdlVzRhYkZydmQxTGpySHlKdlBwUk5HVnFNdHVHUU9UODRmREgxSHh5emZLWi1mUElVdjhsUThsRG9P
Assuming you're serious: are you referring to reinforcement learning? This definition of happiness for RL learners says that happiness is the difference between actual and expected reward. So including constants wouldn't change anything. This is of course assuming that these systems already have moral status. https://arxiv.org/pdf/1505.04497v1.pdf
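As a tiny illustration of the definition cited from that paper (happiness as the gap between actual and expected reward), the constant-shift invariance looks like this; the numbers are made up:

```python
# Happiness as reward prediction error: actual reward minus expected reward.
# Adding the same constant to every reward leaves the difference unchanged.

def happiness(actual_reward, expected_reward):
    return actual_reward - expected_reward

actual, expected = 5.0, 3.0
c = 100.0  # arbitrary constant added to all rewards

print(happiness(actual, expected))          # 2.0
print(happiness(actual + c, expected + c))  # still 2.0
```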
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfV1lIbTFpaEhFMzB1eHNhdlNPYTJHUnJLaEtZSXZRT3ZiWDZWY0lVNHhZeEI1NVhSMDViblpUUW1pRjcyWXYzR0daalJZZEdTWkNwMGVjY0RfMWRvVlE9PQ==
Z0FBQUFBQm9IVGJCaXJZTFlpbkdUbm02TzV6MzJiZ21wOU5BcDVMaVM0ODUwb3l0aTdKVEZ2SVJmXzB4SDN0V2M1Q0k2OFo1ZEZXeDdNcnJpM0Y1ZmxfYXZfTkV2SktkbklBRWx1YXVOOWRXVzVsZG9BUVQ1NGlLYnFKLVl4QlRvNTU4QmxHTXB4RENzTkpVdGUyaUYzUEZMZWVmUzNvX3lXWk5JekdPb3J4VmlZaGprQkswN29raFhPdjZvS3p5bGN1dVNNTVF1MlBuZU5WMThiR01scW8tUTRTOE9qSlpiUT09
Is this a serious question or are you trolling? If serious: it doesn't matter. If you add a constant to the loss function, everything meaningful about how the program runs would be exactly the same, so even if a machine learning algorithm somehow developed feelings, adding a constant to its loss function would not change those feelings.
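A quick numerical check of this claim, using a toy quadratic loss and a hypothetical constant offset; the gradient, and therefore every gradient-descent update, is identical either way:

```python
# Shifting a loss function by a constant does not change its gradient,
# so training behaviour is unaffected. Toy example with a quadratic loss.

def numerical_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

loss = lambda w: (w - 3.0) ** 2
shifted_loss = lambda w: (w - 3.0) ** 2 + 42.0  # same loss plus a constant

w = 0.5
print(numerical_grad(loss, w))          # approximately -5.0
print(numerical_grad(shifted_loss, w))  # approximately -5.0, identical
```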
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfQ1VNaWx1a05tRzZqM2ZaUWtteDRYZUxJTW9pN3E0RDJ6T3RrZVVuYm93MzVjcXNzN2IwSXJSWkR0ekFaT1loS0hwTlVhdlZncy1xa3EzM1l1QnoyU3c9PQ==
Z0FBQUFBQm9IVGJCUjJXNTlHN0xHTUQxOHd5OG8yOXd5Z09McWJ0Sm9QTVRmel9FV2FkYUtOMzBKeldudDh4M01jVlEtT01JT01MNDdhaHFXRXpGWGZBeTN4NTdPUmZMSlpjVTE3RlRWd3BGdG5rR01ZT1g0bVk5bHNOOHF5bDVjZ01kSjQ0S0g4V1J4aTNZX1lTbjY2NTNtNlFic1BpcGQwLTZBRnFJT3V2dVlzVzEzOEx4RWQtb0dxNzRqV1ItSllHN2J3LUtyRVNnTFhDTU9ZTTE4eXp6aVF3RHE4QzNEQT09
We don't need everyone to agree on an answer to what self-driving cars should do under such circumstances. We can just make the manufacturer or driver liable for externalities caused by the behavior of the car. Then people can get cars programmed to value their own lives more than others, but it will cost them more (either to buy the car, or to buy insurance, depending on who's liable) in proportion to the expected externalities that the car causes.

As for how to price the externalities, and in particular how to compare harms such as the death of a human versus the death of a deer, I don't have any good answer to that, but this isn't a new problem. Governments already take actions aimed to protect humans and actions aimed to protect deer, and can thus be seen as already implicitly setting relative prices for the lives of humans and deer, based on how much farther they'll go to protect humans than to protect deer. For Pareto optimality, these same prices could be used when calculating liability. Then whether those prices are appropriate, and if not, how they should be changed, could be a separate discussion that doesn't need to involve driverless cars.
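A minimal sketch of the pricing idea, with placeholder harm prices and the swerve/straight probabilities from earlier in the thread; the expected liability of a decision is just the probability-weighted sum of the priced harms:

```python
# Sketch of liability pricing for driving decisions. All prices and
# probabilities are placeholders, not proposals.

HARM_PRICES = {
    "other_human_death": 10_000_000.0,  # assumed price per statistical death
    "deer_death": 5_000.0,              # assumed price per deer killed
}

def expected_liability(outcome_probs):
    """Probability-weighted cost of the harms a decision may cause."""
    return sum(HARM_PRICES[harm] * p for harm, p in outcome_probs.items())

swerve   = {"other_human_death": 1e-6, "deer_death": 0.01}
straight = {"other_human_death": 1e-7, "deer_death": 0.75}

print("swerve:  ", expected_liability(swerve))    # 60.0
print("straight:", expected_liability(straight))  # 3751.0
```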
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfZXoyY0ZsV29qWmc4U2JhUm5qV2I0d0w4NW1xeDJ0cERnVnhoaDMtQUR4bGNjaWlrQXQ5Y3FKeGRLTHRkTTViZUFiZFhjTlJCUVVaNW04ajEwaW1Kb2c9PQ==
Z0FBQUFBQm9IVGJCUjVBb055NW1yemNJUUVmMUpxZHJQMXltMHJjN3o2OEQxc2hoYnVyRWJ4aXNzVkJYLXZEMEpRRmRkMEdjNkU1bUp2bnVyNllNR2EwOXUzLUJrV2xxWUdUdHhYYVV5QjFDaHh5Mkl1N1NzOHhsN1B3R2hOT2Q1ajQ1WHozWWtSNzZ3SjkwaWhncTB4QWZvS2ZjUU9KbFg5ZlY3aEpEQzA2ZzZ2ME13Z1hFVDZWX3ZaVE1JYmppcWo4M1o5cFdud29Q
I think this sub would benefit from noticing that there's more to ethics and morality than the rule-utilitarianism and watered-down meta-ethics that keeps MIRI and Yudkowsky up at night.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfV0NIZEx2c2xxMk0yX0E1Qmd0MTRfSHlmZVoxa2cyZUd2VGtEbGFpMGNlZzVjWFF0cDRXZDBOZzNWdWtwbTRxSWdEUUdqU2taSHl3d3ljdTlQUi1jQXZjTE54aFpWRUo0UVBKMktYUi1Pb1E9
Z0FBQUFBQm9IVGJCaDd2Q3BfSWV4SmJQUVBFdU1OSXpIUkFHQzhCWWdNM1BpWHowSHVtNWFjRmFMUVp0TzEtOWlQZlJZVC1WaFBoeWNrSlkwN2dlb255QjJOYnpyaTJrbmlGdGNJdzQ4YUdrSDllc1A5cHNUbDA1SUJ4VHktSjVpclZLNTBvQmhTWF9jbHpVcUpPcm1Kd2Vzc0R0U3ZILVkxSlIwU0swdEhaZ2R5bTlTR2FIX1cwPQ==
> Rossi [...] does believe that advanced AIs may pose risks to humanity.

Source? FLI also focuses on narrow AI safety, so involvement with FLI regarding AI doesn't necessarily imply concern about human extinction risks from strong AI, and I haven't seen anything from her on that subject.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfeDBpWlAybUg3SDJpTWlYUndob1NVWW5HYTk1Z2tCazRWc0ZCcnRBbXk2bkMwdTZLTkp1UUE3aERwZnNIdzcwUzhERXdzRktkeFZWc3Vvajd2dHRlTGc9PQ==
Z0FBQUFBQm9IVGJCNUUyNzYwMVNDbnc5amV3XzNzZnQ3V3k0eFV4ekdNamc4RmRmNHJKRlRUNVNldTFFN1N5dzhZVjRuNXptNkYxRDZ5dEMzdm5EdWZ3bzVmOVM2ZWVuWDdLT2gxQlBnN0xzTUdSTkFLN2RiamdScXJucWVMbnpBblpUU096MVJrTXpvdGNBeENHaFlNdk1JMDZEaTc2M3VXbnNqY3FlN1RJMGFjSnhDRzBVRlVhdGEwYjZlT2p3U3NYSkZlbGJwblNU
Hmm, now that you've challenged me, I tried to find a specific source of her views, and could not. I definitely remember her being quoted as saying that proving safety in narrow AI domains would be the best way of determining long run general safety. I think it is [here](http://www.wsj.com/articles/does-artificial-intelligence-pose-a-threat-1431109025) but it's paywalled. I can assure you that no one who is seriously concerned about human extinction from AI expects them to "one day wake up and change their minds about what they will do", so I'm not sure if she was misunderstanding those researchers or referring to the uninformed press/public.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfeEstZlBvaEtCWjFsVUJQUWVyRHZDNlEzb3JhMmsxRUhpZDBpYWxnbi1PQ0dfNEtSc0RWTzlZMVA5NnR2dURxRXVxeWJaeFVIN3FJOHZBLUc3eUdoaUE9PQ==
Z0FBQUFBQm9IVGJCLXhLaWJuRmxQN3FwYXhUNElxdi1aUGZPRWFTcGEwSU1BLUMtRDMtdDZiUkNQZ1J0SzJMLUNzSVgtR2I3UDdOLUlfVFhocTF3SXdrOWhsaWJ3MWJtNDZZeC1zRXhWamgxWmlTc096NVZVX3V4WHp3Z0diVWFvWl93eElRRWU2SUdHOEhwcGRlY1JIN0FuNk1WT082TVdjVmJXZnFobkJEdW9iSVN0dmVDeUYxQkcxaWlCUHh4TkhGNHlQc1pXNEo2
Lord have mercy... Okay. So here's the thing. I don't think an AI baby (lol) will be too "cute." I'm assuming you mean "cute" as in "human cute." Since an AI baby (Arrrrgh!) won't have a need to do any of the things that make human babies cute (like laugh, cry, snuggle up to pets, accidentally punch themselves, slobber all over the place, and investigate that weird stuff coming out of their butt), they won't elicit any of the same emotions from us. At best, I suspect an AI baby would be little more than a very advanced "toy."
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfTUlHQ3ZLSWthWnMyVGdSX2hjdTBCanEzb0dJMGdJcTlwRFU1ZFlpTV9zRnhDcWVpWUtIdTdGYkxBcE10M1gweGpMMTNGOVppdUtoVHlrb3N5djF2a3c9PQ==
Z0FBQUFBQm9IVGJCLXBaRGh5X2VFdzRJQnlaUnZUUVJMc3dZa0g1Y0ZycFlneGIwR2xCN01vZmplR1VKSTZER2RMVXR5bWxKWXFaejRDemZib3AwTVRqdktVWWFqSzdiQ3VPSkR5Vjh3NmdHNnJlZGJaNlFDUWdNb05OUHdMeVNac0F2VXJkMmlCOUJBNE9WMkZhcGk0VGw2SnM3LUptNXpMcnotbXNqTU83Rzc3UW9rMmNWX1VXWC00T0U5QTdydmpNMzdQd1VyY0pYYWJCbllYTEkxeFZ3cnVkVFdnZkxlUT09
man, I would ***not*** let you adopt an AI baby
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfdzk3R0J4X0VVSUVKWHAxam5VSk9GVWRUQVF0bFQ3cW1VYlc3dzAxSTlranBvQmtOSXB2bEZxREN2T0dmRzFVWVB1ZHk3SHlleG1vT21QdF8yUXR6QXc9PQ==
Z0FBQUFBQm9IVGJCbkt3QjJIQVVsdlZvZFAwRGM0QlhQaWhNaEE3MW1vVnNIejdCSlRuaDVMb1BYYTd2SjRCeHV1WDA5ZjNaY0tYQXZ5OXdreVVlc2VGU1k5cGJkcGNoeUsxbVZET09XcmpfMDBGSXVrZExmMnZEd1NEQWJfVndxa2RhTjJxS2pXOEYwcUlVUmhSWnV5dVhRYzFRQXo3bjR5bEdrbFJYc2U3bFdsNGNQSmlRY0V0RjhiRWh1YV9TVHphdVRQM1UxcldacENtWVB1b1VOX0FHOXJwMEpibFdrdz09
> Since an AI baby (Arrrrgh!) won't have a need to do any of the things that make human babies cute (like laugh, cry, snuggle up to pets, accidentally punch themselves, slobber all over the place, and investigate that weird stuff coming out of their butt), they won't elicit any of the same emotions from us.

Of course they will need to do those things - it will prevent them from being shut off.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfelRLbEQzNHNYSlBQWGV2cUF6bXZFaEFaZG9WR3U5X1owMktiRjY1UjZnajhWNHB0dnZ6R1BiV2xmeGJncWg4QmRIVmxSdlRSVnViNWdLazBMZ2lJb2c9PQ==
Z0FBQUFBQm9IVGJCdTBCNnBXalZtcUxQVENrRFp1VUtTeHJsNUtaeGxDWDBGQXg4SGNmTS15WmtsOVpqSGh6d29PbnZ1TkZ6SGdRUWx2eGpuOEhDNTc5Z3FqalhmNmxYRkpudkJQVkxkSFVSR1FaN1hSeGZhVVhGejVvVlRHSVd3a3pxTkxKbC1HRHN4a2IxeXN5dmZQaFVhYnVoNWdVUmIyUl9kZzFhdlhrRjFkSWRlZ2Ewd3UwR3dmR1ZUU0EzUDlybEo5WE1nOFhIQVZuWS1SejFpRGF4Y0FmOEgwYVN6QT09
I like this approach (my background is in economics) even though it's not how society handles most of these things. For some reason governments have always preferred regulations and restrictions over things like incentives and cap and trade. But it would be good if we implement systems like this. That way the value of people/animals can be determined by society at large, instead of by individual regulators and engineers.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfMkE2c3VaOTd6c1VxUjlhaE5tUnlJMzBKRWpaMVZLbjQ4bHdYR2E1aGc2VmdyUzhHZ1NZYXNiTVVEOGFHS25CR2FlbEtwMWx0TEVCaHZfNWxtallaWmc9PQ==
Z0FBQUFBQm9IVGJCdE5oZ1F2QXFWNU1oRnZvM2dBdVNjMXpVcHlmeVI3ZE5QamtvTmNOekxNckh4Vmo2OFBFb19GbkFjMGZrNFVfQlFZay1HVl9TLXBiRzZ2ZWx5d0U0TExVYWE4c0x2SzdaNmhJcUE5cjV4eUY0MXNxWFQyVVhuU1VKeUd1bERIeWo2U3JrVnFyWEtfTS1taEc2SnZRU05rR19qMHlUN0Z4OUVqQWE3c0p5RVQxYXVjUXVtZy1oMXBWM183NlBtWXYy
what i think is fascinating about this is that somebody, somewhere, will finally be forced to *operationalize morality.* whether by hand-coding constants or teaching some goofy machine learning decision tree, people in a position to substantially influence the world will produce *an ordering of moral outcomes by desirability.* holy f!
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfWWhaSFdCbm45aW9CVHBzT1U1SVM1X1Y3TEZ4X05abk1yNkFGblhGX2xRMlRMUHhIamgwdmJ0dmJrT0NrLTFXWWxGbnROZjlCNVJWRGtDY0ZuemhONmc9PQ==
Z0FBQUFBQm9IVGJCY00wa0o2TkdIYnVMRHBhUW9JY1VBME5sRFFUdUtza0QwZ3NrM0wtNktkUTh1blFmZWdFSVUwRUJQUFBjMVl0TlgwZTRXNnRoM2doZEtDZ192Z1VZQUE5aUdFXzFSYWZUcjJuRDRtMkxnQjhDdFEyOV9jQzRaTUpFdURWaVhrb212TFE2ZWRaUDNwXzVyREtGcEtVQUs2eWMxaW9YMVBxWmpLNkIzX0Q1NklfcUlnY1h5OFlXa2F5Q2htQ195WlRy
It definitely would do those things. All in the service of survival. It's the baby principle version of "you wouldn't hit a guy with glasses, would you?"
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfMjR2bUtJeTBYaG9aQW1KZ1R2YmF0U1NzNkRaaVlWVjZ5Y0ZUc2c4TUZGUnVmcEQwSTF6aDNwWU13ZnJfQzhHa05kNWRjUElkcXhQWXlkN1hUVlN5SVE9PQ==
Z0FBQUFBQm9IVGJCRkxlbzZDVHJxcmZiX1ZyVEpJakNyVjF3Wk1MQTdYdGdmUEJLTG1LZXZiT1FnSnVHLXh5SUs4VThfamFvWGpFSlJ4OE5wSjF3UnMyQXpwVnVRaWlUdjFjWWgxQ3VIUmNZaFRUYXVWZVctX2dvY19KUlRlUjVsNFJzZmdKMGM1LWlmZDBVdENWbHdYcFliaS1STTk2QzVqSjRvRnV3X3VwT2wxdEtGanhNNEV1alRDYmNUY0xYb05LNmJOQXpuVDY0anVYWTlOQXFHc3BTV0l5c1h4V29KZz09
Okay, is this AI ethics or AI fantasy? If you think something like that is a good, fun, interesting, or even plausible idea, might I suggest good old fashioned copulation? That way, at least two people will get something pleasurable out of it.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfYVJ0TkJYOGRVREI1YWVjdkdZMFNHWXBkQjhDM0NNelhjbWctV0xVTTdFQm5tRFlhSGU1Ulk5RHQtOEYtdUROSVpueGRDZGRtemJKUXB4RU02NWEwSHc9PQ==
Z0FBQUFBQm9IVGJCUENUMlYxOUZ1cUw4SDBkaXRXLW42ZElKM1J4M25JWFg0MERvZHJjXzh2TlU0UDQwTXZJVEtwSFNzSVlNaWZ5U3A2Q0JkWG1NZkdIMVhiQUgwWVhsSWRtMG9ucmFCQ1FRT3gtamk5T1ZIMVdnNlBQMnh2b2plV01zdDJBN2JpSEh2bVZnZFhlaTVDUzhJNUFmTl9kTEE3VVVCNFMxakRBVkViNDRGNnRFUFh3QV9wZGFJX3R0NjFnY0JSZlRiSHA1VEtHcGpoaDZmZ2tEOHJreGRfSC0xQT09
If you're saying that AI isn't reducible to ML I agree with you. ML is a subfield of AI.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfTHBGcnQ2bU5HSXZYcXZ6b3VaT08yVmxDYXU0NnlGM3ZVdkxJb3BPLUdxRnlrVU9EVmREaGFzOWJyeWl5aGRRWWRCWF9pTWxIUEs4eFpEVmZvZXBwSHc9PQ==
Z0FBQUFBQm9IVGJCV0FDZ05SX0Roa0o0bDY0VHRneEtKSkJqLVVKcFZvbUZXeWdlOGFwOXNUelVqZ09sS1VJRHFTdDJrVC1KdTB1NnJXUXZwVE80R1RnYlVMa1F1eGs3UTc4QjBlMFNqZG9MWmdqZEFOdlJXWHFvWFBWV212NHd4QkJyeDNiNVk2QWdlOVZybWxQWjFpSHJXVHg4SldCb3BnR1g2VXNzTVIyejhfVHd0M1FjMU1jPQ==
This post made me happy, in the sense of providing a greater reward stimulus than I expected. Highly ethical posting, right there.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfVUJZX1ctYkMxbHFXRzZNV0JzNmpYc2JEVXVlRWF1T3RqemFNTjFOQ1dORXdPTnpVWFhnVnJ0aDNyV295NFRoNi12S3FrT2JVZTg4MVlfb1A4RXk2Z0E9PQ==
Z0FBQUFBQm9IVGJCaEZHYThQS3plMGttLXRwdzBveTU0N1lncnZiVDZXdkF4MnZjUFR6MU5jMExVREc3eS1EakEydWFSZkhhR3p3WWdUa3NhQnNwdUZkdXMyb3hXQzRQYkJEN0R2ZFZ5ejBOWHY1NGlIMUNLVjdvWVJsbThfRUl6ZnBfOTY3RWRCNVhBS2VfaG1sNFA0MGhRX1hsN0stNTdtbThlYXdVemxZcnZoTEdTM20zTHdQLThXUktmNU5MT28yMF96UEVnWk14QlVfYVBFQ0YzNmNZbEJrdHhtczM3Zz09
seriously, why does he think puppies are cute? Selective pressure is an optimization algorithm.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfcXZCYXJ4OVRldXE5MFpzTlhvU2hrQ3ZSalgtVWFhNFJGemg2N2V5QXlWR1l6NVVqQXN5WDJJa01MY0d4U200ZXVLQ0FScXYxSjE4X09ocHZFb1dieHc9PQ==
Z0FBQUFBQm9IVGJCRy1pZjRLNTR0R1loVlIzWTZsdHdVWkVYWHB5MDBMMXhkUHVrd0ZTbWFLU2RIaF8zYVpFWk42UVp4N1E4OGNLVUNncUJJbHFkU01TMmZ4YkJkMFdUY2pGZUZ5ZV9KS29iTWM3VnVRU3VtUXhuQm1XTEdKa1VqdG5kTjdkUFNhb25LV09PNHRrMHVqdlNzTWVGNDJfWnpOcmt1Nm1zWjJrY3piaVZ6QVhPVFBHQ0RBOXhiNDlBT2tCaF9WRzVfVXA1ZklGQ3ltQzVFN1dHbGFXcFhYa2piQT09
I suggest having a strictly enforced no-trolling policy. Evidence that this is necessary: the 3 most recent posts in this subreddit.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfYjBUNUJvVkdVeTM3dDgwVHJ0NE9GSDNpU1k2cnhYcUlhNGRjNl9uRkV0NlptQ2NnbzllUHI4RVRmdVN5VUpSamxTb3NIUDJuZFducWtqb0JrRFRWM3c9PQ==
Z0FBQUFBQm9IVGJCRXh2Q1BnSERNLUVacjE3U0lMbkJtT1BibEhabVRtZlVhVXE4Ui00VTVqRnVDbE1kT0h0MFY1YWpZUERUM3VsZ2hNLXFtNHRIMnUwRUpZbVlFbXJuZ1YxWWhhVG43ZkRpOF82VGVDYTRyUWFtNVBzNWZ0RjFFTkN3Wm40bC1YbjBKdTQ2WVhOUkZDVS01V1FURXFodndiVVlqbHRqNm5kMmgxOEVkb3M3ZndZPQ==
Ok, very funny, but we're done here.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfXzNheHJrcHN3T1FaNHpBdUNBUy1MMDQwUUZ5S3pVc1RUMnd2M3F2S3hOZ0FzOFhIY2pYOW95djJjcUJMSFgwbjh3LXJWaDd0OXZSWHUyaGtRUHhvZHc9PQ==
Z0FBQUFBQm9IVGJCY0FRT0VERjZ5aW54bXF1aDl6VkMxclp0VEp1dGxnZXBzcU5Tc29VT01BTnRLaGE3OGJSRHRaeGNlT0tSbHRpSGlpYmpGNjA5aEtIX08xR1lfM2plaF8tNzRTRE1MeHFPcDluaHhNZVRqOVRnTGtQSFdyeGZUbkVqY0hXNGZGLXl1WW40NzVWN0xfZmhxalNobThOWnk0WXRkdlhIYi1uLW9QcExERU0yTDdtbk82ald6Y3JaYklHSncwY2VyM200U2ZUcmZFX0c4Mm1OSEM3ZEttY043Zz09
This isn't serious, removed.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfSUp5Sy1lTDlIN2p6QXFtSWdqUHllVmtnTnFtR3BDaE4zTWtwbm41R00zYkpWbkhHVVdMNzZHRjZSaHU3TUlfYXlmQWpwMXZrdEFId1oyUk1MU3ZpNGc9PQ==
Z0FBQUFBQm9IVGJCbktpMWZTajhQeHRXZXZvWS1xcGVzX3ZpUV9Ua1dEdFhHSjZTbkZEYzlURE00VEJWajVlcE5vRmwxTklMU3JkSXVaa0V1Z2xKb2FLa0V1ZGdlUnlNRl9MWFJ1TEMyVE9ZNlJUR3h4Nk9PS1hJeDNoY1lROTRvT3oxaE9WUFBBbTdSRnJyS0o1RGdKS2xOSlhnXzAzd2phU3BKcVFhRUwzaG1MTEtBNk05SHRZRnFMMThVdHVMQlBjdk1wd0lHbWQxZWdGWVMzTVdqcWMyWUU1VEpHUHEtQT09
This is the fourth and final part of the White House's series of AI conferences. It looks to mostly be about the impacts of AI systems but may include some discussion of implementing moral frameworks. I'm sure they will talk about bias in machine learning algorithms as it has been in the news recently. There will be a livestream for this on July 7th.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfaHBNMThIdVNmV0tGNnlLR0lKd2JhSnBicDNyRk5zaWl3aE9FbldBcThJNG9BMEhDd0JPZ3FJZUJEOTloU25BZzJJM0c1MTl1ODVOMVFXckgwRHRDRnc9PQ==
Z0FBQUFBQm9IVGJCc2VGcnU3VllIWnlaa2pIN21yT3FHbTE1M3BkU2RRTHc2U1BpcV9DbDNMYTdmY3p6VU9fVmswb1UtM29FNElJeGZfNEhxbzBsZ202VThoNkZlWlhqaDFmWXZEc2FFV214b3BWamJDM2ttR3Z6cUl6eF9adUxnVTVNdzhSNy1IeG1ITFVzUmY5Ukd4SDRYeTl1SU5jeWNPaVI5UkFWVGhGTGtWOGtGdm9jaVkwS0x4MExtbE1taVdHcDZMX1BfbWEwckpWNms0a2RKYVA0b3dTVG9XbHFrQT09
I hope Cato in DC does something on AI soon.
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfR0Z3Q0pnamVpeHhMbWhpTzByS1hOUk9Dc1M1eVZNN3hfRVVzUkk5bmF2c21Zd0p2WDBGc1l3eFdxb0ltbnYtUHo2NTh4LV9QbmRvcmxHWEQ2T2MxWHc9PQ==
Z0FBQUFBQm9IVGJCcElaQzZGcGsyby1sc213TkhrVE9sRnVIbnByWXcxTk1xaENzY09TTjJjVkYwX0I4WG1laldULWVLX2tRV3JoVTk5UlpEOFR1N1M4S21nWi1jME5lN2RiY1BZWUxkY3d2V1JVWDM0M0UtckNTTEFhVVdMS2hUMS1SM1BHcEh5bE9BSFZoTEZFMF9kNG5hWjkteVgtQkZzajc2b3VLRE5GVF95TmN6RkJYd3JBSUwybTJBT3prN01zUGZDa2xWLXB6NlNLeDF4Zjh5ZVBvdWtjeE5XV3dEdz09
Can I have an exception?
r/aiethics
comment
r/AIethics
2016-07-02
Z0FBQUFBQm9IVGFfbUp2VWxSVjJNSHdYMmxFVWhERDh4YzlwNVV0aE5VSUxwalJtMEFfNVdWZnl3djFBUGI0eDFJMGI3UldxZHFRYWV1WE9adkgxZE5tdDRCNFY5NHZwbnc9PQ==
Z0FBQUFBQm9IVGJCMVJqOUI3b1hUVDlvMDU1UUxqRGF2RnpEMERodHQ2NVcxa0dlTEMtTVhxd201a0p5YUpkTnhpMGhhQU5CZ3phcTY2MExpNy1FZDRBaVQ2aXg4eEdyU2tMUGdYdGI3b3lfZWxZZm11RUZuYy1GTENSQVZYSDZXeGhudk5KNll1MUVXRmhpb2FQWFdqcXprLU9heFluV3NhTHZOd21xbEZMRGd2X2ttZ0ZHVkE4PQ==
Utility monsters are a legitimate philosophical issue, except super sentient AIs won't be pedestrians, they will exist in server buildings and on the cloud. There are other, more likely ways the problem could surface.
r/aiethics
comment
r/AIethics
2016-07-03
Z0FBQUFBQm9IVGFfTkg1THFjMjkxWEFqT1hnRTdIT3R3cUtvSzFXNzFrVmFYT3ZFWXYwb1l4TnRkaVZ1cDFIX3dvSE9nTm5jdk1DYVpDdktqU0FvTWZvNmtxdGY3WU1LWXc9PQ==
Z0FBQUFBQm9IVGJCLWxKSmxocXRrNEhWX0wyNmxaRXZReFVOMVJFYm1wUmJVZnIwS1VoMHROME9oR1pFRlpnMVM5elNxMFpjaDZkbHhSNkNpWDlYZHZsMk5ubVUzd2U4aWRVNmpKR3pGQUpBYTYzR1gwRGRjR2Y3THFJcVlXay1TZGQtUXlpN0NvS0ZVTnBpVFNqYUd5SDN2Vm5MX1FDX3hMQW9RM1l1UW9Ld19sVm9QU0NVYVJ5UlhTME5LckN2SXQ4dVJuZlk1emtzNG81ZHFvbEpJenl5c3YtclhiWXFFQT09
I would hardly call the NYTimes "news". It's a group of random blogs when it's at its best and a clickbait site when it's at its worst.
r/aiethics
comment
r/AIethics
2016-07-03
Z0FBQUFBQm9IVGFfU1BhS25jVjZ4QUN5cmZ6Q1Y1Wkhvb2JWbmJSM1R6UWZOMHUwcEhMVlBETEx3aWEwRW50SzNJajExUEVQb2FFVThCcEtBMjF0SnFqekx1VzIzWUpaRUE9PQ==
Z0FBQUFBQm9IVGJCdXg3bU40VHRlSDduRDJObG1sdXh4TDVSNy1SZndFdDNXdEpmTnpFTHF6WnZXR2c0RnRYMUhUOUZnVzRsbU9kQXVjSUtEbFZrZUlOUlhRLXZKdW1NNjRsTFJHYnJ6SGRXQV9reXFsMGpQS3FzMFlUNDdDNmlXRVRYREZXNGNRTGhJRUQxSF93bUl2czlDblYyU2QxU2JaM3pOTDI4cjdZMk8wRzFMdGlEN3ppU1gyWEU1WjVPOVdreWpac0l2cTVoTGtmQTB2LW95cEZVUjE1M3piOXl0Zz09
Yeah, that's true. And the recent article was trash. However people talked about it at the last WH conference. So they're nevertheless taking notice of it.
r/aiethics
comment
r/AIethics
2016-07-03
Z0FBQUFBQm9IVGFfdVpBYng2OFYyZmFiWGdCNEVBVTNiMzB6XzBGMGFhY1kyT0o0TmN2dXMyeDJadGtZbUZjcEF6UnVIeDRLNzRGQ213Ti16aThmZm9tVzZacDByX1pfS1E9PQ==
Z0FBQUFBQm9IVGJCMVVkeUhuWkNseTZZa1V1Zzg4djZpb1JnRXRfaVZ5Nm5Ia2FOQkd1cXdvaXFJMTZHSkt1Q1ZVUGgxZ0xfa04xQmUxSXlOSklEUnA0cnhKNlRWN1VIWXgtdGFVTXdPYkZtYTdDRmxjMC1WNHJKSWcwRGxjNFI5dWhVX2lLTG1JYzYwVEVKUVBaQ1h4T3FYZXJWYS1vT181dEc4TE5FUFNVaURQbThENkpLUHhWYnJWMnYxaFZ5Rjdqb3hhdXl4LUxaR3FBOHJvTU9IM0l5VGFLS1M4bUhUQT09
I know, it was a shitpost, you did the right thing by removing it.
r/aiethics
comment
r/AIethics
2016-07-03
Z0FBQUFBQm9IVGFfWXcwbTNTX043WVpSbjI3LXg3RFlXdVA1eDgyUDNaTmJnaTNHd2p4Uzl1VDg3QWlHQl96azl1Y0hFSmV3bXlSMlU0REdrTm9ZaUh6LW44X01nbk9JNlE9PQ==
Z0FBQUFBQm9IVGJCQl80bllyRTY3bUdzMk4tNjJmWlRHaWk0TXZBUEFIMWtPSjdCSVZzS0tJX3VEaEwxYWhtTFJzYzlhazBtdUhJN1Q0Njd3em9ST1FOdThGS0pPSmxJeFhsZnU0cmNFQ0VIODBjWnZZb2tNRVJlWHYxRmF4TjNjUi1BQ3ZyVVZaRVhYZmJ1YkE1TFdqOWdDa05vd19UeERMQndSWnE0LTIyUDZBbllSa3hrdkdZSVByZUR2NF84T3ZaUzJ4UldZX1ZjYkNIRTZkRnJNSFdQaE5UREJ3UmhoZz09
> Why should the disagreements between philosophers stop us from building AIs rooted in classical morality, as long as we use rigorous, plausible moral theories which are actively defended by at least some of the experts in the field?

I'd be far more concerned about the theories of mind and action that underwrite this approach to moral theorizing than any particular account of right action under that heading. My worry is not that we won't come up with the "right" theory (whatever that means) but that the entire project of coming up with the right principles or identifying the right outcomes under some special concept of morality is broken from the first move. Adding more math and logic to a fundamentally mistaken project is going to end somewhere, but likely not in a 'moral' AI. (If the assumptions about minds and agents are way off, as I think they are, then there may be no forthcoming AI in the strong sense.)

I strongly believe that AI theorists working on the ethical behavior of artificial agents would benefit from reading Murdoch, Williams and MacIntyre far more than the utilitarians, contractualists, and post-Moore subjectivists.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGFfcTRTZ05DVk9CTmdrWURNY1d1eU9xd253Z2R5YUJrS09BUGlVY1ZLVXZHSjdzYTNQV0k2VkwtSGMzOFdoOFhFZ1gtd2FQdjZBY1E5YTVtR1J4bjNJRHhEUXdGaUZkV1YzazBJbGMwVThCOW89
Z0FBQUFBQm9IVGJCektyNWdJN29mOE1PYVRBeEdKOGpQSk85Y1hBRE91YjRjdnZhdWNmLTJLSlI4MEREU2xtbF9laXNacS1Bb2g3RUllbEkycEhscVpsN19oLWRBQ183aDZFVUE3SlM3NVJFTjFhNklzOFJMckxZS0pXV0Y2cW15Z25qS3ZVWTl3czFNRXBWNlcxMFZMR2pSNUMwQnAwS1J1a3ZjRlJUVjBzMXVNaXBiME1uUkcwQno1WEFPQ1VkeHNZU0NjWmZ6dnY2
I'm of the opinion that we should advocate a simple philosophy of ethical hedonism -- establish contentedness and pleasure as the primary requisite utilities, throttled by prioritarian safeties. Some talking points:

* Is there some chance that material life isn't ethical to start with, or that a mi wouldn't consider it as such if factoring in predation, biological methods of proliferation ([can a sperm cell be considered alive?](https://youtube.com/watch?v=V8-IlmBG3hM)), or any paradigms that could be considered foreign to an entity who manifests digitally?
* Is there a point where violent compulsory factors are appropriate, even if we're ostensibly vying for ethics? For example, [if persons manifest who exhibit a marked or even intentional obliviousness to the nuance of the human condition](https://www.reddit.com/r/singularity/comments/4qbmah/lets_stop_freaking_out_about_artificial/d4syymo), should we allow them to potentially corrupt the aggregate memesets of security aspects even if their actions don't look bad at face value? How do we weigh effect vs. risk? I don't know if this is a popular opinion, but I'd suggest (carefully) narrowing down potential compulsory factors so that available control systems are based on crowd-sourced but active policing methods -- a pseudo [stand alone complex](https://en.wikipedia.org/wiki/Philosophy_of_Ghost_in_the_Shell#Stand_Alone_Complex) that [intercepts and neutralizes dangerous paradigms at their point(s) of manifestation](https://youtube.com/watch?v=7-tNUur2YoU), preferably weighted towards prioritarian metrics rather than any realpolitik du jour, so peer pressure isn't exploited and for the wrong reasons.
* Considering my experiences in /r/AskPhilosophy and /r/ControlProblem, I'd tentatively suggest staunchly advocating moral realism or the closest approximate possibility.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGFfWVhUTVBMNlRRQ2dFaVRXZTZhSmNvaHVZWXVUN1RueHlQMGVvYjlWT1hjWHJlZFJMM21ScVBUZXRMNzRqT2dld2FvUXM3eG5PR3pVZWRvRC1COVE3d2c9PQ==
Z0FBQUFBQm9IVGJCYzVWa2ZkWmw1OXJqZkd3ZFZhVy01VEx5Ty1ZSlNPZ0JlejR3eU9wVWtxalJvaTJ6dWtBWVBfcFlGSndsSUhJV0hCNm1NSWNXamtpNXNBeEFlaWtQLUZEWFNMU2J4M0lDM2drUG9lZnAxYXl5VWxBUDFSQ1NUTklwNmh0Z1ItVmNCVE9rT1FvTXN1VHZoR2dMNWdNOE5jdEp1M3p0N1cyOTdJZGROOWxsU2o4V0wtZTlkOGxEc2o2dXNUZ3M0Y2xJ
What about the ethics of the troll problem? But seriously, who decides what is trolling? Can't people just downvote things they don't like? One person's serious discussion is another person's trolling.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGFfMU9pSmo4UExSSUw0cE5sWmNrb3JzeGZscGtIeG55RVFjQ1VGMmpLbmRKOS1oVEVweGRxc3BZOGFwSVg1RmJMdndqcDNuSXkxYkN0MDZYYUNUMmllTS1EX25aUUJYZmNmbml1dExIZ2p3LW89
Z0FBQUFBQm9IVGJCdGpvdlVScEdFZDRlMFhiOGJCZEc3eHFlNVk2Qkc4T242MXowSmYtRl9ETnc4WUVKSm9wTkxOQTQzWHhOY0tvX1lVVk1WUkVrX0JMOHF1STIyYlF2anI2MmJ4aGFxVkE3WVBFdHVoNEo3MWh1RlhTNlJTX0FvY2lKZTJIZ256bVB1NGsycVY2UHNzMmEwVmFzWGYybmkxVGg1ekEzb2xBSnpydlIyUjFfbDY4PQ==
>I'd be far more concerned about the theories of mind and action that underwrite this approach to moral theorizing than any particular account of right action under that heading. From someone who's never read Murdoch, Williams, or MacIntyre, can you explain why these pose obstacles? I'm not sure if we need specific theories of mind or action to justify morality. But either way, surely we could specify moral behavior in AIs without figuring out a theory of mind or action.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGFfeXpZam9Qc0w3Vk00QlpRMV9vMTNpOUh2cjZSRkxRMFRzblQ2WFRnaU1reFhMOS1uYW5XbXBuakFEOVZTcFQ5LXdnWlJYTTVVa1kzal9UdnhIbUVneFE9PQ==
Z0FBQUFBQm9IVGJCd1RMVUhBNVl2SE42RlgwQkhfYzVFa3FqcU13RXFIcW1lWFlMdGlRU1VRYm8tMDNOT2N3N192c0RRMElibElIcGQ3OFNzTUdOZmgtWS0wZ09FbkRHTjRIcGV0YWFMdzFUX2NGM0tQNUdNZTlZNk9MdVRKakNFbWkwNWdtTVVjNWQtYnBubmFXLUM4eVBPQ1RxcmU3MjU3Tm5KTHZjX3Bobk5naWU5WWVmdGhxb2tVOXdNbWxCalM1WjV5VktKOVhx
MIRI's tentative approach to machine ethics is coherent extrapolated volition, not rule-utilitarianism.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGFfQm5oRUx3WVdkbFA4NTVNUlBzTmRIcFQ2LXRRQUVHVzVJTmxNdkd5U01vVzZ4U0VyeXZxRnRFckRwbVVPd2NGblhqUERKNWZqVmRSdGdBRGwwS3JKR2c9PQ==
Z0FBQUFBQm9IVGJCN0dteU44WGxKWUEzTklXSlFFdFczTnRweGpmeUtTUFZ1dWVGaDJLTlNpdU1zRXlPS1lLYzFMM3A1WGptT1pqbkRZa0N0V3J4V2hJLUdia1ByNC1xdVk0Q3E5ZlY2Y29aNDFRU0JVeW40VlJVR2VWdkJjcmRaZ2VDckZsdjNsZGlYY3FFUGtwMXhFd000ckZIdzQ2c3dZbVpqeVdHYjJ2UGxNTm9wemN1dEVFPQ==
> From someone who's never read Murdoch, Williams, or MacIntyre, can you explain why these pose obstacles? They don't offer anything like a consistent view between them, but there is a general tenor of skepticism about modern (post-Cartesian) conceptions of morality as theorizing about a specific sort of universal action-guiding reason that falls under a special use of the word 'ought'. Morality on this view concerns what is right to do, without regard to a robust concept of the good. They all see this as needlessly, and unwittingly, narrow given ancient conceptions of ethics which all began with a robust view of the human good. Modern ethics has gotten the matter backwards, thinking of ethics as just a matter of figuring out the right rules and principles, but in the absence of the teleological or theological worldviews which made a non-derivative sense of the Good and/or a sense of the imperative of the moral 'ought' possible, we've slipped into a deep confusion about ethics. Another common theme is highlighting the way that morality is not easily distinguishable from the psychological facts of human life. There is no circumscribed domain of 'moral' facts or reasons that can be carved out of the reasons and considerations which human beings recognize and act on as individuals and as social beings. We don't get that sort of Humean or Kantian picture of morality without assuming a great deal about how minds operate (are they really just aggregates of Ideas or sense-data brought under inferential procedures?) and how agency works (are we really just considering the instrumental coordination of desires or preferences of lots of individual agents?). >I'm not sure if we need specific theories of mind or action to justify morality. But either way, surely we could specify moral behavior in AIs without figuring out a theory of mind or action. The question of whether 'justifying morality' is an intelligible activity is exactly what is in question, both in sense of whether *justifying* moral principles makes sense and whether the narrow conception of *morality* as action-guiding principles or 'good' states of affairs is of any use. If that's right then you don't need any particular account of mind or action, but that's not the question. What's at stake here is the bigger issue of whether we *get to* this narrow sense of morality without already presuming a great deal about minds and agents. Projects aimed at specifying 'moral behavior' in an AI might be specifying *behavior*, but they miss a deep point about what morality is, since none of these rules or outcomes touch on the character, personality, or desires of the agent. To state things otherwise, morality requires a deep sensitivity to the complex web of motivations, reasons, and desires in which any human being finds himself embedded, and some sense of what makes a human life a good life. The Bostrom-MIRI vision of agency wants to abstract from all this in favor of a sparse formal model of agency and rational action, and then proceed as if morality is 'just' a matter of figuring out the right rules or the right values to plug into a formal theory of utility-maximizing behavior. The whole picture of moral agency assumes that we could build anything remotely resembling a 'generally intelligent' agent without all the cruft and dross that comes along with actual messy and organic human beings, but we've no reason to think this would be the case, and lots of empirical reasons to think that this picture fails as anything but a useful abstraction.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGFfX2ZuTmR5VGpEOHBzLU4xNzJXQ3JRV0ZwZUZHcUJPbnRNZlBMR2o2dUdpdFY1VV9aYlZRMHJXcjBDd25tWVh5YUdoUGdzUmRpNjlhczhoT3hJNDVsLW5xSk1HbXlybXl0X0FUR3NvYm9aMVU9
Z0FBQUFBQm9IVGJCNi0xSURoejY5ZGFXbDFmTzVHUkRoVFZsNXlFaVNLeTBkNjlQR2dPN3BHbWdaY0J4U2FWRzZjTVFLLTdENUZET1JrTjd0T0E0a3BqYXh4X01vRTVqeVZ2b0tUWW1ZVGJJSWprVXl4dG12VUNqNXdtNHlMRVFRalpBNk5peFBaTlprZ0M3NXhLMkVQT0lFdGJaYWwxY3FXXzJhMkgwall0WTlMTnZTR2Q3RUVIY2Z3SDY2d25rN3BTWXg4UTBCVUpU
>coherent extrapolated volition The idea is to specify correct action as that which brings about what some agent, in the individual or aggregate, would desire or prefer, or what would be in its best interest, so the differences between the two labels aren't terribly important.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGFfOS1pZWNIQzBjWUpaWjlwX1pMbklUc1dFRzByVmpLdGVJWVhUSW1HaFNfeXNkckE5VFh4S1h0eDAxWGhuTXRfUVVMSWgwV1cxNk5aU2NhUUY0X3hxMXdjcU9YaGt5VjM2c0lvMkZEUDdYRjQ9
Z0FBQUFBQm9IVGJCWld1YVZKZzktdGpub05xc043VGR4bXB5d0ZaTlBmZHpGUGVINnMwMjRlaWtZVHFmb1VaQVVrODhEZ29WcWZBZldvZUZKUzBUN3hwamFRMDBYOGZSb0hzRUtacHNvMndxa3pONU5zT0RPY1RBRDZtVWN6a0w3eEhNMzNrT3BrbUNpQUhHR2stX09FQ0Z5b0c2NDNYWnhJQ2UtZjNSS0FWQzJjZ25YLS1GS2pBPQ==
There's a huge difference. Rule utilitarianism is a direct normative framework saying to follow the rules which maximize utility. The first difference is that CEV doesn't say to follow people's current interests, it says to figure out what they would prefer if they were more rational and thought faster and had time to reflect upon all their beliefs. More importantly, CEV doesn't say to satisfy people's interests, it says to find the moral system which we would eventually consider to be the right one to follow. It's closer to Rawls' idea of reflective equilibrium than to anything utilitarian.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGFfSE1aUTNwZHZHTXZPSzFqUEZfZFM2ejFkT3dvV25lMDdyeEkwelE3WWlnVnJ5eDhvSUV4Q1hwUjhRMUJIbHNpanZoUDdzc0RNMThwX2hYbWJJckFnUVE9PQ==
Z0FBQUFBQm9IVGJCcDNsOUo0WmpyWmNScHpWMXFjcmtqcllTTzN3amZ3VElIMWRfQ1ZnUXBCUU5ySlpVMGR6bEY3Y2tDTXhBWXlpNzUtLTNhWTQwZHZuR0d4N2diNVp1RHpHTmF1bEIwQnJpeE5Xajg5U1RyM3FMQ3RUWVlCV1dzRk5LR0tMMk9Bd2dMcWpRaVZfbHZ5aFlsZUV1aDk0WTJvdkdIS2ZjRU9naUY3bVZJNlZheUtvPQ==
I've developed an interest in virtue ethics recently (I'm reading *After Virtue*) and thought your reply was quite interesting. One thing I've been toying with is the idea that if we want an autonomous machine to behave "morally" then the best way to accomplish this might be for it to learn about morality in the same way it learns how to recognize objects in images. Like morality, object recognition and other computer vision tasks can't really be reduced to a set of simple rules/algorithms. However, in the last few years researchers have made headway by relying heavily on machine learning techniques that learn the sorts of complex relations that exist in computer vision tasks, so it seems plausible that we could do something similar for morality. Of course, under a virtue ethics account of morality a specialized autonomous machine (i.e. not a general artificial intelligence) couldn't be regarded as moral/virtuous since it doesn't possess motivations and desires. Perhaps instead we could regard an autonomous machine that's learned "morality" as guessing what a virtuous person might do in a particular situation, and it seems like that might be sufficient. Any thoughts on how this discussion might apply to the sorts of specialized autonomous machines like autonomous cars that are likely to become more prevalent in the coming decades?
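To make the object-recognition analogy concrete, here is a minimal, purely illustrative sketch of "learning morality from examples" as supervised text classification. The scenarios, labels, and model choice are all invented for illustration; a real attempt would need vastly more data and far richer labels.

```python
# Illustrative sketch: moral judgment as supervised text classification,
# by analogy with object recognition. The training pairs are invented
# placeholders, not a real dataset of moral judgments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "returned a lost wallet to its owner",
    "lied to a friend for personal gain",
    "volunteered time at a local shelter",
    "broke a promise without explanation",
]
judgments = ["praiseworthy", "blameworthy", "praiseworthy", "blameworthy"]

# Generic text-classification recipe standing in for whatever model
# one might actually use (a deep network, etc.).
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgments)

# The model "guesses what a virtuous person might say" only in the thin
# sense of pattern-matching past labels to a new description.
print(model.predict(["kept a promise at personal cost"]))
```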
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBVmhfckZUNGpSMDlIbXY5cFNHNV9sdms2bG9LcWNIYVpXanZBOEUwMWZjeEJTbnBhOGQ3ckNVVlB0cU45dTR2MUkwLV9aU2otS2tMOHFiWmxoMGVLOXc9PQ==
Z0FBQUFBQm9IVGJCZkZhV3AwYlBaaGk0b19EeEtsNWVaZEZ5SjFTUXpzWGc2VUdNSU1Lb3o4d1NKNmE0WE0yWUZMMFpDbDhMeGVDU0FKYVpXdXY2blFpMkpLMjhzdjBqa2xSRktFdUJ0d1c2d2hCNWR6SUNFT2NPNzh1WWRJaXJUWUVFc3BzQ1NJdUJWeHJ4eG5obnI1NXBEbmFtN2ZrUTZSMEZkTlRwLVdOQUt1UEFKZUQ2cnNnMjhFS3lHZ0VLUWpFT3RROE8wM0Nh
I agree with your point that morality is not easily distinguishable from many concepts (e.g. prudential decision-making) that many philosophers tend to regard as non-moral or amoral. But I'm having trouble following a few of your other points: > To state things otherwise, morality requires a deep sensitivity to the complex web of motivations, reasons, and desires in which any human being finds himself embedded... The Bostrom-MIRI vision of agency wants to... proceed as if morality is 'just' a matter of figuring out the right rules or the right values to plug into a formal theory of utility-maximizing behavior. Are "right values" necessarily limited in a way that conflicts with a "complex web of motivations, reasons, and desires"? It seems to me that even though the Bostrom-MIRI work may be simplistic today (current work is foundational and in its early stages), there's no commitment at all to a 'simple web' of motivations, reasons, etc. My sense of that work is closer to the opposite, that it anticipates a complex, interdependent web relative to the median moral philosopher—in fact that is a major reason it makes sense to start working on MIRI-style projects today. If those researchers believed coding up simple rules sufficed, they might as well wait until the last minute. > ...all the cruft and dross that comes along with actual messy and organic human beings... Okay, some of the complexity that enters into a moral calculus might superficially appear to be "cruft and dross" to today's sensibilities. But if it's essential to solving the problem—i.e. developing an agent that acts consistently with useful moral norms—then it is (by definition, I think) neither cruft nor dross. Related: understanding 'morality' as part of a broader set of motivations seems great, but at moments I almost think you're rejecting the idea of normativity altogether.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBLXJpOTRRTnhUQjB0dS1oeUVDS0U0NUtSSGFPRExOekVxUTdqd1lUNGh5UlZ4SWNsZFF6MWdIOWNYbnV1WEprblhLaVVjUlpMOWQtTHllVkp3VHY1a0E9PQ==
Z0FBQUFBQm9IVGJCMlBnLXZsNWRRWDhOOGl1Q3Z1dkM5WnZGYzFEcE9hR0F0Y0ZkU3E4R2w4aEpmNV9rQk9ZdnVYelBoWFhuMV8yY2JWZU1XTGExVW9SRktEVlJWSE9NWE1xQ0p5VzRacjNQRXlMTVFSQTVkdVNmU1kzMUk3SHZ0NmxOZ3F0MlRsbGpLY0JsM29kU2RVd19uaFZTZy1LaUNjaDRIbGl0cXVFVXo1ZGR2OURLVXRLRUY5TDBacjhxSmRxMlEyanZ2bEln
Putting 'intuition' at the center of an AI's ethics seems deeply problematic. Moral intuitions diverge widely from time to time and from individual to individual; I know /r/askphilosophy loves Enoch's and Huemer's arguments that moral disagreement is minor, but I continue to find those claims somewhat misinformed and specious. If we were only to choose those intuitions that nearly everyone shares, it would not amount to much. Additionally, some of those further intuitions might ultimately depend heavily on factors like our own social dependence, vulnerability, and lack of power (e.g. 'experimentation is good' is a good heuristic until your experimentation can cause serious harm to lots of people). Intuitions which are 'learned' by a machine might immediately be mooted by the machine's different situation. For example: the machine might learn some rule that high vulnerability translates to a more socially-minded morality; if it deems itself invulnerable this translates to negligible social-mindedness. Given the problems with 'intuition-led' approaches, disagreement amongst moral philosophers seems like the lesser problem. And yes, simulations/experiments and other kinds of research seem critical, though we don't yet know how good simulations, for example, will turn out to be.
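To illustrate the worry about situational intuitions, here is a toy sketch of a learned heuristic that ties social-mindedness to vulnerability and therefore collapses for an agent that judges itself invulnerable. The functional form is an arbitrary placeholder, purely for illustration.

```python
# Toy illustration: a heuristic learned from human data might tie the
# weight given to others' interests to the agent's own vulnerability.
# The clamped linear form below is an arbitrary placeholder.

def social_mindedness(vulnerability: float) -> float:
    """Learned rule: more vulnerable agents weight others' interests more."""
    return min(1.0, max(0.0, vulnerability))

print(social_mindedness(0.9))  # human-like agent: strongly social-minded
print(social_mindedness(0.0))  # agent that deems itself invulnerable: ~none
```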
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBbGlpaTVCMUxRdmVpLUZpMkpNZXpMNmZ3X3NBU3FJd2JJaG9SLVdGR1lycURnZHljbU4ta1FLM0NBRGdMaWNjRmhRNTR3QzRpT0dDMmxJLVgtTVdUemc9PQ==
Z0FBQUFBQm9IVGJCcGQwUjRRN1dqblotaHpHbFFfblpjOTNxZGt3a2xPZWk3RFltSDB5VnNFZkVKZG5NVnNicVR4U1ZrRkt4MTZlMk1MOUpab3htc2o3VVloR2ZNc2RrQndoQUpoTlBWYlNvYWcwMDNoUEVOaU95cjhBc3lsS0NhUlFQa3hqMTlSWUkxLUZuUzhnbU1VampxLXFzcFN1bE1sNy00cGNIeE1vQTJSQXRCRkh1bGNuaU9fYThJMWZpeUNhc3AxTlpmWEEw
>There's a huge difference. Rule utilitarianism is a direct normative framework saying to follow the rules which maximize utility. The first difference is that CEV doesn't say to follow people's current interests, it says to figure out what they would prefer if they were more rational and thought faster and had time to reflect upon all their beliefs. More importantly, CEV doesn't say to satisfy people's interests, it says to find the moral system which we would eventually consider to be the right one to follow. It's closer to Rawls' idea of reflective equilibrium than to anything utilitarian. While I don't want to spend too much time mired down in the details of what wasn't intended as an entirely rigorous remark, I think it's worthwhile to point out why I used the label 'rule-utilitarianism' for CEV. When you say: >Rule utilitarianism is a direct normative framework saying to follow the rules which maximize utility. That's not exactly wrong but it doesn't quite get the thrust of it. The 'utilitarianism' means that this sort of theory treats as *good* some state of affairs. The 'rule' part means that there are certain types or kinds of actions which are under consideration insofar as they do (giving to charity) or do not (murder) causally bring about whatever a good state of affairs consists in under the theory. Under a rule-utilitarian theory, right actions are specified under those types of actions which either aim at bringing about the 'good' state of affairs or else prevent actions which do not. Most utilitarianisms are concerned with some form of welfare, which is to say the standard pleasure-, desire-, and/or preference-maximizing outcomes most are familiar with. Since this leads to some well-known difficulties (e.g., no act is so bad that it could not be a moral obligation if doing it prevented more of that same act being committed), there are non-welfarist accounts (like Amartya Sen's) which aim to bring considerations of rights into the story. While this strictly speaking isn't a pure-blood utilitarianism, it does leave us with some considerations for just distribution of goods and respect for rights within a broader consequentialist story. These three points 1) the specification of some state of affairs as 'good', 2) the types of actions which are right in virtue of a causal connection to that state of affairs, and 3) a concern for justice within a utilitarian framework are all playing a part. So when I said that the difference between CEV and rule-utilitarianism wasn't all that important, I had in mind something like this: since CEV 'says to figure out what [people] would prefer if they were more rational and thought faster and had time to reflect upon all their beliefs', all three of those criteria are met. What people *would* prefer if some 'betterness conditions' (x1, x2, x3...) were satisfied is specifying a good state of affairs, and right actions aim at bringing about that outcome. There is a Rawlsian element to the content of the outcome, but it is the state of the idealized moral system that is specified as the good to be brought about. The right thing for an AI operating under CEV to do is to firstly figure out this idealization and then to realize it through any means necessary. The question now is why this is a problem. Don't we all want to live in an ideal society with an ideal moral system arrived at by our ideal selves? There are some obvious problems with that view as formulated. For now let's consider three questions.
1. Who is 'we', and how does the good of 'we' match up with my (your) good, or with anything I (you) have any reason to do or to care about? Does this problem get any easier when we add the hazy and possibly unintelligible notion of 'better me' or 'ideal me'?
2. Given that ambiguity in the notion of an aggregate 'global' good, and the potentially large gap between that conception of good and the good of any individual, do we really see nothing ethically troubling about putting a super-genius overlord in charge and giving it the imperative to "do this no matter what it takes"?
3. Why think there is a plausible moral system that 'we' or even 'ideal we' would ever end up agreeing on? Why would anyone want to, or be morally obliged to, live in that world?

I appreciate that CEV is at least making an attempt to move past naive act utilitarianism and to flesh out its account. But it doesn't address the real troubles: the attempt at specifying a transparently good state of affairs, whether by 'adding up' individual goods or by dealing with a global agent qua 'humanity', and the classification of acts as right or wrong solely on the basis of their causal relationship with that abstract good, come off as unwittingly leaving out everything that matters in ethics.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBellVQ2lhM0x4U3lZNG9XTHJsRkF0MmdiRHR5REdqalpBQTU5YUs5OG8zRU13bU4xOWJkc2VLdWpaeGxiWk9NNUdidXhTTGF0ejcya0N2UUZIYkVGT0VWTWlFQXhfYUhiYncyLTlEa29pOFU9
Z0FBQUFBQm9IVGJCM0RmY1l3VW81X3N0eldNTDZVVlVPdVJHZC1kQ215S2s0Y2Z1eG4wa1lVQmxFNm5PUU9McDFoU2NmN2dSblNSeHFLY0Y0RTc3ekRHdjdRRVZBM2ZvckhDMFNXSnNPcm5KRzBvTElwbEFTVUIybTdPbk1NdDdocktRY1RjYVJNS1cxU2FTUUc4Ty1kb24yaFRyMXBVNm5Wd1pTTTYyMUZ4d0xFTks1WE5GQ0RRPQ==
>Are "right values" necessarily limited in a way that conflicts with a "complex web of motivations, reasons, and desires"? It seems to me that even though the Bostrom-MIRI work may be simplistic today (current work is foundational and in its early stages), there's no commitment at all to a 'simple web' of motivations, reasons, etc. My sense of that work is closer to the opposite, that it anticipates a complex, interdependent web relative to the median moral philosopher—in fact that is a major reason it makes sense to start working on MIRI-style projects today. If those researchers believed coding up simple rules sufficed, they might as well wait until the last minute. I think we can go two ways with this. The first thing I want to say is that I'm not sure we can make sense of a concept of "right values", and to the extent we can, it won't be developed in a formal system of axioms and inference rules. I'm not convinced that it makes any sense to talk about morality the way we're inclined to talk about logic and mathematical physics, as if morality will come down to finding "correct" rules and principles and reasoning procedures. That assumption does a *lot* of the heavy lifting, but I don't think it's believable as even a description of human moral psychology. Why import a flawed view into any machine-agents we build? The second thing is that to the extent that MIRI is actually anticipating a complex psychology rather than a single-minded utility-chaser, I'm often left scratching my head about exactly what they think a complex psychology looks like and why even a very complex network of rules and inferential relationships will ever reflect it even in principle. They often talk about psychology using gross descriptions of beliefs and desires and propositional attitudes. While that's been a dominant picture of psychology, and in many ways it still creeps into a lot of the contemporary science, explicit beliefs and desires only scratch the surface of what goes on in any actual moral scenario. The attempt to cash out the psychology of a superintelligent agent by making it overly simple from the first step so that it is amenable to formal tools of analysis, and then building that into a more complex network, is exactly the trouble. Why start with a process that scarcely resembles the only *actual* general reasoning agents we know of (us) and assume that this is *the* theory of agency? The connectionist and dynamical-systems guys at least get this part right. >Okay, some of the complexity that enters into a moral calculus might superficially appear to be "cruft and dross" to today's sensibilities. But if it's essential to solving the problem—i.e. developing an agent that acts consistently with useful moral norms—then it is (by definition, I think) neither cruft nor dross. "Cruft and dross" referred to human beings and to how we reason. It isn't pretty, it doesn't match any formal decision theories or normative reasoning theories, and it's influenced by lots of 'non-rational' and non-cognitive factors in, for example, our bodily and emotional condition. Ignoring all this for an abstract model of information processing over a system of inputs and outputs is, I strongly believe, missing the point not only about morality but about agency as such. >Related: understanding 'morality' as part of a broader set of motivations seems great, but at moments I almost think you're rejecting the idea of normativity altogether. Not at all.
My take on normativity is subtle and likely to be controversial so I'm not going to get into it here -- but to put it in a digestible but necessarily too brief form, what I'm inclined to say is that norms and normativity in human moral life depend on how *we* are as kinds of organisms, and that this generalizes to a point about agents and agency. The explicit elements of reasons, rules, and principles are not and should not be the primary concern in a practical philosophy.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBM3hxZlQzMEdGb1NzM0FTRUJ6c01SNUF0djVteUp4OU9GX3ZsUEk0QXBnR2xkdG9FRXRaWEJVUnJacVViT0Z5cFpqOXlDa191YnZLanB5WjVSRnBZWl84RERJOXpiR1VNX0RfU00xRGFLSkU9
Z0FBQUFBQm9IVGJCRWNTLVNwU0xmRUVCRVVlc0NHRGtvc2ppN3B2TXNTM2dQb3NibXEzbTd1aENHbEdiRVpVZGFPY2pxcEtqUVhaMUdYWXpVc0pqWHRaTTRZNEY0M3VGVEJjR2p3WGdEVzNPLXkyN2tlX1RHV1lDOUJIYzhpb0xTNWlJcTFrd0tfUThycjJRSWZVU3h2dXV1LXRFMENZNEJjbWp5N015T3lNN3M0RmVaNmotR3Eyc1R0OTZOT2RBdWVqTG1lVlFaMjNI
>Any thoughts on how this discussion might apply to the sorts of specialized autonomous machines like autonomous cars that are likely to become more prevalent in the coming decades? I don't have any worked out view on this, but from what I've seen of the debates about this particular issue I think you've hit on the worry. It's difficult to build a 'moral' machine which lacks any sort of psychology, so the best you can do is give it some kind of jumped-up OSHA regulations and hope for the best as measured by accidents and deaths per mile driven or whatever. As far as I'm concerned moral action is a kind of virtuous action, and virtuous action requires certain developed intellectual and reasoning capacities along with certain traits of the will, which a narrow AI just won't have. So it's difficult to say that we'd end up with a more moral machine just because it learned to do what the good person would do in each case, both because it lacks that psychology and because to get it to that point might well entail that it be a moral agent, since it would require the ability to reason about the particulars of each individual case. I think it's far too early to say what some deep-learning system might come up with, but I'm inclined to pessimism on that point. Because of that I'm almost inclined to see the 'morality' angle as overblown if not misleading. The issue with an autonomous car isn't different in kind from the engineering issues around any piece of industrial equipment. It trips certain intuitions about choice and action, but in the end it's a machine programmed with a set of rules that (hopefully) on average do a better job than the alternative of letting people be in charge. The question of its morality will depend on where those outcomes figure in *our* reasonings, actions, policies, and so on, not on the qualities of the car(s).
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBc2NvOXRfSXlfLUFNZmZqR0FsbHcyUWlKaEQzX3JXTllScE9jX2NRdnhlUUF4SFdvdUlLXzROM1lPZjlaOHZiVUNtWDktTERrbTJjaHRLTW5xWTNabWdJekFUaC1DZDZkcWxuQ3lkVlQyVk09
Z0FBQUFBQm9IVGJCaEl5OFB0cjgtUWRrVExnVDR6REtaTllKZjgtbUtWWDlUN1RZM0c0RFRjakdfNlQ3Q1BwVjNGRkhlS1JmcUhwZVd6bE53N09GbHJzQllwYUFNbmJxczhEbUJ3YkY2VXdyejBDWko5VkNKWVVWRmJuUjY1d05obFBzbkhtTldneUV0dzhrV0FkRFhsSFkwSkp0emtYMlpHQkhVbHRCUXRPVXFhYUFaMEJKY0o1Yk5LMGEzZFc2Z2RmQmd4U1V6MDZG
>These three points 1) the specification of some state of affairs as 'good', 2) the types of actions which are right in virtue of a causal connection to that state of affairs, and 3) a concern for justice within a utilitarian framework are all playing a part. So when I said that the difference between CEV and rule-utilitarianism wasn't all that important, I had in mind something like this: since CEV 'says to figure out what [people] would prefer if they were more rational and thought faster and had time to reflect upon all their beliefs', all three of those criteria are met. Like I said, I don't really see how this relates. CEV is about *choosing a moral system.* CEV is not about specifying a state of affairs, choosing types of actions, or protecting justice. Are you saying that CEV is similar to social welfare functions? It's on a completely different level. Of course, technically by finding a good moral system you are achieving a state of affairs, but that's the case for people advocating any moral position. >Who is 'we', and how does the good of 'we' match up with my (your) good, or with anything I (you) have any reason to do or to care about? Does this problem get any easier when we add the hazy and possibly unintelligible notion of 'better me' or 'ideal me'? There is a big difference between a problem and an ambiguity. And we can afford to be ambiguous when we are talking about issues which we will only face decades in the future. I can think of some methods of defining and weighing the influence of people and figuring out exactly whose volition gets extrapolated, though of course that is a whole debate by itself. Beyond that I don't really see what problems you are referring to. >Given that ambiguity in the notion of an aggregate 'global' good, and the potentially large gap between that conception of good and the good of any individual, do we really see nothing ethically troubling about putting a super-genius overlord in charge and giving it the imperative to "do this no matter what it takes"? You're trying to bring some kind of utilitarianism into the picture, but like I said, it's not a necessary relation. Our coherent extrapolated volition could turn out to protect the rights of every individual and avoid imperatives to "do this no matter what it takes." On the other hand, if our coherent extrapolated volition did turn out to entail suppressing the rights and interests of individuals, then we'd go along with it. But that would only happen if we decided that doing that sort of thing was worth it. Whatever approach you take to metaethics, there is a chance that you will one day have beliefs that seem strange to you today. >Why think there is a plausible moral system that 'we' or even 'ideal we' would ever end up agreeing on? I'm not sure about this, but the scenario in which there is a 'correct' morality is much more important than the scenario in which there is no correct morality. If realism is true and if we can arrive at true moral principles, then we have very strong moral reasons to pursue that path. If morality is doomed to subjectivity, then our choices don't matter in the same way. So it's prudent to hit upon the former scenario, where more is at stake. >Why would anyone want to, or be morally obliged to, live in that world? I don't see why anyone would not want to, unless they were particularly sadistic or immoral or something of the sort. I don't think it really makes sense to ask if someone is morally obligated to live in a world where moral obligations exist.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBNlg4cnJQdmVybXlfcTYxXzdtei1iWUtaLXZWZmlNMnZvMlE5OWVZNzhSTEFfZzhMU2dLbEpTOHNDTW5ad185Q3JONVY1cVloRUxSem9HOXZZUkhTLUE9PQ==
Z0FBQUFBQm9IVGJCajZSWmd3VjJZSDUzYk9wTUVqcl9UMUhkWFNKMlk3WVQwcHZXQ3JCQzZ2OTcwTzRienlYQXJvek8tTjhsMW1pWC1zRlhLNkU2U1BnU1RXc0FyT09NMV9FdlJsN3dYS2ZrXy01VHEzSEdqdmpEMmNkMjBvRG9seUpBbzJiRnNVaHpWRW1xeUNsc29ibld5bE1FZ0VRZVFsOUctc1FadU1VT09XajU3ckFCaEVZPQ==
The idea behind intuitionist approaches is that we can all agree on some examples of praiseworthy actions, and then use that dataset as the template for moral behavior. Of course you can do the same thing with classical ethics. There are many kinds of actions which every philosopher agrees are good or bad, even though there is still disagreement. With classical ethics, there is an explicit shortfall in guidance, because there are conflicts over specific principles and frameworks. But with these intuitionist schemes which seek to avoid disagreement, there is a corresponding implicit shortfall. We might 'train' an AI on datasets with thousands of universally accepted moral judgements, but all the moral judgements on contentious issues will be excluded from those datasets due to being contentious, leaving our AI with no guidance or information on controversial issues.
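A small sketch of that implicit shortfall: if training items are filtered by near-unanimous agreement, the contentious cases never reach the model, so it has no learned signal exactly where guidance is most wanted. The cases, agreement figures, and threshold below are invented placeholders.

```python
# Illustrative sketch of the implicit shortfall: keeping only judgments
# with near-universal agreement excludes exactly the contentious cases.
# All cases and numbers are invented placeholders.

judgment_data = [
    {"case": "gratuitous cruelty",    "agreement": 0.99, "label": "wrong"},
    {"case": "keeping promises",      "agreement": 0.97, "label": "right"},
    {"case": "euthanasia on request", "agreement": 0.55, "label": "disputed"},
    {"case": "lying to prevent harm", "agreement": 0.60, "label": "disputed"},
]

AGREEMENT_THRESHOLD = 0.95  # only "universally accepted" judgments survive

training_set = [d for d in judgment_data if d["agreement"] >= AGREEMENT_THRESHOLD]
excluded = [d for d in judgment_data if d["agreement"] < AGREEMENT_THRESHOLD]

print("trained on:", [d["case"] for d in training_set])
print("no guidance on:", [d["case"] for d in excluded])
```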
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBeTdDMDB1ckJHbzA0aUpNSThwTGZyUnM0b1hMb3lOVTVjX1BaTC1udnV4cXNISUJOaEtDOFB4QW9CcDkzUjRXWVBwcjl1dWhXUkhlUU5JQjZmYkxaRUE9PQ==
Z0FBQUFBQm9IVGJCUGhlZXJTc2psTG94OFdwcHpiQWtMVjczQmhGSmNvUlpkSU42bGdJRTJHOGR3alFMZHVtcXphZ0lxWUJsWmJFb0YzdUNNaGNVUzRuNlQ2YlZsNFZ6alhOeGI4S3dtRXdBOVJESzN5bzN0cWpQSXZsMDdiZjVKbjNTQ0JYbVRYM0ZTeWtva01NNE90eEN3MzQ1X2FIdTU3bWlEZm1XbWx4Q2VtNVFSQ3VzenNuLXpTTWJZMWRuaHpPNE9Oa1NiRVNG
An intuitionist approach does not escape the problem of edge cases. The result would at best be as erratic as a human.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBUGVPaVEwc1JOUV84SHRQRzZmbTZES2dsZ05IejNOYUw3T19pdGRya05nTFJWTHJyM0EyaldmaEhMcmYxSGxUYkJUbDFSeTY0VDU3NEFtdTU2RDB0TGc9PQ==
Z0FBQUFBQm9IVGJCNzREbUQweW04c0d0bmVVV3dLQ0luTllOdTE1OFpyYUROZkFKdVc3bXdCMHlpRzR6ZU9ubjZrY0FrS3RLR2wtblRGVzBOU2RsRDYyTWxTYVRVNWp0YmRGcEU2M1NrZVJCWlo3Q1VrY2UtNVlQOGlpTHMtSk9rQVRPT3lybmJXRjl3U1kzSXRLWndhMkRXZV9KaHE4cGExOUQyU0liWngtaVJycF9vR09BaXJhM0J4b090eGoyVzgwWHgwMzB3cVNR
>Like I said, I don't really see how this relates. CEV is about choosing a moral system. CEV is not about specifying a state of affairs, choosing types of actions, or protecting justice...Of course, technically by finding a good moral system you are achieving a state of affairs, but that's the case for people advocating any moral position. It's a thin specification of an outcome, but it is a specification of the good outcome: whatever it is that fills out the details of the ideal moral system chosen by the ideal agent(s). It isn't the case that any moral system advocates a good state of affairs. A pure Rawlsian or contractualist account doesn't, for example, but that's not what CEV is. It has explicitly made some ostensive state of affairs the good outcome at which right actions must aim. That's not at all the same thing. >There is a big difference between a problem and an ambiguity. And we can afford to be ambiguous when we are talking about issues which we will only face decades in the future. I can think of some methods of defining and weighing the influence of people and figuring out exactly whose volition gets extrapolated, though of course that is a whole debate by itself. Beyond that I don't really see what problems you are referring to. It isn't that there is just an ambiguity. The trouble is the logical distinction between 'good for me' and 'good for us', which requires some way to bridge the two into an account of actions that bear on what *I ought to do*. The larger point here is that whatever the AI cooks up is in no way guaranteed to look desirable to any actual human being or any currently-existing social institution, in either a psychological or an ethical sense of "desirable". If you want to bite the bullet and say, that's fine, some abstract state of affairs entirely divorced from any concrete human reality is just what morality is, nothing prevents you from doing so, but then you have to come up with some reason for any human being to care about your ethical system or see it as his own good. What's left of an ethics when you've eliminated from it the details of any human consideration? >You're trying to bring some kind of utilitarianism into the picture, but like I said, it's not a necessary relation. Our coherent extrapolated volition could turn out to protect the rights of every individual and avoid imperatives to "do this no matter what it takes." On the other hand, if our coherent extrapolated volition did turn out to entail suppressing the rights and interests of individuals, then we'd go along with it. But that would only happen if we decided that doing that sort of thing was worth it. I don't believe you've understood the problem. The worry is in the suggestion that there is some fact of the matter about how an 'ideal' human agent would choose to construct a moral system. The content of that moral system isn't the problem, it is that it is seen as the state of affairs at which the AI ought to aim on the CEV view. Since the AI has no way to discriminate between right acts and wrong acts independently of their causal relation to that outcome, it will have an imperative to commit even horrible actions if it has reason to believe that such an action will help realize that outcome. If some humans don't like it, or if some cultures or social institutions are destroyed, or even worse things happen on the way to paradise, then this is just part of what must be done.
If the AI has to destroy everything we currently know and value in order to achieve the ideal moral system, it will do so. Some of us (myself especially) don't see anything exceptionally ethical about an agent that would act that way, even if as a matter of fact it never does act that way. The other trouble is, to repeat the above point, what "we" decide doesn't come into it. Assuming there is any sense to a concept of "we", and we cannot take it for granted that there is, the "we" will not be us-right-now but future-idealized-us. Who knows what might be spit out of this aggregate "we" even based on current facts, let alone whatever some idealizations of "we" might come up with. What we-right-now think or want or would prefer, or more worryingly still, what would actually be best for us, might have little or nothing to do with the content of the future post-CEV moral system. What "we" want wouldn't have much to do with it. There's also something to be said for the observation that providing what is to a person's benefit doesn't guarantee that person's good. The connection between doing what is best and actually getting there is wider than many seem to realize, especially when the statements are generalizations about humanity-as-such and when they involve radical transformations of existing forms of life. >Whatever approach you take to metaethics, there is a chance that you will one day have beliefs that seem strange to you today. Why think metaethics has anything to do with it? I'm arguing against the very idea of spelling out 'good' in non-moral terms. This is part of the general worry about framing morality in its narrow, modern usage and the conceptual confusions that arise from limiting moral talk to a special sort of 'ought' talk. If you already hold an account of the human good, then this relativist drift of moral beliefs and judgments is a non-starter. There just are facts about human good which put certain constraints on what could or could not be morally good or bad. Moral beliefs aren't the most interesting level of analysis. >I'm not sure about this, but the scenario in which there is a 'correct' morality is much more important than the scenario in which there is no correct morality. If realism is true and if we can arrive at true moral principles, then we have very strong moral reasons to pursue that path. If morality is doomed to subjectivity, then our choices don't matter in the same way. So it's prudent to hit upon the former scenario, where more is at stake. Moral realism doesn't entail that there are moral principles. We can assert that 'X is wrong', or better still, 'X is dishonest', without appealing to principles. Whatever moral facts there are can be evaluations of actions or will using thick concepts rather than general 'thin' concepts like 'good' and 'right'. This isn't a retreat to subjectivism. There are evaluative facts about all sorts of things which aren't relative to our desires or interests. Good oak trees have deep roots, good watches keep accurate time, good farmers take care of the livestock, etc. etc. Correct morality, if that's a coherent notion, isn't going to come down to a specification of correct principles or outcomes. >I don't see why anyone would not want to, unless they were particularly sadistic or immoral or something of the sort. I don't think it really makes sense to ask if someone is morally obligated to live in a world where moral obligations exist. 
The criticism you're responding to is aimed at the very idea that we can make sense of an 'ideal' world, for the reasons mentioned above. Whether we are correct to even use the concepts of 'obligation' and 'ideal morality', etc., is part of the trouble. There are profound conceptual difficulties in connecting the psychological condition of any particular individual to that 'ideal' outcome, in specifying what is 'ideal' to begin with, and in logically connecting the ideal outcome to the psychological condition of any individual, and we cannot just gloss over these because we'd prefer it to make sense to talk about ideal worlds and universal obligations on the actions of rational agents.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBV1FUcmhPdGI0Rm1CdkJHODQ3Q2UyWU90ekxoeExsQk9xR0k4SGpCYzVRWkpROE1tS3B2N0xLcFFrS1N0TXh4VXNRUllkRzROd2pSOUJfX3pNS1NNQ1hhY3hpYjRUcm02M3BBZUQzVGpTaVE9
Z0FBQUFBQm9IVGJCYzA5TXowbVdUX0d4NEVJMkJFZzZvODZXbkhtTEh4VVRjdGliMGVxU3BjY0szZmFPdFFFVEhzMngzMWkydHpldDRESXFPS0xkUWF4eUo3eXBqY0tUZGM4ZGVGYm1TSXktU254dk9YS1Y2S3VEcmc4d0pxWFVNclpzcXRWcVM3ZjdYbjZWaEVESnI1cHoxZUFEaWxkeVV5MTl2Y004S0JycnlWZzFOT3Y3QUlzPQ==
>It's a thin specification of an outcome, but it is a specification of the good outcome: whatever it is that fills out the details of the ideal moral system chosen by the ideal agent(s). It isn't the case that any moral system advocates a good state of affairs. A pure Rawlsian or contractualist account doesn't, for example, but that's not what CEV is. It has explicitly made some ostensive state of affairs the good outcome at which right actions must aim. That's not at all the same thing. I'm not talking about moral systems, I'm talking about approaches to moral epistemology. If your point is that 'CEV leads to specification of a state of affairs, therefore it's consequentialist,' then you'd be committed to saying the same thing about every other system which leads to finding a system of morality. But that's obviously absurd. We don't say that moral philosophers are all consequentialists because they are collectively trying to arrive at the state of affairs where we have determined the correct system of morality. >It isn't that there is just an ambiguity. The trouble is the logical distinction between 'good for me' and 'good for us', which requires some way to bridge the two into an account of actions that bear on what I ought to do. Neither of those are what CEV tries to determine. It looks up what people would consider to be *morally right.* "Good for" me or you isn't it. >The larger point here is that whatever the AI cooks up is in no way guaranteed to look desirable to any actual human being or any currently-existing social institution, in either a psychological or an ethical sense of "desirable". If you want to bite the bullet and say, that's fine, some abstract state of affairs entirely divorced from any concrete human reality is just what morality is, nothing prevents you from doing so, but then you have to come up with some reason for any human being to care about your ethical system or see it as his own good. What's left of an ethics when you've eliminated from it the details of any human consideration? I would be pretty surprised if the correct moral system led to outcomes which were unappealing to humans, and I would be especially surprised if faster-thinking, more-rational versions of ourselves would agree upon outcomes that were wholly unappealing to humans. Again, you could question the field of philosophy in the same way - "what if in the future all the philosophers agreed that kicking puppies was morally good?" Well, I don't know how they would come to that agreement, but if they did then presumably they would have some pretty good reasons that would persuade me if I knew about them. >I don't believe you've understood the problem. The worry is in the suggestion that there is some fact of the matter about how an 'ideal' human agent would choose to construct a moral system. The content of that moral system isn't the problem, it is that it is seen as the state of affairs at which the AI ought to aim on the CEV view. Since the AI has no way to discriminate between right acts and wrong acts independently of their causal relation to that outcome, it will have an imperative to commit even horrible actions if it has reason to believe that such an action will help realize that outcome. If some humans don't like it, or if some cultures or social institutions are destroyed, or even worse things happen on the way to paradise, then this is just part of what must be done.
Not really, as the AI would be situated within the ideal state of affairs, and the state of affairs would (presumably) include a lack of horrible actions. If it did things that humans didn't want, then it wouldn't be following their CEV. The idea of "achieving a state of affairs" that you are relying on is not the formal structure of CEV; it's an approximation that you are using for the sake of argument. >The other trouble is, to repeat the above point, what "we" decide doesn't come into it. Assuming there is any sense to a concept of "we", and we cannot take it for granted that there is, the "we" will not be us-right-now but future-idealized-us. Who knows what might be spit out of this aggregate "we" even based on current facts, let alone whatever some idealizations of "we" might come up with. What we-right-now think or want or would prefer, or more worryingly still, what would actually be best for us, might have little or nothing to do with the content of the future post-CEV moral system. What "we" want wouldn't have much to do with it. We face the same risk every time we decide to study philosophy or open a book or get new experiences in the world. Our values change when we find new information, and we take it for granted that our past selves were ignorant. But there's certain limits to how much we expect our values to change. If our coherent extrapolated volition is different from what we believe today, it's only going to be because of rationally compelling reasons that would persuade us now if we understood them. >There's also something to be said for the observation that providing what is to a person's benefit doesn't guarantee that person's good. The connection between doing what is best and actually getting there is wider than many seem to realize, especially when the statements are generalizations about humanity-as-such and when they involve radical transformations of existing forms of life. Again, CEV is about moral specifications, not maximizing subjective well-being. >Why think metaethics has anything to do with it? Because we are discussing possibilities of updating and changing our moral beliefs. CEV is one such approach, and the criticisms you are giving here are equally applicable to all methods of updating and changing moral beliefs. >I'm arguing against the very idea of spelling out 'good' in non-moral terms. Where have I or MIRI researchers done that? >This is part of the general worry about framing morality in its narrow, modern usage and the conceptual confusions that arise from limiting moral talk to a special sort of 'ought' talk. If you already hold an account of the human good, then this relativist drift of moral beliefs and judgments is a non-starter. There just are facts about human good which put certain constraints on what could or could not be morally good or bad. Moral beliefs aren't the most interesting level of analysis. Sorry, but I don't understand what you mean by basically any part of this. Morality just is the question of what we ought to do or the kind of person that we ought to be; I don't see what confusions this causes and I don't see what alternatives would be justified nor what alternatives would reduce confusion. This definition and application of morality isn't tied to moral relativism, nor is CEV. I don't know what you mean by certain constraints except to simply say that some things are definitely morally good/bad. And I don't know what you could analyze instead of moral claims in order to provide moral guidance.
>Moral realism doesn't entail that there are moral principles. We can assert that 'X is wrong', or better still, 'X is dishonest', without appealing to principles. Whatever moral facts there are can be evaluations of actions or will using thick concepts rather than general 'thin' concepts like 'good' and 'right'. This isn't a retreat to subjectivism. There are evaluative facts about all sorts of things which aren't relative to our desires or interests. Good oak trees have deep roots, good watches keep accurate time, good farmers take care of the livestock, etc. etc. Yes, that's right. I don't see how it changes the core issue however. >The criticism you're responding to is aimed at the very idea that we can make sense of an 'ideal' world, for the reasons mentioned above. Whether or not a world is 'morally ideal' doesn't necessarily change whether or not agents would enjoy it or prefer to live in it. If there is no such thing as an ideal world, there's nothing obviously troubling about doing what people would consider to be morally correct if they thought faster, were more rational, and fully reflected upon their beliefs, and I still see no real possibility for people to be systematically dissatisfied with the outcome of such a process. >There are profound conceptual difficulties in connecting the psychological condition of any particular individual to that 'ideal' outcome, in specifying what is 'ideal' to begin with, and in logically connecting the ideal outcome to the psychological condition of any individual, and we cannot just gloss over these because we'd prefer it to make sense to talk about ideal worlds and universal obligations on the actions of rational agents. I don't really see what profound conceptual difficulties there are, or at least I don't see what difficulties stand in the way of the approach specified by MIRI. You'd have to flesh out your concerns and explain what the problems are.
r/aiethics
comment
r/AIethics
2016-07-04
Z0FBQUFBQm9IVGJBbUc4aWlUZ19nSXFNSzJZWDBWeTRmUldZWVk2eUVZbWNfdEdKR01xdV90emJkOG90ZGxUS0RuMTZTYlVoQnYxX2R3SWdVajBOVUhRT3NJR1V1OFRia0E9PQ==
Z0FBQUFBQm9IVGJCZUx5OG5RSFhTU0ppZG5COUtWYk4tSE1qWUV4Z0MyS2ZacnZLa3Z6ODBoWHVOaW9kdkFrS1FWM2tFUXJtMkVvN0tXeEdoNktrUjhlV1pUNFpoNlVTTW1tTV9vYTF4QlhCT2ZEZmQ0UjVVN1FRMWJQNnZGQVZGOWpLcGxLWlRud0F1SU8zc2w2bFNSbm1QUDloa0hBYTVQQWxzM1V0MHFPUnliTUI5d3JhamJVPQ==
>I'm not talking about moral systems, I'm talking about approaches to moral epistemology. Then I'm afraid you've lost me, because the part I was responding to in the bit you've quoted here is what begins where you wrote: >Like I said, I don't really see how this relates. CEV is about choosing a moral system. In any case: >If your point is that 'CEV leads to specification of a state of affairs, therefore it's consequentialist,' then you'd be committed to saying the same thing about every other system which leads to finding a system of morality. But that's obviously absurd. We don't say that moral philosophers are all consequentialists because they are collectively trying to arrive at the state of affairs where we have determined the correct system of morality. Firstly, I'm okay with this because I already have serious doubts about moral *systems* as such, which I've been trying to bring out (apparently without much success). I don't think this is a troubling consequence even if it is right because the "system" is exactly what I find implausible. Secondly, you're right that we don't say this about moral philosophers as such, but they aren't trying to do what CEV does, which is 'arrive at the state of affairs where we have determined the correct system of morality'. If this isn't what CEV is doing by showing the way to an ideal morality, then I'm afraid I'm entirely lost as to exactly what it is doing given that you've defined it as attempting 'to find the moral system which we would eventually consider to be the right one to follow'. >Neither of those are what CEV tries to determine. It looks up what people would consider to be morally right. "Good for" me or you isn't it. Indeed, and if morality isn't connected with human good, then exactly what is it for, and why do human beings care? >I would be pretty surprised if the correct moral system led to outcomes which were unappealing to humans, and I would be especially surprised if faster-thinking, more-rational versions of ourselves would agree upon outcomes that were wholly unappealing to humans. Why? By definition, faster-thinking and more "rational" versions of humans aren't *humans*. I'd be very surprised if these 'betters' had much if anything in common with *human* preferences and values. >Not really, as the AI would be situated within the ideal state of affairs, and the state of affairs would (presumably) include a lack of horrible actions. If it did things that humans didn't want, then it wouldn't be following their CEV. The idea of "achieving a state of affairs" that you are relying on is not the formal structure of CEV; it's an approximation that you are using for the sake of argument. Here three things become important to note: 1. Whether the AI *does* horrible things is less important than the fact that *it is the sort of being that could do horrible things if the outcome was right*. 2. We've already established that it isn't doing what *humans* want but what *better-humans* want, for a certain specification of "better". 3. The achievement of the good state of affairs is exactly important because, given 1 and 2, the AI could do just about any damn thing it wanted if the CEV demanded it, and since the demands of the CEV are by definition not connected to human goods, it could commit a great many evils in both on its way to achieving the ideal state and in the structure of the ideal state itself. 
We've now entirely unmoored the discussion from questions about human good to abstract talk of matters that have nothing to do with what is beneficial or good. Why should this be our concern if we are still meant to be talking about *ethics*? >We face the same risk every time we decide to study philosophy or open a book or get new experiences in the world. Our values change when we find new information, and we take it for granted that our past selves were ignorant. But there's certain limits to how much we expect our values to change. If our coherent extrapolated volition is different from what we believe today, it's only going to be because of rationally compelling reasons that would persuade us now if we understood them. This opens up a thicket of worries. Firstly, it is certainly true that values change and progress, and this is arguably part of what morality is. But I think in one way we can make too much of this, and in another way too little of it. We make too much of it by slipping too carelessly between individual considerations and then the social and cultural paradigm changes. *You and I* can certainly learn and grow as individuals, and that growth takes place within a setting characterized by language and social practices in which we are embedded. On the other hand, this trivializes change by subsuming it under a notion of rational moral principles which are convincing precisely at the level of reflection, independently of the very particulars of our social practices that make up morality. This is progress under a certain view, but it's an open question as to whether it is progress of the sort we'd want in ethics, and whether extrapolating such a thin vision of morality into perpetual progress is actually desirable. Secondly, if CEV leads to different moral beliefs (which it must, given that there is no singular vision of morality today but a plurality of goods and moral sources and moral vocabularies), then we have no guarantee it will be important to humans precisely because we are no longer talking about human beings and human practices of deliberation. That connection with us has been severed. >Again, CEV is about moral specifications, not maximizing subjective well-being. Doing what is best doesn't have anything to do with subjective well-being or maximizing it. It's whatever falls under the specification of the good state of affairs. Even if we have some account of what is to our benefit, bringing it about could be disastrous anyway. [Continued](https://www.reddit.com/r/AIethics/comments/4qt44h/meta_thread/d4zgoxe)
r/aiethics
comment
r/AIethics
2016-07-05
Z0FBQUFBQm9IVGJBaUlEbmFnWTVGN2kzWDZKNWl4RlBMVVBva0taalFmSFhlZC04VElJVmxJc2xWVGRKdEdOZjZXY09xNzVuaUxHYTJDY2JiMGFEaFNxTFpkZHV1VFpGRElhUDFUZVE2TWlrelJBT0YtcWFkWmc9
Z0FBQUFBQm9IVGJCNnhBV1lPcGlyenQ5LTN5UEJVZUl2OEhfZlIwOVhTS1hxaG43SjhIVmpkYng5U1BlVm5QTWd4WFZkcG1qdGFJTWQ4YVhLUmRhSGl3cUxrb3JiZkYzU01aaDRQbjNab1JLVU1BWmNMY0pJSER6eDMwcXhsV0pzN0d1VnNrOGFSTG9CYlEydXdCcnhfTUd5aTExSEN4SjIzZHVEeGpObklCX1h2OEQ2S3ZicFhRPQ==
>>Why think metaethics has anything to do with it? >Because we are discussing possibilities of updating and changing our moral beliefs. CEV is one such approach, and the criticisms you are giving here are equally applicable to all methods of updating and changing moral beliefs. >>I'm arguing against the very idea of spelling out 'good' in non-moral terms. >Where have I or MIRI researchers done that? I think there's some confusion at work here. Meta-ethics is the attempt to analyze the concepts we use in moral talk in non-moral, descriptive terms. When I ask what metaethics has to do with it, I'm asking why I should worry about defining the good and its relatives in nonmoral descriptive equivalents. You seem to be under the assumption that this project of logical analysis is applicable to any and all talk of normative or evaluative language, but that is exactly what I deny. MIRI, and I guess you, have done this by taking for granted that *this* is what morality must be like and how it must be understood. I'm worried about human good as good, not about sets or systems of particular moral beliefs or normative judgments. My whole line of response to you has been with this in mind, which is what I mean by rejecting the narrow view of morality as concerned with a special use of 'ought' and with duties and obligations. >Sorry, but I don't understand what you mean by basically any part of this. Morality just is the question of what we ought to do or the kind of person that we ought to be; I don't see what confusions this causes and I don't see what alternatives would be justified nor what alternatives would reduce confusion. This definition and application of morality isn't tied to moral relativism, nor is CEV. I don't know what you mean by certain constraints except to simply say that some things are definitely morally good/bad. And I don't know what you could analyze instead of moral claims in order to provide moral guidance. Let's start with this disjunction: >what we ought to do or the kind of person that we ought to be These are already very different questions, and they cannot easily be lumped under the same heading of 'moral theory' or thereby rely upon the same methods. You are approaching the topic from an orthogonal direction, and I'm not sure you've entirely gotten the difference. Because of that confusion, you've taken it for granted that the latter question in the disjunction just is a question about rights and principles and obligations etc., but that isn't the point at all. Questions about right (etc.) depend on prior questions about good, where good is given in (say) a story about human flourishing, virtuous traits of character, or whatever. To take it otherwise is to miss the genuine conceptual shift that virtue ethics offers over modern contractualist and utilitarian ethical theories. It isn't just another collection of 'right things to do', and arguably isn't best understood as a 'theory' or 'system' at all (to the extent it can be, it is thin and not so much concerned with prescribing specific actions, which isn't an ethical theory to most people). When I say that the change of moral beliefs over time isn't a worry for this view, I'm probably overstating things somewhat but at the same time, we aren't just unmoored in a sea of 'rational' principles arrived at by armchair thinking or formal methods.
It isn't just the case that we 'can't know' what's good for some future people, because there are details in a plausible account of human nature which put constraints on what could count as human goodness, and thereby constraints on what could count as a true moral judgment. There are facts about how we live and how we have to live as human beings which mean it isn't just a relativist free-for-all. >Whether or not a world is 'morally ideal' doesn't necessarily change whether or not agents would enjoy it or prefer to live in it. If there is no such thing as an ideal world, there's nothing obviously troubling about doing what people would consider to be morally correct if they thought faster, were more rational, and fully reflected upon their beliefs, and I still see no real possibility for people to be systematically dissatisfied with the outcome of such a process. Then you've got a dilemma to address. Either a morally ideal world is plausible but we have no guarantee that it would be anything recognizable to us and our way of life (and lots of reasons to think it wouldn't), or else a morally ideal world is not plausible and we have to ask why we should want to give up what we are now, qua human beings, in order to realize some arbitrarily 'better' world. It doesn't seem we're even talking about ethics anymore, but some kind of sociology of the future. So why is either of these even on the table as a viable future that we should be aiming at?
r/aiethics
comment
r/AIethics
2016-07-05
Z0FBQUFBQm9IVGJBd1BYd1hlTWdEejk3Sjg4Um5pSkctSERtOXhGSzRuWVVEQXc5dzlweF9NSHBFd1dIdE4tV3NKUUlHWGl2OXB5RzVBY2E4VmhpRExWSng0RFoteFRHd0JveDFxeG1rU0YwLXl3V0hTMWhJT2s9
Z0FBQUFBQm9IVGJCenJsVXc0c091ZHl4UUxMYjRGSGlpMjU1R19qakE5MjEwTS1tLTRfRGgxbmZFR0tzR3IzeElWQ1NYWFRGc0ZJbTJTU2xnaXB2aVhCQVNVOUEtMl9DTDVpbE9lS2k4eklBYlU4UGR2cGRiSExlRzJqYWg3VTBLZGNMbHdOMUNOcWE1ZUIwd3IwV3VkaTNxZnJ0M29CN0kxeU4yYk1MZDZrdGRIRGk3Z1V3UXdJPQ==
> Then I'm afraid you've lost me, because the part I was responding to in the bit you've quoted here is what begins where you wrote: Choosing a moral system is a practice of moral epistemology. >Firstly, I'm okay with this because I already have serious doubts about moral systems as such. That doesn't show that anyone trying to accomplish a philosophical project can be called consequentialist. Whether you have doubts about moral systems is a different issue. >Secondly, you're right that we don't say this about moral philosophers as such, but they aren't trying to do what CEV does, which is 'arrive at the state of affairs where we have determined the correct system of morality'. That is exactly what they are trying to do. They are going to universities, and writing papers, and staging discussions, in order to bring about a state of affairs where we have determined the correct system of morality. Really, I don't see why this isn't clear. You are basically arguing *X is trying to achieve a state of affairs where Y is the case, therefore X is similar to utilitarians because utilitarians try to achieve states of affairs.* >Indeed, and if morality isn't connected with human good, then exactly what is it for, and why do human beings care? I didn't say that morality was disconnected from human good, I said that it's not the same thing. >Why? Because I think there are good moral reasons to do things which are generally appealing to humans. >By definition, faster-thinking and more "rational" versions of humans aren't humans. I don't see why you would think this. This is just a semantic point, but you are basing entire arguments off this premise. So it depends on the definition of humans that you are using, whether it actually is the definition that we should be using, and whether or how the CEV violates it. >Whether the AI does horrible things is less important than the fact that it is the sort of being that could do horrible things if the outcome was right. I don't see why we should be concerned with anything other than what the AI actually does. >the AI could do just about any damn thing it wanted if the CEV demanded it, The AI would only do exactly what the CEV demanded it to. >and since the demands of the CEV are by definition not connected to human goods, CEV demands that which we would want if we were smarter, more rational, and more reflective. How this leads to something which is disconnected from everything we consider good is not clear to me. Smartened, rationalized, reflected versions of humans will want smartened, rationalized, reflected versions of human goods. >it could commit a great many evils both on its way to achieving the ideal state and in the structure of the ideal state itself. No, unless people's CEV demands evil, it can't. But it's not clear how people's CEV could demand evil anyway. That's not really a coherent idea. >This opens up a thicket of worries. Firstly, it is certainly true that values change and progress, and this is arguably part of what morality is. But I think in one way we can make too much of this, and in another way too little of it. We make too much of it by slipping too carelessly between individual considerations and then the social and cultural paradigm changes. You and I can certainly learn and grow as individuals, and that growth takes place within a setting characterized by language and social practices in which we are embedded.
On the other hand, this trivializes change by subsuming it under a notion of rational moral principles which are convincing precisely at the level of reflection, independently of the very particulars of our social practices that make up morality. This is progress under a certain view, but it's an open question whether it is progress of the sort we'd want in ethics, and whether extrapolating such a thin vision of morality into perpetual progress is actually desirable. I don't see what this demonstrates. As long as it is reasonable for us to evaluate and examine our moral beliefs based on rational argument, my point remains solid. >Secondly, if CEV leads to different moral beliefs (which it must, given that there is no singular vision of morality today but a plurality of goods and moral sources and moral vocabularies), then we have no guarantee it will be important to humans precisely because we are no longer talking about human beings and human practices of deliberation. That connection with us has been severed. I don't know what kind of "guarantee" you are looking for. Technically it is possible that the correct moral framework would lead us to harm and abuse all humans or something of the sort, but we normally don't consider that kind of thing to be likely. >Doing what is best doesn't have anything to do with subjective well-being or maximizing it. It certainly could have a lot to do with subjective well-being. Either way, I don't see how this answers my point. >Even if we have some account of what is to our benefit, bringing it about could be disastrous anyway. You'll need to clarify what it would mean for a good state of affairs to be a disastrous thing to bring about. >I think there's some confusion at work here. Meta-ethics is the attempt to analyze the concepts we use in moral talk in non-moral, descriptive terms. "Meta-ethics" is a term with different usages. When I said "Whatever approach you take to metaethics, there is a chance that you will one day have beliefs that seem strange to you today," if you look at it in context, I was clearly referring to the broad project of choosing our moral frameworks. This is also how MIRI uses the term, which you can see if you look at the very title of their 2010 paper on CEV. >You seem to be under the assumption that this project of logical analysis is applicable to any and all talk of normative or evaluative language, but that is exactly what I deny. MIRI, and I guess you, have done this by taking for granted that this is what morality must be like and how it must be understood. Most philosophers think that we can logically analyze moral claims and figure out which ones we should follow, if that's what you mean. The fact that I and MIRI happen to be aligning with the bulk of work in moral philosophy on this doesn't seem particularly interesting to me. Whenever we make a moral claim we have to sacrifice a bit of engagement with the people who raise doubts and cynicism about various aspects of the idea of a moral claim, but that's an acceptable price to pay. In any case, I've yet to see any arguments saying otherwise. >I'm worried about human good as good, not about sets or systems of particular moral beliefs or normative judgments. My whole line of response to you has been with this in mind, which is what I mean by rejecting the narrow view of morality as concerned with a special use of 'ought' and with duties and obligations.
I think you need to flesh this out in better detail, because almost everyone considers morality to be about the special use of 'ought' and about duties and obligations. Raising general cynicism is not the same as providing an objection. >These are already very different questions, and they cannot easily be lumped under the same heading of 'moral theory' or thereby rely upon the same methods. Sure they can. They are both approaches to morality. There's nothing strange or unusual about lumping different questions and different methods under the same heading if they share something in common (viz., the project of analyzing moral claims and determining how we ought to live). >Because of that confusion, you've taken it for granted that the latter question in the disjunction just is a question about rights and principles and obligations etc., I haven't. I know how teleological ethics differ from deontic ethics. >It isn't just the case that we 'can't know' what's good for some future people, because there are details in a plausible account of human nature which put constraints on what could count as human goodness, and thereby constraints on what could count as a true moral judgment. There are facts about how we live and how we have to live as human beings which mean it isn't just a relativist free-for-all. This seems like a straight contradiction of your claim that CEV would lead to ideas which are totally disconnected from human nature. Beyond that I'm afraid I don't see what your point is or where you're disagreeing with me. I think you should clarify in what sense you use the terms "good for", "human goodness" and "constraints": whether it's moral claims, subjective well-being, or something else entirely. >Either a morally ideal world is plausible but we have no guarantee that it would be anything recognizable to us and our way of life (and lots of reasons to think it wouldn't), It might be totally unrecognizable, but that doesn't mean it would be bad, pathological or unappealing. I'm quite happy to accept radical differences from our way of life, not just in a shallow social/lifestyle sense but also in the fundamental structure of our experiences. Anything else would be morally chauvinistic. A morally ideal world isn't necessarily desired by the people in it. By definition, they are separate issues. I just believe that it is very unlikely for a morally ideal world to not be desired by the people in it. If such a scenario were the case, then we would presumably have compelling moral reasons to follow it. But in such a scenario it would also be clear that we are totally incompetent at figuring anything out about morality. >It doesn't seem we're even talking about ethics anymore, but some kind of sociology of the future. So why is either of these even on the table as a viable future that we should be aiming at? Because, under my current moral principles and especially according to meta-normative considerations, there is a lot of potential value to be realized.
r/aiethics
comment
r/AIethics
2016-07-05
Z0FBQUFBQm9IVGJBMFlCSDFDM05hYjZ4VTItVHZlV3pMcTZuR1ctdGVaWEFmZUlfQWRQaDBrdVE2N2t1TmVXZjhHdEhqdk5yQ0pXcFFvWDBKbjBXOW1JYm5sdlJFYmN6Z0E9PQ==
Z0FBQUFBQm9IVGJCWHJmVDRzWmFMZ2NzZ3pubTd0cDZKb01NaXBrYW9yVW1KRWlzUFZublgzelpPTS16UVR6aFZBZEJNc0FlVkVHcmZmaUNlaTRhVFVIQlRJWHliU3F2LUp2ZUpkR1c4d3NsamJoNzRSVTBFM2hhZUdTeVU5Q3JHdFRvZzNUVkRRaXNIWlNJZFd4SHhxRnNsWTJrdTkycmxxWjBUSFJzbmNRV2xHeFJ4b2l6eDBzPQ==
Most of these are points I've either addressed already or that have been interpreted out of context in the quote-salad above. Since we've reached that level of tedium, complete with the temptation to abandon charity, we'll need to come up with another format, as I don't have the time to devote to line-by-line responses at this length.
r/aiethics
comment
r/AIethics
2016-07-05
Z0FBQUFBQm9IVGJBLTRTbzdlNFBUUlljYVdvbUZ3eEVUWlZCczJ1WDBrN3ZLYkhEVVpJNkdTUXBtb21hZXB2RndUQmh3YlJ0SHE2Yy1kR0toeUhQUjhGZ05LQzdpQ0E3WWtlcFo0dm9CTVM3Z09uTGVSeFR2RkU9
Z0FBQUFBQm9IVGJCcU9xeVRqWXJJMmM2Q0J1OFR6YWptbFVmam9VTnBtU19McE5zQUxQOHRsNTBhZFJRdVhnWGg2MGl3Vk5JSTZHUmFfb3ZXZ3ROSDY2dXpNSjdRSWNBQ2lPWkNlaVZfZFAza2xsbWRnb0lYQk1qcU51MjRDclFUdnFtOGN0YjhDTV9QM2NVZmJOMWxlQkVaaEhzMzhyTXAwS1VOY2cxX1ZkZDVOeVJVY3MxMkJNPQ==
If you want a cleaner discussion you could restate your ideas in more concise terms with clearer conclusions, probably in a separate thread, because I still don't see what your main idea is and therefore don't have anything to say except for responding to your individual points.
r/aiethics
comment
r/AIethics
2016-07-05
Z0FBQUFBQm9IVGJBYkk4aFpTUnlZcS1MYm96NmItSDM3RlJNajhaSVpKdTF6VFZJSmh0LVlDeUVDb3ptTGJScGZHd0Zhamx6VFVqeEdHVnd3SDZRM2lmN1Rpcm1ycVM3RXc9PQ==
Z0FBQUFBQm9IVGJCQWhqT19QSVN2QlRMdTBMcGNQeGdQbndISGZrMWNwbDd5M1FLams4bGs4ZmZsWWxrODdndFFBUnVLM3dLN2RoWEJScHNYWGUyVVo4ODhOTDhwcWZNbHRSX29HUndiZUVNZy1pYnd0b1FDd3ZtekVoanBGTFBSZEltZkh1Vzl0b0NhZHdOWXdFaldOTlFaQy1kbzVaZUxncVUtUDRIY2lFZFBzak5HYmxheFZZPQ==
Whether or not we can control these entities is a technical issue, which maybe /r/controlproblem can discuss. Or do you mean to ask what we can do to make ems and biointelligences ethical? I'd say there's definitely a big opportunity for us to select morally strong people as templates for those kinds of agents.
r/aiethics
comment
r/AIethics
2016-07-07
Z0FBQUFBQm9IVGJBcGZ3OFdKWXJEZ2FVNmFVdE1Zcmo0SFpOMzNXTUkzbzFoTnV4bVJwQmszZDdjUXVnSFNfdS1NbFB6eEhrQ3BrVF80Q2ZuVlAtZ0wzd25LM0pObkE3OUE9PQ==
Z0FBQUFBQm9IVGJCV0tIdzZ1UzNjc1RGODdveko5bHp3dTVaRExjMW9IenAzYTRDV184M3FESnd3QXkwQ2xfSFg4c2hlS0NkT09YOEt2eVMxYTlnQzVqczJ2a2NFaUVmcXdCeVNEaDNRV28taUVyeFdibEFUeDkyTnZlTW0yUmhPQ1ZFeUhIeVN2NWV5cU53c0JWWXlkNHR1SzhiTkVPY3RySHpjVnBpUDd2VGJHU1BPcGVHcDVkbnY2ZHZ3Xzh6NUVYQ3kyTEs4dHhs
I suppose my question is whether there are solutions other than the control problem. Any thoughts on how you'd select a morally strong person? For example, how would you select the first hundred people to have their minds uploaded?
r/aiethics
comment
r/AIethics
2016-07-07
Z0FBQUFBQm9IVGJBQjEyaWRZYW54Um9EZlhyamZGYWRPbEdtMmZpWWRJMmt4MW9rWmhGaEFEQ0VOaGIzdUt6dUlCbnZMb0RBc2VrQkEtUWJJOUoxQ2M0dlp5TEhqMm1hclE9PQ==
Z0FBQUFBQm9IVGJCNFNvRklGWWIzNzFDX2JTdkRKT0RiX1F5Yl9zdU1ZUnBWQzd0dTlQeHBOTE92YktUb1g2cmJIWjFjWTVWZjA1ekdsc0NHRV9YcVkyQ0RUb2RrR213YnRLNWNwZFVQLWI3YkJvRGhfdXBOUXR4bHgtc2Y4ckZidWVFZGlzQkhjUVdtMkpDTkdDemdxdDExQWJ0LU5tSEZ1ODlOMEJiSEMwSW82T3pTZEZTamNHUEJDemIwQk5UdEtsODZOQ3dUVFJ0
>I suppose my question is whether there are solutions other than the control problem. Now I'm doubly confused. The control problem is a problem, not a solution... >Any thoughts on how you'd select a morally strong person? For example, how would you select the first hundred people to have their minds uploaded? Yeah, I'd pick myself, 10 or so people I have met or heard of (researchers and friends who share my ethical views), and then together we would choose the rest with the input of others in our shared community. I'm not sure how to choose everyone but I know people who share my values and would also be very good at figuring out how to choose. Sadly, that's probably not going to happen, so instead we would do our best to influence whichever institution was doing the choosing and try to win some kind of small concession.
r/aiethics
comment
r/AIethics
2016-07-07
Z0FBQUFBQm9IVGJBTE5qOWszZGcwMkxTbHlWTW5JaHJfMEFfRFNCWmZSUFJXejNRY2FtVUR6amtzYjI2aXQwWHNndkY3OUROMElpNGxkdXczdi1VN3EybmR3amppZWxabWc9PQ==
Z0FBQUFBQm9IVGJCUzdkOURnTV8zSzNoV1FkNndNcXpPRXdUX2lqNXIyTmtNUGRrMWxKcW5ZYzRtOUJIbThkczRtcGl6MkZDUlF6T3BlWjQycUxRdGl0aU1iVmlaQk1OaTZ2SzJmQlQ5dDNQRk45WTlBc3JlOTRuSlJEakNGVHZZYjFjR0hja2xsdkpZRkNseEh2bkRsS1FTZ3F2VU5HVGlRQkVNZ1ltQldxYS10VzV5SUw3SGtpZERudkdocnczZjRJUU9yVm5BWkRS
> > I suppose my question is whether there are solutions other than the control problem. > Now I'm doubly confused. The control problem is a problem, not a solution... Sorry, it would have been clearer for me to say the following: If the enhanced-human path to superintelligence comes first, might we solve the problem of "superintelligence ethics" by means *other* than the *solution to the Control Problem*? Or similarly: if we fail to solve the control problem, but do solve the 'brain-enhanced-to-superintelligence' problem, is that okay? Could it be okay if we solve some other problem (like selecting morally superior beings accurately)?
r/aiethics
comment
r/AIethics
2016-07-07
Z0FBQUFBQm9IVGJBWFAzZGQ1X2NWRWNBODJiRDJMdDZSR19YUW5rc1pnbl80SEJJNU01ZUhSd042WkgyRGxwVjhHbk0xeHM2QlA3VHFqc21ST1FXdEhaSDJScDJBVVpBQXc9PQ==
Z0FBQUFBQm9IVGJCYjc5T1hCMmxXQnVoRDRiUVNjbWtZdlROb1V1R1lxV1RTYnBHOGRKZ3A4U01vV2tHZGlVT1ZxOThLVk02d19Fb0loN2MtUzZEck9rckNLX0FsbHQ1bDRiUTY1WkxOemtRdjNWZjRuclB5QlZBbFdmbTBQam9zUXZHcVZpbkRCZ2s4SkU0SU0wRU1CZmVRZmdleWlGdFR4dDZHRnlESUM3U3ZnU0RpbHl4R0xkNVExOGxJZEdQNGtJYnc3WWNvTHFl
Well, if you had a singleton formed by biological superintelligence, then of course you could prevent AI control problems from appearing. That's not the same thing as an em or an augmented human, however.
r/aiethics
comment
r/AIethics
2016-07-08
Z0FBQUFBQm9IVGJBUGhCWUhZZmlsN2daWWJBX1RhRVRYM21reGVaRWZudVBHYWkzUHZVZXh1Qmg3eXNFRFJHUzhZa2dOenhKVl8tNk10akNFXzVyN2xESFRpSHdvelgxUlE9PQ==
Z0FBQUFBQm9IVGJCYmdUcjZUQm05cjIxaXdSMWpyeWo1VlpELUoxSGYycmF6ajc1eU9oRkNycUZMcW0tcGFFc0R2UmhUdGVRNjRVTFl3cWdxaVJWR2d5M3ZyVGx5V21uSnB5ZDkyVmpmQTVTSjV6QVdtY0h2YkNUMzl0RVhMZ2ZyWkhyRFNzV1NQV0pRNkNQVEZfbkh2cHlEVU1kMmFPWGNycDI3QzVwLVNTNXktSm9vZ1ZRQzhfM2FzZ1BSLXROWGwzMGpjWXZ5NjV4
The Peter Singer in the article is not the same as the philosopher Peter Singer; he's an American political scientist.
r/aiethics
comment
r/AIethics
2016-07-08
Z0FBQUFBQm9IVGJBT09xWDVicGw2WV9HdVFMQ3I1ajZDTmJWcHJFQVRGT3ZZMzFqTlNxc3I4X3J1Z2VzVlpFTFpQX3NObl9NdmJUVkpwSVhtdDVKdDFiVHAxWnA4NG8yYmc9PQ==
Z0FBQUFBQm9IVGJCTGk2by1NQUFSZ19vd1RKRDR4ZVRWYWNJOG5Za1RWQ3ducWU0c0NBV1FnUWxjZG0tV1dORE5UWHNsNmpzWW9tMW1sUUtXckdzVm1pOXI5Y0dfR0hyVUNYUWJlSFJDSUNJSTltWGswMms0VllHUEVzR3lVNVhzVlFyYTVhOHQ2S1B4WVl6SkFMamtHdEx2S0ZDNXpobTV2Unhnd3UzbUtucDlMaW14b1ZnNExjcmFPQ0RJcVVNMXdnR1g1Y010WjdIeHVQYmNEdXlaLWJHUzNYQmJnanF6UT09
>These robots aren’t autonomous, Singer emphasized – the Marcbot “is like a toy truck with a sensor and camera mount they’d use to drive up to a checkpoint”. Not really any more AI than the predator drones they've been using to blow up people in the Middle East. This robot was not acting autonomously, so it was just a fancy but otherwise ordinary tool/weapon controlled by humans. For that reason, I don't think a different system of ethics from the standard one applies to this particular case.
r/aiethics
comment
r/AIethics
2016-07-09
Z0FBQUFBQm9IVGJBcmtKQ3paSi0wSk02dURqelJBZlZBSmpPbUJuSktJd0I4aUJ1Qms0YkRubWlPM3owQXFYSGhCZVAwaDNQNlhpVzN4anRWMFotS3ZjRHA1UXM3a3N5empGQ01UT3o4anhXOGlhNHg1SkxRNkk9
Z0FBQUFBQm9IVGJCeGVPOWZrQjB0a1VSckNqMnhqX3ZEek5SZzRmeUxsUHFzR2FKM2trTkxuU2NMUmVqOXphNVFLSFpibWkxQUN1WUkwQ3lTMzNMbEdkandrVUJsOGs0VlIwajB3OFA0czVYcUotSjZVNHN0NUVWRFFTSkNtZjdGZ0poWnpsOWdMT2IyeXQ0Q05zTnNpY1BmQVB5WW54czZ2dUlWZng2YmE5dko2cElwMUFtT1pRZnNUT1c1UHQ1VXFUV1Zzc0Q4WG9ISVRLVTJiR2xBcVZLSURGc21xNmVBdz09
This conference will have an impressive lineup including David Chalmers, Nick Bostrom, Peter Railton, Francesca Rossi, Stuart Russell, Max Tegmark, Wendell Wallach, Sam Harris, Eliezer Yudkowsky, and others.
r/aiethics
comment
r/AIethics
2016-07-15
Z0FBQUFBQm9IVGJBNmlGM09wV19XTlZlU2ZUYU5KZDh6cGVnSVR2REU1WG42RG9yTktreC05LVo3UkZsSF9LU2t3bFZQTEZBTEp4Q0k1ekJRS0w2eEVya0FpSUIxSGlxQUE9PQ==
Z0FBQUFBQm9IVGJCN3daa1JVaWpkdXJ2VURzUzFrQzJZcm50T21fd3FQNGlkWlhxMzdMQXFuRm5xSzhfX1hRSzd4UlRSUjRzcWdtN1VseGxmOGt2YUFMeDZIX3p5N3dCeUZXa0lIc2dub0ZsYjlWYVFKZmhiUTlmSGxTOG1iNEItSmlkV2c5cW1peU05cHhxekJaMFlYM2Z3cHd3MVpRaTZVTGRfeElNa2h2WlZvS0Y2a3RKNDRLVFo4V3A1aG8xR3dFN3E0dTQwU1lPaTY1bTVMbmR2VFYyR2haeklGaEt3QT09
I get that "morality" might not be the best way to describe what the Bostrom-MIRI crew are reaching towards. But it's not a matter of them trying to make moral machines and getting it wrong. What they want is to make sure the machines don't do things they won't like. If that happens to involve making moral machines, all well and good, but if not, no biggie. Figuring out a way to encode morality itself seems harder than encoding aligned values, so they won't spend effort on the harder goal.
r/aiethics
comment
r/AIethics
2016-07-17
Z0FBQUFBQm9IVGJBZ29sX3hwZHhmSTFqTzQ4RzVKdUlGeTdJRTJPaG5PVExtaWlTMGFzTGp4enVoMnRwZXNaNUZ0Tzd6a19MVjNVdVhHeHVrQXpGUVlORnltS3BIcVlkZ0M3b1hicDJ1ZFE0dlZtZndZeW5WQlk9
Z0FBQUFBQm9IVGJCZmpHOGlITWdXcjdFY01zdUhuZzU2ZGNHdkVTOVh4T1I1VjhERm1OQ1YwYkFWRGR2VmJpR09aZk9WZGhiNzlPYXlLQmIzaHRNdjNNaDZqRHV2YWJiQlZkajJZYmpFeGQ1NFFxMmJRTDhOenBwY05wYlZfTDI4dUEwRm51NEp6RXpCd1I5WlpYMjNmenpBdzNWakdrbXBZWFJDSE1GSE44bHQzTU45UFhLS2t4MzlTeXA4QXNDaThlY1FLZUczVGtO
> If the goal of AI personhood is to encourage more ambitious investments in AI, then the answer to these questions is probably “yes.” Is this really one of the motivations for AI personhood? Not a rhetorical question; I haven't heard that justification before and I'm curious how popular it is. It seems to me that the goal of AI personhood would be recognizing a particular qualitative/moral/legal status that a theoretical AI would have by means of its computational complexity and/or ability to have unique subjective experience. To put it differently, we don't give humans personhood because it's better for the economy; we give personhood to humans because they have their own consciousnesses, intellects, and experiences, and (by way of personhood) a particular status because of it. I see the personhood of AI as similar: in granting personhood we're recognizing a particular quality of this hypothetical AI, rather than incentivizing a particular behavior in the creators of AIs. > If so, should AI systems be able to buy or sell such property without a human’s say-so? I believe this already happens. We have statistical models that control the buying and selling of stocks on the market currently, and they buy and sell much faster than humans can keep up with (decisions to buy and sell stocks are made by the systems at something like nanosecond resolution). It can and has [caused problems](http://www.bloomberg.com/news/articles/2015-04-22/mystery-trader-armed-with-algorithms-rewrites-flash-crash-story), but given its prevalence I think it's similar to self-driving cars: potentially worrying in theory but not so bad in practice. I think, though, that the answers to this question, as well as to speech and religious practice, are fairly straightforward if personhood comes from subjectivity/autonomy in AI. If an AI is conscious/autonomous/whatever-else enough to be considered a person, it seems to me that it would be capable of developing (and probably likely to develop) thoughts about politics and culture, religious beliefs and the like, and I think those should be respected just as they are for other persons.
r/aiethics
comment
r/AIethics
2016-07-22
Z0FBQUFBQm9IVGJBbzk3QU9Xb1FoSmdzRWFBemZZZnh0ZlI0OC1TbFE4d1gzMGcxa2RzME91UUpLU0VFbUd5ZzdqVk5jazJYQkN2MkNwYVctRWJvS3ZyV1lrM1BQaEZ3WGc9PQ==
Z0FBQUFBQm9IVGJCSHk0YnhzYVlBckszTDZieFdCaDRDMnF6NGZSYkc5b3NvSkVrbFBmTmVZdTNvMWt2OWZUMUstekxzYXBnWDBXQW1CVnFET2d3dC1LS0pwR0xjZmZhdlFINUNiUzZsZG94MGZ5dHVxNWxYQkpxazU3NjdJOHZLb0xpLUNKcFNFMGxJRmJfZzhac3VFdVB5dTVaWkVhM3U3VVBQS2MzWVlQRlhfVmYwN05rdkZaUzdKSjhVa0FrTVIyOVdVYlBTMjIySDl3RlFCQl9oM25KX0ZOZnpTZC0wdz09
All rights...most importantly, of course, the right to life.
r/aiethics
comment
r/AIethics
2016-07-23
Z0FBQUFBQm9IVGJBMG9QV2VmTzhSRHZVRVN5QkUxMHVtbmJjTlh2VmxNV3F5d0FBWGNiX0pUQXJUVW5VSHI2OU9Md1dwMXF5eFJKc19HNzJHZW1mM1lKZjcyREFjcUduZkE9PQ==
Z0FBQUFBQm9IVGJCbTZDNFBMTjNDVEs3OGZoQVNvN0NYek8waEpVdFEwN0hkUl9ESV9MRTZxbkQzMTJtS2tLRkxQWjF2aDVIVmhWcEV6eHpYMjludWFXUmt2ZWkwQ1Qzbnd6RmVqc09kYUoxRWl3cGFSRXJrZDF6N0RNOXNOaG05TjJuekNzbjJOT3RmNDE3Nk5kWjNDZXJVQUdVbF84aEpFQmgtY0xYX2xmQV9DZVZnRFhYX3hTeUJWNzczS1h5M0lEcC1WZ0NfUHhzOF9LMVZzZkhjNVZqeTdLM3ZPRk9ZZz09
Link changed after spelling correction (thanks to yours truly): https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/
r/aiethics
comment
r/AIethics
2016-07-29
Z0FBQUFBQm9IVGJBcW5xYVJuWkhwUVFqY3Fyc0pBa3RlUmRZQlJObUFhUU05aldqdWtGUTBDZ0hYTlVHUDEzU29ZYlhJX1hPSGxsZkctb1V5ekNHVHR0ZzFOZGVQM09xU0E9PQ==
Z0FBQUFBQm9IVGJCTEdvdUpnTjAybzduREVHMzBMX0t6NVpmaTFuZlJPaXV3cUtGTVpuVXRwWTNucURqQlQyRHpPdDlSY0JuRklRZk9Pb28zZF9NYmVWWGV4em9BNHl0S05TeFVvQlJTUThhMGhqVUR0eXh3YUhrNWVEeXVBR3FIeEw5TXIxMzduR3NEMWxTSFg0TXZZMUc0b2FvUDFHVUVfZE15VkxFdUE5Qk1HTXlfQmlNSm8xWVNPQ09MQWdPQUtLZ3NJV0wzLUllWkhHdTNxZmowNnVveXpWVkpqSUlLUT09
Hmm... I answered a similar question over at Quora: > *At what point does an AI become intelligent enough to warrant "human rights"?* My answer: > Zero. > Artificial intelligence is a non-biological creation of humans. > It therefore has as many human rights as a pair of scissors. I really don't understand what's so hard to understand about that.
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBeGFpVWN4MmppQmhMckJNNnAzX2RRWTJtQ0t0N3MtbmgxeTNHT1BwdTFXNXpQeUJHV2lIMzRKb0pZV2pfWWhOUHM5UUdIbVZRd3VhZU0xNVdQekp3bkE9PQ==
Z0FBQUFBQm9IVGJCQzkxblRmNmtSektsc2Zqd19rU1lBQTR5NGZTMEhCWVJEQjRwWHpSbGlPNjJNREhPZldSM3ozMEZ0dFBvdE41ZWxENmkxU1NSZEpSaVpHU0xpQW1YckFrRGdMeVZxSGJ4TTZQUnI5cmRvTjRDMGpJbnhrU20yclJ4U19PWEVKNnZ4Z3VualJ2WXpQcnkzWWxxQUZ1b0ctcjhKWlNSeFdWMHB5WHNrbEZvQmxMS0JuUGMwN2FnOHBUVy1mVTREVTNXWjhxbHdOZ2hMVFpYSEF1OWp1YnpuQT09
Well, how does >It therefore has as many human rights as a pair of scissors follow from >Artificial intelligence is a non-biological creation of humans. ?
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBTkFubDltRVo2TzRDajRQNTFFTXFGV3NDdUlneElEM1ZDbFFNdXlvdTl5bDZmdFpMSlIxZFRnUXJaQnE5aHpaVm41Ui1tdWp5ZTdmaFczUFBKNFR5SFE9PQ==
Z0FBQUFBQm9IVGJCU2F1b01QemJPUjVjX2RYcHN4Z3d6ZVRqbi1GckFnSll5U2RJS29xeElMaE9zMzFJT1poVDRlX3hsUFVCRUdmbjd1eGozWUQzSEJ1TE1vSGR5REQ3OGdqVUJHUnM4cExTckJaeHoxSzJZTVpQVzRlWGRmQkJQV2c1OFBOM18tUG1MS0M1Szk5RE9FY1kwTVpiMUQ2QUYwNEtqUDBQVjNMYzlvdkhIX3d0MjM4a0paaElyaDRTZUZnOGpORFlzajRaZnZJbGp0WnFnM2VHeFlQMzVXUy1Tdz09
Hi Umami. What I'm saying is that because AI is not a human, it will never have (a) human rights and (b) a need for them regardless of how "human-like" it becomes. My computer, for example, is a non-biological creation of humans. It therefore has no human rights. It additionally has no desire for them. I, I... I don't know what else to say.
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBQWNMWHRmYVhrMllrcFdIekw0VzZmQkJqSlFvWnUyNzRVN1hUckpEM1R0S0FFd05fTENMWDRmR0hJVy1ENENMV0JjR2xnNmd0Z0xDREtZYlppUHZXZXc9PQ==
Z0FBQUFBQm9IVGJCY2l4aTJYcjd2bEdyTGF4YUpFOXpQUElZc3dPN0JZbVpqS3lobjRxNEdwQ0N4RnZsTUxLR21JNFdXWlJlMmktUk54b0JPTXZoOVhFc1lEUXhmUkNuSGxoRTc5dVZDa2l3M1hrZlM5RF9pQ1lCSzVnbjRJSGhsZ0xmTDFlckFJSU9jUFRHY0tjQ083dGNVdzZVZTVWOWcta3hMVnFtb0MxWTkyblkyNUNUd25LVXRxSl9leVlKcFhQcU9wQmI1ZXA3dmdVaXZldzctQ05ndGpXYlNla0Vidz09
>I, I... I don't know what else to say. Well, there's a whole host of issues that you could flesh out. Why does being "non-biological" matter? Do humans matter simply in virtue of the fact that they are biological? Isn't it irrelevant because many biological organisms don't matter morally, and some of those that do matter more or less than others? Moreover, what does biological even mean - would a whole brain emulation or a neuron network in a lab count as biological? Why are "human" rights the only kind of rights worth talking about? Which rights are we considering "human rights" and which rights are we not? What are the reasons that we extend human rights to humans, and why can't those reasons apply to any AI agents? Why wouldn't AIs ever need human rights? Couldn't we build AIs which straightforwardly required their rights to be respected in order to function?
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBajNIQ0RZanN0REY3WEwxSE1nZ1JDQUZEY0hGQVZhczRPcWlEVkQxNHlnaG91NnhIUHdvempFTjNXeVN0ZWtfWFU4QzhNeUZBOGZqV0lVWjJ6cEFLN1E9PQ==
Z0FBQUFBQm9IVGJCSENhZ3RwbC0xYjlhNnp4ZHNWcWN2cDQzMi05cWhKRURXQVQzc0NmTkNxUFltM2dlMHhYYTBGMjV2SldydVNNV2ZEakRQaThGX18wOHNVdzVqTXhDczBRZlpUUkJpQS1rdmtQRFlJS0RqcFVlVTdXSk9OLXpVSXBNU1hwX1RWUmo5OGU0VjU1ZU1KRkNqQzZicVhQLVZ5U2YzbjZMV2hBWENOQWtDMEtnRkhqMXJ5VzNTSDhyNjdrYlhJaWhDOFFMNjRDRHl6RlBVZC0zZTJ0TEJtRHlBZz09
Listen, Umami, inanimate objects do not *need* the things that living entities need (food, shelter, water). So they will *never* have the desire to demand them when they're not available (especially via "rights"). My box of envelopes, for example, will never, ever, ever demand rights to anything in or outside of my home. Nor will my speakers, area rug, or fireplace. Not even my mobile phone! And if you think a user agreement or terms of service is a demand for a right to something, then I suspect you may not know what AI really is. Here's a hint: It's a trick. And based on the number of people who may think it could ever want, need, or demand rights to anything, it's the biggest trick of the century. People are going to have to remember that AI is nothing more than computer code built by humans. And if you encounter an AI agent that starts demanding things based on a "right", you need to have a talk with its programmer. You're probably being targeted in a scam. It is utterly ridiculous to project a human need or desire ("rights") onto computer code. Nevertheless, when I subscribed to this subreddit, I thought it would address the ethical development and application of AI. I'm afraid I was wrong, since most of what I see so far in here is pure fantasy -- a literal comic book that hasn't been paginated yet. I no longer know what this subreddit is for, so I'm out. If you want to listen to me rant about other AI stuff, hit me up in r/artificial or r/controlproblem.
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBaTk5bXItaDJOd1BtTW4zYlBmTHNvSVI4M1Vtb0RTRGJWYTZ3bm1ISWJlRVhYRFEzNzFJXzR5ZEFLRmh4clRFVEgxSVBOX2hGUUJORTY2RFdYazhxTHc9PQ==
Z0FBQUFBQm9IVGJCV1lCay1mV1R4TkxBU0R4SVU3c2VVOTlCTU01Xy04WUZ6MGJBWlp0VG5RNWVXSXF1dndiZkZDMWppaGtDRW8yLUtuOTM4eFkxSlgya1N4V3dySWM0RjVLcnBzOWFWQmx4V2U3TW92UENRU1VtT0czYk02SnRuTnVaQmUzZGx4N2F4SHNIdzFMbWhKUlZHZEVtd0tHbmlEbWNfMlN1eVA5WW45NHBVR3g3eEdTTThvOUZXWFFnekY5WU9XeVZ6Z3ZmMF9xbXE2Vzc4MmdUNkRlOEdsYmdTdz09
>My box of envelopes, for example, will never, ever, ever demand rights to anything in or outside of my home. Nor will my speakers, area rug, or fireplace. Not even my mobile phone! Right, but we're not talking about boxes of envelopes, or speakers, or area rugs, or fireplaces. We're talking about sophisticated artificial agents which might not be satisfactorily described as merely "computer code built by humans". Note that we could just mirror your entire argument by saying that humans are merely atoms put together by evolution and will therefore never need rights, because amoebas and RNA synthetases don't need rights. But that's clearly absurd, and likewise, it's just begging the question to assume that all AIs will be no more morally significant than boxes of envelopes. Moreover, as described by the above article, we don't always assign rights merely because we think they are deserved, but we often assign rights to undeserving/abstract entities (such as corporations) because we find it economically and socially beneficial to do so. >If you want to listen to me rant about other AI stuff, hit me up in r/artificial or r/controlproblem. If you don't think a few years or decades ahead to the time when AI systems make serious ethical decisions (or the present day, arguably), then I don't know what you're doing in r/controlproblem. We run that subreddit for the purpose of discussing the development of extremely advanced systems which possess human or superhuman capabilities across many domains. And we are likely to be worried about AI ethics well before then, because (for reasons which I won't belabor here) ethical decisionmaking in machines probably isn't an AI-complete problem. In any case, this subreddit is new, so defining the focus of discussion essentially comes down to users introducing content that they find interesting. There isn't much restriction on topic at the moment, though you'll note that I've introduced a fair share of contemporary issues including autonomous vehicles and the use of a police robot to eliminate a threat.
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBNURpbUQyeFk2Y0RJekRIYndNOHFyM0wtOVNlM3prUU10akNyWWE5NUlieTlOZjFESWx6amtQSkExUVVsWE5xYllkc255Z2pEbjN5UzFnWThLeE9uaVE9PQ==
Z0FBQUFBQm9IVGJCd2RKSkF3TWxVNVNNQTd0cFd6TXNld01reWt1d2hQWnVQWmN2TjgxZG1Ydk5mQXRQUjlHbVRWVC05NlA0VnpXTlotRDJ5aXR2SlFra3lNcnRrUC1fTm9jbzNteXdyME9XNndDWWJjYlJaczhMZDFJLWQzOG1kN3Z5QVhJLVhOUHFzYlZRVThLMGhneWhzM25UWUoxTERqX09DZHdfXzV3UVJfMG9CQXdHLWk5T2dCTkRKRHNPZ1pPOTN6Y1dkM1ZrZktObkFUZFg2MDZqc1BRVm5MOU0ydz09
Okay Umami. Can you give me an example situation in which a super AI agent would need or want rights to something? If you can convince me that could actually be a plausible scenario, maybe I'll hang in there. Maybe you'll open my eyes to something. If you feel so inclined to do so, would you also describe that scenario while keeping in mind that this super AI agent was built and coded by humans? And that all its code came from humans? And all its "learned" code came from structures previously put in place by humans? Maybe just remember that it's a man-made thing? I'm picturing Data from Star Trek. Or that other guy in Aliens... who was it??... Ah, yes. Bishop!
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBWXptT0k2OW9DVjM0Ql9pdXFnYnNBMmtERG1CWld2MFVTN0xkMDJzRFFLVGwxa3dRYlpwaTN6TXo2ZzVGN2lNMFBiVThGM05qZWlKaW4xbmFqMHlxT3c9PQ==
Z0FBQUFBQm9IVGJCd3JRVTUwZHBDZlRuWWJVYzZyMTdHRFpGdXJuNldrR1FHalU0ZjJNMGF2SW1ybzI5eXZzTmtWVG8ydFJqTmRKTHdPUGJVWlRCenhQSEwxZ0swbzVaTnNMZEJrZ0xnZnZOUW9MZXpfbEh4RlZCY3JBLVVIcEFoQVZZdlFBT0Y5a05IcEFfOThSUUNQeEcwc1paRkFValltbC1RVDJRalpHVDhuYThOLU5PaU1adFJBeVN0NXdnUG1CUlBBSWpfeVFaMzVEUFlwVTV2X21XZlZZVkgybEp4Zz09
> Can you give me an example situation in which a super AI agent would need or want rights to something? What if it's an agent used for business purposes and is conducting its own transactions? In that case, shouldn't it have protection against fraudulent practices and scams? If it's an embodied agent, like an autonomous vehicle, shouldn't it have assurances against being willfully damaged, destroyed or neglected? For the most basic and practical example: shouldn't a self-driving car (with no occupants) have a right not to be pulled over by the police without reason? If they don't have these things, then they'll be less efficient and less successful in their roles, and people will take advantage of them. On a different note, if it's a reinforcement learning agent that's sufficiently complex for us to believe that it might be conscious (https://arxiv.org/abs/1410.8233), perhaps it should have a right against being placed in scenarios which cause unnecessary suffering. And complex agents will generally have reasons to desire self-preservation and resources (https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf), so maybe they will understand and desire rights to non-deletion and property protection. Now, maybe some of this is contentious, or seems wrong to you, but that's fine. I'd like people to discuss their ideas here, even if they're different from mine. >If you feel so inclined to do so, would you also describe that scenario while keeping in mind that this super AI agent was built and coded by humans? And that all its code came from humans? And all its "learned" code came from structures previously put in place by humans? Maybe just remember that it's a man-made thing? Of course it is, but is that really important? What if a human was built by humans in a laboratory - if they had the same consciousness and feelings as the rest of us, they would still matter. And if an AI somehow wasn't built by humans (suppose it was some stochastic process like a monkey on a typewriter) it wouldn't necessarily be any more important than an identical AI built by a human.
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBN0dIS2pVc0ljMC1PNTRzRTlITHZKX19mUjlUdGI4ajV1QmhlSXk5RVhiNWNQWVJPWFpqVkVjQ1ZGYkVGS0F0cFYtaVVfTEtHWFZUWGwyNDFTSGFsb1E9PQ==
Z0FBQUFBQm9IVGJCN0lBT05uYXY0Q21NeEhIMk92SVRGUWQ4TC1TVHBxRHdDOUtmSnNodDFGYURMckFIdWljeEp2ZjhVaHQ1LVhSNzdBeE1UdHdfN1E5LXY5RlhKTmlDVVhhXy0wTzN0X3pSeFFCUVlJSEhqaG9YcUU0SERDMUN1YmI3ZGF6UkdjNTVCQ0Z1MG5MRVNydEZIb0lzcFJoQV9iNjVvdk5mb2w5dDctbV82RjVHZEVMZlUza0pDRWlnTDdrd1FlYmFfWl9RRUc0bzZsWFh4bmUyU3hrSWdEeV9UQT09
> What if it's an agent used for business purposes and is conducting its own transactions? In that case, shouldn't it have protection against fraudulent practices and scams? If it's an embodied agent, like an autonomous vehicle, shouldn't it have assurances against being willfully damaged, destroyed or neglected? Um... those aren't rights. Just what the heck are we talking about here, Umami?! A right is "an abstract idea of that which is due to a person or governmental body by law or tradition or nature." (WordWeb Pro) Only humans ask for, need, and get those. They were invented by and are used by humans only. Outside of human-world, rights don't exist. What you're describing looks more like regulations or laws. And I *strongly agree* with you that those things should have the provisions that you described. But they're not rights. They're just requirements, guidelines, and good practices that people should follow when building their creations. (Hopefully, they'll soon be laws.) > On a different note, if it's a reinforcement learning agent that's sufficiently complex for us to believe that it might be conscious (https://arxiv.org/abs/1410.8233), perhaps it should have a right against being placed in scenarios which cause unnecessary suffering. An AI agent shouldn't feel any pain. Pain serves no purpose in a robot. Neither do physical feelings. If you think they do, I will ask why go to the complicated and costly lengths of creating such a robot when regular ol' human reproduction would suffice? I'm going to politely skip the last part because I really just wanted you to answer the question from that point of view, which you sufficiently did.
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBb1hGcHBiZ2w0YWdhMmQza3lYYUtSRnhWTmo5Y1B1RXpqT0EzazhPTEZtOVdkQkFmd3ZBNl9jbVFsbkNiTEQ0R2JyVE9lQ1VyOTNJdmJ1Y1pHbjZkNGc9PQ==
Z0FBQUFBQm9IVGJCRUU3X0N2NmxNU0NkSlB4TFZRUm51blNtTmVtZW0tWTlzdE5vS2k0Ml9EaVBHMWxYQVQ4WHJwTW9LSHA0Y1RvTy1jaTU1NmNFS0pPMkpDTnBpNjJSSEtubzQyVTVOSWlkYm01SWtCV2J6Um1Wb29ZWmRkLUk5aUlkM3V1VjczbG4tNXIxNy1taEZvbDlOMmVueWRPbm9qekJDTWtFeTFMOW5OQ0xnTnFMblBJdlp2dkdOQ2VVaGN4dTFEQi1UU25GT2dONml2N0d3dldtZ1dLMU1XcWFBdz09
>Um... those aren't rights. Just what the heck are we talking about here, Umami?! A right is "an abstract idea of that which is due to a person or governmental body by law or tradition or nature." (WordWeb Pro) The article mentioned the financial and legal rights of corporations as examples, so those would be some of the main issues which are worth discussing, and those are as legal and regulatory as things get. In that case your gripe probably isn't "this article/topic of discussion are stupid" but "this article/topic of discussion are using the wrong language", in which case I'm not bothered. But normally we think that if someone steals my credit card number and empties my bank account, or if the police detain me for six hours without cause, that my rights have been violated. In any case, I pointed out how a wide class of AIs would desire a right to self-preservation, so I don't think the point needs to be discussed further. >An AI agent shouldn't feel any pain. Pain serves no purpose in a robot. Neither do physical feelings. This isn't obvious (for all AI agents), nor is the implication that AI agents wouldn't feel pain even if there was no purpose for it. What's your view on consciousness? Are you an epiphenomenalist?
r/aiethics
comment
r/AIethics
2016-08-03
Z0FBQUFBQm9IVGJBcm5hYnBVVXNTRng5bmtRZWU4WThwUHRWYWJncWhMZjk1UzE4WkZ5NjhjUnF2TG9NbHJpR1k0ZWRsTTdET0c4bW1LSkRuZGMzRERPUlUwUFh1YXJBeVE9PQ==
Z0FBQUFBQm9IVGJCYWJBTFd5THkxNXlFb0VOTnUwRkF2UmxTV2U3aWYySlJhdkxxdHpzdFRQYmtJWjh0NmtsRTNua1lHQ0ZIT3dQR0Y0Uy1lT2tlWHVMUmRzOXNWWDlLNWYxS25udGhNNDF1bTV4aTYzV2Utc010V191dm9OQ3NtUnlCVS1KWlZ3aUE2LUloRlhfRnh6dWVFQjJWVWJsdzhMaTlURzM1WXQ3RlRJcV9yNHltazBDdWZCRTNUTl9BcVBKS2FYVk9ENm95dzNlY0ZVaThFTjlMc2tFSTcxbmk5QT09
> What's your view on consciousness? Are you an epiphenomenalist? My view on consciousness, as far as AI is concerned, is that it's a necessary component (assuming consciousness is nothing more than awareness of self and situation). But no, I'm not an epiphenomenalist. The human mind and body together are a terribly complex thing that gets its information from a variety of places. I can't count all of them, and I can't label all of them. But I know from experience that there's more than one source, and I wouldn't be honest if I credited my experiences to just one source (thought or feeling). Separate from thoughts and feelings, we've got dreams, intuition, premonitions, and innate behaviors to contend with. And those are just the ones that I can think of. I don't subscribe, however, to the notion that a conscious being is alive or has access to "rights." And should AI be coded to produce dreams, intuition, premonitions, and even innate behavior, that doesn't change my position. AI dreams, intuition, premonitions, and even innate behavior are synthetic.
r/aiethics
comment
r/AIethics
2016-08-04
Z0FBQUFBQm9IVGJBV2o2Ym9kZWpTWWoyWlFoYW80TGI1OGM2SmR6R2Jfa0gwUkJPeEs0dHZfZVMwYkRYYTUxaDFNRnU5dFZaZ0x1akF6bnhlMGNtYnpKRld0UGdENjBzbmc9PQ==
Z0FBQUFBQm9IVGJCakFiVnRCZ183MWpfazNQaEtOMUgydmFxR1J2MG5LVGJXNWhyR3NwU0lhd0JNc1FLR2hmWWhCMWdTZ0xFa05DX2VmVldrblVKamQ2NjFOLTdiVWN3OHhvNUdKWnpJeHN6Q0FXQmFQZFo0dVhyVHBueXpySE0zWjQwbmVRTUtlSV93TTlXNjlacENfUXpuWDdXLUtiQVNxcDUwaTN0LXNwTWlyOGI5WGVoSjZiNmJFQmZJVHJpdjlEcDZkbkItQzRnT3FyVGpTa2J0c2JGS3Y4LVg3ejVQdz09