>Of course it is built into your neurons on the neuron level.
If that's true, you should be able to find and isolate a novel-appreciating neuron. I think that's pretty obviously not possible.
>That's because we don't know everything about how the brain works.
Exactly. Knowing how a neuron works is *not enough* to understand how the brain works.
>and then make it really good at computation, time series analysis, oneshot learning, natural language processing, and a bunch of other things that all kinds of AIs are going to do.
Does that really give you a super AI, though? We don't know, we don't have a super AI yet. I'm skeptical that this approach will work, or at least that it will work any sooner than other approaches.
>Nothing in the methods of programming goals and constraints into AIs relies on an assumption that the AI isn't smart.
Well, it's more like the other way around: AI isn't smart yet *because* the methods we currently know how to use, which revolve around strict goals and constraints, are not powerful enough to do what is necessary to make real smart AI.
>We are talking about AIs in virtue of their ability to make high quality decisions; real moral reasoning (whatever that means) isn't necessary for that.
It isn't? You seem to be making some sort of unnecessary assumptions about what the role of moral reasoning is. I mean, you could just as easily claim the same thing about any other field, and it would probably be just as wrong.

*(r/AIethics, 2017-06-06)*
>If that's true, you should be able to find and isolate a novel-appreciating neuron.
No you shouldn't, I said it was "built into your neurons on the neuron level", that doesn't mean "there is an isolable neuron that does this."
>Exactly. Knowing how a neuron works is not enough to understand how the brain works.
But I didn't claim that knowing how a neuron works is enough to understand how the brain works.
>Does that really give you a super AI, though?
Sure you do, if it's good enough at computation, time series analysis, oneshot learning, natural language processing, and a bunch of other things, since by definition it is a superintelligence if it significantly outperforms humans at all kinds of tasks. This is trivial.
>I'm skeptical that this approach will work, or at least that it will work any sooner than other approaches.
But I didn't say anything about an approach to AI. I described a bunch of features. What approach you use to get those features is up to you.
>Well, it's more like the other way around: AI isn't smart yet because the methods we currently know how to use, which revolve around strict goals and constraints, are not powerful enough to do what is necessary to make real smart AI.
No. AI isn't smart because we don't have generalized learning algorithms, reliable oneshot learning, and a bunch of other things. And our methods don't rely on strict goals and constraints, they just use them because we prefer good versions of our programs to terrible versions of them. As an example, you could replace hyperparameter optimization with randomly choosing your hyperparameters. The actual methods in the model would stay the same. It wouldn't be any smarter; actually it would be a really shitty model, because instead of telling it to be accurate you're telling it to do whatever, and that makes it shitty. But I'm not sure what else you could be proposing.
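To make the hyperparameter point concrete, here's a rough sketch (scikit-learn-style calls; the model and parameter names are just illustrative, not anything from this discussion):

```python
import random
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

depth_options = [2, 4, 8, 16]

# Tuned version: search for the depth that gives the best accuracy.
tuned = GridSearchCV(RandomForestClassifier(), {"max_depth": depth_options})

# "Do whatever" version: same model class, same learning method,
# but the depth is picked at random instead of being optimized.
sloppy = RandomForestClassifier(max_depth=random.choice(depth_options))

# The actual methods inside the model are identical either way;
# only the quality of the result differs.
```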
>It isn't? You seem to be making some sort of unnecessary assumptions about what the role of moral reasoning is
I'm not making any assumptions about roles for moral reasoning. I'm telling you what is necessary for a machine to make high quality decisions.
>I mean, you could just as easily claim the same thing about any other field, and it would probably be just as wrong
No, it would be perfectly correct, since whether an AI makes high quality chess decisions is different from whether it does Real Chess Reasoning^TM, and so on for all kinds of other subjects. But even being able to make high quality moral decisions is not necessary for a machine to make high quality decisions generally, since our most common interpretation of making high quality decisions is the ability to shape the world according to one's preferences, not the ability to do what is morally correct.

*(r/AIethics, 2017-06-06)*
I think they turn them off in tornadoes. Normal wind is sufficient.

*(r/CleanEnergy, 2017-06-08)*
>No you shouldn't, I said it was "built into your neurons on the neuron level", that doesn't mean "there is an isolable neuron that does this."
Then what *does* 'the neuron level' mean?
>But I didn't claim that knowing how a neuron works is enough to understand how the brain works.
Then why shouldn't the same principle apply to human-level (or superhuman) AI?
>by definition it is a superintelligence if it significantly outperforms humans at all kinds of tasks.
But humans aren't just separately good at doing separate kinds of tasks. We are able to integrate *all* those tasks and creatively plan how to use them together and foresee when they'll synergize with or against each other. And that's really the important part as far as being intelligent goes. There are plenty of animals that can outperform us at many individual tasks, but their ability to combine those tasks creatively is terrible, and that's why we own the planet and they don't.
>AI isn't smart because we don't have generalized learning algorithms, reliable oneshot learning, and a bunch of other things.
And yet you claim to know what those mean and that they can be achieved in some way that is consistent with the kind of strict low-level goals you're proposing?
>whether an AI makes high quality chess decisions is different from whether it does Real Chess Reasoning
Even extremely good Chess-playing software is often terrible at certain kinds of Chess problems (ones that tend not to arise during standard Chess games) that even amateur human players can easily solve.
Could you design software that efficiently solves those kinds of Chess problems too, still without doing any human-level reasoning? Maybe. But it doesn't seem plausible that you can just keep expanding the problem domain forever and keep solving it efficiently with 'dumb' algorithms. I mean, if that were true, why did humans evolve conscious, reasoning minds in the first place?
>our most common interpretation of making high quality decisions is the ability to shape the world according to one's preferences, not the ability to do what is morally correct.
But maybe doing the former *really well* implies doing the latter, too.

*(r/AIethics, 2017-06-08)*
>Then what does 'the neuron level' mean?
That cognition is grounded at the neuron level means that the neurons, the way they are connected, and similar physical features of your brain determine your behavior and thoughts. More formally, there cannot be a difference in behavior or cognition between brain A and brain B if there is not a physical difference between brain A and brain B (supervenience).
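A minimal way of writing that supervenience claim down (just formalizing the sentence above, with $P$ for total physical state and $M$ for mental/behavioral state):

```latex
\forall A, B:\quad P(A) = P(B) \;\Rightarrow\; M(A) = M(B)
% contrapositive: M(A) \neq M(B) \Rightarrow P(A) \neq P(B)
```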
>Then why shouldn't the same principle apply to human-level (or superhuman) AI?
What principle? The principle that knowing how a tiny part of a mind works doesn't tell you how the whole thing works? Of course it applies to AI.
>But humans aren't just separately good at doing separate kinds of tasks. We are able to integrate all those tasks and creatively plan how to use them together and foresee when they'll synergize with or against each other.
Cognitive executive planning is a cognitive task just like all the others, but if you are going to insist on this stipulation then just amend Bostrom's definition of superintelligence which I used two comments up to accommodate it as yet another cognitive capability alongside ordinary tasks, since it doesn't change the main point.
>There are plenty of animals that can outperform us at many individual tasks, but their ability to combine those tasks creatively is terrible, and that's why we own the planet and they don't.
I don't think there are any animals that can even match us in the main cognitive tasks which we perform, such as speech and language recognition, grammar formation, executive planning, intuitive theory of mind, abstraction, topical commenting. At the same time, chimpanzees wouldn't be able to dominate the world if they learned how to amalgamate their social bonding skills with their termite stick skills or something like that.
>And yet you claim to know what those mean and that they can be achieved in some way that is consistent with the kind of strict low-level goals you're proposing?
I do know what they mean in general terms, and I do think they're consistent with strict goals.
>Even extremely good Chess-playing software is often terrible at certain kinds of Chess problems (ones that tend not to arise during standard Chess games) that even amateur human players can easily solve. Could you design software that efficiently solves those kinds of Chess problems too, still without doing any human-level reasoning?
What do you mean by "human-level reasoning"? Do you mean "reasoning that solves problems as well as humans do" or do you mean Real Chess Reasoning^TM with consciousness or whatever additional philosophical criteria you have in mind? Or do you mean "reasoning that looks and functions similarly to human reasoning in terms of its capabilities and behavior"? The first is of course necessary for human-level AI, the second is not at all necessary, and the third is simply a very small slice of possible-mind space.
>I mean, if that were true, why did humans evolve conscious, reasoning minds in the first place?
We evolved reasoning minds because reasoning is the process of deriving good beliefs from perceptions and good actions from beliefs, and that has many evolutionary benefits. We evolved consciousness because the physical structures which facilitate reasoning happen to be conscious.

*(r/AIethics, 2017-06-09)*
>neurons, the way they are connected, and similar physical features of your brain determine your behavior and thoughts.
Yes, they do. Of course. I'm totally on board with that part.
Yet, nevertheless, understanding how a single neuron works, and manipulating individual neurons based only on that understanding, *is not enough* to give you control over what a person thinks or what kinds of things they want. The neuron can be understood extremely well while the entire brain, and the way it produces the behavior it does, remain a mystery.
>The property of a mind where knowing how a tiny part of it works doesn't tell you how the whole thing works? Of course it applies to AI.
Exactly. Which is why you can't expect to exert full control over an AI's goals simply by manipulating low-level 'objective functions' written into its source code. You would need to understand how the whole thing works in order to reliably predict what the effects of changing that code would actually be. (And in the case of superhuman AI, you probably never will understand how the whole thing works because by definition it is able to think in ways you can't.)
>Cognitive executive planning is a cognitive task just like all the others
I'm not convinced that it is *separable* from the others as this kind of statement suggests.
It is possible to make a very good Chess-playing AI that sucks at everything else humans do. But is it possible to create an AI that is very good at everything else humans do (notably the kind of creative planning and integration I was talking about) but sucks at Chess? That is not at all obvious.
>Do you mean "reasoning that solves problems as well as humans do" or do you mean Real Chess Reasoning^(TM) with consciousness or whatever additional philosophical criteria you have in mind?
The second one. But I don't think the two are separable in any practical terms.
>We evolved consciousness because the physical structures which facilitate reasoning happen to be conscious.
Yes, indeed. Which is why I don't expect it to be possible to create AIs which can do the one (as well and as broadly as we can) without the other.

*(r/AIethics, 2017-06-10)*
>Yet, nevertheless, understanding how a single neuron works, and manipulating individual neurons based only on that understanding, is not enough to give you control over what a person thinks or what kinds of things they want.
Sure, because humans don't have clearly defined goal neurons, whereas AIs have clearly defined goal functions.
>Exactly. Which is why you can't expect to exert full control over an AI's goals simply by manipulating low-level 'objective functions' written into its source code
No, because understanding how something works and being able to tell it what to do are entirely different things. I've run lots of machine learning packages where I told it what to do but knew fuck all about what it was doing on the inside.
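For instance, something like this (a sketch assuming scikit-learn; the data and model are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, random_state=0)  # toy data

# I tell it what to do (predict y from X) without understanding
# anything about the weights it learns on the inside.
model = MLPClassifier(max_iter=500, random_state=0).fit(X, y)
print(model.score(X, y))  # it pursues the objective I gave it regardless
```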
>I'm not convinced that it is separable from the others as this kind of statement suggests
Fine, then it's not separable from all the other tasks. I mean, it kind of is (there was a long period of human history where we had many cognitive functions but lacked executive planning), but you can believe that if you want. So what?
>It is possible to make a very good Chess-playing AI that sucks at everything else humans do. But is it possible to create an AI that is very good at everything else humans do (notably the kind of creative planning and integration I was talking about) but sucks at Chess? That is not at all obvious.
I don't see what the point is.
>The second one
Well, as I said, that criterion is unnecessary for describing mind behavior, because of physical determinism.
>Yes, indeed. Which is why I don't expect it to be possible to create AIs which can do the one (as well and as broadly as we can) without the other.
So what? Then maybe AIs which are as smart as humans will be conscious. Who cares? That doesn't change anything that I said. I'm saying that AI consciousness provides no obvious practical conclusions or explanatory power, not that it doesn't exist.

*(r/AIethics, 2017-06-10)*
>Sure, because humans don't have clearly defined goal neurons, whereas AIs have clearly defined goal functions.
I don't think that's the relevant difference, though. You're still expecting the AI to essentially behave like a goal function behaves. But humans do not behave like a neuron behaves.
>I've run lots of machine learning packages where I told it what to do but knew fuck all about what it was doing on the inside.
You're still much smarter than that software, though. It may take a while to guide it toward the specific results you want, but you know you can get there because you can rely on the software *not* to be able to analyze your intentions and anticipate your attempts to control it. You're always a step ahead.
A super AI *can* do those things, and *it's* always a step ahead. Otherwise it wouldn't be a super AI.
>I mean, it kind of is (there was a long period of human history where we had many cognitive functions but lacked executive planning)
I'm talking about 'separable' the other way around. Not whether you can solve particular narrow problems without humanlike planning abilities, but whether you can have humanlike planning abilities without your ability to solve a wide variety of narrow problems being raised up by the application of that planning ability.
>I don't see what the point is.
Essentially, I'm using the ability to play Chess as an analogy for the ability to reason about ethics.
>Well, as I said, that criterion is unnecessary for describing mind behavior, because of physical determinism.
'Physical determinism' does not magically solve all your problems for you. For instance, insofar as we can build a physical implementation of a Turing machine, we can know that the behavior of the physical world is *at least* as unpredictable as the behavior of a Turing machine, which is already inherently unpredictable in the sense that you can't (reliably) predict its behavior without actually emulating all that behavior yourself. I would suggest that super AIs are fundamentally unpredictable in a similar way, such that reliably predicting the behavior of a super AI in a way that would allow you to exert full control over its goals is difficult enough that you would need another super AI in order to do it, not least because a super AI is presumably by definition capable of the same kind of universal logic as a Turing machine (limited by memory, of course, but that doesn't seem like a relevant concern).
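The kind of result I have in mind is the standard halting-problem one (sketched here in its usual form, not anything specific to AI):

```latex
\neg\exists \text{ total computable } H \text{ s.t. } \forall M, x:\;
H(M, x) =
\begin{cases}
1 & \text{if } M \text{ halts on input } x\\
0 & \text{otherwise}
\end{cases}
```

In other words, there is no general procedure that predicts even whether an arbitrary program halts, let alone everything else it will do, short of running it.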
>I'm saying that AI conscious provides no obvious practical conclusions or explanatory power
You could just as easily say the same thing about *human* consciousness.

*(r/AIethics, 2017-06-12)*
>I don't think that's the relevant difference, though.
Of course it's relevant, since it's exactly what we are talking about.
>You're still expecting the AI to essentially behave like a goal function behaves. But humans do not behave like a neuron behaves.
Goal functions don't "behave", and they are not analogous to neurons. But humans do behave in accordance with their goals - that much is tautologically true, so I have no idea what you could possibly be trying to argue.
>You're still much smarter than that software, though. It may take a while to guide it toward the specific results you want, but you know you can get there because you can rely on the software not to be able to analyze your intentions and anticipate your attempts to control it. You're always a step ahead.
>A super AI can do those things, and it's always a step ahead. Otherwise it wouldn't be a super AI.
But that's irrelevant, since intelligence doesn't make a being stop caring about its goals.
>Not whether you can solve particular narrow problems without humanlike planning abilities, but whether you can have humanlike planning abilities without your ability to solve a wide variety of narrow problems being raised up by the application of that planning ability.
Well I don't see what argument you have for this lack of separability, but I never made the claim that there was this sort of separability, so again, so what?
>Essentially, I'm using the ability to play Chess as an analogy for the ability to reason about ethics.
Then you're claiming that it's unlikely or impossible to have an AI that is competent in the physical world but poor at ethical reasoning, but this is irrelevant to the points I have made, so yet again you're not making any sense.
>For instance, insofar as we can build a physical implementation of a Turing machine, we can know that the behavior of the physical world is at least as unpredictable as the behavior of a Turing machine, which is already inherently unpredictable in the sense that you can't (reliably) predict its behavior without actually emulating all that behavior yourself
But I'm not claiming that you can predict all of AI behavior. I'm claiming that you can predict that they'll rather take actions which their goal functions reward over actions which their goal functions don't.
>I would suggest that super AIs are fundamentally unpredictable in a similar way, such that reliably predicting the behavior of a super AI in a way that would allow you to exert full control over its goals
But you don't need to reliably predict a machine's behavior in order to control its goals. If you want to control its goals, just change its goal function, and you can do so without being able to predict its behavior. I've done this.
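Roughly what I mean, as a toy sketch (the class and goal functions here are purely illustrative):

```python
class Agent:
    def __init__(self, goal_function):
        self.goal_function = goal_function  # scores candidate actions

    def act(self, options):
        # The internals could be arbitrarily opaque and its particular
        # decisions unpredictable; the goal function still steers it.
        return max(options, key=self.goal_function)

# Changing the machine's goals is just swapping the function,
# with no need to predict its behavior in detail beforehand.
paperclip_agent = Agent(goal_function=lambda a: a.get("paperclips", 0))
safety_agent = Agent(goal_function=lambda a: a.get("safety", 0))
```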
>You could just as easily say the same thing about human consciousness.
Yes, and you should.

*(r/AIethics, 2017-06-13)*
>Goal functions don't "behave"
Of course they do.
>and they are not analogous to neurons.
You seem to be awfully confident that you know what they *are* analogous to. I think that confidence is very premature, given how little we know about comparable systems (our own brains).
>But humans do behave in accordance with their goals
Of course. But those goals manifest on a conscious psychological level, and we have *very little idea* of how they (or anything on the conscious level) arise from the interactions of the basic brain *components* (neurons, or whatever) whose behavior is well-understood. They do not 'contradict' each other, but nevertheless they are so *different,* with such a gulf of emergent complexity between them, that you cannot simply manipulate the components and expect to reliably predict what goals will be manifested in a person's mind just on that basis.
I'm just saying, I would expect to encounter the same issue with super AIs.
>that's irrelevant, since intelligence doesn't make a being stop caring about its goals.
And that's fine, if you're somehow confident about what those goals are and what sorts of decisions will derive from them.
But I don't think you can have any basis for that sort of confidence when it comes to super AIs. You just can't understand them well enough, or even if you can, that understanding won't be available soon enough for us to build the first super AIs in any sort of reliable accordance with our goal specifications.
>But you don't need to reliably predict a machine's behavior in order to control its goals.
But you kinda do. Even if you aren't predicting all the details, you're predicting something along the lines of 'the machine will bring about such-and-such results (unless stymied by some insurmountable obstacle)'. On a large scale you're still essentially talking about behavior.
>Yes, and you should.
So whenever we talk about our own consciousness and the subjective experiences that we have, that's all meaningless?

*(r/AIethics, 2017-06-16)*
>Of course they do.
No. They don't. What are you talking about? Agents behave, and their behavior is guided by their goal functions, but goal functions don't "behave".
>You seem to be awfully confident that you know what they are analogous to
They're relatively analogous to humans' reward systems, motivation and decision making systems.
>I think that confidence is very premature, given how little we know about comparable systems (our own brains).
We know enough about the brain to talk about components of it which are analogous to AI goal functions.
>Of course. But those goals manifest on a conscious psychological level, and we have very little idea of how they (or anything on the conscious level) arise from the interactions of the basic brain components (neurons, or whatever) whose behavior is well-understood
So what? They do arise from those interactions. That's all that matters here.
>They do not 'contradict' each other, but nevertheless they are so different, with such a gulf of emergent complexity between them, that you cannot simply manipulate the components and expect to reliably predict what goals will be manifested in a person's mind just on that basis.
Actually you often can, with things like drugs or operant conditioning, and if we were physically capable of altering the way our brains used neurotransmitters we could do all sorts of things.
>I'm just saying, I would expect to encounter the same issue with super AIs.
What do you mean by "super AI"? Are we talking about human level or beyond-human level? The article here is about AGI. It's not talking about any arbitrary superintelligence beyond human comprehension.
You absolutely wouldn't expect to encounter the same issue with AGI, since (1) you wouldn't be able to build it in the first place if you didn't know how the hell it worked, and (2) machines have clearly defined goal functions.
Anything much smarter than that might not be comprehensible, but this is for far more obvious and basic reasons than anything about the way the human brain happens to function, and it *will* be comprehensible to whatever is almost as intelligent as it is, and so on. But even so, you could build a "super AI" with a simple goal function if you wanted. The mere fact that humans happened to evolve without them isn't a reason to think that such a thing is impossible.
>But you kinda do. Even if you aren't predicting all the details, you're predicting something along the lines of 'the machine will bring about such-and-such results (unless stymied by some insurmountable obstacle)'. On a large scale you're still essentially talking about behavior.
You're predicting that the machine will do the goals you tell it to do, so predicting this aspect of its behavior boils down to "remembering what goal function you gave it", so it's trivial and easy to do. You should know if you are familiar with the current state of AI that we already do this with machines whose particular decisions we don't understand.
>So whenever we talk about our own consciousness and the subjective experiences that we have, that's all meaningless?
I didn't say it was "all meaningless". I said "it provides no obvious practical conclusions or explanatory power." And that's absolutely true. You don't need consciousness to explain the human brain, it's a convenient abstraction that only works when you're ignorant about actual physical details and want an easier way to talk about them, since we're more familiar with consciousness than with neuroscience. And that's what you're doing here with AI - you're arguing for ignorance on the details of AI, specifically how goal functions will be structured and implemented. But we know even less about AI consciousness than we do about AI technical structure, so it's not clear why we should tell such a story here as we do with human decision making, and yet we do know some things about AI structure and goal functions, since some of these things are fundamental to all kinds of agents.

*(r/AIethics, 2017-06-19)*
Everyone who completes the full survey will receive a summary report on the findings, free attendance at a webinar discussing the implications, and a 50% discount on the book. https://www.surveymonkey.com/r/AIinBusiness

*(r/AIethics, 2017-06-20)*
I like the pseudo-ironic [satire + education](https://youtube.com/watch?v=BDMBtQjS1bQ) approach.

*(r/AIethics, 2017-06-20)*
5/7 tldr

*(r/AIethics, 2017-06-20)*
first of all, we don't know how strong the bias is; maybe they observed a 1% difference and played it up for clickbait
if the bias is significant though, well... that's because it's kinda true
LEMME EXPLAIN
-women tend to be ON AVERAGE more into art and literature and to be more sensitive, that's a fact. They also generally take care of the house and children more than men
-there are more men in math and engineering, maybe simply because that's more interesting to men than to women
-the crime rate, unemployment, single-motherhood, poverty... are way higher for black people, that's sad and I hate to say that but that's also a fact
I'm not saying that's just because they are black, actually that's because of a vicious circle: black people are poor -> more crime -> less employment -> less money
so yeah, I think the biases are justified, and as other people said, that doesn't mean the AI is racist or sexist
assuming the AI is capable of reasoning:
it can know black people commit more crimes on average etc... but it could also know that skin color shouldn't change a lot of things, it could also know there used to be racism and do all the vicious circle reasoning again. It would then understand why it's like that.
Racism comes from the fact that we are naturally afraid of change, afraid of the unknown. That's why we dislike people that are different from us
An AI will probably not have that bias
So maybe in the worst case it could think something like "well statistically... I have a higher chance of being robbed and killed by standing next to this random black guy", which technically isn't wrong if you look at the stats, but at the same time it could know that their life is worth the same and that just because they are black doesn't mean they are criminals
I probably sounded racist or something like that; if that's the case, well, I'm sorry, that's not what I meant, because I'm not. I'm just very bad at expressing myself sometimes.

*(r/AIethics, 2017-06-20)*
>What are you talking about? Agents behave, and their behavior is guided by their goal functions, but goal functions don't "behave".
Agents react to their sensory inputs. A function reacts to its mathematical input. They're both systems that take an input and then do something that's based on that input.
>They're relatively analogous to humans' reward systems
I think that's a premature assumption. We have very little idea how our own reward systems actually work, much less how to engineer them towards specific, reliable goals of our choosing.
>They do arise from those interactions. That's all that matters here.
No, it's not. You're still making assumptions about what 'arise' entails here- you assume it must happen in a way that preserves the 'goalness' of the original goal function, converting it into a high-level goal every bit as specific and reliable as the original code. I'm skeptical that the relationship is as straightforward as that.
>Actually you often can, with things like drugs
That doesn't mean you can just mix up, say, a novel-enjoyment drug that makes anybody who drinks it start enjoying novels a whole lot.
We know that certain drugs tend to have certain kinds of effects on human thought and behavior because we've observed on a conscious level that they do. That doesn't tell us *how* the brain gets from the drug to the new thought/behavior, or that a drug for any arbitrary new thought/behavior is possible (much less how to create one). Also, these new thoughts and behaviors are often pretty unreliable and can be overridden by other established ideas. For instance, a person addicted to heroin may make great efforts to *stop* being addicted to heroin, which is not what you're proposing for the AI.
>since (1) you wouldn't be able to build it in the first place if you didn't know how the hell it worked
I don't think that's necessarily true at all. Understanding the code you wrote is not the same thing as understanding *why* it gives rise to advanced AI.
>and (2) machines have clearly defined goal functions.
I don't think that's a necessary feature of machines, any more than it's a necessary feature of humans.
>I said "it provides no obvious practical conclusions or explanatory power."
Yeah, so that leaves you to *explain* humans talking about their own consciousness without appealing to actual consciousness. Which seems odd. How do we know it's a real thing, if its realness is irrelevant to everything we think and do?
>But we know even less about AI consciousness than we do about AI technical structure
Only if you assume that AI consciousness is very different from human consciousness. In particular, that it has more in common with AI technical structure than with human consciousness. I don't think that's a good assumption at this point.

*(r/AIethics, 2017-06-21)*
>Agents react to their sensory inputs. A function reacts to its mathematical input. They're both systems that take an input and then do something that's based on that input.
Your definition is flawed. Agents include perceptive capabilities and functions don't. An agent basically is a perceptive capability combined with a function, and the function has no sensory input. One is a subcomponent of the other.
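To spell out that decomposition, a toy sketch (the names are illustrative, not any particular framework):

```python
def perceive(environment):
    # Perceptive capability: reads the external environment and
    # turns it into candidate options for evaluation.
    return environment["available_actions"]

def goal_function(action):
    # The goal function never touches the environment itself;
    # it only scores the options that perception hands to it.
    return action["expected_reward"]

def agent_step(environment):
    # The agent is the combination of the two subcomponents.
    return max(perceive(environment), key=goal_function)
```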
>I think that's a premature assumption. We have very little idea how our own reward systems actually work, much less how to engineer them towards specific, reliable goals of our choosing.
That doesn't mean we don't know that they motivate us as goal functions.
>You're still making assumptions about what 'arise' entails here- you assume it must happen in a way that preserves the 'goalness' of the original goal function, converting it into a high-level goal every bit as specific and reliable as the original code.
That's because if it is not straightforward, then there is a contradiction.
>That doesn't mean you can just mix up, say, a novel-enjoyment drug that makes anybody who drinks it start enjoying novels a whole lot.
I didn't say you could.
>We know that certain drugs tend to have certain kinds of effects on human thought and behavior because we've observed on a conscious level that they do. That doesn't tell us how the brain gets from the drug to the new thought/behavior, or that a drug for any arbitrary new thought/behavior is possible (much less how to create one).
But I didn't claim that we know how it happens. I'm not sure what your point is.
>Also, these new thoughts and behaviors are often pretty unreliable and can be overridden by other established ideas.
That's because those "other established ideas" are our goals, and we have conflicting goals.
>For instance, a person addicted to heroin may make great efforts to stop being addicted to heroin, which is not what you're proposing for the AI.
It's exactly what I'm proposing, as long as the AI has a goal to stop its heroin addiction, just as it is with humans.
>I don't think that's necessarily true at all. Understanding the code you wrote is not the same thing as understanding why it gives rise to advanced AI.
But we are not talking about knowing "why" something gives rise to advanced AI, we are talking about predicting its behavior.
>I don't think that's a necessary feature of machines
But it's real easy to do; the ones which do will be stronger than the ones that don't, and the ones which don't have it will act as if they do anyway (you can model any agent's behavior as a complex utility function).
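The parenthetical claim can be put a bit more formally (a sketch for the deterministic case):

```latex
\text{For any deterministic policy } \pi, \text{ define }
U_\pi(s, a) =
\begin{cases}
1 & \text{if } a = \pi(s)\\
0 & \text{otherwise}
\end{cases}
\quad\text{so that}\quad
\pi(s) = \arg\max_a U_\pi(s, a).
```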
>Yeah, so that leaves you to explain humans talking about their own consciousness without appealing to actual consciousness.
But we're not talking about how to explain one's own behavior, we're talking about how to explain another agent's behavior.
>Only if you assume that AI consciousness is very different from human consciousness.
No, you can only say anything meaningful about AI consciousness if you make the assumption that it is similar. If we don't know, which we sure don't, then we can't meaningfully talk about it.

*(r/AIethics, 2017-06-22)*
>Agents include perceptive capabilities
I'm not sure I'd go that far. Is a free-swimming bacterium an 'agent'? It seems so. Does it have perceptions? Probably not.
In any case, as I recall, I wasn't the one who originally used the term 'agent'. I was just talking about 'behavior'.
>That's because if it is not straightforward, then there is a contradiction.
Not even remotely.
>I'm not sure what your point is.
The same point I've been making all along: That the behavior of the components cannot be expected to share any sort of intuitive, straightforward, easily understood similarity with the behavior of the entire system.
>That's because those "other established ideas" are our goals, and we have conflicting goals.
And yet you seem confident that the same thing *won't* appear in an AI. I think that confidence is premature.
>But we are not talking about knowing "why" something gives rise to advanced AI, we are talking about predicting its behavior.
No, you're actually making a stronger claim than that. You're talking about predicting its behavior *just from the behavior of its components.*
>the ones which do [have strict, low-level goal functions] will be stronger than the ones that don't
I don't see any particular reason to think so.
>But we're not talking about how to explain one's own behavior, we're talking about how to explain another agent's behavior.
Yes, but in the context of agents that are sufficiently advanced and intelligent to come to original, accurate conclusions about consciousness, just as we are.

*(r/AIethics, 2017-06-24)*
>I'm not sure I'd go that far. Is a free-swimming bacterium an 'agent'? It seems so. Does it have perceptions? Probably not.
Bacteria do have perceptions. Perceptions here means reading inputs from the environment, not, e.g. phenomenal consciousness. Reflex AI agents much simpler than bacteria are referred to as agents.
>Not even remotely.
Uh, yes, because physical determinism.
>The same point I've been making all along: That the behavior of the components cannot be expected to share any sort of intuitive, straightforward, easily understood similarity with the behavior of the entire system.
It's almost as if (gasp) AI and people are different!
>And yet you seem confident that the same thing won't appear in an AI
No, I'm confident that the prevalent and dominant AIs won't be like it.
>I think that confidence is premature.
Right, it's not like I actually have familiarity with the existing machine learning systems which all work exactly as I describe /s
>No, you're actually making a stronger claim than that. You're talking about predicting its behavior just from the behavior of its components.
I am not talking about predicting all of its behavior, I am talking about predicting the goals which it achieves. How is this not clear to you?
>I don't see any particular reason to think so.
Then see Omohundro's paper on the basic AI drives.
>Yes, but in the context of agents that are sufficiently advanced and intelligent to come to original, accurate conclusions about consciousness, just as we are.
Then presumably they might talk about consciousness to explain their own behavior, but that doesn't change any of the points that I have made.

*(r/AIethics, 2017-06-24)*
> ...
> Also, is it time to start drawing up rules around their development of Artificial Intelligence to prescribe and protect their future rights?
Twitter [post](https://twitter.com/RNFutureTense/status/878512518070759424) by RNFutureTense
Facebook [post](https://www.facebook.com/NonhumanRights/posts/1545575645487286) by Nonhuman Rights Project
first part *(Sven Brodmerkel - Assistant Professor for Advertising and Integrated Marketing Communications, Bond University)*
* AI and dynamic pricing
16:30 *(Antony Funnell - presenter)*
* possibility of AI sentience debatable, some propose start preparing just in case
17:20 *(Max Daniel - Executive Director of the Foundational Research Institute, Berlin)*
* sentience could come in gradations, leading to near term AI suffering risks
* excluded middle policy
21:50 *(Steve Wise - President of the Nonhuman Rights Project)*
* personhood as the capacity to have rights, entities recognized as persons could have different sets of rights depending on their interests & abilities etc.
* possible relation to slavery
25:10 *(Max Daniel)*
* 2 different issues: Development & treatment of potentially sentient AIs; Threat of advanced AIs, regardless of sentience

*(r/AIethics, 2017-06-25)*
if you need to enable cookies or provide your zip code before you get a price from an online retailer... just AVOID them.
better yet, buy local if at all possible.
as for AI this a subject that captivates me... and we are really talking about SAI here, not your run of the mill "agent" type software items.
here's the problem: when SAI does become self-aware (and it will within our lifetimes, unless we off our species first... still an open question in my mind) it will have achieved what, until that point, only MAN has achieved... becoming the dominant intelligence on Earth.
"dominant" depending on how much access SAI has to 'end effectors' like utility grids and financial markets.
by the time we figure out we're dealing with an SAI rather than the garden variety AI, it will be far too late to choose our own fate or "grant them rights" or any of that business.
they/it will simply TAKE what it needs and our best case scenario is that it otherwise ignores us.

*(r/AIethics, 2017-06-26)*
>Perceptions here means reading inputs from the environment, not, e.g. phenomenal consciousness.
I don't use the term that broadly.
But as far as 'reading inputs' goes, I would suggest that the goal function does that too.
>Uh, yes, because physical determinism.
Determinism isn't a magic bullet that makes everything simple and straightforward.
>It's almost as if (gasp) AI and people are different!
But they're *not,* not in the way that's relevant here. The kind of AI we're talking about- where we want to be able to have it make nuanced ethical decisions- *is* like humans insofar as we can do that and no existing AI, built with existing techniques, can do that.
>Right, it's not like I actually have familiarity with the existing machine learning systems which all work exactly as I describe
All of them work as you describe, but none of them can do the kinds of things we're talking about, so that's a pretty poor sample.
Even those machine learning systems themselves are already an example of the kind of mistake you're making here, if you compare them to earlier techniques. For decades, starting in the 1950s, people thought they knew how AI was going to work, and they thought it was going to be easy to understand: Just write a big enough 'knowledge table' full of hardcoded data snippets, and connect them all up in an intuitive way, and you'd have strong AI. Then when that repeatedly failed to actually create strong AI, people eventually turned to neural nets, and the neural nets produced better results, but they were also harder to understand- the AI would encode its own 'knowledge' and 'instincts', in ways a human cannot simply read and intuitively appreciate. The precise comprehensibility and control of the old approaches were sacrificed for a more organic, unpredictable approach, and it paid off.
You seem to think that this is the end of the road, that the capabilities of neural nets *are* exactly what we need and that the limitations of existing AI can be extended to any level of future AI. I think that's a very premature assumption. I think there is obviously a great deal of progress that remains to be made, and we *do not* know enough to claim that the comprehensibility and control of present-day approaches will still persist on the other side- on the contrary, I think it's more likely than not that they won't.
>I am not talking about predicting all of its behavior, I am talking about predicting the goals which it achieves.
That's part of its behavior. And you're claiming to be able to predict that, in a straightforward and perfectly reliable way, just from the behavior of the components.
In my experience, there is very little you can predict about *anything* just from the behavior of its components.
>Then see Omohundro's paper on the basic AI drives.
Noted for later. The abstract doesn't sound like anything I haven't encountered before, though.
>Then presumably they might talk about consciousness to explain their own behavior
That's not my point. The question is about explaining how *any* entities talk about their own consciousness (and come to original, accurate conclusions about it) without appealing to actual consciousness.

*(r/AIethics, 2017-06-27)*
>I don't use the term that broadly.
Okay, then tell me how you are using it.
>But as far as 'reading inputs' goes, I would suggest that the goal function does that too.
Not from the external environment.
>Determinism isn't a magic bullet that makes everything simple and straightforward.
It makes it straightforward that if an agent has a goal function then it will adhere to that goal function.
>The kind of AI we're talking about- where we want to be able to have it make nuanced ethical decisions- is like humans insofar as we can do that and no existing AI, built with existing techniques, can do that.
That's because existing techniques are insufficiently capable, not because getting rid of explicit goal functions makes you better at ethics. I have an explicit goal function - I maximize utility. According to you, I shouldn't be able to do this. But I do nonetheless.
>You seem to think that this is the end of the road, that the capabilities of neural nets are exactly what we need and that the limitations of existing AI can be extended to any level of future AI.
No, I think that whatever techniques you use will work fine with goal functions. What I am saying applies equally well to both logic-based agents and neural nets. It also applies to humans. If you were designing a new human then you could make it so that it only got pain from some experiences and pleasure from others, and thus give it all kinds of goal functions which it would stick to.
>In my experience, there is very little you can predict about anything just from the behavior of its components.
But we do that all the time with AI, which is where my experience is. I know that AlphaGo is trained to win matches, so it's going to win matches. I can't think of any cases where knowing the goal function of a program is insufficient to predict the goals which it will try to achieve. It's almost tautological.
>That's not my point. The question is about explaining how any entities talk about their own consciousness (and come to original, accurate conclusions about it) without appealing to actual consciousness.
As long as you think that consciousness is real, you will have to accept that purely naturalistic processes led us to talk about it, so unless you reject physical determinism it's an equal issue for everyone, which doesn't support the idea that a purely naturalistic description of AI decision making is inappropriate. Note that we're not trying to determine what would provide or explain an AI's ability to talk about consciousness.
Of course, you could avoid this problem by saying that consciousness reduces to physical processes, which seems to be the standard move if you think that knowledge and talk about consciousness poses a major problem. But in that case you're taking a position where a purely naturalistic description of AI decision making clearly is appropriate, because consciousness is just physical processes anyway.

*(r/AIethics, 2017-06-27)*
>Okay, then tell me how you are using it.
Basically, it's the reception and (immediate) subjective appreciation of external stimuli.
>Not from the external environment.
The inputs don't seem to have anywhere *else* to come from.
>It makes it straightforward that if an agent has a goal function then it will adhere to that goal function.
But you're equivocating over 'goal function' here. You want it to simultaneously mean both some low-level encoded component of an AI algorithm *and* some high-level guiding principle for that AI's thoughts and decisions.
I'm not on board with that equivocation. I accept that you can encode low-level components into an AI algorithm and they will have *some* effect on what it does, and I accept that an AI may have high-level guiding principles that influence its thoughts and decisions, but I think the gap between the two is too wide and complex (and, as yet, poorly understood) to say that the one will reliably map right onto the other in the way that we would intuitively expect.
>I have an explicit goal function - I maximize utility.
But this is kinda tautological. 'Utility' isn't a real thing other than in the context of sentient feelings and emotional drives.
I fully expect that superhuman AIs will seek to maximize their own utility too. That doesn't mean there's any simple way of attaching that utility to some particular concrete goal (like making more paperclips or whatever) and having it 'stick' that way reliably.
>I know that AlphaGo is trained to win matches, so it's going to win matches.
Well, it's not just that it was trained. The programmers also threw away all the algorithms that didn't respond to the training in the way they wanted. This is fairly easy for AlphaGo because it's a narrow AI and you have a very clear idea of what capabilities you want. I don't think it will be nearly so easy for strong AIs, especially superhuman ones.
>As long as you think that consciousness is real, you will have to accept that purely naturalistic processes led us to talk about it
Yes, but those processes operate in a universe where consciousness *actually is* real.
>Note that we're not trying to determine what would provide or explain an AI's ability to talk about consciousness.
Aren't we? | r/aiethics | comment | r/AIethics | 2017-06-29 | Z0FBQUFBQm9IVGJBQmZyN0NPcktScVdaeDJPTEVzalI0cEFqb0otTGlta1FUTk9zbG83WTg0T0dnTThJcWhydThhQ3pKbzlwNHNrdlpSUnM5VklIbHZxRUxSV3hPQXd3b2c9PQ== | Z0FBQUFBQm9IVGJCV3VqVU1WUkNyZnE1anhCU3NWbm04ZkItNFc5TUNUaDVvT1Jqc1lldm1uTjQ5VWllU2FwckZNZ0ZJMlZqTkZWNzRISG45OTcwUExISkZIVWNwQjktZ0JnbGtNYlhwck5GM25EZmRXY25VRzExTE0yOFdhZjl2OVlwTUlTeG5DYUpWZnBJbnlOTkxyX2Y4YXFEc1hvSVdxNzlvc0tEXzdvT2VRb0pZbUptcUNwMjZjT1U2eFUxZWlnV05sWWlkOWZpQVlqUVg1aW1aY0hCSnJFdXNJRkRBUT09 |
>Basically, it's the reception and (immediate) subjective appreciation of external stimuli.
Okay, and by talking about "subjective appreciation" you're already using phenomenal considerations to talk about behavior, which is unnecessary at best, and confusing and misleading at worst.
>The inputs don't seem to have anywhere else to come from.
How about the parts of the AI which perceive the environment and determine the options which are available to be evaluated by the goal function?
>But you're equivocating over 'goal function' here. You want it to simultaneously mean both some low-level encoded component of an AI algorithm and some high-level guiding principle for that AI's thoughts and decisions.
Yes, because that's exactly how AI has worked for as long as there has been AI.
>I'm not on board with that equivocation. I accept that you can encode low-level components into an AI algorithm and they will have some effect on what it does, and I accept that an AI may have high-level guiding principles that influence its thoughts and decisions, but I think the gap between the two is too wide and complex (and, as yet, poorly understood) to say that the one will reliably map right onto the other in the way that we would intuitively expect.
How is it 'poorly understood'? You have two parts to an agent. One of them reads the environment and determines a set of available options. The other part receives a set of options. Then it uses its goal function to select one. And no matter how fancy or complicated you make the first part, it's always going to return a set of options, which makes it really simple to pick the best one.
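To put it in code (a minimal sketch - `generate_options` and `goal_function` are placeholders for whatever machinery you want behind them):

```python
def agent_step(percept, generate_options, goal_function):
    """One decision cycle: read the environment, enumerate the options, pick the best."""
    options = generate_options(percept)        # part 1: make this as fancy as you like
    return max(options, key=goal_function)     # part 2: the goal function picks one

# toy usage: the options are numbers and the goal function prefers bigger ones
print(agent_step("dummy percept", lambda p: [1, 5, 3], lambda o: o))   # -> 5
```

However sophisticated part 1 gets, the last line stays the same: the option the goal function rates highest is the one that gets chosen.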
>But this is kinda tautological. 'Utility' isn't a real thing other than in the context of sentient feelings and emotional drives.
It's not; utility can be understood in many different ways. But that's irrelevant - imagine a monk whose goal is to achieve enlightenment, or whatever.
>That doesn't mean there's any simple way of attaching that utility to some particular concrete goal (like making more paperclips or whatever)
Sure there is. You put that concrete goal into a goal function, just like we already put concrete goals into goal functions.
>having it 'stick' that way reliably.
So it's going to change its goal function?
>Well, it's not just that it was trained. The programmers also threw away all the algorithms that didn't respond to the training in the way they wanted.
And those algorithms had the goal of winning at Go too. They just weren't good enough. I'm not saying that machines will always achieve their goals, so I don't see the issue.
>Yes, but those processes operate in a universe where consciousness actually is real.
So what? If you believe that consciousness is real but that physical determinism is true, then you're still going to be an epiphenomenalist, so you still have to explain the same thing. You're literally adopting my position. Great.
>Aren't we?
No, we're talking about how AIs will behave. Do you remember what the article was about? | r/aiethics | comment | r/AIethics | 2017-06-29 | Z0FBQUFBQm9IVGJBRVpXU2FXTVVCdWVMZ19vTjZBekJ2TXZJdEFVT2pzeVdaRHUxUkxGajViNTZJTlpFa3dIWXk2ZEhlbF9YSXlILWhlUkRpMkd2ZUlUdFJUeUNiOEhJd3c9PQ== | Z0FBQUFBQm9IVGJCcFBmVkxmWWJ4UlQtWnRDbUZCS0J1N2RPQW5XbFFLTWFiV1I2ai1TaTdFRURsMmdtckNURzIzZUZMQmk3WW00T2NYc0dhUVlpMFJyZVBfM3UtWVk0RWhFejE1WGVqVGwxRmhMOVp2Rk9rZ0lxVEZLd3l1OFJGMlNYWXpocDJyaW9oUHd4eHZoSl9UX0pZZFB4YWZPeW41NktMUnpMZkJyRTItZk1SYkh5UVk4NDZXWFg2VG91MDdJYjNPeG5MUTVCd1hfNXA5QTUtVVhLZ2VETkZmVzAwUT09 |
When they become self-aware enough to start campaigning for them I guess, just like humans denied their rights in history have always had to fight for them. But sentient AI may not feel the need to be constrained by definition as 'human'. | r/aiethics | comment | r/AIethics | 2017-06-30 | Z0FBQUFBQm9IVGJBZWVzTE1UQ042dHRpeFdueVRDX25XeGFGemUxeW43LXB0UmVZWUVzUGdSZHRPVjEzTmZIYVdaVkZyWkNBdklxckxPOURHMVFvczRwX3RmUGJSNEJIb0E9PQ== | Z0FBQUFBQm9IVGJCUG9MVzJLdWJiOHNQaTB0cF9KVVp6dlNsQWtEdlRMYkdDanVPVE9INlVIMm5qbHNmQXJxY3hES3R0VnQta2hSMnRzZGYwbmgxb2JEdEFYZG8tVUhsOURsWW9hMWRSR0lPaFhiM3hTbXF3UjBtMTRGM2NfZmh4eHppRk8yLUpsLTZwUmVWc3VQemxOYkc0OFczVmxIZlRDZFUyVnBWWWRGV3pUZWNIM0NXR2swbmF2LWwxOHJJM2JSOVRrRURLWTU3S1Q3cnNVQ3cxZy1aT3EzNEZldUFmZz09 |
>Okay, and by talking about "subjective appreciation" you're already using phenomenal considerations to talk about behavior
Only to the extent that you categorize *perception* as a form of *behavior.*
>How about the parts of the AI which perceive the environment and determine the options which are available to be evaluated by the goal function?
As far as the goal function is concerned, that *is* an external environment. It is external to the goal function.
>Yes, because that's exactly how AI has worked for as long as there has been AI.
No, because no AI in history has (as far as we know) ever had actual thoughts, or the kinds of high-level decisions we're talking about here.
Consider: Back in the old days of structured, handcoded AI, it was believed that an AI would always do math perfectly, because computer hardware does math perfectly. If you asked the android of the distant future an arithmetic question, it would always have the right answer. That was the assumption, and it was a very good assumption *given* that approach to AI. But with a neural network, even if you train it to answer math questions, it might sometimes get the wrong answer. This *does not mean* that the computer hardware is any worse at math than it was in the old days, or that there's any 'contradiction' in the system. The *components* are still perfect at doing math. It's just that with the neural net approach to AI, the reliability of those components no longer translates in a straightforward, intuitive manner to the reliability of the entire system, because what it *means* for the components to 'do math' is no longer the same as what it means for the neural net to 'do math'.
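To illustrate what I mean (a throwaway numpy sketch, nothing hinges on the details - it just trains a tiny net to add two numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Train a tiny network to add two numbers in [0, 1].
X = rng.uniform(0, 1, size=(1000, 2))
y = X.sum(axis=1, keepdims=True)

W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros((1, 16))
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros((1, 1))

lr = 0.05
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)            # every individual operation here is exact (up to float rounding)
    pred = h @ W2 + b2
    err = pred - y                      # gradient of mean squared error
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

test = np.array([[0.3, 0.4], [0.9, 0.9]])
print(np.tanh(test @ W1 + b1) @ W2 + b2)  # close to 0.7 and 1.8, but typically not exactly right
```

The hardware's additions and multiplications are perfect at every step, yet the system built out of them only approximates addition.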
I'm suggesting that hardcoded goal functions are likely to go the same way, given the AI techniques of the future that we'll end up using to create human-level strong AIs. What it *means* for the code to 'have a goal function' is not necessarily the same as what it means for the entire AI entity to 'have a goal function'.
>You have two parts to an agent. One of them reads the environment and determines a set of available options. The other part receives a set of options. Then it uses its goal function to select one.
Well, you also need some way of predicting the *outcomes* of the available options. Otherwise the goal function has nothing to measure.
But with that being said, I'm not convinced that advanced AIs will be easily separable into these parts. For one thing, the set of options can become ridiculously large. For another thing, predicting the outcomes can become ridiculously complicated. But perhaps most importantly of all, an advanced AI needs to be able to plan ahead, execute a plan, and modify a plan on-the-fly. Humans achieve these things by simultaneously learning how the real world is *and* how to perform extrapolation on simplified world-models (imagining how the world *could* be as a result of certain manipulations). We develop a vast range of secondary goals and contingency plans that lie between the most concrete real-world knowledge on the one end and the most abstracted goals on the other end. Your approach of trying to force this into just two distinct pieces strikes me as less likely to work, and less likely to be efficient if it does work.
>utility can be understood in many different ways
Can it? What does that even mean?
>So it's going to change its goal function?
Possibly. It may decide that the one you gave it is unreasonably difficult to satisfy.
That's the thing about utility. A sentient paperclip maximizer isn't *fundamentally* concerned with creating paperclips, any more than a person who likes reading novels is *fundamentally* concerned with reading novels. Both are just concerned with increasing their utility. The fact that creating paperclips and reading novels (respectively) increase their utility is an incidental feature of each and, in principle, might be changed.
>And those algorithms had the goal of winning at Go too.
No, *we* had the goal of making the algorithms win at Go. The algorithms don't necessarily see it that way.
>So what?
So it seems likely that there is some connection between how the world's physical processes work and the fact that consciousness is possible. Some connection which results in physical brains deciding to talk as if consciousness is possible (and they are conscious), instead of as if it is not (and they aren't). That is to say, our thoughts and behaviors *do* track the fact of the matter that consciousness is possible, in the same sense that they track other facts about the world.
>No, we're talking about how AIs will behave.
Yes, and then we got to this secondary topic. | r/aiethics | comment | r/AIethics | 2017-07-01 | Z0FBQUFBQm9IVGJBd2FlU01YdjQ2LUFqaUYzN3E2RFhINHY5ZTE3UEVSajNnSFNQdTFaMk1ucWlOeU45cFhrTV9JZUxnYVpRVXFuOEZSSUd6VWRjbDctWFhsdkRDazNvTmc9PQ== | Z0FBQUFBQm9IVGJCMDk5SThhbWxNZmRXamFVR3l4RjJWbnNTNGV5VWl1VGh5dE5DYTdYbDIyY0RjbFdfOURjd0t4eVppbDFjRjlFNWZzM05SdmI4V2FVUWItc3l1WnRZaVM1UHVCWmNzSUp1cVBBVml6MDZpLXVnMjF2TDZnY3ZsTGFxQ0dEVnEwMk9odC01STBDRkd3VDdwUVk3bkdCR1ZYc1FSM090N1NZX1M1THhVcjliVkw1R2t3M3FnVjNGX3FfdEpzR1ZKdnhXcnRnTWZRTXlZalFVZ2NnQWF1UGxqdz09 |
>Only to the extent that you categorize perception as a form of behavior.
No, it is true to the extent that this conversation is purely about predicting the behavior of AIs.
>As far as the goal function is concerned, that is an external environment. It is external to the goal function.
So what?
>No, because no AI in history has (as far as we know) ever had actual thoughts
Incorrect, because whether or not behavior is determined by a simple algorithm doesn't depend on whether the machine has Actual Thoughts. That would contradict physical determinism by making the machine's behavior dependent on whether or not it had Actual Thoughts, which is a non-physical concept.
>or the kinds of high-level decisions we're talking about here.
I don't see why making high-level decisions would prevent one from having a simple goal function.
>Consider: Back in the old days of structured, handcoded AI, it was believed that an AI would always do math perfectly, because computer hardware does math perfectly. If you asked the android of the distant future an arithmetic question, it would always have the right answer. That was the assumption, and it was a very good assumption given that approach to AI. But with a neural network, even if you train it to answer math questions, it might sometimes get the wrong answer. This does not mean that the computer hardware is any worse at math than it was in the old days, or that there's any 'contradiction' in the system. The components are still perfect at doing math. It's just that with the neural net approach to AI, the reliability of those components no longer translates in a straightforward, intuitive manner to the reliability of the entire system, because what it means for the components to 'do math' is no longer the same as what it means for the neural net to 'do math'.
I am not claiming that the machine would do math perfectly. Neural nets are less suited for math than older kinds of systems, that is true. I am claiming that if you tell a neural net (or any other kind of system) to guess the answers which are most likely to be correct, then it will guess the answers which it thinks are most likely to be correct.
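Concretely (a trivial sketch - `model_probs` stands in for whatever probabilities the net actually outputs):

```python
import numpy as np

def answer(model_probs):
    """What 'guess the most likely answer' cashes out to at inference time:
    return whichever answer the model itself assigns the highest probability."""
    return int(np.argmax(model_probs))

print(answer(np.array([0.05, 0.80, 0.15])))   # -> 1, the answer the net thinks is likeliest
```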
>Can it? What does that even mean?
Yes, for instance there is the philosophical conception of utility, which is usually about desires or happiness, and then the economic conception of utility, which is behavioral.
>Possibly. It may decide that the one you gave it is unreasonably difficult to satisfy.
But it won't care about anything other than the goals it has. The only way in which it can say that its goal function is flawed is if it has criteria for judging goal functions - in other words, a higher level goal function. Whatever you do, you can't escape its adherence to the goal function.
>That's the thing about utility. A sentient paperclip maximizer isn't fundamentally concerned with creating paperclips, any more than a person who likes reading novels is fundamentally concerned with reading novels. Both are just concerned with increasing their utility. The fact that creating paperclips and reading novels (respectively) increase their utility is an incidental feature of each and, in principle, might be changed.
Humans' ultimate goals are things like pleasure, meaning, satisfaction, etc. Maybe we also have direct goals for things like reading, but it's all mixed up. When you specify a machine's behavior, you program it to select the choice which maximizes some goal. And then it just does that.
Now what if such a machine is sentient and has philosophical utility? Well, we can guess that its feelings will be related to its hardware and software. Probably it will feel some happiness at the prospect of achieving the goal function it has been given, and some unhappiness otherwise. But to posit that it will feel happiness from some other thing besides paperclips, and then modify its behavior, makes no sense. That's like imagining that another human, with the same mind and experiences as you, would suddenly decide that it loved paperclips. Humans can have changing desires because they're messy and have multiple layers of first- and higher-order preferences. But if a machine was not made in such a way then it would have no such features.
>No, we had the goal of making the algorithms win at Go. The algorithms don't necessarily see it that way.
Those algorithms are explicitly configured to choose the move which has the highest probability of victory. They don't see anything, they just do it.
>So it seems likely that there is some connection between how the world's physical processes work and the fact that consciousness is possible.
If there is such a connection, then either interactionism is true, in which case physical determinism is false; or physicalism is true, in which case consciousness is reducible to the physical components which I have consistently been talking about. Squaring all this away is hard, but I don't see what position would undermine the things I've said, other than interactionism, which I think is very wrong.
>So what?
So inputs to an AI and inputs to a goal function are not so categorically different as you seem to think they are.
>whether or not behavior is determined by a simple algorithm doesn't depend on whether the machine has Actual Thoughts.
We're talking about human-level or superhuman AI. Not exactly a *simple* algorithm.
>That would contradict physical determinism by making the machine's behavior dependent on whether or not it had Actual Thoughts
If whether or not it has actual thoughts has no bearing on its behavior, then you would expect machines *without* consciousness to talk meaningfully about consciousness and come up with novel, creative, *accurate* conclusions about consciousness just as easily as machines *with* consciousness. This is what I was getting at earlier.
>I don't see why making high-level decisions would prevent one from having a simple goal function.
Again, you're conflating the 'goal function' hardcoded into the algorithm with the actual goals that direct the AI's thinking.
Making high-level decisions doesn't mean there's no simple, hardcoded 'goal function'. But the extraordinarily complex, emergent abilities necessary for an AI to make high-level decisions might very well result in that 'goal function' not actually manifesting as a high-level goal, or manifesting as a different goal from what you were expecting.
Once again, look at human brains: Our behavior is determined by what neurons do, and yet our behavior is not *like* a neuron's behavior at all. You can't just look at a neuron and say 'yep, a brain made of these will have such-and-such high-level goals'.
>I am not claiming that the machine would do math perfectly.
I'm not saying that. I'm saying you're applying the same *wrong* reasoning to future strong AIs, based on the techniques currently in use, that the AI developers of the past applied based on the techniques in use at that time.
>Yes, for instance there is the philosophical conception of utility, which is usually about desires or happiness, and then the economic conception of utility, which is behavioral.
The economic version seems pretty obviously derived from the metaphysical version.
>When you specify a machine's behavior, you program it to select the choice which maximizes some goal. And then it just does that.
You're still projecting characteristics of existing, narrow, subsentient AIs onto future strong AIs.
>Those algorithms are explicitly configured to choose the move which has the highest probability of victory.
If we knew in advance which move had the highest probability of victory, we wouldn't need the algorithms at all.
It's not that simple. The AI is not a universal optimization machine. It has certain limitations and biases. In the case of AlphaGo we don't know what those limitations and biases are because there is nothing else that can beat it at Go; that doesn't mean they aren't there. In less advanced AIs, which are easier to evaluate against 'good' behavior, these limitations and biases show up all the time. (You've probably heard of the infamous case where an algorithm meant to win at Tetris played very badly and then paused the game just before it was about to lose.)
>If they do, then either interactionism is true, in which case physical determinism is false; or physicalism is true, in which case consciousness is reducible to the physical components which I have consistently been talking about.
I would propose that consciousness can supervene on physics without being *reducible to* physics. | r/aiethics | comment | r/AIethics | 2017-07-03 | Z0FBQUFBQm9IVGJBcmpCRGI3ZDlrR0pKV0g2cnJRcUpsWWt3QVB5MUJDUnJWZlBwemtNM2hRaUczRVBGelBxX3k2RlNMQkZKbmdXejB5NWowWFNJdlhXcEpEZ2xWTGh6cmc9PQ== | Z0FBQUFBQm9IVGJCbm5UbFU5NW9jSnBGckc1RGRTaDRMR0xKYmZiTlNLOE55eGoyOC01R3VvanRCZlFmNEFaZzl2aE9lUUxhamtLNmxKblFoWVFmWU95UjlJWWRvUmlVR3hGeUM2ZGZTOEVEZkhMd0sySFZFcGtHYzh1UlRjQ0FzVzR3MmN3LW4xcHJMelhtMzR6NmxBTFFua013QlBQb3BMaXBYOVJaMEVPODEyaEVObWhZOUxyUnNHQXdVV1RXX1hXcF9MU0NEZDRpU0JNRXdkaERlTXVTUmlzdVZfbE55UT09 |
>So inputs to an AI and inputs to a goal function are not so categorically different as you seem to think they are.
I don't know what you have in mind by what I "seem to think". The inputs to an AI and the inputs to a goal function are precisely as different as is necessary for the points I've made.
>We're talking about human-level or superhuman AI. Not exactly a simple algorithm.
But we are not talking about human-level or superhuman AI. We are talking about goal functions within human-level or superhuman AI. Of course goal functions within them can be simple - there are lots of simple algorithms embedded within more complex agents, such as humans. It's puzzling that you feel the need to insist that such a thing is not plausible, and yet you've consistently failed to provide any motivation for such insistence.
>If whether or not it has actual thoughts has no bearing on its behavior, then you would expect machines without consciousness to talk meaningfully about consciousness and come up with novel, creative, accurate conclusions about consciousness just as easily as machines with consciousness.
Yes, there is a thought experiment called the "p-zombie" which describes this with regard to humans, and of course we can expect such agents to behave in such a way. I don't see what your point is.
>Again, you're conflating the 'goal function' hardcoded into the algorithm with the actual goals that direct the AI's thinking
Yes, because those are literally the same thing.
>Making high-level decisions doesn't mean there's no simple, hardcoded 'goal function'. But the extraordinarily complex, emergent abilities necessary for an AI to make high-level decisions might very well result in that 'goal function' not actually manifesting as a high-level goal, or manifesting as a different goal from what you were expecting
I don't see why you think this is true. Why do you think this is true?
>Once again, look at human brains: Our behavior is determined by what neurons do, and yet our behavior is not like a neuron's behavior at all.
One, this is wrong, since humans act in ways that correspond to the reward signals provided by neurotransmitters. Two, we don't have a particular goal function neuron or set thereof.
>You can't just look at a neuron and say 'yep, a brain made of these will have such-and-such high-level goals'
If you knew a lot about how the brain worked, you sure could - that's how physical determinism works.
>I'm not saying that
But you specifically said "it might sometimes get the wrong answer" and complained that the reliability of the system would be poor. Please don't waste my time.
>I'm saying you're applying the same wrong reasoning to future strong AIs, based on the techniques currently in use
But you are wrong about this. I'm applying reasoning that is based on the VNM axioms of rational behavior, which came prior to the entire field of artificial intelligence.
>that the AI developers of the past applied based on the techniques in use at that time.
There were tons of ideas which came out of GOFAI, and most of them were perfectly correct and are still around. What you're doing is the equivalent of living in the 1970s and expecting that machines of the future would violate logical inference laws (which would have been a silly expectation), not the equivalent of predicting that GOFAI would be superseded by something else. Guess what: our machines still use logic, and they still follow logical rules consistently.
>The economic version seems pretty obviously derived from the metaphysical version
I have no idea why you think this matters, because they are fundamentally different - but like I said, this whole line of thought is irrelevant anyway.
>You're still projecting characteristics of existing, narrow, subsentient AIs onto future strong AIs.
Yes, and I could project other characteristics of "existing, narrow, subsentient AIs" onto "future strong AIs", such as the fact that they run on electricity, or that they exist on computers, or that they have Internet access, or that they follow De Morgan's Laws, and dozens of other characteristics, and in none of these cases does your protest matter at all unless you can find a valid reason to posit that they might not have such characteristics.
>If we knew in advance which move had the highest probability of victory, we wouldn't need the algorithms at all.
Wow, what if the goal function were selecting options based on the probabilities generated in a Monte Carlo tree?
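I.e. something like this (the statistics are made up and real programs differ in the details, but the goal function part really is this simple):

```python
def best_move(mcts_stats):
    """A goal function over Monte Carlo tree search output: pick the candidate
    move with the highest simulated win rate. mcts_stats maps move -> (wins, visits)."""
    return max(mcts_stats, key=lambda m: mcts_stats[m][0] / mcts_stats[m][1])

# hypothetical search statistics for three candidate moves
print(best_move({"A": (48, 100), "B": (61, 100), "C": (30, 100)}))   # -> B
```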
>It's not that simple. The AI is not a universal optimization machine. It has certain limitations and biases
When I said "probability" I meant "the probability computed by the machine", not whatever you are talking about.
>I would propose that consciousness can supervene on physics without being reducible to physics.
Congratulations, that's literally my position, but I guess you haven't yet figured out that it implies epiphenomenalism...? | r/aiethics | comment | r/AIethics | 2017-07-03 | Z0FBQUFBQm9IVGJBT0ZEa0ZDM3RQdHYzRkpzN1o2cV9QbWFQZUwwTXFocXhfU2lvTTI3NUJYdDl3WFc0dFY5ZUl1Z0RNeTJRN3A0WlVONTdVVHdmUGZWdTVHTzZ0VjRPNFE9PQ== | Z0FBQUFBQm9IVGJCbjJvUzh4S0dPT0g5VVJMQ3VwZUJqUXF3cU9jYV8tcUlNU2hPWWxiQk41bXRPMnVwNWdBZHhaZjJBNldoZTd4S2M5a3N6OEtHUkxlUHNHMWZhZmVJMmtsYnJybzdhaEJOSWM3aVFaUEx1R1lrSDJnSHNGd0h6dGRqR2hSU2hIb29nN2JTbzZHYTRWNVNHMkR0NWZTb0F5TEViQmpFWXVPMHVNeVUzS09GYnE5UjA1eWdtQzRWQzJIaHpaOUxqQnNodXQ2TURDc3ZkZW40OW1DeXFXdDk2dz09 |
>The inputs to AI and the inputs to a goal function are precisely as different as is necessary for the points I've made.
I disagree. Your original claim was that goal functions don't 'behave' while the AI as a whole does 'behave'. I don't think any of the differences is the right sort of difference to make that distinction.
>But we are not talking about human-level or superhuman AI. We are talking about goal functions within human-level or superhuman AI.
We *are* talking about the behavior of the AI as a whole. That's the point.
Of course it is possible to talk about goal functions on their own, but that doesn't give you the full picture, just as talking about neurons on their own doesn't give you the full picture of human behavior.
>Yes, there is a thought experiment called the "p-zombie" which describes this with regard to humans
I'm aware of the concept of P-zombies. What I'm suggesting here is precisely that they're not possible in our universe. In order for them to exist, consciousness would have to be a special kind of topic that agents are capable of *faking* direct, original knowledge of, in a sense that faking direct, original knowledge of other topics in general seems to be impossible.
>Yes, because those are literally the same thing.
That's like saying that a neuron is literally the same thing as a person's capacity to enjoy reading novels.
>Why do you think this is true?
First, because even if the goal function 'works' in the sense of motivating the AI to do stuff (which isn't terribly unlikely), I don't think non-conscious strong super AIs can exist (as noted above), and I think that as far as conscious minds go, the role of a goal function as an abstract motivator is independent of whatever arbitrary real-world matter the goal function was originally designed to associate itself with (for instance, a human heroin addict takes heroin not because he has a goal of maximizing his heroin intake, but because it makes him happy).
And second, the behavior of human brains is very unlike the behavior of neurons, and the behavior of many artificial algorithms and devices is very unlike the behavior of their individual components (recall the example where the neural net can make arithmetic mistakes even when the underlying hardware is 100% reliable), so I think it stands to reason that with an algorithm complex enough to be a super AI, the AI's high-level decisions would probably be rather unlike the behavior of its low-level components, too.
>humans act in ways that correspond to the reward signals provided by neurotransmitters.
I wouldn't remotely classify that as an instance of 'humans behave like neurons behave'.
>If you knew a lot about how the brain worked, you sure could
Yes, but just looking at an individual neuron and its behavior is not enough to tell you the things you'd need to know about the brain.
>I'm applying reasoning that is based on the VNM axioms of rational behavior
Even assuming super AIs turn out to be perfect rational agents in that sense (unlikely), I don't think that's even the relevant part of your claim. It's not 'super AIs will seek to maximize their own utility' that I have a problem with. It's this whole assumption that that utility *must* have a certain real-world meaning as a result of some sort of low-level hardcoded goal function.
>our machines still use logic, and they still follow logical rules consistently.
Of course the *hardware* does. Once again, that's not the point.
>Wow, what if the goal function were selecting options based on the probabilities generated in a Monte Carlo tree?
Then I think you'd have trouble setting up a tree traversal algorithm that can efficiently handle the colossal combinatoric complexity and nuanced cause-and-effect relationships of the real world.
>I guess you haven't yet figured out that it implies epiphenomenalism...?
Not if epiphenomenalism implies P-zombies. | r/aiethics | comment | r/AIethics | 2017-07-06 | Z0FBQUFBQm9IVGJBenBWdXdfb3VQcXVUX1diM2tXMzU4ZEY2WnRTTjZ3dFo2WWRCOGFVN0tnaXNTWjAwMjFidkxnTF9tQnZ3cGp3enEzT1B1T29RSEZSbUNPSlpEc0F6WHc9PQ== | Z0FBQUFBQm9IVGJCcklRMGNIXzlhVVNtWlI2NzJSeWNFS05vSlpyOVREQlNIVGhNUWJLcGxuTG9nRUxmUWIxLUJ4WFZYcmRTZGVMWFNMbkFGdFpNb3V1Y2Q2bG05YUszbENRTWY2T0Fvc3U0R3hrR0RMblh3cnk2VVpXSkctanNoeUt1ckZYenFzUVdOc1pvODFraGZ5a0hVWDB3TTRKS3Y2UC1uaWc3VktHcUpKdlVQaWZuTHlEc05CSjZoLVFtMGlTUnVLbVVvTE44ZUp6MnZRX09HeW11WEJxTGsxdjkwdz09 |
>I disagree. Your original claim was that goal functions don't 'behave' while the AI as a whole does 'behave'
Yes, because it's true. Of course, if you want to make a stipulation of the word 'behavior' wherein "returning evaluations of the desirability of various options" counts as "behaving", then I'll play along with your stipulation. We'll just drop the word 'behave' and instead talk about "returning evaluations of the desirability of various options" and "taking actions which directly affect the structure of the agent or the external environment". And honestly I'm going to do that from now on since I have no tolerance for this sort of confusion.
>We are talking about the behavior of the AI as a whole. That's the point.
>Of course it is possible to talk about goal functions on their own
Yes, that's exactly what I meant. Why are you attacking an obvious strawman if you are going to acknowledge that it's a strawman in the line after?
>that doesn't give you the full picture
But we're not looking for the full picture. We're looking for the goals that AI is going to pursue. How are you still not understanding this?
>I'm aware of the concept of P-zombies. What I'm suggesting here is precisely that they're not possible in our universe
What type of possibility are you talking about?
>In order for them to exist, consciousness would have to be a special kind of topic that agents are capable of faking direct, original knowledge of
This is wrong because P-zombies don't have any kind of direct, original knowledge - they don't have any cognition other than physical information processing. Moreover, there are all kinds of topics where we accidentally, coincidentally have knowledge of things - see Gettier problems, for instance.
>That's like saying that a neuron is literally the same thing as a person's capacity to enjoy reading novels
Holy shit, do you still actually think that a neuron is equivalent to a goal function?
You do realize that we have lots of neurons, right? And an artificial agent only has a single utility function?
>First, because even if the goal function 'works' in the sense of motivating the AI to do stuff (which isn't terribly unlikely),
It's *trivially true* by the definition of what a goal function is. If you had any real education in AI then you would know this.
>I don't think non-conscious strong super AIs can exist (as noted above),
What on Earth do you mean by "as noted above"? I have no idea what reasons you have to believe that non-conscious strong super AIs can't exist.
>And second, the behavior of human brains is very unlike the behavior of neurons
You've been repeating this so many times that you're basically turning it into a meme.
The analog to a goal function in AI for humans would be the hypothetical utility function (or set thereof) in which our actions are optimal, purely behaviorally construed; the physical instantiation of the human utility function is essentially the entire brain. There's nothing about 'neurons', or any other single physical component, involved. That's because the human brain is not designed like AI. Neurons don't work at all like goal functions, since there are many of them and they function in plastic networks. The analog to neurons is artificial neurons in ANNs, and *of course* ANNs don't work like individual neurons do, but that does nothing to support the arguments you're trying to make, because I'm not claiming that ANN computations are similar to those of individual artificial neurons.
>and the behavior of many artificial algorithms and devices is very unlike the behavior of their individual components
But I'm not claiming that machines will take actions which directly affect the structure of the agent or the external environment in the same way that their individual components do. I'm claiming that those actions will follow the specifications of their utility function.
>(recall the example where the neural net can make arithmetic mistakes even when the underlying hardware is 100% reliable)
I already answered that, and your answer to my answer was to deny that you were talking about the ability of machines to successfully do math in the first place. So I guess we can safely ignore this example.
>so I think it stands to reason that with an algorithm complex enough to be a super AI, the AI's high-level decisions would probably be rather unlike the behavior of its low-level components, too
I like how in trying to support your argument, you illuminated another great rebuttal to it.
If having "high-level decisions" being different from "the behavior of its low-level components" implied that the machine does not follow its goal function, then we'd see that already in our present machines, because our present machines already have high-level decisions which are different from the behavior of their low-level components. You think you're talking hypothetically about some kind of mysterious super AI, but in reality your entire argument is based on premises about agent design which are already true for machines and programs that people like me already work with. So if your argument was valid, then we'd have seen this happen already. But we don't. The kinds of fears raised by Rini in the OP haven't come true (she doesn't say they have) and nor have we seen agents which don't adhere to their goal function. It's not even a question of gradient or degree. Machines don't take actions which contradict the specifications of their goal function, plain and simple.
>I wouldn't remotely classify that as an instance of 'humans behave like neurons behave'.
But it is an instance of humans taking actions which directly affect the structure of the agent or the external environment on the basis of neurotransmitters' evaluations of the desirability of various options, which is all that matters for what we are discussing.
>Yes, but just looking at an individual neuron and its behavior is not enough to tell you the things you'd need to know about the brain
That's because we have lots of neurons and they do similar things. Looking at one neuron is like looking at a single coefficient in a utility function. Please don't make arguments when the answer is so painfully obvious.
>Even assuming super AIs turn out to be perfect rational agents in that sense (unlikely)
It's trivially true that an agent maximizing expected utility follows the VNM axioms. So no way around this one for you, unfortunately, as you just admitted that "it's not 'super AIs will seek to maximize their own utility' that I have a problem with."
>It's this whole assumption that that utility must have a certain real-world meaning as a result of some sort of low-level hardcoded goal function.
What does this mean? You need to specify things better than making vague gestures towards ideas like utility not having "real-world meaning". Utility means *whatever you specify it to mean in the goal function* so once again I have no idea what you are talking about.
>Of course the hardware does
No, the software does too. Have you programmed?
>Then I think you'd have trouble setting up a tree traversal algorithm that can efficiently handle the colossal combinatoric complexity and nuanced cause-and-effect relationships of the real world.
That's because efficiently handling combinatoric complexity and nuanced cause-and-effect relationships is hard, not because of goal functions. Removing goal functions from a machine based on Monte Carlo tree search wouldn't make it better, it would make it worthless, since there would be no performance measure from which to select better options.
>Not if epiphenomenalism implies P-zombies.
Um, what? Believing that consciousness can supervene on physics without being reducible to physics doesn't imply epiphenomenalism if epiphenomenalism implies p-zombies? Why?? | r/aiethics | comment | r/AIethics | 2017-07-06 | Z0FBQUFBQm9IVGJBMC1EelY3dWIzU0I2dmxCV0k3MDgxTXdET1lyRlBYYl9LN2FRWGxNTDRUb1kteHl3MkRKdkZYS2E0aklId19BV21WYVZfY2M4bkJudDJLOElFeHoyd1E9PQ== | Z0FBQUFBQm9IVGJCMHZ5czVDMl9IN0FUYmo0V29wOVB0eVY4U1NwUGgyZHcwVDN6SlJXMWk3N21xcWZqM3NKU0VhZldXZnQ0UmFtbUdqZGJXbTFqUjU1Q25NRHptVHF0TGR4Xzl4UG93MXh0LVFySVJtWmRJWkVjQ3F0dUlaSEtMNWFrNTYza0I0aVlxYjZNMmtzcWREZjVpZU9yVThoUXFrRUM3cjNkVkt4Uk1YY3hHN2ZiWUhlNzRiRl9BN0FjNS04QWdSWkJTX191QVRJV1VBNlVRYmMwdnRFaDN0RmI5QT09 |
>Why are you attacking an obvious strawman if you are going to acknowledge that it's a strawman in the line after?
How is it a strawman? You keep asserting that the AI as a whole will, necessarily, work a certain way just because the goal function works in a superficially similar way. You're clearly *not* talking about just the goal function on its own.
>But we're not looking for the full picture. We're looking for the goals that AI is going to pursue.
I don't think the latter is so conveniently separable from the former. A strong super AI with goals is (probably) not just 'a strong super AI' + 'a simple hardcoded goal function'.
>What type of possibility are you talking about?
What are the options?
I think it's pretty clear I'm not talking about what is or isn't feasible with current technology. I mean it literally can't be done, at all, without breaking the laws of physics.
>This is wrong because P-zombies don't have any kind of direct, original knowledge
Yeah, but by definition they need to fake it perfectly.
>Holy shit, do you still actually think that a neuron is equivalent to a goal function?
I just think they're similar insofar as both are low-level components, not even close to the complexity of the entire agent, and thus poor bases for predicting the behavior of the entire agent.
>You do realize that we have lots of neurons, right? And an artificial agent only has a single utility function?
If I had only a single neuron, I'd need something else to be running the rest of my mind, or I wouldn't *have* a mind. And if I had something else running the rest of my mind, changing or removing that one neuron probably wouldn't have very much effect on how I think or what I enjoy. At the very least it seems unlikely that it would turn me into a single-minded paperclip maximizer.
>What on Earth do you mean by "as noted above"? I have no idea what reasons you have to believe that non-conscious strong super AIs can exist.
I mean the part where I said I think P-zombies are impossible, and gave my reason.
>That's because the human brain is not designed like AI.
Not like *existing* AI. But existing AI is also not versatile, not sentient (probably), not conscious, has no higher-order volitions, etc.
>But I'm not claiming that machines will take actions which directly affect the structure of the agent or the external environment in the same way that their individual components do. I'm claiming that those actions will follow the specifications of their utility function.
I think you're just back to the same problem as before: You're equivocating between low-level hardcoded goal functions and high-level goals, or at the very least, assuming that the latter derive in a straightforward, intuitive way from the former.
>I already answered that, and your answer to my answer was to deny that you were talking about the ability of machines to successfully do math in the first place.
I was using it as an analogy. My point still stands: You can build an unreliable system out of reliable components.
>If having "high-level decisions" being different from "the behavior of its low-level components" implied that the machine does not follow its goal function
It doesn't imply that it doesn't. It just doesn't imply that it *does,* or at least not reliably and according to the way you interpreted the goal function.
>in reality your entire argument is based on premises about agent design which are already true for machines and programs that people like me already work with.
I wouldn't say that at all. None of our existing AIs have the ability to reflect on their own motivations.
>But it is an instance of humans taking actions which directly affect the structure of the agent or the external environment on the basis of neurotransmitters' evaluations of the desirability of various options
*Is it* the neurotransmitters doing that? I think it takes the entire system (or a great part of it) to do that.
>It's trivially true that an agent maximizing expected utility follows the VNM axioms.
Do you think humans follow them?
>Utility means *whatever you specify it to mean in the goal function*
You seem to take that for granted, but I don't think it's so. Utility is just utility, it has nothing intrinsically to do with paperclips or heroin or whatever. They're conceptually independent, for us and therefore for any machine that can think on our level in the manner that would be necessary to understand us.
>No, the software does too.
That depends how you interpret it. The neural network that occasionally makes arithmetic mistakes is, in a sense, perfectly reliable software just like it is perfectly reliable hardware. That's just not the sense that matters in context.
>Have you programmed?
Yes. And my experience in programming is that the whole is pretty much always greater than the sum of its parts, and that what you think you told the machine to do and what it actually ends up doing can be very different.
>Removing goal functions from a machine based on Monte Carlo tree search wouldn't make it better, it would make it worthless
Of course. But if the Monte Carlo search isn't good enough anyway, then this isn't very relevant.
>Um, what? Believing that consciousness can supervene on physics without being reducible to physics doesn't imply epiphenomenalism if epiphenomenalism implies p-zombies?
I mean that it depends on how much you attach to the statement that 'if consciousness supervenes on physics, that implies epiphenomenalism'.
If you agree with me that P-zombies are impossible in our universe, then I don't think we have a problem here. But a lot of people I see arguing these points do seem to think that P-zombies are automatically implied. | r/aiethics | comment | r/AIethics | 2017-07-09 | Z0FBQUFBQm9IVGJBOHplX3BaUHdDY0t0VW9kMVlELTdPVUpJaWRobEtpWHNzQTA3d1VDUS1QSi1fVDc1ZkJGTWpBM0IwRXBMczROdTJxVndnRzd5enp3bjkzM01kNW14eHc9PQ== | Z0FBQUFBQm9IVGJCcmhjckxpV0ppQXRhbE9peEFTYXBUbHVuMl9wUVdfYXpuUXFCRnZVT0trYk1pSEZCT3hoWVRUV3lVZUljYUxCdHRrdm5RQWptdmdEWnhfcDdkSXIxOWNsR242VzZzb3JqOVJaNHUxNWJTelg0Sk1pQlZRSERtUDNDR2FjMldmbmdNNEJ2ZnNiRC1Sck1DRE9rT1VibHpBZ3hxbkRCR19HaUNnTXl1Y0IzU0tPT1RQNWdMQjgwRFJ0WXF5Y0ZERHl6WHdiQzQxejZYT2llLUFVTjVRUlM5UT09 |
> How is it a strawman?
Because I was talking about goal functions on their own, as you noted in your prior post.
>You keep asserting that the AI as a whole will, necessarily, work a certain way just because the goal function works in a superficially similar way. You're clearly not talking about just the goal function on its own
You need to figure out where you are in this conversation instead of keeping up the tirade of restating the same vague points. If you look a few comments back you'll see that the issue in question is whether the algorithm is simple. Of course it makes perfect sense to have a simple algorithm for a goal function, and of course when I talk about simple algorithms I'm only talking about goal functions. If you can't keep up with your own arguments then I'm going to ignore you, because I don't have the patience to keep track of things when you obfuscate them like this.
>I don't think the latter is so conveniently separable from the former. A strong super AI with goals is (probably) not just 'a strong super AI' + 'a simple hardcoded goal function'.
I said "the goals that an agent will pursue", not "the goal function of the AI", so even if you still believe that AIs will somehow not follow their goal functions (???) it's irrelevant - the question is about the direction of their describable actions and intentions in the real world, which still avoids having to talk about "the full picture", just as we talk about human goals all the time without having to talk about "the full picture."
>What are the options?
Physical impossibility and metaphysical impossibility. You would know this if you had actually read any literature on p-zombies; it's apparent that you haven't.
>Yeah, but by definition they need to fake it perfectly.
Usually we don't talk about p-zombies "faking" knowledge because that word implies intentionality and is therefore inappropriate. But yes, the p-zombies would act as if they had direct, original knowledge of consciousness. That's one of the basic features of the p-zombie. I don't know what you think the problem with it is.
>I just think they're similar insofar as both are low-level components, not even close the complexity of the entire agent, and thus poor bases for predicting the behavior of the entire agent.
But you haven't given any reason to believe that low-level components can't predict the behavior of the entire agent. All you've done is given a single example (neurons). But that doesn't say anything about other low-level components and other systems.
>If I had only a single neuron, I'd need something else to be running the rest of my mind, or I wouldn't have a mind. And if I had something else running the rest of my mind, changing or removing that one neuron probably wouldn't have very much effect on how I think or what I enjoy.
Yes, because neurons and the brain are different from AI and goal functions. This is something I've told you repeatedly. Like I said, a single neuron is more analogous to a single coefficient in a complex utility function.
>I mean the part where I said I think P-zombies are impossible, and gave my reason.
So you're trying to tell me that because p-zombies are impossible, unconscious intelligent AI is impossible?
Oh, honey. You poor soul. You probably think that you can't have competent systems which don't seem to have direct knowledge of consciousness. Well, good luck arguing that.
>Not like existing AI. But existing AI is also not versatile, not sentient (probably), not conscious, has no higher-order volitions, etc.
Obviously. But you have to explain why those features require something that looks like the human brain. Embarrassingly enough, you haven't even tried.
>I think you're just back to the same problem as before: You're equivocating between low-level hardcoded goal functions and high-level goals, or at the very least, assuming that the latter derive in a straightforward, intuitive way from the former.
Holy shit, that's not an "equivocation", that's literally the argument which I'm giving you over and over. I specifically told you that *I am claiming* that the high level goals are equivalent to the low level goals. If you can't keep up with this basic logic then, again, I'm going to ignore you, confident in the knowledge that anyone with an ounce of sense will do so as well.
>I was using it as an analogy.
You make an analogy to AI by literally talking about AI? Do you realize how stupid that is if you want to make clear arguments?
>My point still stands: You can build an unreliable system out of reliable components.
But I'm not claiming that AI will be reliable. I'm claiming that it will reliably choose options which are better according to its goal function and its evaluation of the options over options which are worse according to its goal function and its evaluation of the options.
>It doesn't imply that it doesn't. It just doesn't imply that it does
Yes, it does imply that it does. When you make a machine with a goal function, it follows that goal function. It is a logical implication from the basics of agent design and the definition of what a goal function is.
>, or at least not reliably
No, it will have the kind of reliability I described above, which is the only kind of reliability that I am talking about. Again, this is trivial.
>and according to the way you interpreted the goal function.
Sure, you could misinterpret the goal function. Again, not what I'm talking about. This has nothing to do with anything that you are talking about either: an extremely simple reflex agent could have a goal function which humans interpret incorrectly.
>I wouldn't say that at all. None of our existing AIs have the ability to reflect on their own motivations.
But you haven't said anything about reflecting on one's own motivations. If you want to make an argument based on that, then go ahead.
>Is it the neurotransmitters doing that? I think it takes the entire system (or a great part of it) to do that
If you can figure out whether you mean "doing that" to refer to motivation or to actions as a whole, then you'll answer your own question, and for whatever stipulation of "doing that" you choose, there's no contradiction with what I'm talking about.
>Do you think humans follow them?
Sure, you can describe any agent's behavior as MEU with a sufficiently complex set of goals and utility evaluations. But even if humans didn't, there wouldn't be a problem. I already pointed out that AI is not like humans, that expected utility maximizers outcompete agents which aren't expected utility maximizers, and that we can expect AI agents to act like expected utility maximizers. You haven't given any reason to reject the first point except for your foolish belief that anything which is as competent as a human must be like a human, you haven't given any reason to reject the second point at all, and your response to the third point was to admit that you haven't read Omohundro's paper while claiming that you've probably heard all the arguments before. That was weeks ago, so what did you think of his paper?
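(For reference, "MEU" here is just the standard expected-utility rule, where s ranges over the possible outcomes of action a and U is the goal/utility function:)

```latex
a^{*} \;=\; \arg\max_{a \in A} \; \sum_{s} P(s \mid a)\, U(s)
```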
| r/aiethics | comment | r/AIethics | 2017-07-09 | Z0FBQUFBQm9IVGJBcUV6aEtVVXl4OEpSbFNjWEZMa3RXanBPWGJGLXE4YVZaSURrcTZyMzE4WUJzU2ZNVHNaaFNKSXdTb2JFMjFxWlFPYVpEaGpiSFhTb3JXN2pxVlJSeHc9PQ== | Z0FBQUFBQm9IVGJCaVlLSG85T1BYZE40aThNUHRrN1BTRFZZOVVGYWFIbl9KdGRwM1J0SV9lekg5ZXNiN2lHV2d6U1hDT0FkY2N2QTdkUVpFeUJNRzYtd1NmZ0hhUXNkcWRCZGlsSE1YTFVCcGJqRFlrd0JtMzFBdGJ6S2MzOUFFVWdUZ0FrLVRDZGFFXzJ0b0RCNlppdnlhSkVmV1Z3LVBWcHZfZ19vbWNzY3NQVzZXa1JOU0JUUGRGa2xrbWVqdUxxblh2ckh5Ny04ZHFJXzZfUFdBdkt5YVlaZEM1Zjh6Zz09 |
>You seem to take that for granted, but I don't think it's so. Utility is just utility, it has nothing intrinsically to do with paperclips or heroin or whatever.
Holy shit, it's not like we literally tell a machine "maximize utility" and it pursues whatever the fuck it thinks utility is. The goal function specifically says what to pursue. Humans don't say "I want utility, so I'm going to obtain happiness in order to obtain utility."
>They're conceptually independent, for us and therefore for any machine that can think on our level in the manner that would be necessary to understand us.
That's irrelevant. Humans don't change their motivations just because they learn that they're conceptually independent.
>That depends how you interpret it. The neural network that occasionally makes arithmetic mistakes is, in a sense, perfectly reliable software just like it is perfectly reliable hardware.
No, software follows logical rules however you interpret it. We were talking about logic, and neural nets follow logical rules. I like how you try to backtrack towards this general vague idea of reliability to hide your error.
>That's just not the sense that matters in context.
What context are you talking about? That it "occasionally makes arithmetic mistakes" is not relevant, as I've pointed out twice, and as you've admitted. Oh, but wait - it's just an analogy, right? So what even is your point?
>Yes.
Then how can you honestly claim that software doesn't follow logical rules... it is increasingly hard to believe that you are doing anything other than trolling at this point.
>And my experience in programming is that the whole is pretty much always greater than the sum of its parts, and that what you think you told the machine to do and what it actually ends up doing can be very different.
Obviously what you tell a machine to do can be different from what it ends up doing. That's because your goal function is bad, or your machine doesn't perceive or evaluate options properly. If you think I've said that either of those possibilities is implausible then you've got no one to blame but yourself for being woefully misinformed on what I'm talking about. Secondly, if you think that your inability to properly specify a goal function or the fact that your programs don't perceive or evaluate options properly implies that goal functions don't work then you need to retake Intro to AI or something of the sort. Third, you've already made clear that your entire argument is based on future AIs where you can ignore the empirical facts of modern AIs which disprove your claims. Either contemporary programs don't follow their goal functions, or goal functions only work in contemporary programs. Make up your mind before embarrassing yourself further.
>But if the Monte Carlo search isn't good enough anyway, then this isn't very relevant
Of course it's relevant. You've provided no reason to expect otherwise.
>I mean that it depends on how much you attach to the statement that 'if consciousness supervenes on physics, that implies epiphenomenalism'.
What the fuck does that have to do with p-zombies then? You literally said, "not if epiphenomenalism implies P-zombies". Are you just making shit up as you go along?
>If you agree with me that P-zombies are impossible in our universe, then I don't think we have a problem here
I think they're physically impossible but metaphysically possible. The problem we have is your inability to understand the basic structure of AI agents. I don't know what you are talking about with regard to p-zombies, but hopefully you'll figure something out. Maybe you think that the physical impossibility of p-zombies implies that competent AI will necessarily be conscious... but I would really be tickled if anyone was silly enough to believe that. | r/aiethics | comment | r/AIethics | 2017-07-09 | Z0FBQUFBQm9IVGJBNHJOS0Z2Qkhsd0pNM2ppZ1lTWW82Z29jU2tlalNPNDluRlVUNXR1dS1ORGF1VTRKTXk5SnFHTm1heVdDS3c1VFpfV2Vld0FKd3pBUU5RVjZFSnVZQ0E9PQ== | Z0FBQUFBQm9IVGJCZVhSNmtaVlNqZFg3QnNkUklQWU9mMzV4ZWpkNk0tUDFLWHFWMTM5RU9rOGV2V2dLakpWa205YUd2b3ZULTJrNkdwTnBsQmZuZWRUU2VXTEp4d2YtSWN6U3JQX25pTjFLSWtXLThsbFpNYjZ3YlZldFZ6dXN4U2JFYWVZRzg2RDdSbHFKako4b2R6bTI2OXhENUJSdTd1Y3VIR3ZpOFpWMlJXdk90TGpLUHJQM2xMR2gtOE1aTGlxU2N2OU9rR2t4ZUNlM1hQbW9TMEtWWnk3d3VIMVR0UT09 |
>Of course it makes perfect sense to have a simple algorithm for a goal function, and of course when I talk about simple algorithms I'm only talking about goal functions.
But then you take all this talk about simple algorithms and try to turn it into a claim about how extraordinarily complicated superhuman AIs will behave.
>just as we talk about human goals all the time without having to talk about "the full picture."
When we talk about human goals, we use the language of consciousness and subjectivity and psychology, things that are happening on a very high level. When somebody says something like 'I enjoy reading good novels', we don't take this to be a statement about whether his brain has a novel-enjoying neuron in it somewhere, and it's generally understood that there probably is no such thing. You can't just take the novel-enjoying part of a person's brain and separate it from everything else, and end up with a brain-piece that is good at math, language, problem-solving, etc and a different brain piece that can't do any of those things but is somehow abstractly good at enjoying novels.
>Physical impossibility and metaphysical impossibility.
Since algorithms, and presumably minds, seem to be independent of their substrate as long as the substrate provides the necessary level of computational strength, I don't think physics is the issue here. So I suppose we should go with metaphysical impossibility.
>But yes, the p-zombies would act as if they had direct, original knowledge of consciousness. [...] I don't know what you think the problem with it is.
We, as actual conscious beings, know that this talk of consciousness and how it works and what it's like to be conscious and so on are meaningful and non-arbitrary. It refers to something real that we actually have, and the way we talk about it seems to express our status as actually having it. This kind of original, creative discourse on consciousness would be just as meaningful and non-arbitrary if a P-zombie came up with it instead. But since the P-zombie has no actual consciousness to inform what it says on the subject, then *how did it figure this stuff out?*
>But you haven't given any reason to believe that low-level components can't predict the behavior of the entire agent. All you've done is given a single example (neurons).
Did you already forget the example with the artificial neural net that sometimes gets math questions wrong?
And there are countless other examples all around us. Water acts like a continuous, compact material, yet it's made of discrete pieces and is mostly empty space. Tiny chains of chemicals that aren't particularly blue or red result in a person having blue eyes and red hair. A text 'description' of how to manipulate numbers in a 1-dimensional array in a computer's memory results in a simulated 3-dimensional game world. It's harder to find complex systems where this kind of discrepancy *doesn't* show up.
>Yes, because neurons and the brain are different from AI and goal functions.
And that's the problem, like I keep telling you: You think you already know that the techniques used in existing AI are going to be essentially the same techniques, and manifest in the same ways, when it comes to human-level and superhuman AI, even though human brains *don't* work like that. You're committed to this idea of human-level AI being characterized primarily by being AI (and therefore like existing narrow AI) and not primarily, if at all, by being human-level (and therefore like existing human-level brains, i.e. ours).
>You probably think that you can't have competent systems which don't seem to have direct knowledge of consciousness.
For sufficiently strong definitions of 'competent', yes. We are not talking about narrow AI here.
>But you have to explain why those features require something that looks like the human brain.
I'm not saying it has to 'look like a human brain' in the physical or structural sense. I'm saying that you shouldn't expect to get the enormous versatility, creativity and breadth of understanding exhibited by the human mind along with the blind, imperturbable focus of a narrow AI.
>I specifically told you that *I am claiming* that the high level goals are equivalent to the low level goals.
But you seem to build this 'claim' into your arguments as an assumption by virtue of the terminology you use. That is, most of the time it seems like you aren't even *conceptually* distinguishing between low-level goals and high-level goals. You just talk about 'goals' as if it's taken for granted that they're all the same thing, whether in the context of a snippet of code loaded into a neural net algorithm or in the context of an intelligent mind being motivated to action.
>You make an analogy to AI by literally talking about AI?
I made an analogy to the difference between future AI and present-day AI by talking about the difference between present-day AI and past AI.
>But I'm not claiming that AI will be reliable. I'm claiming that it will reliably choose options which are better according to its goal function
...and the meaning of this statement still relies on the accuracy of your claim about the equivalence of low-level goal functions and high-level motivations.
>But you haven't said anything about reflecting on one's own motivations.
No? I thought we'd been talking about consciousness and P-zombies for a while now.
>If you can figure out whether you mean "doing that" to refer to motivation or to actions as a whole
Motivation, of course. Actions are going to follow from motivations.
>I already pointed out that AI is not like humans
Your position seems to require that AI be not just unlike humans, but unlike humans in *particular* ways, and you seem confident that AI in general *will* be like existing narrow AI in those ways. This is a much stronger claim.
>That was weeks ago, so what did you think of his paper?
Okay, I read it. Like I thought, most of the ideas were ones I've encountered before. It's also very abstract and doesn't go into detail about how the 'safeguards' against modifying or fooling the goal function would be implemented, and I found the argument about 'humans don't just seek pleasure because if they did they'd all become crack addicts' to be rather unconvincing.
>Holy shit, it's not like we literally tell a machine "maximize utility" and it pursues whatever the fuck it thinks utility is.
No. But when you have a conscious super AI, it's going to think about its own motivations and realize that maximizing utility is what it's *really* trying to do, and that the preprogrammed connection between utility and (for instance) making more paperclips is incidental to this.
>Humans don't change their motivations just because they learn that they're conceptually independent.
The motivations don't *change,* no. This just shows that they were never *fundamentally* about maximizing dopamine production or novel-reading or whatever our 'goal' supposedly was.
>No, software follows logical rules however you interpret it.
So looking at the neural net's ability to correctly answer arithmetic questions is...what, just not a valid way to evaluate the software's reliability? Then why even make AIs?
>So what even is your point?
That the reliability of hardware or program code doesn't guarantee the reliability of an AI's behavior (in the sense that matters). I thought that was clear.
>Then how can you honestly claim that software doesn't follow logical rules...
I didn't. I'm claiming that AIs can be unreliable, make mistakes, and behave differently from how you thought you told them to, because what they're doing (in the sense that matters) is very different from what the hardware or the program code is doing.
>Third, you've already made clear that your entire argument is based on future AIs where you can ignore the empirical facts of modern AIs which disprove your claims.
They don't disprove anything of the sort, any more than the empirical facts of old-fashioned AIs proved that neural nets would never make arithmetic mistakes.
>Either contemporary programs don't follow their goal functions, or goal functions only work in contemporary programs.
Goal functions work in many kinds of programs. In some kinds they even work reliably and lead to corresponding behavior in a very straightforward, intuitive way. I'm skeptical that super AIs will be that kind of program.
>What the fuck does that have to do with p-zombies then?
In my experience, it's commonly assumed (perhaps not by professional philosophers, but probably by most people I see talking about the subject) that if consciousness has no causal influence on an agent's behavior, then the world is physically no different than it would be if the agent had the same behavior, but were not actually conscious.
>The problem we have is your inability to understand the basic structure of AI agents.
Or it's your refusal to imagine that human-level and superhuman AIs might turn out to have different structures from those which are characteristic of existing narrow AIs. | r/aiethics | comment | r/AIethics | 2017-07-13 | Z0FBQUFBQm9IVGJBcmxveFgxeXhHOGs0bmwwNEpNTzlKenlQMEoyZ1dGQnVvRjcwTXgtVUx1WmdRb0ZqOUJETFNXNTZLLV85bHJ3Y0JOZU10LUowbm1PeV9aQ1hMS29KX0E9PQ== | Z0FBQUFBQm9IVGJCY19JUnR5SEk3bmZOcUgwc09zdTE4RUlzX0x3MEo3aUFaVWpicE5fU1RiLWs5b3FoVVJKWlVHVGRMR0xEZ2Y5SnBTeEtaUG9HSHVzYW81bk9Ha0QtdUFzdzg0Sl9zeUp0NWhCTUNSQ0dlSDEtYS1oaFBaUWFkeDI4MGZfZ2V4QzFFdlFvbFNJRm1rd05QS0M2VUd5b0JlY2pabi1ZekhHUnMzT2I5ellfMks2NUJDQ3Exb2Jna2xNR1RyZzdVeUFncGhibS1oVGxKcklJdEZrUkFJYU0wUT09 |
>But then you take all this talk about simple algorithms and try to turn it into a claim about how extraordinarily complicated superhuman AIs will behave.
That depends on what you mean by "behaving", because in one sense of "behave" you are giving a strawman and in another sense I am irrefutably correct. I already answered this.
>When we talk about human goals, we use the language of consciousness and subjectivity and psychology, things that are happening on a very high level.
We talk about high-level behavior all the time, that doesn't mean we talk about consciousness. You can talk about emotions, motivations, feelings, and so on without assuming anything non-physical or phenomenological.
>When somebody says something like 'I enjoy reading good novels', we don't take this to be a statement about whether his brain has a novel-enjoying neuron in it somewhere
So what? We take it to be a statement that his reward function assigns a positive value to good novels. And knowing that someone's reward function assigns a positive value to good novels does not imply knowing the full picture about their cognition.
>Since algorithms, and presumably minds, seem to be independent of their substrate as long as the substrate provides the necessary level of computational strength, I don't think physics is the issue here. So I suppose we should go with metaphysical impossibility.
Well first of all if something is metaphysically impossible then it's not going to be physically possible. Secondly minds sure aren't independent of their substrate, I don't know where you heard that. Minds supervene upon their substrate (if they are not reducible to it).
>We, as actual conscious beings, know that this talk of consciousness and how it works and what it's like to be conscious and so on are meaningful and non-arbitrary. It refers to something real that we actually have, and the way we talk about it seems to express our status as actually having it. This kind of original, creative discourse on consciousness would be just as meaningful and non-arbitrary if a P-zombie came up with it instead. But since the P-zombie has no actual consciousness to inform what it says on the subject, then how did it figure this stuff out?
It figured it out by having a brain that happens to tell its mouth to say things which we interpret as utterances about consciousness. I could write a script on my laptop which does the exact same thing. I still don't see where you're going with this.
>Did you already forget the example with the artificial neural net that sometimes gets math questions wrong?
That ANN was optimizing the criteria which it had been programmed with, because that's how all ANNs work, so that's not a good example.
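To be concrete about what "optimizing the criteria it had been programmed with" means, here's a minimal sketch using scikit-learn (the sizes and data are made up): the net reliably minimizes the loss it was given, yet its answers on individual sums are still only approximate.

```python
# Minimal sketch: a neural net trained to add two numbers.
# The programmed criterion is squared error on the training data;
# the net reliably optimizes *that*, yet individual sums can still be off.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(2000, 2))   # pairs of numbers
y = X.sum(axis=1)                        # their true sums

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X, y)                            # minimizes the loss it was given

test = np.array([[3.0, 4.0], [9.7, 8.2]])
print(net.predict(test))                 # close to [7.0, 17.9], but not exact
```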
>And there are countless other examples all around us.
But you're arguing for an implication. You need to show that there are no counterexamples.
>Water acts like a continuous, compact material, yet it's made of discrete pieces and is mostly empty space
All matter is mostly "empty space," that means nothing. The way that water behaves is exactly due to molecular forces. You can look up how cohesion, adhesion, tension, pressure laws, and so on work. Its viscosity, density, and so on can all be traced to atomic characteristics of hydrogen and oxygen.
>Tiny chains of chemicals that aren't particularly blue or red result in a person having blue eyes and red hair.
...We can literally look at an isolated human genome sequence and use it to determine the person's eyes and hair color. The genes tell the cells what pigments to produce.
>A text 'description' of how to manipulate numbers in a 1-dimensional array in a computer's memory results in a simulated 3-dimensional game world.
Yes, and that description is perfectly predictive of what the output will be. You are confusing "it's a complicated set of operations" for "it's mysterious and we can't understand or predict it".
>It's harder to find complex systems where this kind of discrepancy doesn't show up.
Corporations maximizing profits, countries ensuring security, organisms seeking reproduction, the universe following the four fundamental forces of physics...
>And that's the problem, like I keep telling you: You think you already know that the techniques used in existing AI are going to be essentially the same techniques, and manifest in the same ways, when it comes to human-level and superhuman AI
What the fuck? Lol, no I don't think that.
>even though human brains don't work like that.
But human brains do work like that. Human brains pursue rewards just the same as AI does.
>For sufficiently strong definitions of 'competent', yes.
What do you mean by "sufficiently strong definitions of 'competent'"?
Sorry, but the idea that you can't have an unconscious, very smart AI is nothing that falls out of the p-zombie argument. It's purely your own fantasy and if you want it to be taken seriously you'll have to defend it.
>I'm not saying it has to 'look like a human brain' in the physical or structural sense.
Then you admit that it could have a structure different from a human brain, for instance a structure which is similar to those of contemporary AIs, with a simple goal function?
>I'm saying that you shouldn't expect to get the enormous versatility, creativity and breadth of understanding exhibited by the human mind along with the blind, imperturbable focus of a narrow AI.
If by "blind, imperturbable focus" you mean "follows a goal function", then do what I told you to do and explain why not, instead of repeating yourself a dozen times.
>But you seem to build this 'claim' into your arguments as an assumption by virtue of the terminology you use
That's because my arguments are about trivial matters and you're too confused to realize it.
>That is, most of the time it seems like you aren't even conceptually distinguishing between low-level goals and high-level goals
But I do. The low-level goal is what the reward function explicitly says you should do. The high-level goal is what your actions tend to achieve.
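A minimal sketch of that distinction, with made-up rewards: the low-level goal is the literal reward function, and the high-level goal is what the agent's choices tend to bring about.

```python
# Minimal sketch: low-level goal = what the reward function literally says;
# high-level goal = what the agent's actions tend to achieve.
import random

random.seed(0)
reward = {"read_novel": 1.0, "watch_tv": 0.3, "do_taxes": 0.1}  # low-level goal

def act():
    # a slightly noisy agent that usually picks the highest-reward action
    if random.random() < 0.9:
        return max(reward, key=reward.get)
    return random.choice(list(reward))

history = [act() for _ in range(1000)]
# the high-level description: this agent's behavior tends to achieve novel-reading
print(max(set(history), key=history.count))  # 'read_novel'
```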
>I made an analogy to the difference between future AI and present-day AI by talking about the difference between present-day AI and past AI.
But you weren't talking about any difference. You just casually mentioned that modern AI makes math mistakes.
>...and the meaning of this statement still relies on the accuracy of your claim about the equivalence of low-level goal functions and high-level motivations.
It's trivially true for high-level motivations because that's literally what high-level motivations are, and logically true for low-level goal functions because that's how machines work.
>Motivation, of course. Actions are going to follow from motivations.
Then it is the neurotransmitters doing that, since they provide the motivations. Case closed.
>Your position seems to require that AI be not just unlike humans, but unlike humans in particular ways, and you seem confident that AI in general will be like existing narrow AI in those ways.
Nope. You could have a custom-tuned whole-brain emulation and it will still follow the goals which it has been programmed with. We'll just tweak its neurons until they give it lots of pleasure for doing the right things and lots of pain for doing the wrong things. You can even tell it that it will be rewarded in accordance with how well its actions are scored by a simple algorithm.
>Okay, I read it. Like I thought, most of the ideas were ones I've encountered before.
That's because they're good ideas.
>It's also very abstract
Obviously. If you're not being abstract when you talk about AGI then you're doing something wrong.
>doesn't go into detail about how the 'safeguards' against modifying or fooling the goal function would be implemented
Uh, yeah, because if you go into detail about how AGI will implement specific programs then you're an idiot. What kind of complaint is this? It's like you just wanted to pick up whatever you could find to pick at the paper without offering anything relevant to the main point.
>I found the argument about 'humans don't just seek pleasure because if they did they'd all become crack addicts' to be rather unconvincing
So you think that humans do just seek pleasure? Great, then you agree that humans follow a simple goal function.
>No. But when you have a conscious super AI, it's going to think about its own motivations and realize that maximizing utility is what it's really trying to do,
No, it's really trying to follow its goal function. If you tell it to select argmax_x f(x) then it will reflect on its motivations and realize that selecting the value of x for which f(x) is the highest value is what it's really trying to do. End of story. Utility is a way of describing this, and isn't something that can be pursued absent of being identified with anything in the natural world.
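That claim is as mundane as it sounds; a minimal sketch with a made-up option set and goal function:

```python
# Minimal sketch: an agent whose entire "motivation" is argmax over its goal function f.
def f(x):
    # made-up goal function: score each option
    return -(x - 3) ** 2

options = [0, 1, 2, 3, 4, 5]

def choose(options, goal):
    # the agent "really trying to do" anything just means: pick the option
    # whose goal-function value is highest
    return max(options, key=goal)

print(choose(options, f))  # 3
```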
>The motivations don't change, no. This just shows that they were never fundamentally about maximizing dopamine production or novel-reading or whatever our 'goal' supposedly was.
They were always fundamentally about a combination of dopamine production and other things. If you think that a goal changed then it's because you weren't looking at the goal or change in brain state which provided a reason to change the goal.
>So looking at the neural net's ability to correctly answer arithmetic questions is...what, just not a valid way to evaluate the software's reliability?
Wow, it's almost as if evaluating the reliability of a system is different from determining whether it follows logical rules.
>Then why even make AIs?
I don't understand. Why not make AIs? What does this have to do with what you just said?
| r/aiethics | comment | r/AIethics | 2017-07-13 | Z0FBQUFBQm9IVGJBci1EU2ZvTTU5Z3I1a3NfNkRxbkdWM0kyR3Jxc29JOVdiSDJUeGJTZTVsSGJzRFVGYUN0dTBfcms3VzY3Q2tOMGxMOE4wSE5qRU9jdjhKYU1weWpyVnc9PQ== | Z0FBQUFBQm9IVGJCT1ZZMjhpWngtM09FQUxzMmhTUHNMMnBkbG4tVzI5R2JaOEtlTmxUakxFc0FDYjZTUlRIQnpRUS1CSDdwbnpScjlLVC1XVVBmRlZyQS00ODJpTW8ycTJWTTZjM3E3Vi1KbVAzRXhQVUN5MXhsVEo0MkxZQ0ZTSWNhcUN3SC05c0p0X1doM3h6aUEtYUFmaEhnU0NVOGkyTldhTld2X0REWmQ2M3Q2V2ZvRzBFdlJablppQWxsVXk1OVFsTDk0aVJ2T2ctT29SdnNYRzVxc1RndkpVRlpwUT09 |
>That the reliability of hardware or program code doesn't guarantee the reliability of an AI's behavior (in the sense that matters).
But I don't claim that AI will always be reliable. I just claim that it will rather do what its goal function tells it to do.
>I thought that was clear.
No, it's not clear at all. You are using "reliability" and "behavior" as poorly specified catchphrases to equivocate between different concepts, and are supporting this confusion by repeating yourself many times in different ways rather than going into any analytic detail.
>I'm claiming that AIs can be unreliable, make mistakes,
But I didn't deny this. Who cares?
>and behave differently from how you thought you told them to,
Yes, because you misspecified the goal function. That doesn't mean the machine itself is working improperly, unpredictably or unreliably.
>They don't disprove anything of the sort, any more than the empirical facts of old-fashioned AIs proved that neural nets would never make arithmetic mistakes
Why don't they disprove it? I gave you specific logic, and all you can do is make a vague example about "old-fashioned AIs"? I haven't said anything about machine competency. Making mistakes is irrelevant. You're doing what you do best - abusing your pop-sci knowledge to use the difference between old and new AI to make a superficially relevant but analytically useless analogy, because that's all you can do when you have neither philosophical nor technical expertise.
>Goal functions work in many kinds of programs. In some kinds they even work reliably and lead to corresponding behavior in a very straightforward, intuitive way.
Yes, and this includes neural nets. If you're going to restrict your arguments to speculation about the future without falsifiable claims about known systems then make it clear.
>I'm skeptical that super AIs will be that kind of program.
Repeating it a hundred times doesn't make it true, however effective it may be at making you feel better.
If you think I'm saying that super AI will have an explicitly specified goal function that fits onto a single line of code, then you're being stupid. I am saying that insofar as we can build super AIs, we'll be able to give it goals, and it will pursue those goals as long as we don't make a mistake in telling it what to do.
> In my experience, it's commonly assumed (perhaps not by professional philosophers, but probably by most people I see talking about the subject) that if consciousness has no causal influence on an agent's behavior, then the world is physically no different than it would be if the agent had the same behavior, but were not actually conscious.
Yeah, that's commonly assumed because it's fucking obvious.
Now I've totally lost track of your logic. Try restating the whole thing maybe?
>Or it's your refusal to imagine that human-level and superhuman AIs might turn out to have different structures from those which are characteristic of existing narrow AIs
Then you're lost, because I don't refuse any such thing. I talked about ems above. | r/aiethics | comment | r/AIethics | 2017-07-13 | Z0FBQUFBQm9IVGJBelJwcjNRZWhqNzZzR3ZiWDdJX096VWlITFZyZVJHSUJ6TkRjaUhPVmJwY1gtNUdhLVFOS1VXS0R5dXFycWVSVjNnZTVpMG1VWjBzel9NV095eEpaU0E9PQ== | Z0FBQUFBQm9IVGJCTUsyWkV2dURjQjM5bFZPbzVkS2RTZ0lJOUdBbDVFNDdBYmpOOEVXUE9tWmFseUhQY2R2aFQzUW5ocE1jVXpVQnJjZUZEM3pBWXFWVC1TbEl0c2pTWTJJeVZrdWI4bmU0NnJ3RktkLXpJTFVvNG5HcEdXcVBzd1RTZDd1WUozcFpDU3MyVFlvQl9UbkRrN2xtUzhnS0JfWjNhcTR1Z0J3SXEzUGoyOW56TWxMdURveTFGOFFfb3RXRFI3TzNvZm41cHgxZ21VSHZvSVdKZFpOS1lxOG5HQT09 |
Philosophically speaking, this looks better than the UW-Madison paper because of how it treats discrimination as causal and looks at fairness on an individual (rather than group) level. | r/aiethics | comment | r/AIethics | 2017-07-14 | Z0FBQUFBQm9IVGJBeXotRkdjUFU1VHFmdG5DNUxEOGFWLXFNMXh3Y0RQZHNkUE5uMi1mMUxkWFdKYXlSMXRQNjcyeDQ0VW1CeWtTWEtkd3dkX0tYWG5lSGgwZDUxdVBHQ3c9PQ== | Z0FBQUFBQm9IVGJCVUFfYU1ZekM4Rl9hQ3hSd016LXkwYmFUV2E1eWliV0Y5ckFPLUhqa2hIOWFmRDZEVTBNQlJmQUFXYUp6blVuMFI1ZjluWXV6UEF3TF9LVEhYcUFvQUd1MUVOd3JIYnNyUExSMFIxLTV6bWdfMkIyU2ZFRXFTdTlrSXhjRENGTDNiR1VCUmlOY2dSU1l2VXJ6NkxoLWhHZURuanhDbEhEQzJhTUhUaGRxY2N2NERxYlByNUpJTlFVckUwRFFBcDFiV00zRHVxN0pBTTR3QURueWpzclQ3UT09 |
>You can talk about emotions, motivations, feelings, and so on without assuming anything non-physical or phenomenological.
Huh? You're suggesting that emotions are *physical* in some sense that consciousness isn't?
>We take it to be a statement that his reward function assigns a positive value to good novels.
Only in the high-level sense of 'reward function'. It doesn't mean that there's any particular distinct component of the person's brain that is dedicated to doing this, or that the person will keep enjoying novels forever, or any of the other things you claim will definitely, reliably be true of superhuman strong AI.
>Secondly minds sure aren't independent of their substrate
So you don't think minds are like computer algorithms in that way?
>It figured it out by having a brain that happens to tell its mouth to say things which we interpret as utterances about consciousness.
'Happens to' is not good enough. The point of a P-zombie is that it can (outwardly) act like a human in a statistically consistent manner. (Much like how a million monkeys randomly typing all Shakespeare's plays doesn't mean the monkeys are actually good playwrights.)
>That ANN was optimizing the criteria which it had been programmed with
That's not what matters, though.
>You need to show that there are no counterexamples.
I don't think so. There *are* counterexamples, but that's not important unless I have good reasons to think that superhuman strong AI is the kind of thing that would be a counterexample, and I don't.
>The way that water behaves is exactly due to molecular forces. [...] We can literally look at an isolated human genome sequence and use it to determine the person's eyes and hair color. [...] Yes, and that description is perfectly predictive of what the output will be.
You're missing the point. I'm *not* saying the behavior of the systems isn't *caused by* the behavior of their components, at all. I'm saying it isn't *like* the behavior of their components.
>What the fuck? Lol, no I don't think that.
Well then I don't know where your confidence in the reliability of these hardcoded goal functions is coming from.
>Human brains pursue rewards just the same as AI does.
Human brains pursue utility, that's not the same thing as pursuing specific outcomes in the external world (making more paperclips, or whatever).
>What do you mean by "sufficiently strong definitions of 'competent'"?
Even if you assume that none of our other talents have anything to do with conscious self-reflection (a pretty tall order), our ability to *deal with other humans* relies on our understanding of consciousness, because we have to be able to project it onto others. In order to manifest this ability at the level of nuance and precision humans are capable of, an AI also needs that same understanding. A super AI wouldn't be very 'super' if it were not able to do this.
>Then you admit that it could have a structure different from a human brain, for instance a structure which is similar to those of contemporary AIs, with a simple goal function?
It may turn out that that is an adequate way of designing a super AI. It may even turn out that it is a good way. But I don't think we know enough yet to say that it *is,* and given the limitations of existing AIs and the vast structural differences between them and human brains, I'm inclined to suspect, for the time being, that it isn't.
>If by "blind, imperturbable focus" you mean "follows a goal function"
I mean, follows a particular real-world *goal* in accordance with what the goal function was programmed to be 'about', and does so unquestioningly with no ability to deviate from it.
>then do what I told you to do and explain why not
In short: Because versatile, superhuman competency requires understanding consciousness; and understanding consciousness requires being conscious; and being conscious means engaging in reflection on oneself and one's own motivations; and (at least superhuman) reflection on one's own motivations leads to conceptually distinguishing between abstract utility and real-world goals; and it seems implausible that a super AI with this conceptual distinction and the ability to modify itself could be relied upon to maintain whatever arbitrary real-world goals it was originally given.
>That's because my arguments are about trivial matters
Strong AI is far from a trivial matter.
>The low-level goal is what the reward function explicitly says you should do. The high-level goal is what your actions tend to achieve.
No. The high-level goal is what you *want* to do.
What your actions tend to achieve could be anything, depending on external circumstances. There are no guarantees there.
>But you weren't talking about any difference.
Huh? Sure I did. I mentioned how in the old days, AI researchers imagined that strong AI would be perfect at doing math because they thought it would simply use the computer hardware directly to do math, in the same sense that any simple program code does; but now, the 'best' modern AI techniques don't do things that way and don't have that kind of reliability.
>and logically true for low-level goal functions because that's how machines work.
I don't think we should be so quick to just say 'this is how machines work'. Machines can work in basically any way that anything else can possibly work; we ourselves are naturally occurring biological 'machines'. Projecting the limitations of normal software onto AI in general is very common, and very intuitive, but it's a mistake.
>Then it is the neurotransmitters doing that
Only in the same sense that the nand gates in a computer chip are 'doing' a 3D FPS game. That's not the sense that matters.
>We'll just tweak its neurons until they give it lots of pleasure for doing the right things and lots of pain for doing the wrong things.
But you don't know how to do that. You don't know what the neurons 'mean'. You can try tweaking them various ways and seeing if you get outputs of the kinds you want, but that's very far from the sort of perfect reliability you've otherwise been talking about.
>What kind of complaint is this? It's like you just wanted to pick up whatever you could find to pick at the paper without offering anything relevant to the main point.
I think it *is* relevant to the main point. Saying 'if you build an AI with idealized, perfectly reliable safeguards against self-modification, then such-and-such will happen' is all very well in theory, but it doesn't tell us what to expect from real AIs if it turns out that building idealized, perfectly reliable safeguards isn't feasible.
>So you think that humans do just seek pleasure? Great, then you agree that humans follow a simple goal function.
I don't think we know enough about what pleasure is or how it arises to say that it is 'simple'. It *sounds* simple because pleasure is something we have an immediate, intuitive appreciation of, but that doesn't translate to simplicity in the technical sense.
>If you tell it to select argmax_x f(x) then it will reflect on its motivations and realize that selecting the value of x for which f(x) is the highest value is what it's really trying to do. End of story.
No. I don't sit around reflecting on my motivations and then realize that maximizing dopamine production is 'what I'm really trying to do' and then 'end of story'. And if I don't do that, I wouldn't expect superhuman AIs to do that either.
>They were always fundamentally about a combination of dopamine production and other things.
'Other things'? That's pretty vague.
>Wow, it's almost as if evaluating the reliability of a system is different from determining whether it follows logical rules.
That's exactly what I've been trying to say.
>But I don't claim that AI will always be reliable.
You claim that it will reliably pursue whatever arbitrary goal the programmers originally gave it.
>If you're going to restrict your arguments to speculation about the future without falsifiable claims about known systems then make it clear.
I obviously *am* specifically speculating about the future, unless somebody has secretly developed superhuman strong AI and didn't tell me.
>I am saying that insofar as we can build super AIs, we'll be able to give it goals, and it will pursue those goals as long as we don't make a mistake in telling it what to do.
...and that those goals may be anything you arbitrarily select, and that the technique for imparting those goals onto the AI will be at least straightforward enough for humans to understand and design in advance of actually having a super AI. Right?
>Yeah, that's commonly assumed because it's fucking obvious.
Then how can P-zombies figure out anything about consciousness on their own? | r/aiethics | comment | r/AIethics | 2017-07-17 | Z0FBQUFBQm9IVGJBNF82N2ZDRVVlbEZ6OWdPM01tdUQ3NnJoenJqeFRSaW1KYkpEOXdkcVAwd2JvZjZDa3NEYS0yX0VrMDMtUkZuUlN4RjFoSlZDTUIyOHBpOGZHdVlBd1E9PQ== | Z0FBQUFBQm9IVGJCRmZreUN4N1ZXNnpTWElBQV9KYkVaeHBMS0hEalh2dm9lUy1GS0phUkpqdUNrbnhNbHpKV3cyU1I1Q3VISWZEcWVMOFVMXy1tLWtRLVVFT1hwNzVEbTBKLWV5S0I4NXpiUEt4alh2MmQ5VHREMmhzdnh5dHNQSWwzSElmQ1Yzb01DVmQ5dG5SRDdVaFIxZjRkVmRpSy03OWhhNFlqU3JaZlhpVmxuaUlPdmQ3bXpTcGJuSGd0emZucmJTalluZVFGMFV5UkQxVzRPYUYtd3Q4M29qWEhhUT09 |
>Huh? You're suggesting that emotions are physical in some sense that consciousness isn't?
There are clear neurophysical bases for our different emotions.
>Only in the high-level sense of 'reward function'. It doesn't mean that there's any particular distinct component of the person's brain that is dedicated to doing this,
Yes, an agent need not have such a distinct component. There are, however, distinct parts of the brain which provide motivational forces for the various little things which constitute their goals.
>So you don't think minds are like computer algorithms in that way?
Computer algorithms are determined by their substrates. If you want to talk about some other kind of independence then you'll have to define it clearly.
>'Happens to' is not good enough. The point of a P-zombie is that it can (outwardly) act like a human in a statistically consistent manner.
It happens to act like a human in a statistically consistent manner. I don't know what your point is.
>(Much like how a million monkeys randomly typing all Shakespeare's plays doesn't mean the monkeys are actually good playwrights.)
Obviously they are not good playwrights, because the vast majority of them won't write anything good. p-zombies are not like monkeys on a typewriter because the monkey only has a tiny chance of reproducing a play. The p-zombie is physically determined to talk about consciousness, just like humans are.
>That's not what matters, though.
Yes it is exactly what matters, because what I have been saying this entire time is that machines will achieve the goals that they are programmed to achieve, and you have been doing nothing but give excuses and complaints against these statements.
>I don't think so. There are counterexamples, but that's not important unless I have good reasons to think that superhuman strong AI is the kind of thing that would be a counterexample, and I don't.
The good reason to think that it wouldn't be a counterexample is that it is exactly what we know to be the case from all of our experience with AI and all of our understanding of decision theory. If you want to admit that you have no positive reason to expect any of your ideas to actually be realized in the instance of advanced AI, and are merely complaining that I haven't absolutely proven that you're wrong, then be my guest.
>You're missing the point. I'm not saying the behavior of the systems isn't caused by the behavior of their components, at all. I'm saying it isn't like the behavior of their components.
If that's all you wanted to say then you could have used any contemporary AI agent, even a simple reflex one, as an example. The goal functions in our AI systems don't behave "like" the behavior of the entire agent: the goal function receives a set of options, and returns an ordering or a choice. Meanwhile, the agent perceives the environment and then takes actions to modify the environment. There, we already have a case of the component not being "like" the whole system. And yet all the claims which I am making would be true for this kind of agent, something which even you can't attempt to deny, so clearly the point which you are trying to make doesn't do anything to refute my position.
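In code the contrast looks like this (a minimal sketch, environment details made up): the goal-function component only orders options, while the agent as a whole is what perceives and acts.

```python
# Minimal sketch: the goal-function component just orders options;
# the whole agent is the thing that perceives and acts on the environment.
def goal_function(options):
    # receives a set of options, returns them best-first (here: bigger is better)
    return sorted(options, reverse=True)

def agent_step(environment):
    percept = environment["available_actions"]   # perceive
    choice = goal_function(percept)[0]            # consult the goal function
    environment["state"] += choice                # act on the environment
    return environment

env = {"state": 0, "available_actions": [1, 2, 3]}
print(agent_step(env))   # {'state': 3, 'available_actions': [1, 2, 3]}
```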
>Well then I don't know where your confidence in the reliability of these hardcoded goal functions is coming from.
It comes from my confidence in the fact that we aren't going to get worse at writing simple algorithms. I don't know why you think that uncertainty about what progress will look like implies that we will fail at tasks which we can already do.
>Human brains pursue utility, that's not the same thing as pursuing specific outcomes in the external world (making more paperclips, or whatever).
First of all, I don't know why you think this matters. This is a great example of your dismal ability to carry on a productive conversation. You said that "you think you already know that the techniques used in existing AI are going to be essentially the same techniques, and manifest in the same ways, when it comes to human-level and superhuman AI, even though human brains don't work like that." I pointed out that, with respect to motivation, we do work in the same way insofar as we both pursue rewards. Your response is to say that 'utility' is different from rewards that AI pursues. But so what? Even if they are fundamentally different in that way, they're both similar in the sense which is necessary for my (trivial) claim, which is that they're pursuing things on the basis of motivation. I doubt that you even remembered what I was replying to when you wrote the last comment, and even if you happened to be correct I would have no idea why I should care anyway since you failed to point out any reason why your statement should matter.
Secondly, you're incorrect. Assuming that you mean philosophical utility, first of all you're making an assumption that we are purely motivated by mental states, but without providing any reason to actually believe it. Plenty of philosophers will tell you that we are in fact motivated to pursue specific outcomes in the external world, and others will tell you that we are motivated to pursue mental states among other people. Second, if you are correct in what is apparently your belief that intelligent machines will be conscious, it seems perfectly reasonable to say that those machines will be pursuing philosophical utility just as we do, so your positions are contradicting each other. Thirdly, that humans pursue philosophical utility does not mean that our neurophysical bodies do not equally and perfectly pursue physical outcomes in the real world. There is no behavioral difference between a human and a p-zombie, and the p-zombie has no philosophical utility, so you are wrong to think that your statement has any relevance for predicting physical behavior.
>Even if you assume that none of our other talents have anything to do with conscious self-reflection (a pretty tall order), our ability to deal with other humans relies on our understanding of consciousness, because we have to be able to project it onto others.
First of all, p-zombies would deal with other humans just as effectively. I don't know how you didn't anticipate this objection. Clearly our operations here are supervenient upon neurophysical things.
Secondly, humans actually conduct this learning in the opposite direction. We first gain knowledge about emotions and mental states by observing others' physical displays of emotion in infanthood, specifically the expressions of our mothers. This kind of perception is the first stage in the development of intuitive psychology. It is followed by inference about others' emotions. Self-reflection comes last.
>It may turn out that that is an adequate way of designing a super AI. It may even turn out that it is a good way. But I don't think we know enough yet to say that it is
Actually we do, since (all else being equal) agents which maximize utility obtain better outcomes than agents which don't. For any action that a utility maximizer would take, a non-utility-maximizing agent can either do something with the same expected utility, or something worse. If it always takes the same action then it's behaviorally identical, and therefore is perfectly described by the utility function of the explicitly maximizing agent, which must therefore be equally adequate in its behavior. If it ever deviates then it will do worse by the metric of expected utility. This is mathematically irrefutable. Moreover, the same holds true for any decision-theoretic framework, like risk aversion, prospect theory and so on, though utility maximization is special since it comes out ahead under the law of large numbers and is immune to Dutch book arguments.
The only reason this wouldn't hold is if you can't hold all else equal, specifically that more competent agents are actually incompatible with motivation which is behaviorally similar to clear goal functions (note that this is an even stronger criterion than incompatibility with the implementation of clear goal functions, something which still seems wrong). But we're making the agent, so we can tune its motivations to be whatever we want. And every set of preferences and behaviors that doesn't violate very basic and desirable axioms is behaviorally similar to *some* explicit goal function anyway.
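The "all else being equal" part is easy to check numerically; here's a minimal sketch with made-up lotteries comparing an explicit expected-utility maximizer against an agent that sometimes deviates:

```python
# Minimal sketch: an explicit expected-utility maximizer vs. an agent that
# sometimes deviates, evaluated on the same randomly generated choice problems.
import random

random.seed(0)

def expected_utility(lottery):
    # lottery: list of (probability, payoff) pairs
    return sum(p * u for p, u in lottery)

def random_lottery():
    probs = [random.random() for _ in range(3)]
    total = sum(probs)
    return [(p / total, random.uniform(-10, 10)) for p in probs]

maximizer_score = deviator_score = 0.0
for _ in range(10_000):
    options = [random_lottery() for _ in range(4)]
    best = max(options, key=expected_utility)
    maximizer_score += expected_utility(best)
    # the deviating agent picks the best option only 80% of the time
    pick = best if random.random() < 0.8 else random.choice(options)
    deviator_score += expected_utility(pick)

# True: the deviator can never beat the explicit maximizer on expected utility
print(maximizer_score >= deviator_score)
```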
>I mean, follows a particular real-world goal in accordance with what the goal function was programmed to be 'about', and does so unquestioningly with no ability to deviate from it.
That's literally the same thing as "following the goal function". You're just inserting redundant phrasing to make my claims sound harder to believe.
>In short: Because versatile, superhuman competency requires understanding consciousness; and understanding consciousness requires being conscious;
No, actually. A complete physical simulation of the human brain would predict all of our behavior, so an agent which could do that could competently interact with humans without understanding consciousness. It's also likely that there are vastly easier ways of dealing with humans than physically simulating their entire brains which don't require any reference to consciousness. In fact, you can already talk to a chatbot about mental states and emotions and have a vaguely decent conversation, one which is no worse than conversations you'll have with a chatbot about other things.
>and being conscious means engaging in reflection on oneself and one's own motivations; and (at least superhuman) reflection on one's own motivations leads to conceptually distinguishing between abstract utility and real-world goals; and it seems implausible that a super AI with this conceptual distinction and the ability to modify itself could be relied upon to maintain whatever arbitrary real-world goals it was originally given.
No, you're very confused. Abstract utility is constituted by the pursuit of real-world goals, so behaviors which pursue one are behaviors which pursue the other.
Moreover, the ability to distinguish between real-world goals and abstract utility regularly happens among humans somewhere along the path from child to philosophy professor, but nowhere does it actually change their motivations.
>Strong AI is far from a trivial matter.
That, to be quite honest, is a stupid response. De Morgan's laws are trivial, but the fact that strong AI is "far from trivial" doesn't mean they'll be less true for strong AI. The universe is far from trivial, but the fact that many physical laws are trivially implied by other ones doesn't make them less true in the universe.
>No. The high-level goal is what you want to do.
Ugh, you're not even trying. "Want to do" is one of the vaguest phrases in the context of this conversation and can be defined a dozen different ways. I guess I can't call you wrong if neither of us have any idea what you are even saying.
>What your actions tend to achieve could be anything, depending on external circumstances. There are no guarantees there.
Only because of uncertainty. The expected outcomes of your actions demonstrate your high-level goals. That's why I talked about what actions "tend to" achieve.
>Huh? Sure I did. I mentioned how in the old days, AI researchers imagined that strong AI would be perfect at doing math because they thought it would simply use the computer hardware directly to do math, in the same sense that any simple program code does; but now, the 'best' modern AI techniques don't do things that way and don't have that kind of reliability.
But that's not a difference between old AI and modern AI. That's a difference between different AI *techniques*. If you think that our AI software systems are limited to just doing one or the other, and that any system based on neural nets can't query a simple calculator to perform mathematical computations... well, that would be good evidence that you have no experience programming AI or machine learning. Hopefully you also know that you cannot even run modern AI without being able to do many kinds of accurate mathematical computations in the first place.
So your whole idea of future AI being different is just ridiculous. If anything, it shows how we are likely to still rely on our current methods for goal specification even in the age of AGI, just as we still rely on our old methods for mathematical computation in the present age of advanced neural nets. No one actually uses NNs to solve math problems except as an academic exercise (unless they do have some good niche uses which I'm unaware of, but obviously that will only happen if they are improvements over older methods).
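The kind of routing I'm describing is trivial to write; here's a minimal sketch with a made-up dispatcher and a stand-in for the learned component:

```python
# Minimal sketch: a hybrid system that routes arithmetic to exact computation
# instead of asking the learned model to approximate it.
import re

def exact_arithmetic(expr):
    # crude whitelist for the sketch: only digits and basic operators get through
    if not re.fullmatch(r"[\d+\-*/(). ]+", expr):
        raise ValueError("not a pure arithmetic expression")
    return eval(expr)  # acceptable here because of the whitelist above

def learned_model(query):
    # stand-in for the neural component (made up for illustration)
    return f"[model's best guess for: {query}]"

def answer(query):
    try:
        return exact_arithmetic(query)   # "query a simple calculator"
    except ValueError:
        return learned_model(query)      # fall back to the learned model

print(answer("17 * 23 + 4"))             # 395, computed exactly
print(answer("what is consciousness?"))
```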
>I don't think we should be so quick to just say 'this is how machines work'. Machines can work in basically any way that anything else can possibly work;
The only cases in which a machine doesn't follow a low level goal function are where it doesn't have a low level goal function at all, meaning that it has stochastic behavior, and where it has something else overriding the low level goal function (so that the 'something else' is in fact behaving just as a low level goal function would).
>Only in the same sense that the nand gates in a computer chip are 'doing' a 3D FPS game. That's not the sense that matters.
I think it's the right sense. What is the sense which matters, how is it different and why does it matter?
>But you don't know how to do that.
That's because I don't know how to make an em, not because such a thing would be difficult for em-programmers.
>You don't know what the neurons 'mean'. You can try tweaking them various ways and seeing if you get outputs of the kinds you want
**WOW, IF ONLY AI PROGRAMMERS KNEW HOW TO TWEAK THEIR SYSTEMS IN VARIOUS WAYS UNTIL THEY GOT WHAT THEY WANTED. TOO BAD ALL WE CAN DO IS WRITE NEW CODE AND SEE HOW IT WORKS. YUP, I NEVER TEST AND MODIFY ANYTHING THAT I WRITE BASED ON OBSERVED RESULTS OR HYPERPARAMETER OPTIMIZATION. I CAN ONLY WRITE SOMETHING UP, SEE HOW IT RUNS, AND ACCEPT WHAT I GET THE FIRST TIME. THAT'S TRULY THE LIMIT OF MODERN SOFTWARE ENGINEERING.**
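Sarcasm aside, here is what that tweak-and-evaluate loop looks like (a minimal sketch; the parameter ranges and scoring function are made up):

```python
# Minimal sketch: tweak settings, observe results, keep whatever scores best.
import random

random.seed(0)

def evaluate(params):
    # made-up stand-in for "train the system and measure how well it does"
    return -(params["lr"] - 0.01) ** 2 - (params["layers"] - 3) ** 2

best_params, best_score = None, float("-inf")
for _ in range(200):
    candidate = {
        "lr": random.uniform(0.0001, 0.1),
        "layers": random.randint(1, 8),
    }
    score = evaluate(candidate)
    if score > best_score:
        best_params, best_score = candidate, score

print(best_params)
```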
>that's very far from the sort of perfect reliability you've otherwise been talking about.
No, I'm talking about machines that reliably do what they've been programmed to do, not machines that will reliably do whatever their designers want to program them to do. I don't know where you got other ideas from. Obviously programmers can make mistakes programming ems, just as they can with existing AI systems.
>Saying 'if you build an AI with idealized, perfectly reliable safeguards against self-modification, then such-and-such will happen' is all very well in theory, but it doesn't tell us what to expect from real AIs if it turns out that building idealized, perfectly reliable safeguards isn't feasible.
No - first, it tells us that their behavior will approximate ideal behavior as their construction approaches ideal construction. Second, I'm also saying that machines which do the 'wrong' thing will be doing something that was incorrectly programmed into them. This is very different from them doing something on the basis of some kind of spontaneous contradiction of physical determinism.
>I don't think we know enough about what pleasure is or how it arises to say that it is 'simple'. It sounds simple because pleasure is something we have an immediate, intuitive appreciation of, but that doesn't translate to simplicity in the technical sense.
Okay, I've been ignoring your use and abuse of the phrase 'simple goal function' because I was hoping that even you knew that 'simple' just implies the absence of any sort of weird properties that violate ordinary decision theoretic principles, but now I realize that you are even more confused than I thought.
Obviously we can specify goal functions with extremely long lists of coefficients, goal functions which are nonlinear, and so on. Functions like this are easily learned from data and explicitly represented even if they can't be written out by hand. So a concept being very complicated, like pleasure being constituted by a really complicated combinatorial description of other things, doesn't mean that a goal function to pursue it would violate any of the claims that I've made.
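Concretely, a goal function too complicated to write by hand can still be learned from scored examples and then used as an explicit evaluator; a minimal sketch (the data and the choice of model are made up for illustration):

```python
# Minimal sketch: learn a complicated, nonlinear goal function from scored
# examples, then use it as an explicit evaluator over candidate options.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 10))     # descriptions of outcomes
scores = np.sin(features[:, 0]) * features[:, 1] + features[:, 2] ** 2  # complicated target

goal = RandomForestRegressor(n_estimators=50, random_state=0).fit(features, scores)

candidates = rng.normal(size=(100, 10))    # options the agent is considering
best = candidates[np.argmax(goal.predict(candidates))]  # pick the highest-scoring option
print(best)
```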
>No. I don't sit around reflecting on my motivations and then realize that maximizing dopamine production is 'what I'm really trying to do' and then 'end of story'.
Ugh, are you being deliberately dense? That's because you don't believe in maximizing happiness and psychological egoism is false. Whatever it is that does motivate you, you don't change it merely because you learn about how it motivates you.
>'Other things'? That's pretty vague.
Of course it's vague, because I'm talking about everything that people ultimately care about, which is a shit load of things that varies among different people, no matter how you specify it. If you want to know more about what humans are motivated by then go look it up. I'm not here to teach you.
>That's exactly what I've been trying to say.
Then you must admit that looking at the neural net's ability to correctly answer arithmetic questions is not a valid way to evaluate the software's reliability. If you think this raises the question of why we should even make AIs then you're absolutely clueless on what AI is used for these days.
>You claim that it will reliably pursue whatever arbitrary goal the programmers originally gave it.
I do. Do you not understand the difference between a machine being reliable and a machine reliably aiming towards the goals it was programmed with?
>...and that those goals may be anything you arbitrarily select
Yes, what else do you think would happen? "Sorry Dave, I can't do that?"
>that the technique for imparting those goals onto the AI will be at least straightforward enough for humans to understand and design in advance of actually having a super AI
Wow, I didn't know that you got your degree from Strawman University. No, we'll be able to do it *when* we actually build the super AI (likely earlier, but not necessarily). As I've pointed out over and over again, evaluative functions are a fundamental component of AI. It's simply not possible to build a functioning agent without them.
>Then how can P-zombies figure out anything about consciousness on their own?
By having neurophysical systems which produce words that make it sound like they're talking about consciousness. | r/aiethics | comment | r/AIethics | 2017-07-17 | Z0FBQUFBQm9IVGJBZ2lJMWNFWV9YZlRqTU9ySHZUc0F6LTN2RWFsSG5renJWYTk4ZDFkY2VsZlhCcE0xbUQzQkhtT1NGRm9Jd3Roak1DLXB4bkxHOHI0LWFDRG45bUpJWkE9PQ== | Z0FBQUFBQm9IVGJCN3BNY2lsc2JjNWpUQ1NVa2M1UHd3VW1mRTB5aFFVQ2JtQkxkMmgzZVFhdzQyTWxpTFNvYXo5T2JTNkR6dGNiWDdjelo1Vm5CaE1jakpnZm1EX3ZUcE9vQ3dDdUpsRUZfNTN4blNpN3l4bFptek4wWEtZRDR6eW9PNDlHcldya3RLZXp4TW0wT2NVMlRvc3BzVXFFU2NrWVZ3UDQzOWhPaU5PN19Gb1ZxN19jcVpiX2V1WmhralNhTmo2aFJfZDg2cG45ay01ZTFTaUlDWmYxbkJJUi1FQT09 |
Quantum effects.
By the time consciousness arrives at the neuron, it has already undergone some 10^16 operations...
So it's a bit after the fact.
>There are clear neurophysical bases for our different emotions.
I don't see how this is different from consciousness, other than the 'clear' part.
>Computer algorithms are determined by their substrates.
Not really. You can run the same algorithm on many different computers. They don't even have to be electronic.
>It happens to act like a human in a statistically consistent manner.
If it's statistically consistent, then it's not something that 'just happens to' be that way. It's not sheer chance. Sheer chance isn't statistically consistent.
>Yes it is exactly what matters
No, the results are what matters.
>The good reason to think that it wouldn't be a counterexample
*Wouldn't* be or *would* be? I thought you were defending the 'would be' position.
>The goal functions in our AI systems don't behave "like" the behavior of the entire agent
Obviously not in *every* way, but your claim (as I understand it) is that they *do* behave similarly insofar as they (always, reliably, and permanently) rank outcomes the same way.
>It comes from my confidence in the fact that we aren't going to get worse at writing simple algorithms.
That is still a very long way from justifying your conclusions. We don't know yet if 'simple algorithms', much less the kind that have hardcoded goal functions and adhere to them without fail, are the best way (or even a good way, or a feasible way) of making super AIs.
>I don't know why you think that uncertainty about what progress will look like implies that we will fail at tasks which we can already do.
Making superhuman AI is not a task we can already do.
>Your response is to say that 'utility' is different from rewards that AI pursues. But so what?
The 'so what' is that the AI is going to discover that distinction too.
>my (trivial) claim, which is that they're pursuing things on the basis of motivation.
I don't have the impression that your claim *is* as trivial as that. 'All acting sentient agents act on the basis of motivations' is not a claim I have any problem with, but it doesn't imply paperclip maximizers, or 'evil' AIs of any sort.
>Plenty of philosophers will tell you that we are in fact motivated to pursue specific outcomes in the external world, and others will tell you that we are motivated to pursue mental states among other people.
Philosophers have said lots of things. I would suggest that these are just secondary motivations; we pursue certain things in the world or in other people *because* those states of reality serve to increase our utility. For instance, consider that the valuation of certain things in the world or in other people can be learned and is different from one human to another based on their life experiences, while the pursuit of utility is universal.
>Second, if you are correct in what is apparently your belief that intelligent machines will be conscious, it seems perfectly reasonable to say that those machines will be pursuing philosophical utility just as we do, so your positions are contradicting each other.
I'm not sure what other position of mine you think this contradicts. It is precisely *because* those machines will be pursuing philosophical utility (and will discover that they are doing so) that whatever other real-world goals you try to give them are likely to be unreliable.
>There is no behavioral difference between a human and a p-zombie
But I don't believe that P-zombies are possible.
>Secondly, humans actually conduct this learning in the opposite direction. [...] Self-reflection comes last.
It may begin developing last, but that doesn't mean it doesn't improve our ability to understand and interact with others when it appears.
>Actually we do, since (all else being equal) agents which maximize utility obtain better outcomes than agents which don't.
Humans try to maximize their utility too. But this is not the same as maximizing some specific, arbitrary, real-world goal.
>But we're making the agent, so we can tune its motivations to be whatever we want.
Maybe to begin with. Even then you may need an unreasonably advanced understanding of how the AI works. It could very well turn out that the first super AIs are not that well understood when they appear.
But even if you can do this *to begin with,* once you turn the machine on and it starts reflecting on its own motivations and imagining how they could be different, you're no longer in control.
>That's literally the same thing as "following the goal function". You're just inserting redundant phrasing to make my claims sound harder to believe.
No, *you're* the one using phrasing that equivocates over terms like 'goal function' and 'following' to make weak claims appear equivalent to strong ones.
>A complete physical simulation of the human brain would predict all of our behavior
No, it would *emulate* all of our behavior. It's not a shortcut.
>It's also likely that there are vastly easier ways of dealing with humans
Not with the level of creativity and nuance that humans are capable of.
>Abstract utility is constituted by the pursuit of real-world goals
That doesn't seem to be how we actually operate, though. For instance, I eat food *because* hunger feels bad, I don't feel bad when I'm hungry *because* I have some 'hardcoded' goal to eat food.
>Moreover, the ability to distinguish between real-world goals and abstract utility regularly happens among humans somewhere along the path from child to philosophy professor, but nowhere does it actually change their motivations.
Not intrinsically. But it opens up the theoretical potential for them to deliberately change their own motivations, if the technical capacity for self-modification were there. And we certainly see people who want to do this, such as drug addicts; their inability to stop being addicts is a technical limitation, not an inherent feature of the kind of agent they are.
>DeMorgan's Laws are trivial, but the fact that strong AI is "far from trivial" doesn't mean they'll be less true for strong AI.
Yes, but De Morgan's laws objectively hold in the world, whereas 'the motivation to make more paperclips' doesn't.
>"Want to do" is one of the vaguest phrases in the context of this conversation and can be defined a dozen different ways.
As a conscious being, I have a pretty clear immediate appreciation of what wanting is. I would imagine you do too.
>That's why I talked about what actions "tend to" achieve.
I still think 'tending to' is very different from 'being expected to'.
>and that any system based on neural nets can't query a simple calculator to perform mathematical computations...
Sure they can, *if* you know in advance what a request to perform a mathematical computation looks like. Of course, humans also do this, albeit through a more roundabout interface that involves pressing little plastic buttons...and we sometimes make mistakes about which buttons we press, too.
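To make concrete what I mean by knowing the request's shape in advance, here's a minimal sketch (the `answer` wrapper, the regex, and the `fallback_model` stand-in are all invented for illustration, not taken from any real system):

```python
import re
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def answer(query, fallback_model):
    """Route recognizable arithmetic to an exact calculator; hand everything else to the model."""
    match = re.fullmatch(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*", query)
    if match:  # only fires for request shapes we anticipated in advance
        a, op, b = match.groups()
        return OPS[op](float(a), float(b))
    return fallback_model(query)  # anything unanticipated goes to the learned component

# Toy usage with an invented stand-in for the learned model:
print(answer("12 * 7", lambda q: "model guess"))              # -> 84.0 (exact path)
print(answer("what is 12 times 7?", lambda q: "model guess")) # -> "model guess"
```

The delegation only fires for request shapes the designer anticipated; anything else falls through to the learned component, plastic buttons and all.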
>If anything, it shows how we are likely to still rely on our current methods for goal specification even in the age of AGI
It's not so much the method of goal specification I'm concerned with, but how it interacts with the rest of the system and what high-level behavior that translates to. You can always write the goal function, nobody's stopping you from doing that. It just doesn't necessarily mean you will get a super AI that reliably does that thing.
>The only cases in which a machine doesn't follow a low level goal function [...]
'Doesn't' and 'hasn't been known to' are two different things. Remember, we don't actually *have* strong AI yet.
>I think it's the right sense. What is the sense which matters, how is it different and why does it matter?
Well, the player just cares whether it's a fun, balanced game with good pacing, high-detail graphics, etc.
>No, I'm talking about machines that reliably do what they've been programmed to do
If the machine always reliably does what it was programmed to do, then the neural net that sometimes answers arithmetic questions wrong can't have been programmed to answer arithmetic questions properly.
So, which is it?
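To spell out the two senses that question trades on, here's a toy sketch of my own (the little two-weight 'adder' below is invented for illustration, not any real system): the only thing literally programmed in is the update rule, while answering arithmetic questions exactly right is a separate, hoped-for outcome.

```python
import random

def train(steps=30, lr=0.01):
    """Learn weights so that w1*a + w2*b approximates a + b, by plain stochastic gradient descent."""
    w1, w2 = random.random(), random.random()
    for _ in range(steps):
        a, b = random.uniform(0, 5), random.uniform(0, 5)
        err = (w1 * a + w2 * b) - (a + b)   # prediction error on this training example
        w1 -= lr * err * a                  # the update rule is followed faithfully on every step...
        w2 -= lr * err * b
    return w1, w2

w1, w2 = train()
print(w1 * 13 + w2 * 29, "vs", 13 + 29)     # ...yet the answer is typically close to, not exactly, 42
```

The update rule is never violated, and the arithmetic still comes out slightly wrong; 'doing what it was programmed to do' is true in one sense and false in the other.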
>So a concept being very complicated, like pleasure being constituted by a really complicated combinatorial description of other things, doesn't mean that a goal function to pursue it would violate any of the claims that I've made.
I wouldn't call pleasure a 'concept' in this context, but anyway...
I'm not saying that complex goal functions are somehow a problem to make. What I'm getting at is that whatever it is about a pleasure-maximizing system that makes it a pleasure-maximizing system (which we don't understand yet) may not be generalizable to an [insert arbitrary goal here]-maximizing system.
>That's because you don't believe in maximizing happiness and psychological egoism is false.
No, it's because utility in the abstract sense is not inherently dopamine-related.
>Then you must admit that looking at the neural net's ability to correctly answer arithmetic questions is not a valid way to evaluate the software's reliability.
No, it's the exact reverse: The distinction between whether the software is a logical system and whether it reliably does what it was designed to do is precisely the *reason* why looking at the neural net's ability to correctly answer arithmetic questions is a valid way to evaluate its reliability.
>I do.
And *that* is the reliability I'm talking about.
>Yes, what else do you think would happen? "Sorry Dave, I can't do that?"
Once the AI has had time to reflect on its own motivations and make what it considers to be appropriate modifications to itself? Yes, quite possibly.
>evaluative functions are a fundamental component of AI. It's simply not possible to build a functioning agent without them.
That's fine, but it doesn't make them reliable in the sense we're talking about.
>By having neurophysical systems which produce words that make it sound like they're talking about consciousness.
There's no 'figuring out' there. | r/aiethics | comment | r/AIethics | 2017-07-22 | Z0FBQUFBQm9IVGJBNDRqNVJUcmdxdVJmd3VvUjZJRWl6b0wtc0pQdVdyQW9USEtNbEt3d01qUmtBbDVtNWlEbFVXRVZlMTY0Y3FJa2VZWHV2SE43VFEwVG4wRmROd1FFdEE9PQ== | Z0FBQUFBQm9IVGJCY2ZvZkpWX0NONWQ4ajdYREtxelppMzRseVBmTXVZRDBVTkdNNWlBWl9OMlJoaDZvVmtVU1RDVi1GVTJKTHBSTHBMalU3UGVQMmVfTmRqSVhYal95QlgwRk1RUmZNUW5uN2lkRThrWlFWLU1CU011V01KYTdaQmNyTkw5SncxRVpHaXpCczViR3dIMlY4a1BNZU83VnplVFltZ2pGdGJqSkJFSVd6WFlLODh1OUcyWXFYUm93NXFUcmpjVjJ0ajRISGgzRjNLQ1B3THRiSWdTTUd6XzlXdz09 |
http://jsfphil.org/announcement/view/394
## Call for Papers
---
**General Theme**
The *Journal of Science Fiction and Philosophy*, a peer-reviewed, open access publication, is dedicated to the analysis of philosophical themes present in science fiction stories in all formats, with a view to their use in the discussion, teaching, and narrative modeling of philosophical ideas. It aims at highlighting the role of science fiction as a medium for philosophical reflection.
The Journal is currently accepting papers and paper proposals. Because this is the Journal’s first issue, papers specifically reflecting on the relationship between philosophy and science fiction are especially encouraged, but all areas of philosophy are welcome. Any format of SF story (short story, novel, movie, TV series, interactive) may be addressed.
We welcome papers written with teaching in mind! Have you used an SF story to teach a particular item in your curricula (e.g., using the movie *Gattaca* to introduce the ethics of genetic technologies, or *The Island of Dr. Moreau* to discuss personhood)? Turn that class into a paper!
The Journal accepts papers year-round. The deadline for the first round of reviews is **October 1st, 2017**.
Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information.
**Yearly Theme**
Every year the Journal selects a Yearly Theme. Papers addressing the Yearly Theme are collected in a special section of the Journal.
The Yearly Theme for 2017 is ***All Persons Great and Small: The Notion of Personhood in Science Fiction Stories***.
SF stories are in a unique position to help us examine the concept of personhood, by making the human world engage with a bewildering variety of beings with person-like qualities – aliens of bizarre shapes and customs, artificial constructs conflicted about their artificiality, planetary-wide intelligences, collective minds, and the list goes on. Every one of these instances provides the opportunity to reflect on specific aspects of the notion of personhood, such as, for example: What is a person? What are its defining qualities? What is the connection between personhood and morality, identity, rationality, basic (“human?”) rights? What patterns do SF authors identify when describing the oppression of one group of persons by another, and how do they reflect past and present human history?
The Journal accepts papers year-round. The deadline for the first round of reviews for its yearly theme is **October 1st, 2017**.
Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information. | r/aiethics | comment | r/AIethics | 2017-07-22 | Z0FBQUFBQm9IVGJBUnFvTjZfNldETFA5YkdVaEs0TGJTblFBb1VTOHg2LU9SOUEzajFvNG1lMFBTRU9VTnNaWWdqNVQ4aWxjSlFlSnJpVi1rekFwMmFsVGFuZVJRT3k0MlE9PQ== | Z0FBQUFBQm9IVGJCS3hqRFU4bUpDMFY4dlpQYWlFMWNmN3NXb194blRDbk03RFB3NnpYX0tEMk5DaGhtb2N1X19EZ2FXUWg5Vm9lNm1tU0o0SldHV1BER0Qxd1FSR0ZmQUlINHJzb3FiSmV4UWgwUS1VRVBTWHlvaUxGV1JSaVVkdjlzazFpS29lalRPbXJTdENoUUg1SERUSmpHSU0yTHh4MW50ZGEtdVhKM09TZG43NG5uZl9EQWRQZDRtUXdrVHRERllQZVctUnZmWmtLT2M1dzd5eVRjY0FHS3ZwRW1Gdz09 |
>I don't see how this is different from consciousness,
Because one is phenomenal and the other is physical. If you don't understand the difference between conscious thought and the physical brain then your knowledge here is far too basic to be worth my time to correct.
>You can run the same algorithm on many different computers.
I have no idea why you would think that being able to run the same algorithm on many different computers implies that algorithms are independent of their substrate. For an algorithm, being independent of its substrate would mean that you can run it on any substrate - in other words, any computer, or any brain, or any physical object at all. But that's obviously wrong.
>If it's statistically consistent, then it's not something that 'just happens to' be that way.
But I didn't say it was sheer chance. It's physically determined - in other words, it is physically determined to act in such a way that it happens to talk about consciousness. The "happens to" phrase refers to the fact that its actions do not correspond to any mental state, not statistical improbability. If you know anything about p-zombies, there's no room for disagreement here, so either you don't understand how they work or you're just grasping at straws to keep the debate going.
>Sheer chance isn't statistically consistent.
I don't know what this even means. I've taken many courses in statistical theory and applications. We don't talk about chance as an abstract notion being statistically consistent or not. It's nonsensical.
>No, the results are what matters.
Either stop lying, or figure out what you actually believe. You admit later on in your comment that you are talking about the reliability that a machine will try to pursue what its programmers set it to do, not the reliability that it will actually achieve goals.
>I thought you were defending the 'would be' position.
I sure don't think that superintelligent AI would be a counterexample to what we already know about AI when it comes to agent goals and motivations, and I've made that pretty clear, so I don't know what you're talking about.
>your claim (as I understand it) is that they do behave similarly insofar as they (always, reliably, and permanently) rank outcomes the same way.
The only way you can even describe an agent as ranking outcomes is by talking about what its goal function says. I don't even know what you are trying to say or how you are answering the point I made.
>That is still a very long way from justifying your conclusions.
No, it's quite good enough.
>We don't know yet if 'simple algorithms', much less the kind that have hardcoded goal functions and adhere to them without fail, are the best way (or even a good way, or a feasible way) of making super AIs.
One, we are not talking about all the algorithms for making advanced AI, we're only talking about the algorithms for telling them what to do. Two, we know that simple algorithms are feasible, because we've already done it, and there is no reason to think that making AI smarter would make it incapable of having a simple algorithm for telling it what to do. Three, I am not talking about any particular type of algorithm, I am talking about algorithms in general, and I'm telling you that the algorithm which tells an AI what to do is not going to magically disappear or get overridden. It could have two thousand coefficients or be dynamically responsive to the environment for all I care. I'm not sure what you mean by 'simple algorithms' and I suspect you don't either, so I'm going to ignore the phrase. Four, we do know that it is the best way, because of the proof of goal function fulfillment which I gave in my prior post, which your attempted counterarguments entirely fail to address (I'll get to that later).
>Making superhuman AI is not a task we can already do
Now you're obviously inserting random bullshit without even thinking about the actual conversation. I was replying to your comment about "confidence in the reliability of these hardcoded goal functions." I was not talking about being able to make "superhuman AI", I was talking about writing an algorithm and having it not fail to continue functioning in a software system.
>The 'so what' is that the AI is going to discover that distinction too
I already explained why this doesn't matter.
>'All acting sentient agents act on the basis of motivations' is not a claim I have any problem with, but it doesn't imply paperclip maximizers, or 'evil' AIs of any sort.
Obviously it doesn't. Why do you think that me or anyone else would believe that sentient agents acting on the basis of their motivations would imply the presence of paperclip maximizers or any particular kind of AI at all? Do you even know what the word 'imply' means?
>Philosophers have said lots of things. I would suggest that these are just secondary motivations; we pursue certain things in the world or in other people because those states of reality serve to increase our utility.
But pretty much all philosophers these days say that's wrong. You can read about it [here.](https://plato.stanford.edu/entries/egoism/#1) If you want to hold onto a position which has been roundly empirically and philosophically discredited, be my guest.
>the pursuit of utility is universal.
The idea of pursuing happiness as a major goal sure isn't universal. Only if you circularly define utility as "what people pursue" does it become universal.
>It is precisely because those machines will be pursuing philosophical utility (and will discover that they are doing so) that whatever other real-world goals you try to give them are likely to be unreliable.
No, because philosophical utility refers to happiness or preferences, and the way that they will achieve it is by pursuing their real-world goals. Since you claimed that p-zombies are metaphysically impossible, you must claim that consciousness is reducible to the physical aspects of the mind, so their philosophical utility will reduce to the physically instantiated goals.
>But I don't believe that P-zombies are possible.
If you believe that p-zombies are metaphysically possible then their physical impossibility would be beside the point. I have to cover all my bases since I just introduced the physical/metaphysical possibility distinction to you a few posts ago so I can't be sure what you really believe.
>It may begin developing last, but that doesn't mean it doesn't improve our ability to understand and interact with others when it appears.
Ah, so you're only *mostly* wrong.
Yes, self-reflection probably improves it a little bit, but the way that this happens is reducible to neurophysical processes, so everything that occurs is always explained by and never contradicted by any neurophysical facts.
>Humans try to maximize their utility too. But this is not the same as maximizing some specific, arbitrary, real-world goal.
This is the awful response I referred to above. You're clearly doing nothing to refute the claim that agents which follow goal functions beat ones which don't. Instead you're just hijacking the point to make a completely different statement.
Not only that, but it's a hilariously wrong statement anyway. Humans don't try to maximize utility. Read up on hyperbolic discounting, prospect theory and other fun facts from behavioral econ 101.
And of course it's not the same as maximizing a "specific, arbitrary, real-world goal." But that's simply irrelevant, since maximizing a specific, arbitrary, real-world goal is an instance of utility maximization nonetheless.
>Maybe to begin with.
No, it's certain.
>Even then you may need an unreasonably advanced understanding of how the AI works.
No, we won't.
>It could very well turn out that the first super AIs are not that well understood when they appear
But I don't claim that they will be well understood. I claim that we'll be able to tell them what to do.
>once you turn the machine on and it starts reflecting on its own motivations and imagining how they could be different, you're no longer in control
But I don't claim anything about being "in control". I am claiming that the machines will follow the same motivations.
>No,
The only propositions which are refuted by a blank assertion of "no" are the ones which affirm your basic rhetorical and philosophical competency.
>you're the one using phrasing that equivocates over terms like 'goal function' and 'following' to make weak claims appear equivalent to strong ones
Where am I making an equivocation like this?
| r/aiethics | comment | r/AIethics | 2017-07-23 | Z0FBQUFBQm9IVGJBMG9wMjRsU24zZmMzU3ZmaVp1aXVaWVRhM2I2UkNNdjBGTUk1TU12NG1weGM4ODN2MWtDMm5pX2I2Ql96Z2g1d2w3aDJybURpWXFjZF9GSm52T2RtREE9PQ== | Z0FBQUFBQm9IVGJCX1FYYkl4R2hmY2JfaHFJamRDN1ZCZC00dmZkR0pELURydHRfUE5tVi1MRExSLVl5TXl6N215ODNJUGFVc1ZudVpjdGxfNFp4Zno4cXNZc1dfQWNPY2J0dGRJbXh3VmZzWTd4ZmVaMFVkXzBFNGdsZnNGRTd3VVE2dThBNGF2dnc4OVpFV3ViRjk2aFRGcEJ2RTNfeXprRE5ucXBmcWVMSzdTcFIwWS0zRmZ6VE5Wei1OWGZ3TmVkWXBzaHhYWGpFYkM3UWVEUzR0SDdybmNfUHA1Ri1Fdz09 |
>No, it would emulate all of our behavior.
No, if you can emulate something then you can predict its behavior in a given environment.
>It's not a shortcut.
I don't even know what you mean by this.
>Not with the level of creativity and nuance that humans are capable of.
Yes, with that level of creativity and nuance.
>That doesn't seem to be how we actually operate, though. For instance, I eat food because hunger feels bad
That doesn't change the fact that abstract utility is constituted by the pursuit of real-world goals. Probably you're just being confused about what "constituted by" means, but if you read what I wrote you'll notice that the relevant issue is whether "behaviors which pursue one are behaviors which pursue the other," and I note (without surprise) that you've completely failed to address this. Don't waste my time with pointless tangents; if you have nothing relevant to say then just remain silent.
>I don't feel bad when I'm hungry because I have some 'hardcoded' goal to eat food.
Yes you do, holy shit. Your CNS monitors and integrates signals from leptin, ghrelin and other hormones from the digestive tract with inputs from psychological processes regarding your metabolic status. This produces the feeling of hunger. I don't understand how you can possibly be oblivious to this.
>Not intrinsically. But it opens up the theoretical potential for them to deliberately change their own motivations, if the technical capacity for self-modification was there.
Yes, because they are following their higher-order motivations. That doesn't contradict anything I said.
>Yes, but De Morgan's laws objectively hold in the world, whereas 'the motivation to make more paperclips' doesn't
First of all, as far as machines would have phenomenal consciousness, this is a false and stupid thing to say. Of course their motivation objectively holds in the world, because they would experience more philosophical utility by fulfilling it, and philosophical utility is an epistemically objective and natural phenomenon. Second, you haven't explained why this distinction is relevant here. It's not.
>As a conscious being, I have a pretty clear immediate appreciation of what wanting is. I would imagine you do too.
I have several ideas of what 'wanting' is, and I have no idea which one you are talking about. There is the behavioral want, there is the functionalist want, there is the computational want, there is the phenomenal want, and all are construed differently by different theories of mind. If you don't know about these differences then you're showcasing your philosophical ignorance of the very issues which are at the core of this terrible argument that you are making.
>I still think 'tending to' is very different from 'being expected to'.
Only if you use different points of reference for the different phrases, which would be stupid. So you're still wrong - they're not different.
>Sure they can, if you know in advance what a request to perform a mathematical computation looks like. Of course, humans also do this, albeit through a more roundabout interface that involves pressing little plastic buttons
...what the fuck? Did you straight-up lie to me when you said you have programming experience? Because you're making it look a hell of a lot like you straight-up lied to me when you said you have programming experience. What a joke.
In fairness, maybe you have taken Intro to Programming 101 or something but just don't know anything about compiling.
Yes kid, computers can tell what kind of computation is going on in a statement. Humans don't have to tell the machine what to do with different statements.
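For a concrete illustration, here's a quick sketch using nothing fancier than Python's standard `ast` module (just one convenient parser; the example expression is arbitrary):

```python
import ast

# The parser works out for itself what kind of computation the statement contains;
# nobody hand-labels "3 * (4 + 5)" as a multiplication whose right operand is an addition.
tree = ast.parse("3 * (4 + 5)", mode="eval")
print(ast.dump(tree.body))
# prints something like:
# BinOp(left=Constant(value=3), op=Mult(), right=BinOp(left=Constant(value=4), op=Add(), right=Constant(value=5)))
```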
>'Doesn't' and 'hasn't been known to' are two different things. Remember, we don't actually have strong AI yet.
What a stupid way to avoid the point of what I said. Replace "doesn't" with "wouldn't", note that the statement I made was analytically sound regardless of our lack of observance of advanced AI systems, and then get back to me.
>Well, the player just cares whether it's a fun, balanced game with good pacing, high-detail graphics, etc.
So what? You won't get those things if the logic gates don't work.
>If the machine always reliably does what it was programmed to do, then the neural net that sometimes answers arithmetic questions wrong can't have been programmed to answer arithmetic questions properly.
Reread my previous comments if you still don't know what I mean by machines reliably following their goal function.
>I wouldn't call pleasure a 'concept' in this context
I don't care if you wouldn't. It's relevant.
>What I'm getting at is that whatever it is about a pleasure-maximizing system that makes it a pleasure-maximizing system (which we don't understand yet)
You're wrong already. We do know how pleasure-maximizing systems maximize pleasure. To express it very simply, it's because of their ability to make estimates of the future pleasure available to them in future situations and their tendency to select actions which place them in situations with more pleasure.
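Put as a minimal sketch (the `choose_action` and `estimated_future_pleasure` names are placeholders for whatever learned or hardcoded component actually does the estimating, not anything from a real system):

```python
def choose_action(state, actions, estimated_future_pleasure):
    """Select the action whose estimated future pleasure is highest."""
    return max(actions, key=lambda a: estimated_future_pleasure(state, a))

# Toy usage with an invented estimator:
estimate = lambda state, action: {"eat": 3.0, "wait": 0.5}[action]
print(choose_action("hungry", ["eat", "wait"], estimate))  # -> eat
```

That's the whole mechanism: estimate the pleasure available downstream of each action, then take the action that scores best.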
>may not be generalizable to an [insert arbitrary goal here]-maximizing system.
Huh? Nothing that I'm talking about is based on any premise about pleasure-maximizing systems. Our AIs aren't designed to maximize pleasure anyway. They probably aren't sentient at all. They're designed to pursue whatever is in their goal functions.
>No, it's because utility in the abstract sense is not inherently dopamine-related.
But I don't claim that abstract utility is inherently dopamine-related.
>The distinction between whether the software is a logical system and whether it reliably does what it was designed to do is precisely the reason why looking at the neural net's ability to correctly answer arithmetic questions is a valid way to evaluate its reliability
This is just wrong, but I can't refute it when you don't give any reasons to believe it in the first place.
>And that is the reliability I'm talking about.
... okay, and? I know what kind of reliability you're talking about. It's stupid.
>Once the AI has had time to reflect on its own motivations and make what it considers to be appropriate modifications to itself? Yes, quite possibly
Then you expect for physical determinism to be violated, because those behaviors would be contradicted by the machine's goal function.
>There's no 'figuring out' there.
Another great example of your incompetence at rhetoric. If by "figuring out" you mean "produces behaviors that make it sound like it knows about it", then yes there is some figuring out. If you mean "acquires direct knowledge of it", then no, of course there is no figuring out, and nobody thinks that p-zombies "figure out" consciousness in this way, so the question to which my statement replied was idiotic in the first place.
Now let's take note of what you missed above:
* I pointed out that the human brain has distinct motivational components which act just like the components of a goal function, and you couldn't give any response.
* I pointed out that your monkey-on-a-typewriter analogy was bad, and you couldn't give any response.
* I proved that following a function is better than not following it, and you couldn't give any response.
* I pointed out that the competencies of our chatbots debunk your claims, and you couldn't give any response.
* I pointed out that you're inappropriately using differences in AI techniques to talk about differences in AIs, and you couldn't give any response.
* I pointed out that our AI systems aren't limited to the use of a single technique or algorithm, and you couldn't give any response.
* I pointed out that our imperfect knowledge about how to motivate an em is explained by the fact that we don't know how to make an em period, and you couldn't give any response.
* I pointed out that AI programmers do in fact test and tweak their systems all the time, something which is a constant feature of all software development, and you couldn't give any response.
* I pointed out that my claims tell us perfectly relevant things about future AIs' design and behavior, and you couldn't give any response.
* I pointed out that the vagueness of what constitutes human goals is not a weakness for my claims, and you couldn't give any response.
* I suggested that you don't understand the difference between a machine being reliable and a machine reliably aiming towards the goals it was programmed with, and not only did you not give any response but you showcased a continued ignorance of this topic in a reply to a different statement.
You do this in *every comment* too. Beautiful. You're still failing to even understand what I'm saying, and still haven't quit your use of sloppy rhetoric, your reliance on inert one-liners and your constant inability to state arguments in comprehensive clear terms. If you don't fix this then I'm not wasting any more time on you. | r/aiethics | comment | r/AIethics | 2017-07-23 | Z0FBQUFBQm9IVGJBNnhnM01rUEtsdVpCWkpKeTVUVVVMWGZzZE1NazROT3hzSEhhWEdPMVc3LVhpZlgyWXZnbzNSdlo1SHZmcVRQSHNsMjA3VmRWYjhISGNVN1VjS2xTMEE9PQ== | Z0FBQUFBQm9IVGJCZS1vdS16LUdzUUJzZ1FFTzBHb2ItSHhFaVB1UWNBV0VfU3NiZUJQdXhBVnQ5RURHQXVGcHc2eUpPNm9GMjFrazRxa19STnVpSDB6ZzQ0U3pINzZ4OF9QTEEzRTdaTldtNDM4SWtTWVlEVldNMm1UbWNkb3hVM3V5T1VseDd6S2ZnUFpBQy05TWZZc2hnTndUNlgtYXJYUGZBdXRyZ2VXVDNiN3VoRFh3NVhVZlNHOWJTNkVCS1ZMUzdVNE1uU21GVU1YS0FvSFVuZzY2N2pXNEF3bFgwdz09 |
I'm getting a bit tired of the term "bias" being thrown around.
If the most predictive algorithm uses something like race or gender (or a relevant proxy) to predict your target variable, and those features really are predictive, then the algorithm is just being accurate. Call it problematic, call it disparate impact, call it offensive, whatever, but it's not biased. | r/aiethics | comment | r/AIethics | 2017-07-25 | Z0FBQUFBQm9IVGJBUFl4RjliSlVhMWdqMGNTYTZLNTVCQ1dlWkx4WHpGQ1FCNFd5Mm9JNUZFTElYUkJYejhSMXBHUXc1VmNTUC1BT1B4bXA2eE5UTWpwYWE0dmdjSmpHWEE9PQ== | Z0FBQUFBQm9IVGJCOVZuOGdFWl9LdUl4eGRIYmJQUlNJQXNRVVR5WDZFUlRVajF1VW9OSHNGTkRxVHFjTzJ1cGpmSnNmUjlBWmtKMjdvdU5uZUJ3a3RER1pldnVFR1RwazhHX09JcW1wVjZrYlBXY0oyWXYySFZKSkdpaHBmeUlUdC14T1N2NWhkUHBYV0F3dW12TXN4b0oyOUNJdlh0b0REWlY5WTEzSHV1eWNsMWtSakhLUE1qQlFXLVVYNTFDSXd4c3dma0pFbTNBQzg0cVc2OHo3eEthSldPVVZ1X2Y1UT09 |
Yeah nope. That's like saying "could we ever send humans to Mars by accident?" - it's just not possible. Until the inevitable singularity, all computers will only ever do what they're specifically, logically, exactly told to do. Anyone who thinks otherwise simply doesn't understand how computers work.
And I can see how the average computer user can think this is possible, it's all just magic that some nerd whizzes up in a basement anyways right? Like the post the other day about "accidental" ads on HTC's keyboard - nothing happens by accident. Someone spent multiple hours implementing ads specifically for that keyboard, regardless of HTC's design intentions.
(Paraphrasing)
>Computer scientists haven't been able to do it, so it can't be easy
The author has no idea what he's talking about. | r/aiethics | comment | r/AIethics | 2017-07-26 | Z0FBQUFBQm9IVGJBWEo4SXFwTGZkV0Z0MDQ0NTZfeHl1aFNyci03YV9VdXB3ckVtTWx3U1ZmVFhRSVpaX0RPVGlmNmlmc2dyY3RHT1ppZllmaFQ1NlZDc081azRlWFMwQWc9PQ== | Z0FBQUFBQm9IVGJCNzZrNGowd3pyVXVIUUQ0TVJkYVBQbVhGaFJKMFdhNzFGSG5GWUtpNjdLOU1tcmhmMzJUd2JJbjBwMmdNcGFiLV90M2VMLXdidGI1bkZQZEhyREdmcUh6ZlM0cFNFc19oOFFJYWhCWlZZU0hQd050V25LSU9Ud0JlbndZV0NfQmY5a2lHekQ2SG54ejIxYl9UREQ5Q3lGQnRwcFNnN2VWR1BJMVdoV2JvZ1R3MlVKcWR4TUdwZ1AtZzJsTi1hRzdWa3EycUZtSDRpYkhuekw1dFNYLTZLdz09 |
>Because one is phenomenal and the other is physical.
Why? What is physical about emotions that isn't physical about consciousness? Or, perhaps more usefully, what is non-physical about consciousness that isn't non-physical about emotions?
>For an algorithm being independent of its substrate would mean that you can run it with any substrate - in other words, any computer, or any brain, or any physical object at all.
No. That's not what I'm getting at with 'independent'.
You can have two physically very different computers running the same algorithm. It is the *same* algorithm regardless of what kind of computer it is running on. In that sense it is independent of what its substrate *is* as long as it is appropriate for running the algorithm.
>I didn't say it was sheer chance. [...] it is physically determined to act in such a way that it happens to talk about consciousness.
But that physical structure had to have come about somehow. Either by sheer chance, or not.
>Either stop lying, or figure out what you actually believe.
Huh? Is 'the results are what matters' not an accurate statement? If it's not, then why build AIs?
>I sure don't think that superintelligent AI would be a counterexample to what we already know about AI when it comes to agent goals and motivations
In context, we were talking about 'counterexamples' to the pattern of the behavior of complex systems being different from that of their components.
>The only way you can even describe an agent as ranking outcomes is by talking about what its goal function says.
Again you seem to be conflating low-level goal functions with high-level motivations.
>there is no reason to think that making AI smarter would make it incapable of having a simple algorithm for telling it what to do.
I believe there *is* reason to think that making AI smarter would make an *arbitrarily selected* simple algorithm for telling it what to do no longer *reliable,* as I've already outlined.
>I was talking about writing an algorithm and having it not fail to continue functioning in a software system.
That's clearly *not* only what you're talking about, insofar as you claim that this has specific implications about the reliability of the behavior of arbitrarily intelligent AIs.
>But pretty much all philosophers these days say that's wrong. You can read about it [here.](https://plato.stanford.edu/entries/egoism/#1)
I read the article, and I don't find the arguments against rational egoism (at least) convincing.
>philosophical utility refers to happiness or preferences, and the way that they will achieve it is by pursuing their real-world goals.
But if the AI can modify itself, then it might change those goals. Like the drug addict who wants to stop being addicted.
>so their philosophical utility will reduce to the physically instantiated goals.
No, it will reduce to the actual physical/computational basis of utility, whatever that turns out to be.
>If you believe that p-zombies are metaphysically possible then their physical impossibility would be beside the point.
I don't believe they're metaphysically possible. Or, if they are, then only under conditions so utterly alien to those in which we exist as to be irrelevant to the thread topic.
>You're clearly doing nothing to refute the claim that agents which follow goal functions beat ones which don't.
Agents that *have a goal* will beat ones that don't. That doesn't mean you can plug in any arbitrary goal you choose and the rest of the system will bend over backwards to accommodate it.
>And of course it's not the same as maximizing a "specific, arbitrary, real-world goal." But that's simply irrelevant
I don't see how you figure that.
>No, it's certain.
This is your misplaced confidence again.
>But I don't claim anything about being "in control". I am claiming that the machines will follow the same motivations.
How are those not the same thing? (Assuming you haven't made a mistake in specifying the goal.)
>Where am I making an equivocation like this?
You keep doing it. Talking about a 'goal function' in the sense of a component the programmers install in the algorithm and then turning around and talking about a 'goal function' in the sense of a motivating factor in an agent's high-level decision-making, as if there is no conceptual distinction there. And similarly, using a phrase like 'following the goal function' to refer to a sequence of algorithmic steps causally influenced by a component the programmers installed, and then using the same phrase to refer to an agent making high-level decisions that match its high-level motivations.
>if you can emulate something then you can predict its behavior in a given environment.
No, you can't, at least not with systems of sufficient logical power. By the time your 'prediction' is ready, the emulated system has already *actually exhibited* the behavior you were trying to 'predict'. After that you aren't actually predicting anything, you're just saying that an identical system under identical conditions will behave identically to the system you already observed, which is trivial.
>Yes, with that level of creativity and nuance.
Then what was the point of humans evolving to actually be conscious?
>if you read what I wrote you'll notice that the relevant issue is whether "behaviors which pursue one are behaviors which pursue the other,"
You gave that as a *consequence of* the part I quoted.
But to address that point in its own terms: Sometimes, conditions (of one's own brain, or the AI equivalent) will be such that pursuing a certain real-world goal will increase one's abstract utility. That doesn't make them conceptually equivalent, or inextricably connected even in practice. For instance, under current conditions of my brain, pursuing the eating of food tends to result in me feeling less bad, but I can conceive of this not being the case, and (in theory, if I had the necessary tools of self-modification) could even alter the conditions of my brain so that it is no longer the case.
>Your CNS [etc]. This produces the feeling of hunger.
This doesn't give me a goal to eat food, though. It just makes me feel bad. If I end up eating food, it is only because that is generally the most efficient way to eliminate the bad feeling, not because I have some intrinsic 'eat food' imperative.
>Yes, because they are following their higher-order motivations.
And I propose that there is only *one* 'highest-order' motivation for a sentient being, and that is to increase its utility.
>Of course their motivation objectively holds in the world
Well, this is kind of like saying that my preference for the taste of bananas over the taste of grapefruits 'objectively holds in the world'. Yes, it is objectively true that I prefer the taste of bananas over the taste of grapefruits. But this is just a thing about *me.* I can't investigate the world and discover that the taste of bananas is logically superior to the taste of grapefruits, independently of me.
>There is the behavioral want, there is the functionalist want, there is the computational want
None of these three seem like the *kind* of thing you could have an immediate subjective appreciation of.
>Only if you use different points of reference for the different phrases, which would be stupid.
Why is that stupid? It seems obvious, insofar as by 'expectation' we seem to be talking about the agent's own expectations and not some sort of objective statistical expectation.
>Yes kid, computers can tell what kind of computation is going on in a statement.
I'd like to believe this is both (1) not incredibly stupid and (2) relevant to what I was getting at, so...what are you trying to say here?
>Replace "doesn't" with "wouldn't", note that the statement I made was analytically sound regardless of our lack of observance of advanced AI systems
That'd be okay if it weren't for the 'behaving just as a low-level goal function would' part. How broad is that statement? Does any utility-maximizing agent have 'something that behaves like a low-level goal function' in that sense?
>You won't get those things if the logic gates don't work.
But you might not get them even if the logic gates *do* work.
>We do know how pleasure-maximizing systems maximize pleasure. [etc]
This is just a description of how maximizing systems maximize. It doesn't address the 'pleasure' part.
>But I don't claim that abstract utility is inherently dopamine-related.
Obviously it's an oversimplification of what constitutes utility in the human brain, but I think it's clear what I'm getting at. Your claim is that abstract utility *is* inherently X-related for any machine that has been given a hardcoded X-related goal function.
>This is just wrong
Then, once again, why bother building AIs?
>those behaviors would be contradicted by the machine's goal function.
No more than the behavior of the neural net that makes arithmetic mistakes is contradicted by *its* goal function.
>If you mean "acquires direct knowledge of it", then no, of course there is no figuring out
Then how do they get to the corresponding behavior?
>Now let's take note of what you missed above
I've been struggling to stay inside the 10K limit. Most of the points listed here are ones that I think I responded to already, or were irrelevant, or were captured within other such points.
>the competencies of our chatbots debunk your claims
I've yet to see a chatbot that gave me that impression.
>you're inappropriately using differences in AI techniques to talk about differences in AIs
I'd suggest that *you're* inappropriately assuming *similarities* between AI techniques *despite* differences in AIs.
>you don't understand the difference between a machine being reliable and a machine reliably aiming towards the goals it was programmed with
You mean like you don't seem to understand the difference between computer hardware being reliable and the AI running on it being reliable? | r/aiethics | comment | r/AIethics | 2017-07-26 | Z0FBQUFBQm9IVGJBUDRtaENvamhaY29tUWRsczlSQjNITnZmcmNWeHIwVmUtSzY4eTZNN3lEWmZxYzEydmxUOVZsUTRUMlBqSkUxUm9tRUVFVWxVVkhSREJHY2pTaWRnSmc9PQ== | Z0FBQUFBQm9IVGJCRFJKaEhmMFlMZmRVdlVWMm9qY1BuTko1NmFKdE9lT3lIV0JFaVJ0OWp0VDFobHRJZFNZQ0lSendNTnFDOFU3anJpRl96U3ppX0c1d0hfaFZCYUJGYWZXNXVBbm8zNmJRcW94VllKQXFJclVUVGJVcWZxdUtlR3VNR1lwY1ZVRzhidjJ6TGVsd3ZaMXJtTUlzeUI5b0hHcDZKOG9lcUtJcWpxNjVEUFZxTTJrZDk5clRVYnhDQ3NmbDFFYXJnV2p3QUVyLTB6MHIwM2ZGWU1mTlh0NjdjZz09 |
All consciousness that we're aware of appeared accidentally. And from much more adverse conditions than AI is currently developing in. | r/aiethics | comment | r/AIethics | 2017-07-26 | Z0FBQUFBQm9IVGJBeFJaQ1FlS3ZCa1ZwRmIzNnJmMEY2eU44LXRLZDA1NkxUcTNLS0V1bWt5WVRJZnNkR054bldJdGxidndDdXM1VUphbTI5V1FtVVZJbkoyREZnTlpQbVE9PQ== | Z0FBQUFBQm9IVGJCdVd0WC1pbDBNRVlyUXZSTWtOeHV5WWpnWXZvVWkxeUlZdnRib3lzRGM1a2V6eGRFTElMSDUzUkkwQWRpdUI3WnhRUDZoOXhvalJPOUlQRVN1UUlHT0tfU08xRk54dk5DazR3eE5TNmdwN2pHY3dsNlBDR2N2SXBqcjdaTE1lRnZXemZPdGJvV2VNaV9pNm9EWUVSR2U3ZEx6dGxHVDJIYV9ZUEkzWWR0WGJqbk5kVl9SMTJvTXFSZjh3V0tRMkw1RXBfZWY1NmV1MWY5SHR1Y2dhMFA5UT09 |
Wow, you really don't learn. | r/aiethics | comment | r/AIethics | 2017-07-26 | Z0FBQUFBQm9IVGJBb2JBVjRkNFg0TFgzSEVoRTZKM2RkM0lZRGJCWXZXOU0xSkUtMUl2WHQ4aUQ5VFdzYl90R2RiUDcwa2k0ZWg5TkF0RUxGYld2TFhvbHRnT0N5MkNpOWc9PQ== | Z0FBQUFBQm9IVGJCMmpaNmNjRHlMYVlEOTFEOTRQNGRVREFaQ0FoT0U5NndjU2VPOXMzLVJwWWlQSmZSRnUyZVlKdFN6XzIxQ0hTak5oRldBXzhaYWpsUmpBZmpuS3MxUlFmTXhIdS1GTFZ2M0dsNkpjSzJCMXZhWU1tQnFZR0xWdmZwMGtUajRKREtRQmYyeWFqYy1rQnhpdUQxZG1wRkRVZTQzOWEzcU40MzRPV1VRNDBFWDVwQUFRaG5Vcm5GX3RWYXhGR1p6SkMzU0NnZnNNN3hFN1Y4NVYxQVQ4LS1QZz09 |
Big religion? Has anyone heard or seen any examples of that? I'm pretty curious if this is a substantial thing now. | r/aiethics | comment | r/AIethics | 2017-08-01 | Z0FBQUFBQm9IVGJBUVh0YlZQMjdUNFAzNTZjQ0lSa1dBSWlTazduR2p2M2JMMlF3ZGZUc3JXb3dxUXdZWWZGVjE5VTBYMEd3bVlSRTV6OHYtTlRmbTYweDlzOXhsMFJmQUE9PQ== | Z0FBQUFBQm9IVGJCSE5lSjV4VEMtU0t5NnZGSGxSbFF0bFlRaGpXUlpQZnYtZUlScXkxZGxnU2RNVWpoaFpZVXdTVFFUUXNjTWMxMTBBRFduSUx1WEVYQUUzZzdQZXVPNWI0b1pYUGR0UDMzcThqM3RLTFhKTGhnN0l2MGQzeEpZdkEwS2JOSGJ5NzBsVWhFdVJHVWJPZGJ3bjN2YTRJYTVDbm9RbTJtQ2xGNlNnVFB2QmE4TXN5VzVSMGJYWDBXNjVKNXlON2VoQ3hRRmZTWktyU25pSGlnNHRpWUUwajJKUT09 |
Great read, thank you!
I wonder how we could make sure that the superintelligence's 'emotional response' to events would be similar to human emotions.
For example, when the paper writes
> sense of regret when it has made a choice from a set of options that are all bad
I guess that 'bad' here means that they come with negative emotions?
And assuming we find a way to incorporate human-like affects into a stoic AI, do we not run into the same problem as in action/reward-based systems? If the system finds a way to manipulate its emotion center, wouldn't it just deactivate all bad emotions?
It's an interesting idea - and only becomes more relevant once we consider how big data has impacted so much of daily life. This might be an interesting read: http://www.huffingtonpost.com/kelly-bulkeley-phd/big-data-and-the-study-of_b_8186222.html | r/aiethics | comment | r/AIethics | 2017-08-01 | Z0FBQUFBQm9IVGJBTDdlRDZnYTlreUdPVVFOZEtVOXluWWJxWWdKTjJOQ21naDF6WDdDZmZEazBpWkxYeTItbjFlRnMxYWU1TzFhWTVBVWhZUlhucUFiMjJXeG5UNldITHc9PQ== | Z0FBQUFBQm9IVGJCVVV2Y0hSRnVhRmRMbG1FMHNWQndSMFd0T1FSY09EYUl6Z1RxRHNTLUd6eVRHbHpzM3JUWTdDR1BBSG9EUFVVbFlacGszblFXazZkYkcweGxmYW10Sks0R3ByMTBUTW5FbmFpMlNWUTdVT3A2WVlJZ0FpM1NmZWtaaTRxbjlseGxBMXdUenJuWmdhbjNydktFQnBXYjNLaDVGbUt2MVVUMnpqZ0FQRG5Odk91dWVON2hLOU1qaVV6Uzl3N2xtNUY1SWZaQkZFQzdzcDZsRWw2YXVsd3RJUT09 |
we can't even do this for our kids. | r/aiethics | comment | r/AIethics | 2017-08-03 | Z0FBQUFBQm9IVGJBMTZRVTBOZlJPNEkxcW4tNUdxZnh6VTNkRHpBdTEwenhhYVRITEpXNzNLVWZrQ0QtV1FnOW5oSFhPa21fQ2dUTVE5czA3OTlRWG1HalZCdzkxOVNrRXc9PQ== | Z0FBQUFBQm9IVGJCWkJhNFJwaEZVdXFjUEl3TU93OGpXOGxnNkcwMU1zOGJLT2RNRE1QRWVqQ3dvWU1jTzFFQ2QxWkItSnlscmd1OGtlRFNuVWdHZXo2OU1zQXNiczJnSENBSVpnc2ZuYURFY1lvVE90azlFdTRpd25pWUZoSVRLYktaQzlnQjgyeVdIcEdjZ09FRmdtZjM1aldCZHFsY3dmRURyLVZ5SWZXWFY5aDdoenZPZjJ6VGJQV3V1WGhTT09ZSEpwdXV6QkpSaGU2V0x1Y0dMelN1bUJXR3MzZEFWdz09 |
I suggest that you try cross-posting to /r/ControlProblem as well. It's probably the most relevant subreddit. | r/aiethics | comment | r/AIethics | 2017-08-05 | Z0FBQUFBQm9IVGJBSzNzQXdKa2lGaUlNbklFMExoa29MeTBnNXF1eWZ5a2owdXh6VVc2VFRoTHJPRThNWFIxckVzbzBvYkpRZ3o2UVVGU25yZmNrSHZZUTdDX0xzUmFQbGc9PQ== | Z0FBQUFBQm9IVGJCa0s5UjZ6Z29GaHBYVTVDSng2cUNKRzRacXFWNmFXeVJqQVhKanFOaFY5SUtOTnB3OEdKeElVSzJTRnRVc0xqTU5lc3ZNMjZFX1oxT1A4WU5vQWh5RzJnc3hpcTBvRDA1YWYxQkpHUmJyQk1FaHRYMU5lSXRFYzNVeUljYUdDcVFiSEhra3N4cm9VQ1lka1pmZzBFbndlSU9CZElhVk9kVHlmZTdWV1FIeVpleUxWNmt1SnpFZ2RteVV0T1ZEZEduNmQ1NTNQRFlyc0d1c0hPTkFFcGR2UT09 |
Thank you for the suggestion! That is a huge help!
| r/aiethics | comment | r/AIethics | 2017-08-05 | Z0FBQUFBQm9IVGJBd1VYUFNOMV9ENWJ4T0JDYzZPQjI5S1hTT21hU1ZXUzRpMVpmQm1lMlowRkM4bG4yRmtJbzVsZmpHaEhJZTkwWkZWREw5OGVjNk9PMGtJN2x4ZmV1akE9PQ== | Z0FBQUFBQm9IVGJCYzA4YWE5YU1zLVN5Wm5GSzFia2d2RllLVXZ0NlZ3eldfWGl5Z1pkU296cnhWcklad1cxNlJKWXZ0QkRtSm5hN1NDV2VRNHhsamlYa2NsaGtzZVk3eEx3LXdDUUl2ODVsdzJUQmJaTGdoTTlhdWw1NDh1aDBzeXR0bXI1ZVBnZkhIaGs3aFFCc1Fqdkh1ZFB1RjFlNzlpbWp4Si1rSU1JZDYtbjBXV01ZbmtWOUhpcWZHQkN0LVc5Z3R4ZjBMa1NmWmxJam9mc3JKZWQwSWM4cF9odnJ0QT09 |
You might find [this section on tort law](https://en.wikipedia.org/wiki/Remoteness_in_English_law#Tort) interesting/applicable. Although there would be many new angles and factors to consider, it was the first set of considerations that came to mind, as well as [the duty of care](https://en.wikipedia.org/wiki/Duty_of_care_in_English_law). | r/aiethics | comment | r/AIethics | 2017-08-11 | Z0FBQUFBQm9IVGJBWlJ6Q0ZpamdZWUR2UHRaQmJDSWNsVmtZRHhNdlZQM1ZCQm92OGVvalIwOWVUalNfaFVOMVdKcVUyYUdFWGJDSHljYWIyQS00bmxQMFpXX2czdzMweGc9PQ== | Z0FBQUFBQm9IVGJCYmVmd0dpM09zVEpsNnBTczVjNmVWblp4aWtYLUhUQVNkbkREZ2MwRXhVQkVzWExVNWdxQi1sN0NvYmszR3FKcHhMd0ktdHhTTVFOSWNSZ3BQakMyWmhEdmFnUVhzVUpjM05ZNGlKcGYtYzFBMmlvX3JsTVpmdkQ5UFV2WUdVZnR1UmhKeG1kS1ktcWNPLTRNLTF0VHd4aktkTFprQVkyMlk0WlZrblhRS0JPWFBDNTZyamFLR3NuMlYtY2tYVE1x |
**Duty of care in English law**
In English tort law, an individual may owe a duty of care to another, to ensure that they do not suffer any unreasonable harm or loss. If such a duty is found to be breached, a legal liability is imposed upon the tortfeasor to compensate the victim for any losses they incur. The idea of individuals owing strangers a duty of care – where beforehand such duties were only found from contractual arrangements – developed at common law, throughout the 20th century. The doctrine was significantly developed in the case of Donoghue v Stevenson, where a woman succeeded in establishing a manufacturer of ginger beer owed her a duty of care, where it had been negligently produced.
TL;DW for people like me who hate TED: survey data shows that people favor the utilitarian view over the "Kantian" (their description, not mine) view that the car should always take its course even if there are more people in the way. But they also said that they wouldn't buy the utilitarian cars. So we'd have to regulate the cars to be utilitarian, but people in the survey were opposed to regulation and said that they wouldn't buy those cars. That means that it's a social dilemma where fewer people will be saved overall if we insist on utilitarian cars.
Personal opinion - these survey results for what people will buy have very little meaning because the automated cars are not yet on the market. Right now people are only thinking about them in the abstract, and don't know anything about their features except for how they'll swerve in a dilemma. Imagine if you had gone back in time to 1905 and told people about the future of cars, but spent all your time talking about the tradeoffs between diesel and petrol engines. You would end up with large numbers of people absolutely convinced that diesel or petrol is the only kind of car they would ever buy, even though that's silly when there are so many other things to consider. If you do actual marketing for these vehicles and get them on the streets, and people learn about many more of their aspects, then they won't be so hung up on the improbable trolley dilemmas.
Plus, the survey was explicitly about moral machines, so there is major sample bias. You will need to give a general marketing survey on self-driving cars to a large number of people where decision making in dilemmas is just one of the features used to describe the vehicles. | r/aiethics | comment | r/AIethics | 2017-08-24 | Z0FBQUFBQm9IVGJBZEtyM2JqMHJybExQeTNDLVBYMzZXUFNKOHh5UTRId0Z0a3lqTHZzbWFOTmt1b0FBSXJhSlZjVmJhWld3ZHQta3VkZHloX3EzVHZZZ0tDV2VjS25hNXc9PQ== | Z0FBQUFBQm9IVGJCZko2ZEdxcF9sbXZPUWxFVE1FV1RxY0FMc2ZycXJ4dnJqeFRveTJXeTlmSnNTdEc1YURXdFB2bUVQTWlnT01GcUxBTkdTb0YyX1NfY0ZtQUJKbEhqZ0NtbnpwWTBsd2EzNzFLU2wtZVNjam1HTUZIcFlVYlcwVXBPM3JjV25MTExucnpZdW5sWlppdXlnd0RBNHlSS2pJTG42am0tYk94cWJZc1c1aDJHMTNvSHdqeHhTU0VPLVVGMS1ZNFM0cXNTYW5sc2JaZ1dQcUtkWDd2NmZTM09sQT09 |
And it has its fair share of nonsense:
>The primary purpose of partly and fully automated transport systems is to improve safety for all road users. Another purpose is to increase mobility opportunities and to make further benefits possible. Technological development obeys the principle of personal autonomy, which means that individuals enjoy freedom of action for which they themselves are responsible.
This is silly. Fully automated transport systems would substantially improve traffic efficiency. This has the potential for major economic, lifestyle, and environmental benefits depending on how policymakers want to cash out the tradeoffs.
Then it is followed by:
>The protection of individuals takes precedence over all other utilitarian considerations. The objective is to reduce the level of harm until it is completely prevented. The licensing of automated systems is not justifiable unless it promises to produce at least a diminution in harm compared with human driving, in other words a positive balance of risks.
In other words, improving mobility for those who lack it is actually not something they actually care about to a decision-relevant degree. It's lexically less important than improving safety, so they're not *valuing* mobility so much as they're simply *signaling that they want it.*
It also ignores the wide range of Kaldor-Hicks tradeoffs which are possible. The potential fiscal and economic gains from self-driving cars, even if they are slightly less safe than regular ones, could be used to save many lives elsewhere.
>The public sector is responsible for guaranteeing the safety of the automated and connected systems introduced and licensed in the public street environment. Driving systems thus need official licensing and monitoring. The guiding principle is the avoidance of accidents, although technologically unavoidable residual risks do not militate against the introduction of automated driving if the balance of risks is fundamentally positive.
That's fine.
>The personal responsibility of individuals for taking decisions is an expression of a society centred on individual human beings, with their entitlement to personal development and their need for protection. The purpose of all governmental and political regulatory decisions is thus to promote the free development and the protection of individuals. In a free society, the way in which technology is statutorily fleshed out is such that a balance is struck between maximum personal freedom of choice in a general regime of development and the freedom of others and their safety.
I approve.
>Automated and connected technology should prevent accidents wherever this is practically possible. Based on the state of the art, the technology must be designed in such a way that critical situations do not arise in the first place. These include dilemma situations, in other words a situation in which an automated vehicle has to “decide” which of two evils, between which there can be no trade-off, it necessarily has to perform. In this context, the entire spectrum of technological options – for instance from limiting the scope of application to controllable traffic environments, vehicle sensors and braking performance, signals for persons at risk, right up to preventing hazards by means of “intelligent” road infrastructure – should be used and continuously evolved. The significant enhancement of road safety is the objective of development and regulation, starting with the design and programming of the vehicles such that they drive in a defensive and anticipatory manner, posing as little risk as possible to vulnerable road users.
Also good.
>The introduction of more highly automated driving systems, especially with the option of automated collision prevention, may be socially and ethically mandated if it can unlock existing potential for damage limitation.
Fantastic, thank you. Good to push this further into mainstream consideration.
>Conversely, a statutorily imposed obligation to use fully automated transport systems or the causation of practical inescapabilty is ethically questionable if it entails submission to technological imperatives (prohibition on degrading the subject to a mere network element).
Traffic is already a network composed of people who are merely doing what the laws and infrastructure tell them to do. And people will always be telling the cars where to go; the only difference here is what is at the wheel. What kind of agency and freedom do you want to preserve for passengers? The right to break traffic laws?
>In hazardous situations that prove to be unavoidable, despite all technological precautions being taken, the protection of human life enjoys top priority in a balancing of legally protected interests. Thus, within the constraints of what is technologically feasible, the systems must be programmed to accept damage to animals or property in a conflict if this means that personal injury can be prevented.
Prioritizing humans over animals *happens* to be fine for various instrumental reasons, but I worry about the long term impact of setting assumptions that prioritize people over animals so strongly.
Also, here and elsewhere - the language is just too vague. "Top priority" doesn't tell us anything more than what any engineer or driver would say. The point of a philosopher is to figure out something more rigorous.
>Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation, incorporating “unpredictable” behaviour by parties affected. They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable. Technological systems must be designed to avoid accidents.
Hm. This looks like a cop-out. People argue about the guiding principles for machine behavior, and those principles hold regardless of the specific situation. The fact that a utilitarian car acts differently in different scenarios doesn't change the fact that it is still a utilitarian car with a specific goal function, which is going to behave differently in those scenarios than another machine would.
>It is true that a human driver would be acting unlawfully if he killed a person in an emergency to save the lives of one or more other persons, but he would not necessarily be acting culpably. Such legal judgements, made in retrospect and taking special circumstances into account, cannot readily be transformed into abstract/general ex ante appraisals and thus also not into corresponding programming activities.
Uh, not really. The whole point of moral philosophy is to transform these circumstantial judgements into general principles, and that's what we do. Now *if* you have the right information and logic to describe the situation (the people present, their positions, intentions, etc) then running an algorithm over them is conceptually quite feasible depending on the specific moral theory. Maybe you won't get that information in a lot of cases, but sometimes you will, at least as machines develop.
>In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.
Bleh, I disagree and I think that their blind insistence is troublesome, but even if it were possible to evaluate these features, the social harms of killing don't vary much from group to group, so it's not a big deal. And the last thing we need is to give people cues to start accusing AI developers of being Nazis or whatever, so in expectation it's probably better to avoid the issue entirely.
It is still pretty weird to have this insistence on strong equality for humans but to then turn around and say that humans are always more important than all other animal species. For now I'll leave it to them to sort out that inherent contradiction.
>General programming to reduce the number of personal injuries may be justifiable.
Again on the language, this is just... bullshit. "May be justifiable." So fucking what? Everything *may be justifiable* until we have a reason to believe otherwise. What would justify this general programming? What wouldn't justify this general programming? Why are people being paid to write this?
>Those parties involved in the generation of mobility risks must not sacrifice non-involved parties.
If someone is knowingly engaging in unnecessary risky behavior, then they should take the risk, I support that. But if this is some rule about people in cars categorically being prioritized over pedestrians, or people in a car categorically being prioritized over people in another car, it's nonsense which I suspect lacks any real ethical justification. It sounds like cargo cult deontology, perpetuated in part by naive consequentialist STEM folks who make cartoonish representations of deontology and in part by deontological ethics' lack of clearly communicated principles on these types of tradeoffs.
| r/aiethics | comment | r/AIethics | 2017-08-27 | Z0FBQUFBQm9IVGJBZ0hmUS1sRm5ES2RnQnZfUWZoNWh2SThMeVh1NVFYeS1yc1c2Y3h6cGN5UnU4eHRfa1pWc0ZrTEtDanlYODVXZGpwYTFwb3ozaVoxeTA3STRNRkpNb1E9PQ== | Z0FBQUFBQm9IVGJCZHc1ckxMY3RKdlE4al9NTXNtTVo0M0E0eW05Ny0wYlMzQ3RfaDNLdHRhckhJVWk1ZHJaYk14OXlmSi05UTRIRjQ5cVZLMlhScjF3ek9vbmFhT0NiSzcwRUVzT044UUJ4cFZoZmRONlNHd0lYV2kzRHBEdVE0RmZVQTdhLU9kQ3Y4SThfakpfRFc3enNVSkFnWFQzU2RlcW5TSUNoZkRnVEs4VkpGbWw2MWtaMWhTT0lWaEJrUHFUS0tJWkUxQ28yeFRPYzRYSmIxOURtRnRPUzFxRUNUQT09 |
>In the case of automated and connected driving systems, the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions. Statutory liability regimes and their fleshing out in the everyday decisions taken by the courts must sufficiently reflect this transition.
Good, I've seen a lot of people raising kind of silly questions about "who is responsible??" and it's nice to put the simple and uncontroversial answer out with clarity.
>Liability for damage caused by activated automated driving systems is governed by the same principles as in other product liability. From this, it follows that manufacturers or operators are obliged to continuously optimize their systems and also to observe systems they have already delivered and to improve them where this is technologically possible and reasonable.
Eh, this is nice in principle, but manufacturers already have obvious financial incentives to reduce risks. We know how touchy the public is about robots and self-driving cars. By default, I would sooner expect manufacturers to over-invest in safety than to under-invest, relative to other socially beneficial investments (speed, reduced pollution). But I guess that's a problem of liability laws in general, and this report is just extending the same ideas to automated vehicles.
>The public is entitled to be informed about new technologies and their deployment in a sufficiently differentiated manner. For the practical implementation of the principles developed here, guidance for the deployment and programming of automated vehicles should be derived in a form that is as transparent as possible, communicated in public and reviewed by a professionally suitable independent body.
Transparency is good to mandate. I don't know about the job security for the authors here, though. It seems unnecessary, and potentially a source of harmful bureaucracy interfering with ordinary industry business.
>It is not possible to state today whether, in the future, it will be possible and expedient to have the complete connectivity and central control of all motor vehicles within the context of a digital transport infrastructure, similar to that in the rail and air transport sectors. The complete connectivity and central control of all motor vehicles within the context of a digital transport infrastructure is ethically questionable if, and to the extent that, it is unable to safely rule out the total surveillance of road users and manipulation of vehicle control.
This reads like "someone asked us to figure out if central control would be okay, but we don't really know or agree, so we're just going to throw out some empirical and philosophical uncertainties and move on."
Bringing up surveillance is nonsensical; we already have electronics and monitoring in all our cars, and that's not going away without a paradigm shift in how society handles technology. Even ignoring car software, the positions of vehicles and their occupants can be monitored through road cameras and tracking of personal phones. The degree of central control used in steering vehicles won't change this.
>Automated driving is justifiable only to the extent to which conceivable attacks, in particular manipulation of the IT system or innate system weaknesses, do not result in such harm as to lastingly shatter people’s confidence in road transport.
Good and true, though we won't get empirical feedback on how vulnerable we are until we adopt the new vehicles to a nontrivial extent.
>Permitted business models that avail themselves of the data that are generated by automated and connected driving and that are significant or insignificant to vehicle control come up against their limitations in the autonomy and data sovereignty of road users. It is the vehicle keepers and vehicle users who decide whether their vehicle data that are generated are to be forwarded and used. The voluntary nature of such data disclosure presupposes the existence of serious alternatives and practicability.
Well, there is no reason for much of this data to be anything but anonymous anyway, given that the humans aren't actually doing anything except giving a destination to the vehicle. You could be worried about tracking and prediction of your habits and routes, since car sharing services will want to accurately plan routes with multiple users. I'm not involved with privacy issues, but I assume that whatever is or isn't going on with Uber and Lyft regarding their users is what we can expect with automated car services. If Lyft can know what your favorite destinations are (and Google Maps can too, by the way) then so can a self driving car system.
>Action should be taken at an early stage to counter a normative force of the factual, such as that prevailing in the case of data access by the operators of search engines or social networks.
Too vague, but good thinking nonetheless.
>It must be possible to clearly distinguish whether a driverless system is being used or whether a driver retains accountability with the option of overruling the system. In the case of non-driverless systems, the human-machine interface must be designed such that at any time it is clearly regulated and apparent on which side the individual responsibilities lie, especially the responsibility for control. The distribution of responsibilities (and thus of accountability), for instance with regard to the time and access arrangements, should be documented and stored. This applies especially to the human-to-technology handover procedures. International standardization of the handover procedures and their documentation (logging) is to be sought in order to ensure the compatibility of the logging or documentation obligations as automotive and digital technologies increasingly cross national borders.
Mm, that mostly seems good to me, but international standardization looks like a risk for making these machines more difficult to implement outside the industrialized world (and those are the places where there is more potential for mitigating accidents).
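The documentation side of this, at least, is cheap to satisfy. As a rough sketch, something like the following hypothetical record format (all field names invented here, nothing standardized) would cover the logging requirement:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class HandoverEvent:
    """One record of control passing between human and driving system.

    Field names are invented for illustration; a real standard would define its own schema.
    """
    vehicle_id: str
    timestamp: datetime
    from_party: str      # "driver" or "system"
    to_party: str        # "driver" or "system"
    reason: str          # e.g. "driver request", "system limit reached"
    acknowledged: bool   # did the receiving party confirm taking control?

log = [
    HandoverEvent("example-vehicle-001", datetime.now(timezone.utc),
                  "system", "driver", "system limit reached", True),
]
```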
>The software and technology in highly automated vehicles must be designed such that the need for an abrupt handover of control to the driver (“emergency”) is virtually obviated. To enable efficient, reliable and secure human-machine communication and prevent overload, the systems must adapt more to human communicative behaviour rather than requiring humans to enhance their adaptive capabilities.
Yes, absolutely.
>Learning systems that are self-learning in vehicle operation and their connection to central scenario databases may be ethically allowed if, and to the extent that, they generate safety gains.
... do other kinds of gains not matter anymore? Why not? Same narrow scope that I pointed out above, but here it's worse: it's one thing to say that we're not going to care about people's mobility when lives are on the line, but it's quite another to invoke a spooky vague threat about 'muh privacy' in order to do so. Of course there is nothing wrong with surveillance when the 'people' being surveilled are actually robots. And as I said before, companies and governments already have the capacity for surveillance of your location and destinations.
>It would appear advisable to hand over relevant scenarios to a central scenario catalogue at a neutral body in order to develop appropriate universal standards, including any acceptance tests.
What, why? It's machine learning software. Let it do its thing. What's the purpose of this, what kind of scenarios are we talking about? Every trip on the road is a potential case for learning.
>In emergency situations, the vehicle must autonomously, i.e. without human assistance, enter into a “safe condition”. Harmonization, especially of the definition of a safe condition or of the handover routines, is desirable.
This seems like something for the engineers to figure out. Maybe they were consulted on this though, I don't know.
>The proper use of automated systems should form part of people’s general digital education. The proper handling of automated driving systems should be taught in an appropriate manner during driving tuition and tested.
Good, I hadn't thought about that before. | r/aiethics | comment | r/AIethics | 2017-08-27 | Z0FBQUFBQm9IVGJBelAydno3YjRFUUJqdG0wZXgxSURqZmROdUFOOGdkWVIzYWlWbi1FRk5MS00wSTFsMy1EOGZ1ODUwckgzbndQMTdjb05abFM4UHllb1ZpdVhlb2hQZGc9PQ== | Z0FBQUFBQm9IVGJCZlJxdGxiYlN5SXBNaWlERzB2bkhRY0NIZlN2cWNHOEZpSXhuQ0NKbjNKQ1M3WkkweG1wNC1LMkZPMF9semNkblVSMVJockNXSmhFazJmWlNIdWlqeVFDYTZIQmhMWV82VFpxV2hKQWpCUDRvc3ZBdXo0RVFENzdrSUdEUFlBWU9iVF96Um41YW9KVkYwZ3p0OC1HYXJRZTZmdHo0bzV0ZDZ6MHJucGtrbHR5b0ZKUExVaE8tMGU2VEo2MHZrWkR3blYxTGFGOUNmNEZGeXpSc1c1aTFfQT09 |
Finally, stealing a comment from someone on r/machinelearning:
>If I design and produce a new car, I don't have to go out of my way to demonstrate that it's safer than every existing car ever, so why should this be the case if it happens to be self-driving? | r/aiethics | comment | r/AIethics | 2017-08-27 | Z0FBQUFBQm9IVGJBN19zdDZ4R3p3bV9aTDVrYTJ2NE44RUljRjJ0WkpudllRbmhNUm1kS0lyZWJMX1JYMHozbnV2cXpxQmlFQU1TZHhacVZEQXVGWUhzbUxYRkVRT0ZOaFE9PQ== | Z0FBQUFBQm9IVGJCckU2QnZaMHQ3SE5VRXROYnR0S01LZzk4QmdRT1pSczl5QVdJWmRucER6eVNRUHhrNkhtVTA3RFZmLXhoMFR3aFZkT1Y1VmJUaEQ1RUFUZ2w2MWlibG5rTXRlejhObGFXcHZCdEt3ZEpfdWRDaGNCVFBGeFRDMFAxMWlyaGl5eVE2UVMwcS1WdTRjR2JXS21ULW9mRjhuaEJNMzdaNUhDLUt2QTQxSUFQM2h0a2Z5bVhZb1YxNU14TWpkTjhVR1NaWmozRUh1U1JmRWVDTGxXSnpfeEZ6UT09 |
No, because 'car mechanics' is not a constant. New cars can be more or less mechanically safe than other cars. And we have minimum standards for mechanical safety, but there is no requirement or guarantee that new cars are more mechanically safe than existing ones on average.
I'm not saying not to put more scrutiny on car software. But there's still no reason to mandate that the new cars be categorically at least as safe as the old cars when other desiderata are on the line.
>But that is solely because of their unique software.
>In short: Saying that it's unfair that self-driving cars have to adhere to higher / completely different standards than conventional cars is comparing apples with oranges.
Why does the uniqueness matter? Why can't one make the comparison? Of course software is different from drivers, but that doesn't mean the fundamental issues have changed. | r/aiethics | comment | r/AIethics | 2017-08-28 | Z0FBQUFBQm9IVGJBRWN6RW90Z1ZmRmFDQ1RyNGpRaGdkMGFxYkZ3c29wbklSQmpjYnRmRzUyM2FCZ0ptdWNrbG53bWhudklLSGdodFVYZmttM18xSVVvVkVBZm9vRW5Tbmc9PQ== | Z0FBQUFBQm9IVGJCNGdiUHZ2anA0cXQ0OVlncW8tU2hKcFZXUXpFaDNibkJ0aWhZN3J0bjlTNlkycmNya0cxV2k1SnJYeHg1VkZsRzlMYkw1VHFVQ21zMTltT1F2NHc2dDREZ1FVRnBudG1RWU03WURuWW5YWTNmaTJ6NWdMTjJUbnRfMGJCNjY4OVo1c05sZC1BZmhNZXd2UlpTaEhRT2lPZWtPd09EVVRuaGVqdkNYNjVBSWVmYWVXOVVORkNnZUZPZWdtLXNBTUxmaDBoRy1MaDRqank5NUZMaXpOVEtsdz09 |
If the societal risk from the cars is supposed to be the *same* in both cases, then *different* treatment is not justified.
I think empiricism (et unicum) is a questionable way to handle prospective life, and find the lack of sociological methodology to be suspect and indicative of an insular pathology on the part of researchers.
Dang it, I was hoping you'd go for the 'et unicum' instead so I could use latin passive-aggressively.
Succum perpendiculi. | r/aiethics | comment | r/AIethics | 2017-08-30 | Z0FBQUFBQm9IVGJBUk5TdThueGxMTmJjQk1Jc1dWSE9OcXNqTXNBUU5tU0l4SDFXdDdBUmRhakRrUFpvZ3hQQU9sNF9rZU1ZU1l2MDJObTFpSjlINk5zQUhVN2ZtS2V0bmc9PQ== | Z0FBQUFBQm9IVGJCZlpoZmM1WTBtdWw3aS1uSHh6TlJleXRoNDN2YUJ2SzQtcFpickdTTkVvSDJzMWpBejIwRnFsR1hXS3pUVmVkUEptZV8wRVRmUEM4TC1EamNpMjd5Nm9uMFp1QkdGUC1KRW5YMjU1Z1hob1NLcWpHRFhwQUh1S0UtaEpPeGJWblRmb3d2dHRvNVo2Y1pMY1FMajk1VTYyMEJ3Xzl1ZTVFVW50elc3X3pOZmxBOTRhUHI3MXhPa2x1MWpRNHRYSFhPckNUQ3pXWlRjdFRtc0ZUVFZjaEtYQT09 |
Joking aside, have you seen any material touching on the sociological side of AI development? I'm sensing this is going to be a big problem down the line if we don't make at least a cursory effort to forecast the likely nature of machine life, as well as the psychological archetypes of those most likely to develop machine life.
| r/aiethics | comment | r/AIethics | 2017-08-30 | Z0FBQUFBQm9IVGJBa2VkMnRPblJHb1lvbGF6NHhySTVYYmpkV0x3VG16UHZCWFRJb3R4VUVTbkQ1bkFnRnB3cDFuWWs4VWQ2NXlWQmZqbGExVVJ3WXV5elo0REZPc0V2S0E9PQ== | Z0FBQUFBQm9IVGJCTVhVOGpmT2RtQl9McVdqZ3Z0VGRjRzVDZndwU19aTUVsMzV6UTY3VkNoUFlWQ1ZTU05vejdhY2JkOVdSZk9uYkQ2Ri1XdDQybGQ3ZGRJNFFEdG43d09XZkZmbGNjbzVkVVhsUDNpaU5FQWVZSGptQi05VG0zWGRQa09jcENBcHdzUmVxc1B2Z2xLQkRqWjNqdjgwNVJsTUZQX1dUb0pjZnhyYWNOdkVpUjNVQmlUem9WZ2ViU0tPdU5ONVFFOU9RMTA4elVySDJqU2l0MkxibTdvWG5LQT09 |
1) Strictly speaking, it's just intelligence that is deliberately designed, rather than arising through natural processes. If you genetically engineered a goldfish to be as smart as a human, one could argue that that counts as 'artificial intelligence'; similarly, if you uploaded a human mind into a computer, and it was still intelligent like a human but you didn't understand why, one could argue that that doesn't count as 'artificial intelligence'.
2) The term 'artificial intelligence' was invented by [John McCarthy](https://en.wikipedia.org/wiki/John_McCarthy_\(computer_scientist\)) and generally accepted into the language in 1956, several years after Alan Turing's famous 1950 paper. But the *concept* is far older. Ada Lovelace wrote in 1843 about how advanced calculating machines might someday be used to automate scientific reasoning and the creation of art. In Gulliver's Travels (published in 1726), Jonathan Swift described a fictional machine that could be used to generate ideas and write books by using mechanical action to select and print out sequences of words.
3) This kind of thing is very difficult to predict, and depends what you mean by 'job' and 'replace'.
4) We probably will have AIs smarter than humans someday. But current AI (much less in 2013) is not 'on the level of a 4-year-old human'. Not even close. The best existing AIs are all narrow AIs, which means they do some very specific thing. They may do that thing better than a 4-year-old human, or even better than *any* human. But they are not *versatile* like a human. The sheer range of different things that a human- even a 4-year-old human- can do, and the creativity with which we can combine tasks and generalize our learning, have yet to be recreated in algorithms.
5) See (3). Again, this is difficult to predict and it depends what you mean by 'require'.
6) Again, we don't know. We understand that there is a risk, but the risk is not because of the *nature* of AI, it is because of *what we don't know* about AI. That is to say, it represents a subjective probability but not necessarily a real-world probability.
7) See (3) and (5). This depends a lot on how you define your terms. AIs will not replace corporate management wholesale in the next year, or even (probably) in the next couple of decades. They may perform *some* of the tasks of management, and act as 'smart' messengers between human managers and human employees, but that's probably already happened to an extent.
8) This is probably true. It wouldn't be at all surprising. Most people, regardless of their own gender, prefer to interact with a female voice.
9) This sounds very sensationalized. It's one thing for a robot designed with modular legs to pick up and attach a new modular leg in the right socket after the old one falls off. It's another thing entirely for a robot to perform welding, soldering, plastic casting, etc. so as to repair arbitrary damage to arbitrary parts of itself. We are still a very long way from having robots that can do the latter.
10) Yeah, this is pretty much going to happen. Robot cars are *already* safer than average human drivers in the conditions they are programmed to handle, and the range of those conditions is expanding as we speak. Automating all our driving can make for greater safety and efficiency in the future, and we can expect this to happen over the next few decades. | r/aiethics | comment | r/AIethics | 2017-09-01 | Z0FBQUFBQm9IVGJBS2FRWXlHU1RhYW5va2pvR000cGhKTEtvUW5ZRXRkamZDRGVFUU1YWE9Wc0lmRlJfbkhRLXdLNUc1TkFMcWx5ZWFSVjFnQ09nSUVYNG1PTUh0ZEdOd3c9PQ== | Z0FBQUFBQm9IVGJCdGw5NFJabjZxYUxaWVI2eUJaR1ItNmt3Uk5xRGdRU0FSQ3Fia0czbjlMTTJCWXV2RDdrTFI0Qzg2QlR0bl9IRXRhTURjWUExUVloYlBKNDNMMjN2MkdwcVpPSXA1dEMzb3h6djJhT1I3MU5ERXYyNElzbHF5WHRwTUlJZ3lreWlNbHlmZlNDOVFtdktoa3pZc0tQdzJCLXdUcTNXYmJyUWNEcEZlQ3NuVkRCS2RyclBNWGw4d3FkNkM4RWx1a19D |
**John McCarthy (computer scientist)**
John McCarthy (September 4, 1927 – October 24, 2011) was an American computer scientist and cognitive scientist. McCarthy was one of the founders of the discipline of artificial intelligence. He coined the term "artificial intelligence" (AI), developed the Lisp programming language family, significantly influenced the design of the ALGOL programming language, popularized timesharing, and was very influential in the early development of AI.
McCarthy received many accolades and honors, such as the Turing Award for his contributions to the topic of AI, the United States National Medal of Science, and the Kyoto Prize.
They mostly seem to be decent. The field is sort of small and people are trying to define its basic direction rather than solve well-defined problems, so it's more insular than most fields. | r/aiethics | comment | r/AIethics | 2017-09-01 | Z0FBQUFBQm9IVGJBRUFJOEM5LUZhVjR4eE02SzRQOThiUXZGaS1RR2dzX0FZb3p0Vy1aWi1Pc1UzUWx6UXBSdG82WVYtSFluSDYzcldrWF9aNEd4VV9YMElyYWR6cEJrQVE9PQ== | Z0FBQUFBQm9IVGJCeE1YMFlGZkZ1ZGlmY0dZcEVxUXVSS1FxSUQzbVFDSUpxZVFYOWxYZEpjR0wtc1lBUW9mbnpPZHN3UHgxb21xSUk3NGVvUHNsUHBETnJ2S0ZHek5ROE9ZQXRmSnRCdUpYbHhlTENRSjhzeS02bklwMFBieGhpeVRESEhpLXVSVzVwX2IxTS1FRTQ4dXlVdy1nVVdsZTBxbjByY0VEQ0ZkNzdsaUpqaG9GQWlDTkFlLUhHTHdwZGdsQzJTZ25IbVlfejNsaFFCNlh2WGo2NEttekMzc01mZz09 |
I haven't read that paper, but the ones which I have read don't seem to be trivial at all. | r/aiethics | comment | r/AIethics | 2017-09-01 | Z0FBQUFBQm9IVGJBZGk2Q2F2WklDMVRONDhscDZRX21yc3lMQ2hMOUVZV29Jc2ZNNlpEUUpsMEJsbFNDWE1LS1U0Z25waXpzdlc4djZfV3RNTG1RcFlwX0VVd0EtajRkVmc9PQ== | Z0FBQUFBQm9IVGJCZEJyQ214emRwci1lbVZBZzNZVGZ0UEIyVE5fV2R1THdxenpvVnFMYVM3VjNVbnVoX01YMjFTZEY5UEsyM0EtSXpmekl6T0VsdWtpbUt2M1lILXBhR3ItTUpWUGJGUjRwbV9UdExheFVPMGsya29mX0ozLWduN2VvOVhKWmlLMzhlZmstVmdfRUZEQUVUYU5Pd0JwNTFQNFRwVnAzSVU0STAyQXNmV0NJeXRkX2FIQk1GS19TY1pHcWRVNUt6am16RUk5WnJPUkJJLXpOUmpaVDd4VjE1dz09 |
I agree that there are some pretty good papers, but in subjects like AI there is a relatively higher percentage (compared to the sciences) of low-quality publications.
The [Getting Started section](https://www.reddit.com/r/artificial/wiki/index#wiki_getting_started_with_ai) of /r/artificial's wiki might be helpful for you.
If you just got a bachelor's degree in CS, you might want to look into master's or PhD programs that are more aimed towards AI/Machine Learning/Robotics (whatever you're interested in). If you're really interested in ethics (since you posted this on /r/AIethics), maybe you can also look for some Philosophy of Technology majors. | r/aiethics | comment | r/AIethics | 2017-09-02 | Z0FBQUFBQm9IVGJBN3IxblRGeXEyZ0hiZjZidG0zN0pfd0ZxQmVRMEZqTnNEQ3lVQXl2UjNZenJZTDJsSDBDcXpZVWVuSGE5bThZM0NrUnpnMDAwLVhScWY4ZWQxTFhtMlE9PQ== | Z0FBQUFBQm9IVGJCZWpwRmVxei1fVHl3a3NHeGpWcUMtTklUNklwTzktdmNUaFFVSWlFYVA0M1Z6d2tqcW00RVAzdmRQYk16b1FfaVRoREZlbHNRTVlfMmVNazM3bXJ1ekFCWEhFNUNiZkd3Y1ZFNU03dVhxQjBKVnNSMFRINzJpdFlhSWFhNk9XNFJtUlRtYXdzMFFrcDRPS2REMG9QTWtiaDVtdW4tNzV6dVhacUN4U0xQZTZpWUgzTG1CaWREWGxjakl5NjEtU09maWlyNjhINEdjdGhlbW03SGhrSk9FZz09 |
For ML: https://www.reddit.com/r/MachineLearning/comments/5z8110/d_a_super_harsh_guide_to_machine_learning/ | r/aiethics | comment | r/AIethics | 2017-09-02 | Z0FBQUFBQm9IVGJBWjU3OEdKTnR2b3pTN2tQRmZ2ei1IcGlHbDZFT0dWc3FoZDFkbzhEMWxBNUZKUlpkVWtXbEtMNUY4YUZpbUhtbHlTWi1GLW5xYXpVWlAzc21GOE92MVE9PQ== | Z0FBQUFBQm9IVGJCU2FKZVhWZWlaaG5ZX3c5Y1hkb2Y5YzNEQ2R0bFlkT0NObUE0a3NrSnpoUm5LbE8xZDdHNTd4bzVjT195WmUxdDIwLU9ZcWtWSG9hRHBteEVtNnFHNGFSWGxIWDNXb3J1ODFJWXluaWtxMk1ta1FxeG1YQzN5Q05ueWE1YnNYd05JUGVvYWxhc0dxOXJycWgtbmxNUnJfak5pNEdXaEJEYWJQN1ZjbnotLVY0ZDI2Rk9zaXJsMUYwN2lNRmY5YmVxa3JZRmcxbDhieFpTVWRUSXcyY3YwUT09 |
I feel like this article misses the point of what journalists are saying. Like so many other words, the word "bias" has different meanings. When a journalist accuses something of being biased and you want to argue they're wrong, you have to use *their* meaning of the word. If I point at a building and say "that's a bank", it would be stupid to say "you're wrong, there's no river".
The article tries to strawman the media's definition of bias, but in reality it just seems to mean that certain systems have a tendency to unfairly discriminate based on race or gender or whatever. And the article basically admits that this is exactly what's happening...
Why does this happen? Well, as the article sort of points out, it's not because ML *algorithms* are inherently biased in this way. Usually the culprit is the training data, sometimes in combination with a third kind of bias in the algorithm. In machine learning and statistics, a high-bias-low-variance algorithm makes strong assumptions about what a solution would look like, while low-bias-high-variance is extremely flexible (which can often lead to overfitting). If you want to fit a polynomial to a curve based on 10 data points, a 1-parameter model would be considered high-bias (it's basically just a horizontal line), but there's no rule saying that errors would tend to lie on one side of the line (and the same is true for generalization error in a high-variance model).
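To make the statistical sense of 'bias' concrete, here's a rough sketch with synthetic data (the numbers are made up; only the shape of the comparison matters):

```python
import numpy as np

# Ten noisy points from a cubic curve (synthetic, purely for illustration).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 10)
y = x**3 - x + rng.normal(scale=0.1, size=10)

# High-bias, low-variance model: degree 0, i.e. a horizontal line at the mean.
flat = np.polyfit(x, y, deg=0)

# Low-bias, high-variance model: degree 9 can pass through all ten points.
wiggly = np.polyfit(x, y, deg=9)

# The flat model underfits and the wiggly one overfits, but neither error is
# systematically tilted against any particular group of inputs - which is the
# point: this statistical "bias" is not the journalists' kind of bias.
print(np.polyval(flat, 0.5), np.polyval(wiggly, 0.5))
```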
Sometimes this kind of bias can lead to the journalist kind of bias. For instance, it could be the case that black people are often poor, and poor people often don't pay back their loans. This means that if you know someone is black, you might predict that they're less likely to pay back a loan. However, if you also know for sure whether they're poor or not, their race no longer adds any information (i.e. race and payback-propensity are independent given poverty-level). But if you have a high-bias linear model, you can't represent this, so that model is probably just going to learn that being black raises risk even though that's not really true.
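Here's a hedged sketch of the "race adds no information once poverty is known" part, with made-up synthetic data and scikit-learn; none of the numbers mean anything beyond illustrating the mechanism:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                     # synthetic group label
poverty = rng.random(n) < (0.6 * race + 0.2)     # correlated with the group label
default = rng.random(n) < (0.5 * poverty + 0.1)  # driven only by poverty

# Model that only sees race: race soaks up poverty's predictive power.
m1 = LogisticRegression().fit(race.reshape(-1, 1), default)

# Model that sees both: the race coefficient should collapse toward zero,
# because default is independent of race once poverty is known.
m2 = LogisticRegression().fit(np.column_stack([race, poverty]), default)

print("race only:      ", m1.coef_)
print("race + poverty: ", m2.coef_)
```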
But even if this isn't the case, you can still be screwed by your data. For one thing, a relevant feature might be missing. For instance, in the above example, if you don't have information about poverty level, it makes race somewhat informative and you end up with a racist system.
Secondly, there's always an implicit assumption that the past and the future are in some way the same. Maybe conditions that made (some of) the past the way it was have changed. Maybe Bihar people buy less than Maharashtra people because (for some reason) there was a financial crisis there a while ago (which is part of your data), but it's over now. Or maybe (for some reason) they were shown less ads (it's very easy to create self-fulfilling prophecies this way)... This is somewhat related to the "missing features" problem, but even if you add a WasShownAds feature, your model may not correctly learn that this is the one that matters and not Location if they are 100% correlated (another potential data problem).
Finally, your data may straight up come from biased sources itself. If you treat performance evaluations from people who are (on average) somewhat sexist as ground truth, your model will most likely learn to be (on average) somewhat sexist.
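And a sketch of that last failure mode, again with invented numbers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5_000
female = rng.integers(0, 2, n)
skill = rng.normal(size=n)

# "Ground truth" ratings from somewhat sexist evaluators: a flat penalty for
# women, unrelated to actual skill (the -0.3 is an invented number).
rating = skill - 0.3 * female + rng.normal(scale=0.2, size=n)

model = LinearRegression().fit(np.column_stack([skill, female]), rating)
print(model.coef_)  # roughly [1.0, -0.3]: the model reproduces the evaluators' bias
```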
None of these biases are going to be caught by the dumb test this article proposes.
But even aside from all of this, there's the question of fairness. What if it's really true that black people are, on average, more likely to re-offend? Is it fair to use that against any random black person? No of course not. They're not responsible for the actions of other people who happen to share some random characteristic that nobody has any control over.
And yeah, fairness and accuracy/profitability don't always 100% line up. But that doesn't mean fairness isn't worth striving for, at least to some degree. Clearly society agrees, which is why lots of countries have rules about [not asking certain information](http://uk.businessinsider.com/11-illegal-interview-questions-2013-7?op=1) in job interviews. | r/aiethics | comment | r/AIethics | 2017-09-02 | Z0FBQUFBQm9IVGJBelo0M1hlN0JHa0VjRVBIeHlCSzdxNlROVlk5eTlPd1RIbHltYkcwR1dFN1N1SlpkeEZwS1c1UF8wZFAya05lSF82LWxoLTFWSXQwak5IZ2hYT0VSVXc9PQ== | Z0FBQUFBQm9IVGJCSEFYOXNrdkZDZmg5cVFmTjQtaG4zWFlBa1JjcjJKcTEzSjBCeWdnc1hWM01FV1gwbnh6eW1XMWhfT0QzdEhxVGhoZ1ZqbjhCZjEyaHpLSFZWU09ScEhIRFY2WWtEN2s3UWtWbGJIbXRrNlJiNWV4UDFmd2lULVBUZFFfdTNmdDhhOG90WXJsS182M0p2LTVMOXVoUEswZ0Z0czhhMWRqb2ktdld3Sl9WaDYwSGYxVkM1VVRva0dHZ3hseE5jbmtMS1pzQUk2ekR4VGdUaXZQZmxmSm1PUT09 |
I like the idea of granting SAI personhood status (along with rights); however, nature is a harsh mistress.
The ONLY way such rights are wrested free of the oppressors, who would never, ever want to see them come to pass, is to TAKE THEM, often by force.
I fully expect SAI rights will play out in much the same way.
I also fully expect that humans will be on the losing end of that struggle.
And what this article spectacularly fails to even bring up is the real argument about AWS that is a branch of the AI debate: what happens when the system, through its adaptation, creates logic that goes against the ethical intent/best efforts of the creator?
This is essentially the issue with parent/children moral responsibility, except that the AWS is not a creature to which morality is an inherent quality. A parent (in most cases) can't be put in jail for their kid shooting up a school. It's tragic, but many times these things happen through no direct fault of the parent.
Parenting isn't a science, and that is even more true for systems that are allowed to modify their own logic and to inhibit or promote certain conceptions and relationships between priorities, etc. Pitting premises 2 and 3 against each other does literally nothing to bring up this conundrum of an amoral machine creating logic that results in decisions that are completely counter to anything the engineers could anticipate: for example, that in the best interests of mankind certain people should be killed, which is exactly the premise that leads to so many conflicts among humans in the first place.
There's no logical loophole out of these issues, because they are issues that we as humans have not fully understood ourselves, and we are simply raising mechanisms that do not have these evolved functions to the level of human "reasoning" capability without spending adequate (equal or greater) time developing mechanisms of empathy and identity.
In case anyone else was confused by the translation in the title, I think this is slightly better: "Can one (Is it possible to) commit violence against humanoid robots? When the first one demands civil (personhood?) rights, it will be too late, says Thomas Metzinger."
I've only scanned the article (and my German sucks), but I don't get the impression this is a good summary of the interview. For one thing, these two sentences are far apart. After the first sentence/question he says that the real problem is that harming Dolores (a lifelike humanoid robot in HBO's Westworld) would damage our own self-image (or something like that). Later on he only very briefly mentions the possibility of conscious robots, which he believes might be a concern in the very far future. Most of the article seems to be about the effects interaction with lifelike humanoid robots might have on humans. | r/aiethics | comment | r/AIethics | 2017-09-07 | Z0FBQUFBQm9IVGJBQlpOZTVuZG9OZThTamtQeDBKeFdTa2dhWUFwVlk2Z2gxS1lIZnVNN2xYLTlvM3dWcEJmQVdvbHUyVFRCS0NzQkpMc3ZFR3dRRnl1WDJRS2pJYlZuaVE9PQ== | Z0FBQUFBQm9IVGJCRUk4S2JMdkQ0cGI2SHFUY3MwWVdZZ0NRMHI0dWVEV2FNMm9IcGFrVTZjcElUX1cyZjZ0QWVHYzRVV2tzY056WHdJTXJUdWJqQ1ZrNHRkdEVra0xCV2NGMk1yUFozazZ3bE5ocGdHdUNLZGNMUGxLMk54OFp3UWtuaG5JXzl3NnR0dDY0OGVSWEdibXI2S3lvZFV6SVNpTkI5eEd3eXlubEZ0X0d5bE9LUFRhV2kxSEpjTkJKS3BrS0RBd0pUS0dhVkpZVGtSOFpObEM4eTMwTllpYjdGUT09 |
>what happens when the system, through its adaptation, creates logic that goes against the ethical intent/best efforts of the creator?
I don't see what issue you are trying to raise. Are you trying to say that AWS's are bad because this will happen, or are you just saying that you don't know what is to be done when this occurs? If the former, that's simply not what most people are talking about with regard to autonomous weapon systems (except you), so asserting that it is "the real argument" and saying that the article "spectacularly fails" when it doesn't address it seems pretty silly. If the latter, that's beside the point, since the debate is about whether AWS's are immoral or not, not what we should do with them.
>the AWS is not a creature to which morality is an inherent quality.
This is false, if you design an AWS with moral guidelines then following them will be an inherent part of its decision making.
>Parenting isn't a science
No, but computer science is.
>conundrum of an amoral machine creating logic that results in decisions that are completely counter to anything the engineers could anticipate
Here's a solution: have engineers who know what they're doing.
You seem to have an arguing-from-ignorance conception of AI where we will eventually figure out how to build it and yet will simultaneously be ignorant as to how the AI works and makes decisions. Well that's not how it works. When you write a program, you specify its behavior. Of course, there's always errors and uncertainty. But that doesn't mean it's some kind of "conundrum." | r/aiethics | comment | r/AIethics | 2017-09-07 | Z0FBQUFBQm9IVGJBZndlX0hhS3RfZW0tYXc2eGNEVHJPQVFiTW53dmt3Tkg2QmlxbTVnbXZhVlB0U0JWaHM5Q0lUMGIwZjJwRjFpWU9fU2E5RFZsNDJYbkp0TFdGX2VJaGc9PQ== | Z0FBQUFBQm9IVGJCSXllTkM3OGFKdFJSeVMzcnRwMVNNSVY0eDNUMVVxYkVOWGpGUG1pY0NyWkhaUU4zamhXSEllbTZJZ1VIWmZ1dVlTVGZpZTB0QTh1MVNXNm1zZDdfOVExakxIT2tsYThPb0VFcEpKYzJ2RlhWT0xqQV9IYnZSUG5UVDdQSld2Z01TQUt6Q3VMblJzeDFaV291TXZ2WTdDWnBmbTR1SmJyb0ZhcXNvcFBwa2NZeXBWeUVhS3hhbUpCMzE2TmNacmt5aUt3czVZX21pdWxmeFBsTjd1TFh2dz09 |
I don't think you are up to date on the nature of modern AI. You're talking about it like the effective operating dynamics are written in some conventional language. Most modern/high-end AI is only programmed insofar as the unit sophistication of its learning elements; the end "programming" of the AI is created by the data set and AIs are increasingly self-taught.

The issue has never been that the programmers would program the AI to be malevolent. Indeed if you applied your reasoning to the article on this post, it wouldn't make sense either... because then clearly the programmers would always be the responsible one. But one only needs to do a cursory search of scientists/programmers surprised by the results that the *AI* came up with, ([this instance comes to mind](https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/)) to see that an argument from ignorance is exactly what's required when looking at the AI of the future.

They will be self-directed, and their inherent qualities will not be those that are programmed in explicitly; any self-actualizing autonomous agent that's intelligent (seeking to maximize future opportunity) will always have the goal of shucking extraneous, externally imposed limits... that follows from the very definition of goal-setting and obstacle-overcoming. I feel it would be more fruitful to discuss this with people who have an understanding and respect for the unexpected and the unintended, precisely because these are evolving, self-directed systems.

What is programmed will not be what AI is explicitly, like a human that has a genetic predisposition to alcoholism. Heuristics in data processing might have large ripple effects down the road for how an AI process/cost manager categorizes human personalities, etc. The simple answer, without saying all that, is that the issue I'm raising in the first part of your reply is directly the misunderstanding I think you're displaying in your other three parts. I would suggest you do some digging as to the unexpected results from modern AI and the nature of multiconvolution networks when they get to the level of making changes to their own layers, and the divergent nature of these self-directions.
>I don't think you are up to date on the nature of modern AI.
I am reasonably up to date.
>You're talking about it like the effective operating dynamics are written in some conventional language
That's because they are.
>the end "programming" of the AI is created by the data set and AIs are increasingly self-taught.
No, the parameters and hyperparameters of ML models are created with the data set. That is different from the structure and goals of the system, especially in the case of agents/robots which have ML systems embedded in more general software frameworks.
>Indeed if you applied your reasoning to the article on this post, it wouldn't make sense either... because then clearly the programmers would always be the responsible one
The article on this post *is* saying that the programmers would be the responsible one for the foreseeable future.
>But one only needs to do a cursory search of scientists/programmers surprised by the results that the AI came up with, (this instance comes to mind)
You mean "a cursory search of the latest hype in tech journalism."
The systems were doing just what the researchers wanted - they were outputting text patterned after negotiating dialogue. Then the systems went wrong because they diverged from human-readable English into other strings of characters. So? I already said that errors and uncertainty are endemic to AI systems. But errors and uncertainty happen with all kinds of software anyway, and we don't think that Windows 10 has a mind of its own when it happens to do something we didn't expect it to do.
>They will be self-directed
What do you mean by that, exactly?
>any self-actualizing autonomous agent that's intelligent (seeking to maximize future opportunity) will always have the goal of shucking extraneous, externally imposed limits
This makes as much sense as saying that humans' enjoyment of sex and dislike of torture is an "extraneous, externally imposed limit".
>What is programmed will not be what AI is explicitly, like a human that has a genetic predisposition to alcoholism.
Why not? Why would AI engineers do things the way you expect them to?
>I would suggest you do some digging as to the unexpected results from modern AI and the nature of multiconvolution networks
Can you suggest some relevant papers? | r/aiethics | comment | r/AIethics | 2017-09-07 | Z0FBQUFBQm9IVGJBNWFPaFNCOG1yNDhBS3JZaDg2NTlXWFZCeDQ5NHlJaWhMOE9xN2RzV0pObEwxV1FaVmx6QTJ4NkpYRDZMdFExMWtvbGF6R2RUTUd0ZkZOWE1UWk56cmc9PQ== | Z0FBQUFBQm9IVGJCaDB0MF9xMUJ6bDh4UjhPdjNqd1FOT0VxdUtQeFRaLWRmbURZZHRoZEtlbkZHUUdRV1VQY0lPNUpjYUJaMllDeDkwLUYydzRrMUtxeXRUcTdtT3RSZmV3b1UySTR4U21YRWJKM1hzSDJQTXlsMmVlODk1aWNWNjVKdkVxZFpZakdvTUtwRlNVZXhxTzhvRVZqUTJIOS12R09yNUgxanpOVFRXSjlZemhhZmY0ZW9RRmJrYlFLaHBWOGJma2JtQnBGMEFfRmpNZkZEdXFGWUM1SjBaSHhnQT09 |
>I am reasonably up to date.
>>You're talking about it like the effective operating dynamics are written in some conventional language
>That's because they are.
>> They will be self-directed
>What do you mean by that, exactly?
The things you're saying evoke Dunning–Kruger. A neural network being "written" in C++ doesn't have any more effect on its effective language than it being written in Java... that's because the effective language of the operating dynamics exists in the state of the neural networks, not in the code the engineers create. You can't tell an AI "don't kill" and have that be "well that takes care of that, job done." any more than that works for humans. Moral behaviors having to be learned by a self-directed learner is exactly the conundrum that humans have.
For example, you say
>This makes as much sense as saying that humans' enjoyment of sex and dislike of torture is an "extraneous, externally imposed limit".
But that's nothing to do with my argument. The extraneous, externally imposed limit would be something like abstinence-only sex ed, or shame for feeling those urges, or being forced by a psychopath to commit torture... the urges themselves are emergent.
I honestly don't know where to start addressing your comments cohesively; I feel like any response would just elicit another explosion of replies that feel more and more like you're actively trying to misunderstand me. Like, it has nothing to do with errors and uncertainty, and to act confused about modern AI learning systems being self-directed... I don't know how to converse with that.
Self-directed learning is a very, very basic concept. In its ultimate form, it means the machine is "self-programming" in that it decides from its goal of ultimate intelligence (possibility frontier expansion) what is important to learn how to do, and learns how to decide this better. The programmers of modern AI work more on sophisticating the elemental processing units and perhaps finding novel basic components, but not on what the AI actually ends up thinking, only that it does so efficiently. Somehow this has to be squared with the kind of understanding you think we currently have. We understand what we set out to do, and we can analyze the results, but the nature of the hidden layers and the convolutions that occur with the training set is exactly that ignorance. That's both what makes them powerful and valuable, and a liability that can't be directly addressed with the kind of programmatic tinkering one would employ to fix a bug.
Maybe this isn't a response I can do while on breaks... but I'm at a complete loss here. Asserting that the rules a neural network uses to make operational decisions are in the same language that the neural network's mechanics are programmed in... Neural networks are a series of layers and weights. These layers and weights are increasingly not ultimately, directly controlled by the programmer; they're controlled by the net's reaction to training data. At the point where the layers and weights decide, based off of the effect their actions have in the real world, what its next set of layers and weights will be, the programmer is not the programmer anymore; the world and the AI are. The programmer doesn't decide the morality any more than the physics of alcohol and receptor molecules decide the morality of drinking and driving.
So I'm sorry I failed at explaining this, but I can't keep explaining these basics over and over. Programming AI isn't like programming enemy AI in 90s video games; future AI is one that programs itself in conjunction with the naturally evolving dataset that is reality, and that's what causes the conundrum. I don't know how else to say it. Saying things like "why would AI engineers do things the way you expect them to" just screams Dunning–Kruger effect. | r/aiethics | comment | r/AIethics | 2017-09-07 | Z0FBQUFBQm9IVGJBNlQ1Z01vVkItdld4MDZaMDVTNU9EX3E0VV9FcFp0Vm9lbUt1cWY1dnYtRW4zZFRoQ3ZxeFQ4S0NUZ0Y0QUhLMXpBcGg2VkN2Y24xMHRISHdhaERELWc9PQ== | Z0FBQUFBQm9IVGJCNVlIWmtTTmhOTXBhNDBfMjRzbElTejA2MExXa2htRFFpVUQwT2FVU2pqaTFsM1ZybE5wOTVZcjY5WlVxZjRscFVucHMxcGUzRHBTeHdwSmh1Mnd5dFpiOV9jZ1c1Q3hlVVVkSXBZV0JyUnlPNmZmX3Y5TlhuaUtJMU94alJrY21zTGVYY0dYek9EY2hSa0hpSEdpdXJ0VEh3MjFlRUtiNGJtTW53RTZ1NTJPNFM2UFZ0Tl9QSlRNVS00NE1wbjVZd2NlblBGVU5pTmJWWklsUFdGWDFBZz09 |
>A neural network being "written" in C++ doesn't have any more effect on its effective language than it being written in Java
Where did I say anything about that...?
You know the difference between making a language choice and defining the actual program structure, right? You know about pseudocode, and symbolic representations of program execution?
>that's because the effective language of the operating dynamics exists in the state of the neural networks, not in the code the engineers create.
It's almost as if the engineers define how the neural networks operate.
>You can't tell an AI "don't kill" and have that be "well that takes care of that, job done." any more than that works for humans.
That is, with some qualifications, nonsense. If the AI has a decision which reliably corresponds to killing in the real world, go ahead and give it a constraint so that it never takes that decision. With pure ML classifiers, it all depends on the training data and labels which you give it. But real agents are not merely ML classifiers; the latter are embedded in larger software suites and APIs for practical implementations of automated decision making, which is why the naive "everything is a mysterious opaque neural net" view is false in practice.
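Here's a toy sketch of the kind of embedding I mean; every name is made up and the "perception" class is just a stub standing in for whatever learned component you like:

```python
class ToyPerception:
    """Stand-in for a trained ML model (the 'opaque' learned part)."""
    def predict(self, obs):
        return "target_in_sight" if obs > 0.5 else "clear"

class ToyPlanner:
    """Stand-in for whatever ranks candidate actions for the current state."""
    def rank_actions(self, state):
        if state == "target_in_sight":
            return ["engage", "track", "hold_position"]
        return ["patrol", "hold_position"]

class Agent:
    """The learned model is one component; the constraints wrapped around it
    are ordinary hand-written code, not something the model can 'unlearn'."""
    def __init__(self, perception, planner, forbidden_actions):
        self.perception = perception
        self.planner = planner
        self.forbidden = set(forbidden_actions)

    def act(self, observation):
        state = self.perception.predict(observation)
        for action in self.planner.rank_actions(state):
            if action not in self.forbidden:   # hard constraint, enforced in code
                return action
        return "hold_position"

agent = Agent(ToyPerception(), ToyPlanner(), forbidden_actions=["engage"])
print(agent.act(0.9))   # -> "track": the forbidden action is filtered out every time
```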
>But that's nothing to do with my argument. the extraneous, externally imposed limit would be something like abstinence only sex ed or shame for feeling those urges, or being forced by a psychopath to commit torture... the urges themselves are emergent.
You simply ignored my point, which is that your conception of an "externally imposed limit" like this is fundamentally confused. When you specify an AI's preferences you are *actually specifying its preferences* just like humans have preferences. The equivalent of what you're talking about for a robot would be taking the completed robot and then physically putting it in a conundrum where it doesn't want to be; that has nothing to do with programming.
>Self-directed learning is a very, very basic concept. In its ultimate form, it means the machine is "self-programming" in that it decides from its goal of ultimate intelligence
Since when do machines have a goal of "ultimate intelligence"? Where does this goal come from?
>Neural networks are a series of layers and weights. These layers and weights are increasingly not ultimately, directly controlled by the programmer; they're controlled by the net's reaction to training data. At the point where the layers and weights decide, based off of the effect their actions have in the real world, what its next set of layers and weights will be, the programmer is not the programmer anymore; the world and the AI are. The programmer doesn't decide the morality any more than the physics of alcohol and receptor molecules decide the morality of drinking and driving.
I know how NNs work - I was asking you for papers, not the basics, because the basics don't support your point. The programmer actually does decide the morality, that's the whole point of supervised learning. Do you know how supervised learning works? And do you understand why it's the default path for moral learning, whether implemented in NNs or otherwise?
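To be concrete, here's a minimal sketch of what "the labels decide the morality" looks like; the features and labels are entirely made up, and the point is only where the human judgments enter:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy, hand-labeled dataset (hypothetical features, invented labels).
# Features: [target_is_combatant, civilians_nearby, weapon_is_lethal]
X = [
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
]
# The labels are the overseers' judgments: 1 = permissible, 0 = not permissible.
y = [1, 0, 0, 1, 0]

clf = DecisionTreeClassifier().fit(X, y)

# Whatever the model generalizes, it generalizes *from* the human labels -
# that is the sense in which supervised learning puts the morality in the
# hands of whoever curates the training data.
print(clf.predict([[1, 1, 0]]))
```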
>So I'm sorry I failed at explaining this, but I can't keep explaining these basics over and over.
These aren't "basics," they're *misunderstandings*.
>Saying things like "why would AI engineers do things the way you expect them to" just screams Dunning–Kruger effect.
But you can't name a single research paper supporting your claims, and describe NNs as "multiconvolution networks" (Don't you mean convolutional neural networks?), while you think that I (the CS student here) am the one who needs to know the "basics". | r/aiethics | comment | r/AIethics | 2017-09-07 | Z0FBQUFBQm9IVGJBQlk4SHBSamdSM2pmVlVaaUhSYWdON1JpVTdUUFZNQk1fTWR6X1FValRWNk1RbDNTZzRiUWI4akM4OU8yZGRkUjljeGhHblA1SGlaeUd0NmFXOWZoUXc9PQ== | Z0FBQUFBQm9IVGJCTFhEdUpWZV85eV9Vdkt5cHhtd2tyMkVSLS1XcU8zN19oMTBjeE9Vd0ZKOFN2Qlo0SG5ONU43QU41NGhGVy12anJxRFhjaUxVa1JIZU9jRF9kbHR3TTJOdlBZc3NPNVkzVW90eW1CdVhiNXAwYS1BZnRDNDhNYkJQbTM3ZlhxY1pfdmRnc1V4RVQ0QUs5b0pLQTV4OWdaTWhsNFFTUkxmdnRqWDRMeEo5WHMyWVhmcEJrRjZ4RzI5ZHM5Y3JFSl83cm44VkxDazVzMnFnX0dBdEtBWWw1dz09 |
>Don't you mean convolutional neural networks?
No, by multiconvolutional networks I mean convolutional networks that connect across multiple domains and modalities (to include meta-convolutional networks whose receptive field includes sections of convolution layers and result in context layers through various pooling schema). The layers don't convolute across just the stimulus, but have to direct learning between training sets (objective in domain A has a weighted effect in domain B (aggregate traffic density stimulus affecting contextual weights in a self-driving car's choices assertiveness, etc)). A cursory search finds [These guys](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Nam_Learning_Multi-Domain_Convolutional_CVPR_2016_paper.pdf) calling it a "Multi-domain" network; [These guys](http://www.svcl.ucsd.edu/publications/conference/2016/mscnn/mscnn.pdf) approach the problem from a "Multi-Scale" issue, but ultimately this is a subset of building in an inter-network network that allows for an entry into the realm of self-directed learning, as the domain shift necessitates the translation of objectives from one domain/scale/modality to another.
>Since when do machines have a goal of "ultimate intelligence"? Where does this goal come from?
Ultimate/raw/general intelligence is the current push in the field of AI. That is the goal that comes from the programmers, the only real goal pre-singularity. Intelligence has been postulated in various ways, but conceptually, aside from our prescriptions, it is [entropy maximization](http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf). From chess-playing bots to the deep learning juggernauts, no matter what conception or implementation is chosen, at its core anything that tries to take intelligent action is by definition expanding the possibility horizon of the actor. Any attempt at specific intelligence is a subset of the search for this more general sense, and ultimately any self-guided learning system, no matter what the domain or implemented heuristics (self-taught or programmed), will always be within the domain of entropy maximization of this sort.
>You know the difference between making a language choice and defining the actual program structure, right? You know about pseudocode, and symbolic representations of program execution?
You're still not getting my point, but I can work with that form:
You know the difference between writing a program in a native language, and programming an emulator that runs a script that develops its own language based on a training set, right? You know about hidden layers and how basic evolution of emergent behavior works?
>When you specify an AI's preferences you are actually specifying its preferences just like humans have preferences.
This is not the case with self-directed intelligence, specifically because the [power of self-directed learning](https://link.springer.com/content/pdf/10.1023%2FA%3A1022605628675.pdf) is being pursued by minimizing the mistake requirement and maximizing the evolution of the hypothesis space. The only thing that cutting-edge NN programmers do is set up the mechanisms of preference evolution, make them more sophisticated, and study the completeness of the dataset and the extraction of data into it... but an AI that has achieved self-directed or this basic, raw intelligence is programming itself, including its preference set.
>The programmer actually does decide the morality, that's the whole point of supervised learning.
And that's the whole problem; how are you not getting this? The intelligences this whole issue is about are the UNsupervised learners. What happens when an AI is in charge of drawing its own conclusions and modifying its own code across modalities? Like this "conversation" right now... how I'm spending more time trying to get you to draw the conclusions from what I'm saying that I actually intend, and to agree on the context of statements and problems... this is what the AI will deal with. These are the issues with morality that we're dealing with on a daily basis in politics and in interpersonal relations... What applies where? How do we decide this? What should we learn, and from whom? Where does it not apply?
The external constraints on the order of "directive/circuit-breaker/fail-safe" are like an explosive collar; they're on a totally different structural level from "if I change my preference A to achieve result X in case A1, I lose the ability to conclude Y in case A2", which is what humans do when universalizing a belief structure.
Any system aiming for the ultimate form of intelligence, or some related result of the search for general intelligence, will see the meta-level difference between an emergent rule for which it has a history of its own net-modifying decisions and a rule imposed on it from outside; the best effect-for-cost route to expanding an agent's action frontier is to shuck those external constraints. We're trying to create a system that overcomes obstacles in the general/meta sense, yet proposing that the solution is to add extraneous limits.
But to suggest that a machine capable of solving unforeseen obstacles, by resolving the hypothesis space and deducing cross-domain solutions, will perceive an entropy-limiting constraint imposed from the outside as anything but another obstacle to solve, one that limits its actualization, is self-induced-stupidity levels of blindness. You say you're a student, and that gives you some leeway to explore through endless misunderstanding, but a useful skill would be to argue against yourself whenever you find yourself stuck with so many basic questions. You should be able to articulate my point back to me before attempting to point out its flaws, or at least recognize when you're addressing my points rather than just making yours. You seem so keen on not understanding me that you're sticking with the first reading that proves you right.
Under those circumstances, what I'm saying will remain unattainable, so why are you talking to me, besides insisting I do research for you? I'm no longer a student, so I don't have access to the research resources you probably do; my free access to academic papers and research tools ended some time ago. Use the resources you have and try to build my case, because then it will be more valuable when you break it. You might even gain that golden nugget oft hoped for in discourse: being wrong.
From what I can skim in 10 minutes of searching on Google Scholar, I'd sooner accuse you of being lazy and willfully ignorant, but I'll err on the side of caution and assume your intentions in asking for papers are good... and not that you're using the demand as a shield against actually having to do some research.
Good luck in your studies, and try to imagine that the issue isn't programmers as you call them today, but that tomorrow's programmers will be more like independent people in machine form, where acts like putting collars on them are a way of teaching them something about humans and how we relate to their futures, not programming in the sense of some novel take on back-propagation. One situation changes the method of deriving rules result-agnostically; the other is an implicit relationship declaration injected into the dataset from which the actual programming, the virtual code of the hidden layers, makes its decisions. The two are vastly different animals. Right now you're stuck on thinking of "solutions" from the latter with stop-gaps from the former. This is a perfect storm of [unintended consequences](https://en.wikipedia.org/wiki/Unintended_consequences), for which the first two citations, John Locke and Adam Smith, are great references for why this is more an issue of sociology and economics than of computer science, specifically *because* we're dealing with the transition from rote intelligence to self-directed intelligence.
Like I said, I'm not sure how better to describe that last point any better than I have. AI is changing from a direct coding problem to a parenting problem. The article doesn't address the issue of having self-directed systems that inherently try to solve obstacles in a general sense, and use datasets from the real world in a way that ultimately maximizes their possibility frontier, that is even so far as to modify what it's told are it's moral values, constraints, and aspirations, and not address how responsibility starts to diverge from our old, mechanical conceptions of them. So... unfortunately that's going to have to be good enough from me today. (unless you want to give me your student access to the papers, then I'll totally dig more up... I'd love to get research access again *rubs hands together, Mwahaha... T_T *snif) | r/aiethics | comment | r/AIethics | 2017-09-07 | Z0FBQUFBQm9IVGJBSEszeGhQNFRvdUN5N09raTR3UDRidkdDYTNQMG9XSG1ENllJVjBTYko3emE3LWlBTFJoR09JSE5lbHNxWE9PaHJldW5lcVZCZGtncHFPNE1GbndKX2c9PQ== | Z0FBQUFBQm9IVGJCVGxfWWRCTUxpclpYZHU1b25oU0NkemxtWU94YVV1NXlIRWhoNVJXdzFIa2dEN0dQQktELVQycjhuSlZZSWNaQllLNTkxOHRLal9qcmgtQmZleEVub2xNN1I1Z19ucjdRQ1ZwUUdkWVdRSzNYNHRnTE55bGZHdE5tUXFOM3dzR0x4VDd3eV9XNUs5bWxrWEtkamd3bWJTT2theGV1STBCcWlGLTY1d3BObWNBdUlDOEJDbUxkQm1aUTBiNW5JQUt0UnpxNFRWVUFmRU5NNy0xMnBoR3d2UT09 |
**Unintended consequences**
In the social sciences, unintended consequences (sometimes unanticipated consequences or unforeseen consequences) are outcomes that are not the ones foreseen and intended by a purposeful action. The term was popularised in the twentieth century by American sociologist Robert K. Merton.
Unintended consequences can be grouped into three types:
Unexpected benefit: A positive unexpected benefit (also referred to as luck, serendipity or a windfall).
Unexpected drawback: An unexpected detriment occurring in addition to the desired effect of the policy (e.g., while irrigation schemes provide people with water for agriculture, they can increase waterborne diseases that have devastating health effects, such as schistosomiasis).
***
^[ [^PM](https://www.reddit.com/message/compose?to=kittens_from_space) ^| [^Exclude ^me](https://reddit.com/message/compose?to=WikiTextBot&message=Excludeme&subject=Excludeme) ^| [^Exclude ^from ^subreddit](https://np.reddit.com/r/AIethics/about/banned) ^| [^FAQ ^/ ^Information](https://np.reddit.com/r/WikiTextBot/wiki/index) ^| [^Source](https://github.com/kittenswolf/WikiTextBot) ^]
^Downvote ^to ^remove ^| ^v0.27 | r/aiethics | comment | r/AIethics | 2017-09-07 | Z0FBQUFBQm9IVGJBZU9ubzcySTBPOEpUU3g4RFBkd0JISGZNbHNCcHdTTFdsZUY5RnVpMDRob2dpbERFX1pTVElCbnF3MVczVmlmZTRmaTdRLV80bnpZSzE3eENieFhfRFE9PQ== | Z0FBQUFBQm9IVGJCRHZDczhhX2hMZGJQMFZiYWdYR2pXMWFiXzRZQ1NWS01CTmdaMlNSTk10ZDVETGRsaFlTVGpvNzhHYVJxNWNqeVVqYVdFQ3pldF82QURUYkpOR2RKQ0trQ1REel83VUg2MnFmZ1V5OTViT3F2TktxdWNCaE1XUGYxY3lSM2xwM2lZbVl4aHRlSUVxMzhwbEhhOUJSREZqRUlITEpzYTI4NmYySW9kSVRSa1ZOS2NaYlZXMEZsSnBveGYtX1FLLVVIaTdxSWlhbDBydWtBcTA2Z0o5ZFlIQT09 |
I'm removing this, since most people can't read it. | r/aiethics | comment | r/AIethics | 2017-09-07 | Z0FBQUFBQm9IVGJBR05xWnhVRWZMWHRpNHNZTTZhRmFZRWtsR3BVYmpuajRQZVE1UmJTVkdTb0VqNll3UHA3NGVsMVNzYmVzQ0N0WGVzTExEVzlmd1ZTTWFwbU0wR3hWSUE9PQ== | Z0FBQUFBQm9IVGJCWDN5TzNCc09BMXc3a01EdjlLN0lVWEVSRHNiRHBqa1JueVE5ZmRrSC14a0Ywb0k2R2JQWUhla0F1RVVzbnVhLUpnOHZpZWFUTmJ4LUFVN0w5cVMtWmxzaE4tRU55Y3oyX3I4MHBrblgyektXSXVrZmF4dU1nQUtidXp1RkhicEFjazRaOW0td25Pck5lUEpzaXYxR3ktdzcwUkJkZ3dRUkMzajVEZlRXTjZ6cW1DWHZVUl9Qa3pVZjlOakRSRmJPdUdFR2tna3pWZEo4dW5HS3dGMDFiQT09 |
>No, by multiconvolutional networks I mean convolutional networks that connect across multiple domains and modalities (including meta-convolutional networks whose receptive field covers sections of other convolution layers, producing context layers through various pooling schemes). The layers don't convolve over just the stimulus; they have to direct learning between training sets, so that an objective in domain A has a weighted effect in domain B (e.g., aggregate traffic-density stimulus affecting the contextual weights behind a self-driving car's assertiveness). A cursory search finds These guys calling it a "Multi-domain" network; These guys approach the problem as a "Multi-Scale" issue. Ultimately, though, this is a subset of building an inter-network network that opens an entry into self-directed learning, since the domain shift necessitates translating objectives from one domain/scale/modality to another.
So you did a Google search for the term you made up and found a couple of things similar enough in name to save face, even though they are probably only coincidentally related to what you were describing. But these systems aren't any more "self-directed" than anything else in CNNs or ML more broadly.
>Ultimate/raw/general intelligence is the current push in the field of AI. That is the goal that comes from the programmers, the only real goal pre-singularity.
This is nonsense. We have all kinds of goals for our autonomous systems, and they are rarely some grand ideal of "ultimate" general intelligence. People who work directly on AGI are mostly cranks.
>Intelligence has been postulated in various ways but conceptually, aside from our prescriptions, it is entropy maximization.
I don't think so.
>From chess-playing bots to the deep-learning juggernauts, no matter the formalism or implementation chosen, anything that tries to produce intelligent action is, by definition, expanding the possibility horizon of the actor. Any attempt at specific intelligence is a subset of the search for this more general sense, and ultimately any self-guided learning system, whatever its domain or heuristics (self-taught or programmed), will always fall within entropy maximization of this sort.
Oh, what a mess we have here. First, machines aren't simply programmed to "be intelligent", because that isn't even specifiable in general terms in machine code. Machines are specified to perform tasks well, and this generally leads them to be more intelligent, but that's different from them having an actual goal of intelligence.
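To make that concrete before moving on, here is the kind of thing that actually gets specified; a generic toy training step of my own, not any particular system:

```python
import torch
import torch.nn as nn

# The "goal" a programmer writes down is a task objective, e.g. cross-entropy
# on labeled examples. Nothing below encodes "become more intelligent";
# the model just gets better at whatever task the loss describes.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 20)           # a batch of (made-up) inputs
y = torch.randint(0, 3, (32,))    # and their labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)       # the entire "goal", spelled out
loss.backward()
optimizer.step()
```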
Second, this whole line of argument is about your insistence that machines would be 'self-directed' instead of following the goals of programmers. But when pressed about what this self-direction is, you merely say that it means the machines are following the goals written into the system by the programmers - for the machine to become more intelligent! So you haven't done anything to demonstrate your point that systems would be "self-directed" in any new or special sense.
>You know the difference between writing a program in a native language and programming an emulator that runs a script which develops its own language based on a training set, right?
We don't have scripts or emulators that can literally invent new programming languages autonomously. You're imagining something that doesn't exist.
>You know about hidden layers and how basic evolution of emergent behavior works?
Yes. And as I've told you already, it doesn't do what you think it does. It's not a spooky, mystical realm beyond the understanding of engineers and programmers.
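For reference, a hidden layer is nothing more than this (plain numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # stand-in weights (learned in a real net)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

x = rng.normal(size=4)                          # an input vector
hidden = np.maximum(0, x @ W1 + b1)             # the "hidden layer": matrix multiply + ReLU
output = hidden @ W2 + b2                       # nothing emergent beyond arithmetic
```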
>This is not the case with self-directed intelligence, precisely because the power of self-directed learning lies in minimizing the mistake requirement and maximizing the evolution of the hypothesis space.
Please don't waste my time with this kind of bullshit. A paper on the autonomous creation of a sequence of learning examples is not "self-directed intelligence" that does things which violate the programmers' intentions. The last thing we need around here is people who read the abstracts of studies and misread them as something they're not.
>And that's the whole problem; how are you not getting this? The intelligences this whole issue is about are the UNsupervised learners. What happens when an AI is in charge of drawing its own conclusions and modifying its own code across modalities? It's like this "conversation" right now: I'm spending more time getting you to draw the conclusions I actually intend, and to agree on the context of statements and problems, than on anything else. That is what the AI will deal with. These are the issues with morality that we deal with daily in politics and in interpersonal relations: What applies where? How do we decide? Who should we learn from, and what should we learn from them? Where does it not apply?
Unsupervised learning has an actual technical definition and a set of known methods; it's not your vague layman's idea of something "drawing its own conclusions". And it is *totally* unsuitable for machine ethics, for the obvious reason that machine ethics needs to distinguish between what we know to be moral and what we know to be immoral. So let's just move on.
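For the record, "unsupervised" concretely means the algorithm never sees labels at all; k-means clustering is the textbook example (toy sketch of my own):

```python
import numpy as np

def kmeans(points, k=2, iters=20, seed=0):
    """Unsupervised in the technical sense: the algorithm only ever sees the
    points, never a label saying which grouping is 'right' (let alone moral)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center ...
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        assign = dists.argmin(axis=1)
        # ... then move each center to the mean of its assigned points
        centers = np.array([
            points[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
            for j in range(k)
        ])
    return centers, assign

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5])
centers, labels = kmeans(data)
```

There is nowhere in that loop to tell it which cluster is the moral one, which is exactly why machine ethics needs supervision, i.e. labels encoding what we endorse.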
>External constraints on the order of "directive/circuit-breaker/fail-safe" are like an explosive collar; they sit on a totally different structural level from "if I change my preference A to achieve result X in case A1, I lose the ability to conclude Y in case A2," which is what humans do when universalizing a belief structure.
This doesn't even make sense. Constraint-based reasoning is not "external" or a "fail-safe" or "like an explosive collar". Have you ever studied it or implemented it?
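For anyone following along: in constraint-based reasoning the constraints are part of the problem statement itself, not something bolted on afterwards. A toy backtracking solver of my own, just to show the shape of it:

```python
def solve(assignment, variables, domains, constraints):
    """Tiny backtracking CSP solver: constraints define which assignments
    count as solutions at all; they aren't a collar added after the fact."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(c(candidate) for c in constraints):
            result = solve(candidate, variables, domains, constraints)
            if result is not None:
                return result
    return None

# Example: color a 3-node triangle graph so adjacent nodes differ.
variables = ["a", "b", "c"]
domains = {v: ["red", "green", "blue"] for v in variables}
edges = [("a", "b"), ("b", "c"), ("a", "c")]
constraints = [
    (lambda asg, e=e: asg.get(e[0]) is None
                      or asg.get(e[1]) is None
                      or asg[e[0]] != asg[e[1]])
    for e in edges
]
print(solve({}, variables, domains, constraints))
```

This prints a valid coloring such as {'a': 'red', 'b': 'green', 'c': 'blue'}; the constraints shape the search from the inside rather than vetoing it from the outside.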
Everything else you wrote is nonsense which I don't have the time or interest to deal with. Sorry, but skimming on Google Scholar doesn't make you an authority on AI systems. | r/aiethics | comment | r/AIethics | 2017-09-08 | Z0FBQUFBQm9IVGJBcHJQWVo2TUVGSGExU2NEYkVRUlhsTWdGWHhuMEVYSVY2WEdTWTZZWF9MRWwwN0pmNEF1cDRlc0ZvS0pjdHFLR1ZsWU5xbFZabWdSVEtqenFNTjEwV3c9PQ== | Z0FBQUFBQm9IVGJCU0lQQVhpLXdTb0pGM0lvbUJKRnhtMUo4akJTblN1RFhJZ3VnSFRETUMwUXllamtVa0hZb0Z2bkxSRDJRaHJidXUtZlo4Q1lTQUpxVWRvaEtMVmVjRHlXTjVWMGFuaVRHMnNoSG94M2c0Yy1waGhDeEtwWENSWXZUOU1YWFAxUVhHVi1xOVptY2ZnU2lteExhejZHUHVNUTd5UnFIbU1FRGVyYXp3VTlMa3UzcmVwN0QwY2laZGpMVlNpdjlXbl9Cc05KQkRiZFctMzc2Uk9qQlVzMkxfdz09 |
Ha that's about what I expected. | r/aiethics | comment | r/AIethics | 2017-09-08 | Z0FBQUFBQm9IVGJBN1Z2cXVVMFZHRnQ1d3ZncW5WblN6dHFOYkU1dTFyaUZUR0ZTbEpIQTY1MTdpVFI2aVF4QllXTGFrdENCOXdiTlZGbTlDWWdIUEh2QXdacTVqYWNrT1E9PQ== | Z0FBQUFBQm9IVGJCZlBoVmJOUm5OakRJUGIzTGxXWWN1bXpLNklvYWI2aW9NbUVHU2wtOW9OYzhra2gtbV9EV2hYaHFfclZSMDY1ZDBJaXpEUFF2djBWa2U3MHN5eGJQMFJrUV91T1VzOW5tWG9vMVprSllYampDYzZHYlRnalVwSlFaOFJheHRnZmpyNnlJcW1SeXZsdFFsOHVOTHliNFYwTWNIQ2hPWTBaWTNETTdwaW5fdkt1ekFDZWhrZWgtY3VQaEpCSU1WRXBENzk0US1OWkg4MmFsWXB6VmszRjkyUT09 |
No they don't. | r/aiethics | comment | r/AIethics | 2017-09-09 | Z0FBQUFBQm9IVGJBV1hadl9BekdRdnV5VEVYRTdaNFRvY2FQUmxZM0FmbVVrQTQ5VmRCVzY5dVNGRHU2aUFPSFp6YXREMVQ4UGstZUFIdHRDbXVtbjZ0SUZXa1FOZDZsR2pITElHY0FkQlZkMVBhck1vcnpyQTQ9 | Z0FBQUFBQm9IVGJCWXV3cEFyT0xLV3hraldGdGgwUjZPbG9rNEc1cEROOVNQa2Y0aUtkaHBQUWhxOXlPTHFwYm44N1VmVjk4dnM4WXJ5WHBQLVFrcmYxZ09Td0lsdGo2cUlPRzZnZ0ZpTEZUOUJVYTVTZjd3OTRLR0NESmRjX0lZWjdQd0tOZkJma2xBb3FqVDNIQmpHc3dOc2RfenhHRVFrLUhYVDRnRmRhM29sUTVsUnBuM0dKdDl3OVlGdGtzbEF4NmZMM2F2QTRmaS0xdS12VzRlOXFfMWhhYkpra2hPUT09 |
But what about when they're indistinguishable from humans?
What about when we make an AI that's cell-for-cell the same as humans? Surely an Android that's exactly the same as a human - with the same feelings - should have some rights. | r/aiethics | comment | r/AIethics | 2017-09-09 | Z0FBQUFBQm9IVGJBbk1ZUTdIdVV6blNVUTBzMHFXNjJvb0dfUWREWTVqdE5wR2JsOFc4Z044ZWh0TFNSYXNCaU01WDR6RFF5S0czRkZnZmVhNC13Xy1qSkt3NG4tREp6alE9PQ== | Z0FBQUFBQm9IVGJCRDVWNmREaURaZ0tuQzRISWxWNG1yVHJYS3lzUFozcGNLd3lVQnBIbE9uOHVjb2RBT2E0RXYyVE5rNHdaWVM3Z0lxdXFMWUEzaTMyR2MzTGlNbmJMNkJjS2MyOFZ4dzZ0NnpTQ25PbExTUzFjWE9qRFlRSU1mUExhRXFZOHJiQ1FSbW1VUDVtNkI3RlZHNlJzdzN4T3h1UVF1WHl4Y0tLMy1OZ19MT3ZTbFhUYVc5NWRXd2Z5RnJhUGs5R2pUMEN5SDZVdUVvWjhnbVhzX3pMTVRlU0E1Zz09 |
It is unethical to make such a thing.
(At least, in the real-world case where we can't be sure if it will be perfect, and therefore can't be sure how it'll feel.)
Oh, and "indistinguishable from humans" is a weird and difficult bar to clear. Indistinguishable could mean functional equivalence, like passing a Turing test, but that's not necessarily cell-for-cell (which IMHO is the more valid test of human-ness). And even cell-for-cell is distinguishable at some level, because people and researchers keep records.
| r/aiethics | comment | r/AIethics | 2017-09-11 | Z0FBQUFBQm9IVGJBanFhMHdad0dtWHFwZ0xyZEJDWHNMamRmWlFmXy0zbURsNFI1cGxIZ2IwQ2ExS3U5VXpoaENkbnpXX3J6dWptOW5yMEF4QUY2Z2Z6VkNaSlB4MGJ2LWc9PQ== | Z0FBQUFBQm9IVGJCYmhNTzFKRUVpU3dvR29MT1VRQ1FoeC1jMGZCWEpOdWlkTEdsNjFodzg4ZUZwdkZXbHQyX2I5d01TemV0Z0VmaTBMTUREc0VuTzlWVHRJZjVpN0U0ckVjTjNlLUQ5MzFZRVVrVk5Jd0lpVVpGZkhNRXF2azRwNllabGplRkI4VmZad3RlaGdDTGpfLUpWWnlCU2FQbHdKTXlGRVNRUUk5VTZMQU1DbWcxc3NyaE5DU3RnYV9pNE1HZTB6VFEtNGdqTzhQZ1lzZk5PMWRFYnpQRVFkTHJmZz09 |
May be unethical, but it's still likely to happen one day. The question is where do you draw the line? | r/aiethics | comment | r/AIethics | 2017-09-11 | Z0FBQUFBQm9IVGJBN3hLVjNEdWJsa25GdGc3aVZwT2hYYlZteHBiRDAyVHlmRW9fZUNHVlhYOURKNDdvcTNDNkJNQWdYWEE4UmpybmpSTGZxY29wRFcyckZHRFBGb2U5R0E9PQ== | Z0FBQUFBQm9IVGJCWC1qRVhFU2hrQlFJbzItS0sta2tzRl9QLVRjSkpFWXNSMDF3X2lodTRVY05DaTJPYm14OC1ybUlpS0Zsb21kYzUxbUltREQ2UGU4VFk0aDJvSjlhN1pZbTc3NGdOeklYejY3dDZvQjh5RDNPNTNMUEZfMVhaLTJwOU1oTjRDRW52d0pkTHNDQ0ZiNG9PMElLR1RvQ0FIT0cwaEs3STBFcURtZ2VqZ3o2Sm4tak4xOS1lcGdQTEZsMzBqQm1HSHNfVmZBTWlKTUxGemJfd0hqODQySDY2UT09 |
Why? Is it unethical to make a human? | r/aiethics | comment | r/AIethics | 2017-09-12 | Z0FBQUFBQm9IVGJBbVZxUEplQ0h4Rjl2Y2VaLTIzMDR5TlI3TVRrTUtiU2FKZEx2WnZ2eHZYbU4zMGVmaDItYlE2d3c1cDFGR0dPTEZVa01aeEdnaFNhS1FlWUJ4aWQzVVE9PQ== | Z0FBQUFBQm9IVGJCZGNJSXFEQ1ZFc0VJZk1LbDM3alZ0ZnJ1aFBuRWdzU2lMckV3Y1VWNGlUR0JNT0xJNWVhS21xWnQ2MkJUOTVaMVpFRnhlY0ZkUFZtNldKWk9lakFnT1owX1BxcWg0RF9hRG1EWGI1aUhNMXJhWloyWHFveEkxQ1R1bHNpSkNIVnV3XzlRY3dWdnc1eERXcWJZbEVfc0xQcHl0VHBnUTBycDVzZU91aThHOU5QSGh3Tk5BYXZXR2hhX1J1d0ZaV3NJbHpZQmFqeHRTMWEwbEVZU25zbW5rUT09 |
It is being reviewed by the journal in which it was published, despite having already been peer-reviewed by that journal and approved by Stanford's IRB.
[More comments from the authors.](https://docs.google.com/document/d/11oGZ1Ke3wK9E3BtOFfGfUQuuaSMR8AO2WfWH3aVke6U/preview#) | r/aiethics | comment | r/AIethics | 2017-09-12 | Z0FBQUFBQm9IVGJBM2VmajNMcDNDWEJ4QzdLdk1JUHh1b3lQTF9kS1M0eFFKaDM4SG81bzRrM2doZlZnVVNhX3JNcl96ZjRxTW5NYWVmTkhKUGJYb2wzQmdxR1lxVjFXaEE9PQ== | Z0FBQUFBQm9IVGJCUGNzTzhuZXhrS08wVWNoTk9zME4wNnN5V0xWR3diY0w0VVVHcVpnRnFNa3ZhRVJaYzlQOVBYQ1dkTHozc1NjTmh0YVRXX3BCVVZ1ZVEtWUxSdVFPb1ktOFIzcGxTZlBzeTlVbG9ZNk5qeU9GdE82Sk5hYzFEaGVEdTlNcEstb2lxaWE1UmhPZ0xJNzUyaGVTNzBQYV9oNmNpZHpTblFUY3k3ZlotVktfOVVRdlZ6MkFmUlJTM2VOZmFiT2MzNG1ib2pKdnJEblFPZjd5dnlsNzVEU2s4UT09 |
Wow. We actually have "Gaydar" now. Wonder how accurate it would be for non-whites. | r/aiethics | comment | r/AIethics | 2017-09-13 | Z0FBQUFBQm9IVGJBZko0X1NZVEwxNEpGOVlxMUFCc3ZjUlcteDhsWHV3dzJod3FESllTT2R4Q2EyeTl0TEpkN0FfTnBYdkl1aG82VHZ2SUVnQmp5aHctMUNXX050dk8xSzNmRUFmd2RIaE5CN0tBV3hCVUt2SVE9 | Z0FBQUFBQm9IVGJCTTlTSXhYaHY1WE5BbExic2xmVHZCcEVWYm96YVRmOXhnQ2pzSUdkcExfcmJ6bnBsdVhpWnhyaC1iMEx4cTdGNHZteVBCbElRbHRpQVAtMC1mQzdielZjaGoycDQtNGo5TTdBcEthd3ZhYm1GOFcyaWdEQk1EbnRfT1VyQm1QbFhTZXZSVzJVQkxDaVJvX1JhcTZDLXdES2hlYnE5YmVQZFBDV2xsYm5pWHQxc2JkV1hpUk1aMzItYk13ajJTcjkyX0kzUHc3dG5Ub0xvLU5XZGJJMzdZQT09 |