Matt: Dangerous AI? (Introduction)
> Do you lie awake at night wondering about this?
> http://www.washingtonpost.com/opinions/matt-miller-artificial-intelligence-our-final-in...

Anyone who works in the field of programming has thought about it. And it's a ripe idea for movies: Terminator and The Matrix both saw AI as the ultimate apocalypse for our species, with the interesting twist that in the Terminator, one of the machines was co-opted to save it. And there are older greats, such as "2001: A Space Odyssey" and Battlestar Galactica.

To answer the main question: no, it doesn't keep me up at night.

Do I think there's great potential for a problem? If we mismanage it, yes.

My time working in information security has shown me that systems designers typically tend to consider *only* two factors in their solutions:

1. Solving the problem at hand.
2. Solving it within given time constraints.
3. Solving it as elegantly as possible, given 1 and 2. This factor is optional.

Security (if it's even considered at all) tends to take a back seat. Look at Target, which potentially lost the account numbers of 40M customers. Long-term maintainability is also often missed.

If we really do get to the point of even limitedly intelligent robotics, I'd be far more concerned that the designers left open security holes that could result in extremely destructive consequences. Imagine a home-health robot, connected to the internet, getting hacked in such a way that the insulin dose for the person it's supposed to care for is upped by two orders of magnitude. How would an intelligent robot be able to tell the difference between instructions IT generated and instructions generated by an outside, malicious actor? (One standard mitigation is sketched at the end of this post.)

http://www.youtube.com/watch?v=MaTfzYDZG8c

I think that in the next 100 years or so we'll be able to mimic a great majority of human behaviors, but "true" AI is something that I do not believe will be possible until the underlying hardware more accurately models a brain.
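To make that last point concrete: the usual defense is for the robot's controller to cryptographically authenticate every command before acting on it, so a forged instruction is rejected even if an attacker can reach the device. Here is a minimal sketch in Python, assuming a shared secret between the legitimate control system and the robot; the key, the command format, and the dose values are all hypothetical illustrations, not any real device's protocol:

import hashlib
import hmac
import json

# Hypothetical shared secret, provisioned to the robot out-of-band.
SHARED_KEY = b"example-key-not-for-real-use"

def sign_command(command: dict) -> str:
    # Producer side: compute an HMAC tag over a canonical encoding.
    payload = json.dumps(command, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_command(command: dict, tag: str) -> bool:
    # Robot side: act on a command only if its tag checks out.
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign_command(command), tag)

cmd = {"action": "dispense_insulin", "units": 4}
tag = sign_command(cmd)

# The two-orders-of-magnitude attack: same command, tampered dose.
forged = {"action": "dispense_insulin", "units": 400}

assert verify_command(cmd, tag)         # legitimate command accepted
assert not verify_command(forged, tag)  # forgery rejected

Of course, this only pushes the problem back a level: if the controller itself is compromised, the attacker holds the key too, which is exactly why security has to be designed in from the start rather than bolted on.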
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
Complete thread:
- Matt: Dangerous AI? - David Turell, 2013-12-19, 14:22
- Matt: Dangerous AI? - xeno6696, 2013-12-21, 19:06
- Matt: Dangerous AI? - David Turell, 2013-12-21, 19:56
- Matt: Dangerous AI? - xeno6696, 2013-12-23, 04:26
- Matt: Dangerous AI? - David Turell, 2013-12-23, 15:48
- Matt: Dangerous AI? - xeno6696, 2013-12-23, 17:10
- Matt: Dangerous AI? - David Turell, 2013-12-24, 01:08
- Matt: Dangerous AI? - David Turell, 2013-12-26, 05:16
- Matt: Dangerous AI? - xeno6696, 2013-12-26, 20:26
- Matt: Dangerous AI? - David Turell, 2013-12-27, 00:12
- Matt: Dangerous AI? - xeno6696, 2013-12-27, 03:40
- Matt: Dangerous AI? - David Turell, 2013-12-29, 15:00
- Matt: Negative thoughts about AI - David Turell, 2014-01-02, 15:08