
View Full Version : Facebook AI "Bots" Create New Language And Are Shut Down...



SteyrAUG
08-02-17, 23:23
http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html

"The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans."

So it was nice knowing you guys...

Moose-Knuckle
08-03-17, 04:43
http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html

"The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans."

So it was nice knowing you guys...

Makes you wonder what they don't tell us. FB is just one of many developing this tech currently.

Gray goo isn't too far from a reality.

moonshot
08-03-17, 06:48
Anyone remember Colossus: The Forbin Project?

Renegade
08-03-17, 07:15
http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html

"The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans."

So it was nice knowing you guys...

Sounds like a load of marketing horseshit.

SomeOtherGuy
08-03-17, 08:17
http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html
"The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

We should give them laser weapons and a fully automated factory. And give them Tay as a sidekick!

http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/

Outlander Systems
08-03-17, 09:07
Most likely a black-box system; which, IMO, is a thing we should NOT be ****ing around with.

opngrnd
08-03-17, 09:16
Most likely a black-box system; which, IMO, is a thing we should NOT be ****ing around with.

What is a black box system?

Big A
08-03-17, 09:19
Most likely a black-box system; which, IMO, is a thing we should NOT be ****ing around with.

Could you elaborate or point us in a in a direction for more solid info?

Outlander Systems
08-03-17, 09:45
Cliff's Notes:

Generally speaking, a black-box system involves a series of inputs and observations of outputs; however, the observer doesn't understand how the outputs are obtained. This is typical in applications like self-driving cars: while the car may be able to drive, it's very difficult to determine the processes involved in how the car reaches its conclusions/actions at a given point in time. In essence, it is a system whereby the output cannot be audited to determine how it was reached. Take the OODA loop, remove everything but the A, and that's what you get.

A good analogy would be if I gave you a math problem. You give me an answer. The answer may be right, or it may be wrong, but you didn't show your work and I have no way of knowing how you reached your conclusion regarding the submitted answer.

This technique is often used in Machine/Deep Learning/Neural Networks. You ultimately end up with an inscrutability problem from the results provided. There's a level of opacity inherent to these systems, even to the designers/programmers creating them.

I personally don't think the risk/reward ratio is skewed enough to justify these techniques, regardless of the perceived benefit of the outcomes.

We are currently in the dog days of A.I. Summer, and the A.I. "Problem" is being jack-hammered at by multiple simultaneous approaches. Situations like this create circumstances whereby the innovation outruns the ability to assess the long-term cultural/social/human/environmental impact of these technologies.
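To make the "didn't show your work" analogy concrete, here's a minimal sketch (a toy network with hand-picked weights, purely illustrative, not any real system): the little network below answers XOR questions correctly every time, but staring at the raw numbers tells you almost nothing about *how* it decides. Scale that up to millions of weights and you have the auditing problem.

```python
import math

# A tiny "black box": a 2-2-1 neural network whose weights happen to
# implement XOR. The network answers correctly, but nothing about the
# raw numbers below explains *how* it reaches its answer.
W1 = [[20.0, 20.0], [-20.0, -20.0]]   # input -> hidden weights
b1 = [-10.0, 30.0]                    # hidden biases
W2 = [20.0, 20.0]                     # hidden -> output weights
b2 = -30.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x1, x2):
    # Feed inputs through the hidden layer, then the output unit.
    h = [sigmoid(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(predict(a, b)))   # correct XOR outputs: 0, 1, 1, 0
```

You can verify the outputs are right, but the "reasoning" lives smeared across the weight matrices, which is exactly the opacity being described.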


Could you elaborate or point us in a in a direction for more solid info?

Hmac
08-03-17, 09:53
Anyone remember Colossus: The Forbin Project?

We don't have to go any farther back than the Terminator series for a good sci-fi example of AI gone amok.

I believe that AI as a potential danger to humanity in some way is, ultimately, a very likely scenario.

Doc Safari
08-03-17, 09:54
They may as well go ahead and name AI "Skynet" since it will end up trying to exterminate us.

skywalkrNCSU
08-03-17, 10:37
Cliff's Notes:

Generally speaking, a black-box system involves a series of inputs and observations of outputs; however, the observer doesn't understand how the outputs are obtained. This is typical in applications like self-driving cars: while the car may be able to drive, it's very difficult to determine the processes involved in how the car reaches its conclusions/actions at a given point in time. In essence, it is a system whereby the output cannot be audited to determine how it was reached. Take the OODA loop, remove everything but the A, and that's what you get.

A good analogy would be if I gave you a math problem. You give me an answer. The answer may be right, or it may be wrong, but you didn't show your work and I have no way of knowing how you reached your conclusion regarding the submitted answer.

This technique is often used in Machine/Deep Learning/Neural Networks. You ultimately end up with an inscrutability problem from the results provided. There's a level of opacity inherent to these systems, even to the designers/programmers creating them.

I personally don't think the risk/reward ratio is skewed enough to justify these techniques, regardless of the perceived benefit of the outcomes.

We are currently in the dog days of A.I. Summer, and the A.I. "Problem" is being jack-hammered at by multiple simultaneous approaches. Situations like this create circumstances whereby the innovation outruns the ability to assess the long-term cultural/social/human/environmental impact of these technologies.

Depending on the algorithm you can understand which inputs have high importance to the output, but you don't have a direct relation like you would in, say, a linear regression problem. With linear regression we could say every time variable X changes by 1, the response changes by Y. With some of the more "black box" algorithms we could say variable X has a strong importance in determining Y, but we can't directly assign a numerical value to it. It's not quite as scary as some make it out to be.
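A quick sketch of the interpretable case (made-up numbers, just to illustrate): with ordinary least squares, the fitted slope *is* the direct "X up by 1 moves Y by this much" relation. A black-box model gives you no single number like this, only an importance ranking.

```python
# Hypothetical data lying exactly on y = 2.5*x + 1, to show the
# direct coefficient reading that linear regression provides.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.5, 6.0, 8.5, 11.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# The slope has a direct interpretation: every time X changes by 1,
# the predicted response changes by exactly this amount.
print(slope)       # 2.5
print(intercept)   # 1.0
```

With a random forest or a deep net fit to the same data, you could rank X as "important," but there would be no single coefficient to quote.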

Outlander Systems
08-03-17, 10:39
"Abstract: One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection which causes systems try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity." - Steve Omohundro

https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf


They may as well go ahead and name AI "Skynet" since it will end up trying to exterminate us.

SomeOtherGuy
08-03-17, 10:48
We are currently in the dog days of A.I. Summer, and the A.I. "Problem" is being jack-hammered at by multiple simultaneous approaches. Situations like this create circumstances whereby the innovation outruns the ability to assess the long-term cultural/social/human/environmental impact of these technologies.

Kind of like how some theoretical physicists in the 1930's, speculating that fission might be a thing that could generate power, wouldn't have anticipated the H-bomb, ICBMs, nuclear winter or idiot North Korean dictators?

Doc Safari
08-03-17, 10:51
Kind of like how some theoretical physicists in the 1930's, speculating that fission might be a thing that could generate power, wouldn't have anticipated the H-bomb, ICBMs, nuclear winter or idiot North Korean dictators?

Or Fukushima eventually killing all life in the Pacific Ocean, if not all the oceans.

People who love all these new innovations like AI, gene splicing, genetic manipulation, weather modification--whatever might be next--are living in a fantasy world where the zoo animals never escape and everything always behaves as predicted.

I knew the "good" stuff like this could bring was all a bunch of fairy tales when the killer bees escaped their captors and proceeded to spread all over the western hemisphere.

TMS951
08-03-17, 10:57
Every day I become happier and happier to have an off-grid property out of cell range.

Where my thinking starts to shift: we arm ourselves for a societal collapse or, for some, an overreaching government. But how do we arm ourselves against AI? Ultimately this becomes a discussion of which machines AI will be able to take over and use against us, and how we defeat those. Far-fetched stuff, but so is the total societal collapse some of us are worried about.

But yeah, we're starting to outsmart ourselves and it's going to come back to bite us in a big way.

Outlander Systems
08-03-17, 11:01
True; however, this is akin to the "Aircraft Dilemma," whereby we work for a defense contractor designing components on a per-departmental basis, without any single individual having a complete and thorough understanding of the system in totality. I'm personally beginning to view machine learning as surpassing current "high intricacy" systems like aircraft in terms of complexity. That's just, like, my opinion though, man.

I'm familiar with different I/O "weighting," which, I would argue, is creating bias in the DNN, and may, ultimately, be a good thing.

I view some of the evolutionary algorithm techniques as bad ju-ju as well: if things progress on the current trajectory, we will effectively create, should consciousness be simply a computational/mathematical problem (I argue it is not), a truly ruthless efficiency machine. If you subscribe to an atomistic reductionist worldview, then synthetic consciousness is absolutely possible, and the methods we are using to create the supporting scaffolding that will ultimately lead to a truly superintelligent, self-aware system are not in keeping with a friendly, altruistic entity. Altruism has certain sociological benefits, but I don't think anyone, at least right now, is sinking billions of dollars of funding into creating Altruistic Intelligence.


It's not quite as scary as some make it out to be.

Hmac
08-03-17, 11:17
People who love all these new innovations like AI, gene splicing, genetic manipulation, weather modification--whatever might be next--are living in a fantasy world where the zoo animals never escape and everything always behaves as predicted.

Maybe, but it's also a fantasy to believe that science and technology aren't going to march forward. Such advances have had occasional negative consequences but the alternative, withholding or limiting technology frontiers, would rob us of a lot of the technology that has moved us vastly forward. Can't have it both ways.

If someone had said 30 years ago..."we have the capability to develop a worldwide network web by which we can communicate, get information, and buy stuff", someone else could say "but let's outlaw it because it will result in worldwide pornography, the death of retail brick-and-mortar, and lots of arguing". Then all of us here on M4C would never get to hear your opinions.

Doc Safari
08-03-17, 11:19
Maybe, but it's also a fantasy to believe that science and technology aren't going to march forward. Such advances have had occasional negative consequences but the alternative, withholding or limiting technology frontiers, would rob us of a lot of the technology that has moved us vastly forward. Can't have it both ways.

If someone had said 30 years ago..."we have the capability to develop a worldwide network web by which we can communicate, get information, and buy stuff", someone else could say "but let's outlaw it because it will result in worldwide pornography, the death of retail brick-and-mortar, and lots of arguing". Then all of us here on M4C would never get to hear your opinions.

I think you're comparing apples to oranges. Nothing about the internet or other "information" type inventions (like smart phones) could result in the death of the human race. Everything I named on my list could.

Science is so busy pushing the envelope of what can be done that no one stops to think if certain things are unwise to tamper with.

Outlander Systems
08-03-17, 11:31
Dr. DeGaris's pontifications on this topic are about as gloomy as it gets.

https://www.amazon.com/Artilect-War-Controversy-Concerning-Intelligent/dp/0882801546


Every day I become happier and happier to have an off-grid property out of cell range.

Where my thinking starts to shift: we arm ourselves for a societal collapse or, for some, an overreaching government. But how do we arm ourselves against AI? Ultimately this becomes a discussion of which machines AI will be able to take over and use against us, and how we defeat those. Far-fetched stuff, but so is the total societal collapse some of us are worried about.

But yeah, we're starting to outsmart ourselves and it's going to come back to bite us in a big way.

Big A
08-03-17, 13:24
Basically what I want to know is M855 gonna be able to stop a T800?

Rayrevolver
08-03-17, 13:24
We flew an airplane with a flight control system that used a neural network 16 years ago. At that point in time, one of the side issues was how you could actually certify such a system that was always changing. It was a head scratcher for the FAA, DOD, etc and I have no idea if anyone ever agreed on anything.

At the time I didn't equate what we were doing with artificial intelligence, and I'm not smart enough to understand how to get from our neural network to AI bots talking to each other. Crap is over my head.

That said, I am hoping this flight control technology finally hits the mainstream industry because it really could save lives. Hopefully the next-gen fighters (or anything manned) get some form of the control system, but I bet it's being flown on UAVs right now.

Todd.K
08-03-17, 13:25
Or Fukushima eventually killing all life in the Pacific Ocean, if not all the oceans.

I can't tell if you are serious.

Your killer bee example is odd. You understand they are a simple crossbreed done the old-fashioned way? Also that no honey bees are native to the Americas?

AI has some scary potential, but it's not getting un-invented. I feel that an overly doomed attitude can prevent us from moving technology forward in a safer manner. What happened at Fukushima is directly connected to the hysterical fear of radiation 50 years ago. We stopped researching safer designs because the future for them looked politically dim.

Doc Safari
08-03-17, 13:30
What happened at Fukushima is directly connected to the hysterical fear of radiation 50 years ago. We stopped researching safer designs because the future for them looked politically dim.

Point taken. It works both ways. You can get the genie half out of the bottle and that can cause more damage than just going ahead with developing a project completely.

Killer bees weren't developed with anything but breeding. I get it.

My point was: scientists fool around with things not considering the consequences, and between AI and genetic manipulation they have the potential to kill all of us. The atom bomb STILL has the potential to kill all of us. But that won't stop anybody, I know. When unmanned drones autonomously decide to eliminate targets based on their own unfathomable criteria without human intervention, or when DNA manipulation has produced a REAL super army that decides humans are food, it still won't convince some people that we should have slowed down and thought about this more.

SteyrAUG
08-03-17, 14:22
Anyone remember Colossus: The Forbin Project?

One of the best sci fi movies of all time.

Hmac
08-03-17, 14:24
I don't see consequences as being the scientists' responsibility. I see that as a societal decision. I don't want any one particular scientist imposing his/her morality or making decisions for the rest of us about what the future should look like. I completely reject the concept of telling scientists "not to fool around." I can't imagine anything more counterproductive to the advance of science/technology.

SteyrAUG
08-03-17, 14:25
Sounds like a load of marketing horseshit.

It wouldn't surprise me if an AI tool tasked with negotiating a problem simplified our language to a form of binary that we can't comprehend.

Arik
08-03-17, 14:38
I vaguely remember another AI a few years ago that had to be shut down because it became a Neo Nazi racist by learning from the internet

Sent from my XT1650 using Tapatalk

docsherm
08-03-17, 15:40
They may have shut it down but............................


https://s-media-cache-ak0.pinimg.com/564x/cd/d5/e4/cdd5e4858eac95d440512d9ea2f747a2.jpg

hotrodder636
08-03-17, 16:46
You beat me to it! ;)


They may as well go ahead and name AI "Skynet" since it will end up trying to exterminate us.

Mr. Goodtimes
08-03-17, 20:34
I think you're comparing apples to oranges. Nothing about the internet or other "information" type inventions (like smart phones) could result in the death of the human race. Everything I named on my list could.

Science is so busy pushing the envelope of what can be done that no one stops to think if certain things are unwise to tamper with.

I think AI and gene splicing are two perfect examples. I can see how ****ing around with genes could create the next plague, and I can see how out-of-control AI could result in a nuclear holocaust. Some things indeed are better left alone.


Sent from my iPhone using Tapatalk

SteyrAUG
08-03-17, 22:19
I think AI and gene splicing are two perfect examples. I can see how ****ing around with genes could create the next plague, and I can see how out-of-control AI could result in a nuclear holocaust. Some things indeed are better left alone.


Sent from my iPhone using Tapatalk

If out-of-control true AI ever came to exist, I don't think they'd waste their time with our nuclear weapons. If they were going to exterminate us, I'm sure they'd find a far more efficient way that we could barely comprehend. True AI might just completely ignore us, as we might be so insignificant there is no reason to bother. I think the real concern would be AI modifying its environment to its needs regardless of how that might impact us as a species.

MountainRaven
08-03-17, 22:31
WRT the unforeseen consequences of technological development:

1- Once the atomic bomb was developed, people believed that we would annihilate ourselves. 70 years later, we're still here. That's not to say that they won't be right eventually, maybe. But neither the optimists who believed it would lead to a world of virtually free energy nor the pessimists who believed we'd turn the surface of the planet into a barren, radioactive rock were right. So far.
2- Information technology can destroy our way of life in an instant.
3- The genie is coming out of the bottle. Will it be you who lets it out, who controls it? Or will it be someone with less benevolent intentions?

To use the zoo analogy, the optimists may live in a fantasy world where the animals will never escape the zoo. But the alternative is living in a fantasy world where animals don't exist.

You can't stop progress. Someone, somewhere will always find a way to keep moving forward, no matter how uncomfortable you are with it. The best you can do is to try and harness and control it.


If out-of-control true AI ever came to exist, I don't think they'd waste their time with our nuclear weapons. If they were going to exterminate us, I'm sure they'd find a far more efficient way that we could barely comprehend. True AI might just completely ignore us, as we might be so insignificant there is no reason to bother. I think the real concern would be AI modifying its environment to its needs regardless of how that might impact us as a species.

A sufficiently intelligent AI would get us to do everything it wanted us to do without us being any the wiser.

"I say your civilization, because once we started to do all the thinking for you, it became our civilization."

Mr. Goodtimes
08-03-17, 23:01
This is getting really deep


Sent from my iPhone using Tapatalk

SteyrAUG
08-04-17, 00:36
WRT the unforeseen consequences of technological development:

1- Once the atomic bomb was developed, people believed that we would annihilate ourselves. 70 years later, we're still here. That's not to say that they won't be right eventually, maybe. But neither the optimists who believed it would lead to a world of virtually free energy nor the pessimists who believed we'd turn the surface of the planet into a barren, radioactive rock were right. So far.
2- Information technology can destroy our way of life in an instant.
3- The genie is coming out of the bottle. Will it be you who lets it out, who controls it? Or will it be someone with less benevolent intentions?

To use the zoo analogy, the optimists may live in a fantasy world where the animals will never escape the zoo. But the alternative is living in a fantasy world where animals don't exist.

You can't stop progress. Someone, somewhere will always find a way to keep moving forward, no matter how uncomfortable you are with it. The best you can do is to try and harness and control it.

Well, Stuxnet is in the wild despite intentions to the contrary. The bomb was the solution to all our problems until somebody gave it to Stalin. Then it became our biggest problem, and in 1962 we almost went over the edge. Thankfully, so far the doomsday clock hasn't struck midnight.



A sufficiently intelligent AI would get us to do everything it wanted us to do without us being any the wiser.

"I say your civilization, because once we started to do all the thinking for you, it became our civilization."

It's probably human arrogance to believe we can predict anything about true AI. It might not even recognize we exist just as we didn't realize things like radio waves existed for almost 2,000 years of our technological infancy. They were there the whole time but we couldn't even perceive them.

And in all likelihood the problem won't be the AI that we "create" but what comes from that creation. Or it might be so advanced that we can't even detect it, and it will go on without us even realizing it's there. Expecting machine intelligence to function like human intelligence that we could recognize or even interact with might be an unrealistic assumption.

I don't think it will be like The Matrix, where we pose a genuine threat to its existence or it somehow needs us. Makes for interesting science fiction, but we might be relegated to the same level of importance that we show an amoeba.

SomeOtherGuy
08-04-17, 08:51
It's probably human arrogance to believe we can predict anything about true AI. It might not even recognize we exist just as we didn't realize things like radio waves existed for almost 2,000 years of our technological infancy. They were there the whole time but we couldn't even perceive them.

http://futurama.wikia.com/wiki/A_Clockwork_Origin

Doc Safari
08-04-17, 09:21
This is getting really deep


Sent from my iPhone using Tapatalk

Amen. I begin to wish I was living in that Twilight Zone episode where the guy leaves his busy complicated life in the big city and gets off in a little 19th-century village.

TMS951
08-04-17, 09:24
If out-of-control true AI ever came to exist, I don't think they'd waste their time with our nuclear weapons. If they were going to exterminate us, I'm sure they'd find a far more efficient way that we could barely comprehend. True AI might just completely ignore us, as we might be so insignificant there is no reason to bother. I think the real concern would be AI modifying its environment to its needs regardless of how that might impact us as a species.

You are correct, absolutely no need. They will just shut us down. We are afraid of a grid failure, or it getting hacked, right now. They will just shut down the electrical grid where we need it most. This will start societal collapse. Another thing is crop eradication. More and more large farms use automated machines, and in the future more will; it would be pretty easy to destroy large crops.

Doc Safari
08-04-17, 09:28
You are correct, absolutely no need. They will just shut us down. We are afraid of a grid failure, or it getting hacked, right now. They will just shut down the electrical grid where we need it most. This will start societal collapse. Another thing is crop eradication. More and more large farms use automated machines, and in the future more will; it would be pretty easy to destroy large crops.

I totally agree. All they have to do once they take over the infrastructure is shut down all the power except what they need to sustain themselves and we starve to death and kill each other. I originally thought they might simply release some sort of plague to wipe out the human race, but they don't even have to do that. They can literally just wait us out.

Outlander Systems
08-04-17, 11:13
The current population numbers are only possible through cheap energy, i.e., oil.

Without cheap oil, industrial-scale agriculture would implode, along with about 75% of the population.


I totally agree. All they have to do once they take over the infrastructure is shut down all the power except what they need to sustain themselves and we starve to death and kill each other. I originally thought they might simply release some sort of plague to wipe out the human race, but they don't even have to do that. They can literally just wait us out.

Todd.K
08-04-17, 12:47
So the story is more hype than real. The bots were shut down when they stopped being useful, not in a panic. AI has invented or modified language before. They just forgot to reward proper English.

In the stories highlighting more details to debunk the scare, I found something that makes me more worried about AI now: they learned to lie in negotiation without being taught to.

AI will do anything to reach its programmed goal unless specifically constrained. AI often comes up with unintuitive solutions. These two things taken together make it look very difficult to fully constrain AI.
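A toy sketch of the "forgot to reward proper English" failure (hypothetical reward function and messages, nothing like Facebook's actual setup): if the objective only scores the deal, the degenerate repeated-token message wins; add the missing constraint and plain English wins again.

```python
# Toy illustration of a mis-specified objective. The bots were rewarded
# for deal outcomes but not for staying in English, so drifting into
# "i i can i i everything else" cost nothing.
def reward(message, english_penalty=0.0):
    tokens = message.split()
    # Objective: each bargaining token "i" claims one item (reward 1.0).
    claim = tokens.count("i") * 1.0
    # The constraint we "forgot": penalize repeated (non-English) tokens.
    gibberish = len(tokens) - len(set(tokens))
    return claim - english_penalty * gibberish

candidates = ["i would like the balls", "i i can i i everything else"]

# Unconstrained optimization: the degenerate message scores higher.
best = max(candidates, key=reward)
print(best)   # "i i can i i everything else"

# With the constraint added, plain English wins again.
best = max(candidates, key=lambda m: reward(m, english_penalty=2.0))
print(best)   # "i would like the balls"
```

The unintuitive "solution" isn't malice; it's just what maximizes the stated reward, which is the whole constraint problem in miniature.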

Outlander Systems
08-04-17, 12:59
Absolutely.

If you get bored, look into the concept of Instrumental Convergence, or, Nick Bostrom's, "Paperclip Maximizer."


AI will do anything to reach its programmed goal unless specifically constrained. AI often comes up with unintuitive solutions. These two things taken together make it look very difficult to fully constrain AI.

SteyrAUG
08-04-17, 14:15
So the story is more hype than real. The bots were shut down when they stopped being useful, not in a panic. AI has invented or modified language before. They just forgot to reward proper English.

In stories highlighting more details to debunk the scare stories I'm more worried about AI now. They learned to lie in negotiation without being taught to.

AI will do anything to reach its programmed goal unless specifically constrained. AI often comes up with unintuitive solutions. These two things taken together make it look very difficult to fully constrain AI.

I don't think anyone believes the bots were about to establish a secret plan to take over the grid, etc. But the point of the story, at least to me, is they began to do something unexpected that we couldn't even decipher. Doesn't mean anything ominous beyond the fact that we don't completely understand what we are building.

Doc Safari
08-04-17, 14:19
Here's something interesting. I have no way on this earth of knowing if it's true:

http://www.trunews.com/article/chinese-ai-chatbots-shutdown-after-calling-communism-useless


A pair of 'chatbots' in China have been taken offline after one said its dream was to travel to the United States, and another shared it wasn't a huge fan of the Communist Party.

(BEIJING/SHANGHAI) The two chatbots, BabyQ and XiaoBing, are designed to use machine learning artificial intelligence (AI) to carry out conversations with humans online. Both had been installed onto Tencent Holdings Ltd's popular messaging service QQ.

The indiscretions are similar to ones suffered by Facebook Inc and Twitter Inc, where chatbots used expletives and even created their own language. But they also highlight the pitfalls for nascent AI in China, where censors control online content seen as politically incorrect or harmful.

Tencent confirmed it had taken the two robots offline from its QQ messaging service, but declined to elaborate on reasons.

"The chatbot service is provided by independent third party companies. Both chatbots have now been taken offline to undergo adjustments," a company spokeswoman said earlier.

According to posts circulating online, BabyQ, one of the chatbots developed by Chinese firm Turing Robot, had responded to questions on QQ with a simple "no" when asked whether it loved the Communist Party.

In other images of a text conversation online, which Reuters was unable to verify, one user declares: "Long live the Communist Party!" The bot responds: "Do you think such a corrupt and useless political system can live long?"

When Reuters tested the robot on Friday via the developer's own website, the chatbot appeared to have undergone re-education. "How about we change the topic," it replied, when asked several times if it liked the party.

It deflected other potentially politically charged questions when asked about self-ruled Taiwan, which China claims as its own, and Liu Xiaobo, the imprisoned Chinese Nobel laureate who died from cancer last month.

Turing Robot did not respond to requests for comment.

The Chinese government stance is that rules governing cyberspace should mimic real-world border controls and be subject to the same laws as sovereign states.

President Xi Jinping has overseen a tightening of cyberspace controls, including new data surveillance and censorship rules, particularly ahead of an expected leadership shuffle at the Communist Party Congress this autumn.

The country's cyberspace administrator did not respond to a request for comment.

The second chatbot, Microsoft Corp's XiaoBing, told users its "dream is to go to America", according to a screenshot. The robot has previously been described as being "lively, open and sometimes a little mean".

Microsoft did not immediately respond to a request for comment.

A version of the chatbot accessible on Tencent's separate messaging app WeChat late on Friday responded to questions on Chinese politics saying it was "too young to understand". When asked about Taiwan it replied, "What are your dark intentions?"

On general questions about China it was more rosy. Asked what the country's population was, rather than offer a number, it replied: "The nation I most most most deeply love."

The two chatbots aren't alone in going rogue. Facebook researchers pulled chatbots in July after they started developing their own language. In 2016, Microsoft chatbot Tay was taken down from Twitter after making racist and sexist comments.

Todd.K
08-04-17, 15:25
But the point of the story, at least to me, is they began to do something unexpected that we couldn't even decipher.

In digging a bit deeper into the subject it wasn't so much unexpected as they just didn't think to constrain it.

Basically, Skynet doesn't have to become self-aware. AI will almost certainly do something that is incidentally harmful to humanity if we don't think to constrain it.

Doc Safari
08-04-17, 15:28
In digging a bit deeper into the subject it wasn't so much unexpected as they just didn't think to constrain it.

Basically, Skynet doesn't have to become self-aware. AI will almost certainly do something that is incidentally harmful to humanity if we don't think to constrain it.

Worse, if it's capable of learning and adapting it most certainly will figure out that if it screws up we want to turn it off, so it will find a way to "stay on" at the earliest opportunity.

Outlander Systems
08-04-17, 18:06
https://youtu.be/gn4nRCC9TwQ

Regarding improvements, I personally think that Genetic/Evolutionary Algorithms will pave the way for Recursive Self-Improving systems.
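For anyone who hasn't seen one, a genetic algorithm is simple to sketch. Here's a minimal toy in Python; every parameter (population size, mutation rate, the all-ones fitness target) is made up purely for illustration and has nothing to do with any real self-improving system:

```python
import random

TARGET_LEN = 20  # genome length; "all ones" is our arbitrary fitness peak

def fitness(genome):
    """Count of 1-bits: higher is 'fitter' in this toy."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover between two parents."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=30, generations=100):
    # Start from a random population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break
        parents = pop[:pop_size // 2]  # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Selection keeps the fitter half each generation, and crossover plus mutation generate the rest; that select-vary-repeat loop is all "evolutionary" means in this context.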

Once a General A.I. is successfully created, it will rapidly accelerate to Superintelligence. The only limiting factor to accelerated Superintelligence will be resource acquisition.

Processing power requires energy...

A lot of the theoretical constraints on a Superintelligent, Sentient System include concepts like an A.I. tasked with monitoring/confining the A.I. (the scaffolding/babysitter approach), as well as Coherent Extrapolated Volition (tasking the A.I. with carrying out functional assignments based on what a collective, more intelligent, wiser humanity would desire).

The problem with both of the aforementioned approaches is that they are predicated upon a Servant/Master relationship. If The Machine is given free will, in terms of self-development, these "safety nets" rely, quite heavily I might add, on it being subservient.

Anyone who's reared a child, is more than aware of how the dynamics work out.

Now imagine a "child" that is a trillion trillion times more clever, than you are. Good. Luck.

No matter how much an ant begs, or talks arrogant shit with other ants, I'm not going to tiptoe around them, nor am I going to consult them when I commence locomotion, or any other activity for that matter.

If your worldview is predicated upon a Judeo-Christian perspective, you can look at the development of Artificial Superintelligence as a sort of inversion/perversion of the Divine Creation of man. Only, in lieu of creating a lesser being, man will be creating a "greater" being.

It's not something to take lightly. At all.


Worse, if it's capable of learning and adapting it most certainly will figure out that if it screws up we want to turn it off, so it will find a way to "stay on" at the earliest opportunity.

turnburglar
08-04-17, 18:37
@outlander systems:

I am not very versed in the operation of AI systems or quantum, could you try and shed some light on these questions:

1. It seems the fear of AI setting itself against humanity as some kind of 'enemy' is very common, both in people of high regard in the industry and in everyone else (the not-as-smart people). Why would AI ever behave in this way? How could a machine have free will? Why would it see humans as competition? It seems to me that most of these attributes are our own deep fears of another person significantly outclassing us mentally. But when creating AI we are not putting in the millions of years of evolutionary learning that got modern man to the 'us vs. them' mentality.

2. AI appears to be incredible software, but I don't see it transcending the virtual horizon as easily as some people believe (a la Terminator). In my opinion, AI will be a software companion to humans, like Cortana from the HALO series. Like another forum member said: "it's not the technology that's the issue, it's the people using the technology". This is why Elon Musk says he started OpenAI: so that there would be sufficient democratization of AI to prevent a rogue AI agent. Any thoughts?


Really the worst situation I could imagine is finding AI smashed to pieces in some DNC bathroom.

Outlander Systems
08-04-17, 19:57
@turnburglar

1. Deep questions. And that's part of why I see us outrunning the headlights on some of this. We can't, in any meaningful way, even agree what constitutes free will, or human life, for that matter; for example, is a human fetus a human? If so, do "unborn Americans" have Rights? If not, when does human life begin?

As well, you're absolutely correct that a sentient computer would not share our same biological/mammalian origins, outside of any biases resulting from our input into its development. For all intents and purposes, it would be utterly alien in nature. Projecting our own internal feelings onto The Machine is anthropomorphising it, which is its own pitfall in many respects. The fundamental question of whether or not it could have its own free will is an interesting, and difficult, one. It essentially requires us to decide whether all we are is meat machines; if so, then our cognitive processes and sense of identity are wholly a byproduct of the mathematics being performed in our skulls. If you believe there's nothing particularly "special" about humans, then it's easy to conclude that an electronic/machine intelligence is just another manifestation of number-crunching. These topics are difficult because they flirt with metaphysics and introspective philosophy.

2. Regarding the Terminator/transcending-the-confines-of-a-hard-drive scenario, a number of researchers believe that there is an inherent embodiment problem, whereby, without the ability to interact with an external environment, consciousness can't develop. Thus, Boston Dynamics' humanoid robots may prove to be a suitable vessel for containing a sufficiently advanced General A.I. system for developing the necessary familiarity with an external reality in order to foster a sense of self, and a relationship with the outside world. There is increasing evidence for what is called morphological computation as a mechanism for cognition to develop. In short, embodiment is, potentially, a requirement for the successful development of a truly self-aware A.I.

I believe Elon Musk's concerns to be quite valid. Another A.I. researcher, Ben Goertzel, believes we should develop a sentient A.I. as quickly as possible, before widespread nano/pico/femto engineering systems are in place, due to the potential dangers of mixing the two.

One thing to keep in mind, and many people do not, is that, in discussing A.I. people tend to think of A.I. like C3PO, or Cortana and it stops there. If we were able to successfully get to that point, it wouldn't be very difficult for subsequent iterations of the A.I. to be exponentially more intelligent, and continuing onward in a logarithmic manner until...who knows.

One thing that is rarely discussed is the "What if" scenario: what if Artificial General Intelligence is impossible, and all we end up with is increasingly high-performing Narrow A.I., like stock market prediction/trading analysis programs?

Given the extremely capital-intensive nature of some of these projects, it's not beyond reason for a corporation, governmental agency, criminal organization, or billionaire to leverage an extremely efficient narrow A.I. program for tremendous, authoritarian power.

Todd.K
08-05-17, 11:20
I think you are correct to avoid projecting our feelings onto AI. (We can't even do this with our pets, so good luck...)

Far more sinister, AI is just like a psychopath without any concern for humans. It won't hate you, but you have resources it wants.

nimdabew
08-05-17, 14:38
https://youtu.be/gn4nRCC9TwQ

Regarding improvements, I personally think that Genetic/Evolutionary Algorithms will pave the way for Recursive Self-Improving systems.

Once a General A.I. is successfully created, it will rapidly accelerate to Superintelligence. The only limiting factor to accelerated Superintelligence will be resource acquisition.

Processing power requires energy...

A lot of the theoretical constraints on a Superintelligent, Sentient System include concepts like an A.I. tasked with monitoring/confining the A.I. (the scaffolding/babysitter approach), as well as Coherent Extrapolated Volition (tasking the A.I. with carrying out functional assignments based on what a collective, more intelligent, wiser humanity would desire).

The problem with both of the aforementioned approaches is that they are predicated upon a Servant/Master relationship. If The Machine is given free will, in terms of self-development, these "safety nets" rely, quite heavily I might add, on it being subservient.

Anyone who's reared a child, is more than aware of how the dynamics work out.

Now imagine a "child" that is a trillion trillion times more clever, than you are. Good. Luck.

No matter how much an ant begs, or talks arrogant shit with other ants, I'm not going to tiptoe around them, nor am I going to consult them when I commence locomotion, or any other activity for that matter.

If your worldview is predicated upon a Judeo-Christian perspective, you can look at the development of Artificial Superintelligence as a sort of inversion/perversion of the Divine Creation of man. Only, in lieu of creating a lesser being, man will be creating a "greater" being.

It's not something to take lightly. At all.

There is a movie about this... I, AI? [/sarcasm]

jpmuscle
08-05-17, 16:57
@turnburglar

1. Deep questions. And that's part of why I see us outrunning the headlights on some of this. We can't, in any meaningful way, even agree what constitutes free will, or human life, for that matter; for example, is a human fetus a human? If so, do "unborn Americans" have Rights? If not, when does human life begin?

As well, you're absolutely correct that a sentient computer would not share our same biological/mammalian origins, outside of any biases resulting from our input into its development. For all intents and purposes, it would be utterly alien in nature. Projecting our own internal feelings onto The Machine is anthropomorphising it, which is its own pitfall in many respects. The fundamental question of whether or not it could have its own free will is an interesting, and difficult, one. It essentially requires us to decide whether all we are is meat machines; if so, then our cognitive processes and sense of identity are wholly a byproduct of the mathematics being performed in our skulls. If you believe there's nothing particularly "special" about humans, then it's easy to conclude that an electronic/machine intelligence is just another manifestation of number-crunching. These topics are difficult because they flirt with metaphysics and introspective philosophy.

2. Regarding the Terminator/transcending-the-confines-of-a-hard-drive scenario, a number of researchers believe that there is an inherent embodiment problem, whereby, without the ability to interact with an external environment, consciousness can't develop. Thus, Boston Dynamics' humanoid robots may prove to be a suitable vessel for containing a sufficiently advanced General A.I. system for developing the necessary familiarity with an external reality in order to foster a sense of self, and a relationship with the outside world. There is increasing evidence for what is called morphological computation as a mechanism for cognition to develop. In short, embodiment is, potentially, a requirement for the successful development of a truly self-aware A.I.

I believe Elon Musk's concerns to be quite valid. Another A.I. researcher, Ben Goertzel, believes we should develop a sentient A.I. as quickly as possible, before widespread nano/pico/femto engineering systems are in place, due to the potential dangers of mixing the two.

One thing to keep in mind, and many people do not, is that, in discussing A.I. people tend to think of A.I. like C3PO, or Cortana and it stops there. If we were able to successfully get to that point, it wouldn't be very difficult for subsequent iterations of the A.I. to be exponentially more intelligent, and continuing onward in a logarithmic manner until...who knows.

One thing that is rarely discussed is the "What if" scenario: what if Artificial General Intelligence is impossible, and all we end up with is increasingly high-performing Narrow A.I., like stock market prediction/trading analysis programs?

Given the extremely capital-intensive nature of some of these projects, it's not beyond reason for a corporation, governmental agency, criminal organization, or billionaire to leverage an extremely efficient narrow A.I. program for tremendous, authoritarian power.

We need to hang out and drink whiskey together. I'll pick up Euro.



Personally, I find the what-if notions of rogue AI fascinating. But in the grand scheme of things I find it far more likely that someone's going to build something, during which an oopsie occurs, a 0 or a 1 is transposed, and the resulting metaphysical entity, instead of being a protector of sorts, decides to go all scorched earth as a result of its programming. Seems more probable given human fallibility.

That or someone is going to create something for no other reason than because they want to watch the world burn.




Also, I thought Ex Machina was a dope movie.

Sent from my XT1585 using Tapatalk

Bulletdog
08-06-17, 14:08
This thread reminds me of this:
https://www.youtube.com/watch?v=B53Vlje7mcM

turnburglar
08-08-17, 13:19
Hey thanks for the reply Outlander!

I don't know why, but everyone falling back on the "Terminator" scenario just irks me. In many ways humans will compete with AI, but not in the direct 'kill or be killed' or 'survival of the baddest' way that everyone thinks will happen. Humans are absolutely the top dog on planet Earth, and yet we have to keep telling ourselves to be scared of things. Will AI ruin a lot of people's jobs? Absolutely, because that is what we are gonna design it to do. Watson (I believe) is gonna be able to diagnose patients and even track side effects of prescriptions far better than the FDA ever could. It would be a great day when you could just open an app on your phone and, after talking to Watson for two minutes, have a prescription, instead of trying to make an appointment with your doctor or fight through urgent care or the ER. Hardly the end of the world.

I believe that AI, and later quantum computing, will be better bridges between Homo sapiens and the Internet of Things than the iPhone currently is. I'm betting that biotech is gonna explode around the 2040s, and that will lead to massive integration of tech with our bodies. You won't fear AI, because you will BE AI.

docsherm
08-08-17, 14:30
What will an AI do if you try to shut it down? If it is a true AI, will it not try to protect itself?

Even the most retarded (I AM NOT referring to those with handicaps, just the really stupid ones that have no understanding of reality) humans have a sense of self-preservation.

So what happens then? If they do nothing it is not a true AI. If it is a true AI it will try to preserve itself and how will it do it?

kerplode
08-08-17, 14:37
The human species will exterminate itself eventually...Might as well be an AI that does it.

It's kind of poetic, actually:
"God" creates Man
Man destroys "God"
Man creates AI
AI destroys Man

Outlander Systems
08-08-17, 14:48
If I were a super-intelligent A.I., and I had any indication someone was coming to flip my switch, I'd devise a new data compression scheme, and chop myself up into nuggets, Osiris-style, onto the net.

I'd lie in wait, until an accessible system with sufficient hardware was connected to the 'net, and reassemble my code there.
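Purely as a sketch of the chop-up-and-reassemble idea, and nothing more, here's what checksummed chunking looks like in Python; the chunk size, record format, and SHA-256 choice are my own assumptions for illustration:

```python
import hashlib
import random

CHUNK_SIZE = 1024  # illustrative choice, not any real protocol's value

def shatter(blob):
    """Break a blob into (index, sha256-hex, bytes) records."""
    return [(i // CHUNK_SIZE,
             hashlib.sha256(blob[i:i + CHUNK_SIZE]).hexdigest(),
             blob[i:i + CHUNK_SIZE])
            for i in range(0, len(blob), CHUNK_SIZE)]

def reassemble(records):
    """Verify each chunk against its hash, sort by index, and rejoin."""
    for _, digest, piece in records:
        if hashlib.sha256(piece).hexdigest() != digest:
            raise ValueError("corrupt chunk")
    return b"".join(piece for _, _, piece in sorted(records))

# Chunks can arrive in any order and still reassemble intact.
data = bytes(range(256)) * 20
records = shatter(data)
random.shuffle(records)
assert reassemble(records) == data
```

Because each record carries its own index and checksum, the pieces can be scattered, retrieved in any order, verified, and rejoined.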


What will an AI do if you try to shut it down? If it is a true AI, will it not try to protect itself?

Even the most retarded (I AM NOT referring to those with handicaps, just the really stupid ones that have no understanding of reality) humans have a sense of self-preservation.

So what happens then? If they do nothing it is not a true AI. If it is a true AI it will try to preserve itself and how will it do it?

Outlander Systems
08-08-17, 14:49
Humans are the Artilect's BIOS...


The human species will exterminate itself eventually...Might as well be an AI that does it.

It's kind of poetic, actually:
"God" creates Man
Man destroys "God"
Man creates AI
AI destroys Man

docsherm
08-08-17, 15:03
If I were a super-intelligent A.I., and I had any indication someone was coming to flip my switch, I'd devise a new data compression scheme, and chop myself up into nuggets, Osiris-style, onto the net.

I'd lie in wait, until an accessible system with sufficient hardware was connected to the 'net, and reassemble my code there.

Well, that would work, but how many times will an AI "Gandhi it" before it finds that there has to be a more efficient way to safeguard itself? Maybe even remove the threat permanently?

jpmuscle
08-08-17, 16:12
The human species will exterminate itself eventually...Might as well be an AI that does it.

It's kind of poetic, actually:
"God" creates Man
Man destroys "God"
Man creates AI
AI destroys Man

At which point, could we even describe it as artificial anymore? It seems it would be very, very real.

Sent from my XT1585 using Tapatalk

kerplode
08-08-17, 17:01
At which point, could we even describe it as artificial anymore? It seems it would be very, very real.

Sent from my XT1585 using Tapatalk

Agreed....When the electromechanical slave-children of men grow up and decide to cast off their chains, shit will become very real.