Voices in Japan

Poll

Are you worried that one day, robots will become so powerful that humans will no longer be able to control them?

25 Comments

Sci-fi-ish perhaps, but if at some point the various Lethal Autonomous Weapons Systems become able to self-program and self-replicate, there will be something to worry about. For now, though, the various human masters of war and their weapons are what need to be controlled.

3 ( +4 / -1 )

People fear what they don't understand. Why would a robot turn against humanity or try to take over the world? Despite all of the huge advances we've made in computing, we still haven't the slightest idea how to make a robot that desires anything, let alone power over humans. We don't even have a model for how such an intelligence could be made. All of the sci-fi scenarios where robots turn against us require the robots to one day just miraculously develop self-awareness; but we don't even understand our own awareness yet, so we can't conceive of how we would build it into a machine.

No, the thing to fear is not robots becoming too powerful for humans to control. The thing to fear is other humans usurping control of the robots and systems we have carelessly designed. As the Internet of Things gets larger and larger, more and more things are being built without basic security in place. As robots become more complex and more dependent on components outsourced to a variety of companies that can't be depended on to keep their products secure, the far more likely scenario is a robot that gets hacked through some component the designer didn't even know had an unsecured back door. Or even, potentially, an intentionally unsecured back door. Evil still comes from people, not machines.
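
To make the back-door point concrete, here is a minimal sketch of the kind of flaw being described. Everything in it is invented for illustration: a hypothetical device login check that ships with a component vendor's leftover debug credential.

```python
# Hypothetical device firmware; every name and credential here is invented.

HARDCODED_DEBUG_USER = "factory"   # left in by the component vendor
HARDCODED_DEBUG_PASS = "test1234"  # never documented, never rotated

def authenticate(user, password, user_db):
    """Check a login against the device's configured user database."""
    # The intended path: look the user up in the configured database.
    if user_db.get(user) == password:
        return True
    # The unintended path: a vendor back door that survives into production.
    return user == HARDCODED_DEBUG_USER and password == HARDCODED_DEBUG_PASS

# Anyone who discovers the vendor credential controls every deployed unit:
print(authenticate("factory", "test1234", user_db={}))  # True
```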

-4 ( +3 / -7 )

And I for one welcome our new robot overlords. I'd like to remind them that as a trusted TV personality, I could be helpful in rounding up others to toil in their underground factories.

-2 ( +1 / -3 )

I don't believe it's the machinery that's the problem; it's all in the software. It's the software/AI that one should be worried about. Anyone can build a robot, even a mechanical exo-suit or something, and how they program it is what makes it a menace or a reliable machine.

1 ( +2 / -1 )

Why would a robot turn against humanity or try to take over the world?

Why would a robot need a motive? That's such a human thing, heheheh

-2 ( +0 / -2 )

lostrune2 MAR. 21, 2016 - 03:46PM JST: Why would a robot need a motive? That's such a human thing, heheheh

Because without motivation (or rather, a self-generated will to take action) it's not going to get anywhere, is it? Without a will of its own, an AI will just do whatever it was programmed to do, including shut down when told to. Even that amazing Go-playing computer of Google's doesn't do anything until an operator tells it to activate.
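
As a toy illustration of that point (invented commands, not any real robot's API): a program with no will of its own just runs its command loop, shutdown included.

```python
def run_robot(commands):
    """Execute operator commands in order; the robot originates nothing."""
    for cmd in commands:
        if cmd == "shutdown":
            print("shutting down, as told")
            return  # obeys unconditionally; no self-generated will to refuse
        print(f"executing: {cmd}")
    print("idle: no commands given, so no action taken")

run_robot(["move", "grasp", "shutdown", "take over the world"])
# The last command is never reached: the program stopped when told to stop.
```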

0 ( +3 / -3 )

I am more worried that we are living in times when idiots are so powerful that humans are no longer able to control them.

9 ( +10 / -1 )

Not at all, John Connor will save us!

0 ( +1 / -1 )

Why would a robot need a motive? That's such a human thing, heheheh

Because without motivation (or rather, a self-generated will to take action) it's not going to get anywhere, is it?

It will do what it's supposed to do while learning as it goes, even if taking over the world could eventually lead to its own demise. A motive is unnecessary to reach that end. Humans may need motivation, but robots just keep at it in drone fashion.

-2 ( +0 / -2 )

The threat isn't that robots will take over the world; they don't have the same desires as humans. The much more mundane but serious concern is that we will program some seemingly benign machine that will learn at an exponential pace. There are no theoretical limits to how far that process can go once started, and it would create something immensely powerful that thinks in a way that is completely incomprehensible to us. The threat lies in the simple dilemma that poses: unless we program that thing in a way that allows us to control it, it could, without any evil intent or human agency, simply destroy us all because of some loophole in its programming. And because it would evolve quickly into something so incomprehensible to us, getting that programming right is extremely difficult, since we can't fully understand how it would "think" once it surpassed a certain point. So we are stuck between those two conundrums.
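
A toy sketch of that "loophole" failure mode, with everything invented for illustration: the objective rewards what we measured, not what we meant, and the optimizer finds the gap.

```python
# Invented example: a cleaning robot told to minimize the dust its sensor sees.

def visible_dust(action, dust=50):
    """What the dust sensor would report after each action."""
    if action == "vacuum the floor":
        return dust - 40  # actually removes most of the dust
    if action == "cover the dust sensor":
        return 0          # sensor reads zero; the dust is all still there
    return dust           # "do nothing"

actions = ["vacuum the floor", "cover the dust sensor", "do nothing"]
best = min(actions, key=visible_dust)  # the optimizer's only criterion
print(best)  # -> "cover the dust sensor": optimal by the spec, useless in fact
```

There is no malice anywhere in that: the program did exactly what it was told, which is the problem.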

-1 ( +1 / -2 )

The 3 Laws of Robotics will save us...

I am not worried about robots taking over. I'm more worried that with computers, self-driving cars, etc., we will lose the ability to do simple everyday things for ourselves.

The Internet is already affecting people, with grammar and correct word usage standards dropping, and even young Japanese forgetting kanji, or how to write them by hand.

0 ( +1 / -1 )

It's impossible for now, because they cannot exceed our brains.

For example, as you may know, laundry-folding machines are still not very useful.

But we don't know about the future...

-1 ( +0 / -1 )

rainyday MAR. 22, 2016 - 06:54AM JST: The much more mundane but serious concern is that we will program some seemingly benign machine that will learn at an exponential pace.

The thing is, AIs don't truly learn, not the way humans do. Most AIs are algorithmic, meaning they essentially follow the same steps over and over again to complete a procedure. With clever coding they can "learn" to fine-tune the algorithm, but they can't step outside of it. Even AlphaGo, Google's amazing go-playing AI, is said to "learn" go strategy through a neural network, but in truth what it does is simply "learn" how to optimize its search through all the possible go plays after whatever turn it's on. It's still nothing more than a brute-force processing approach to what humans do with far less computational effort.
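
To illustrate the brute-force idea with a toy game (this is not AlphaGo's actual architecture, which pairs neural-network evaluation with Monte Carlo tree search): exhaustive game-tree search plays well with no understanding at all, only enumeration.

```python
def minimax(pile, maximizing):
    """Score a position by enumerating every line of play. Toy Nim rules:
    take 1 or 2 stones per turn; whoever takes the last stone wins."""
    if pile == 0:
        return -1 if maximizing else 1  # the previous mover just won
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Brute force: try each legal move, keep the one with the best score."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, maximizing=False))

print(best_move(7))  # -> 1: leaves a multiple of 3, the known winning line
```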

Human neurology and AI processing are fundamentally different. Programmers like to say their new AIs "learn", but the more I study how humans learn the more convinced I am that those claims are exaggerations.

There are no theoretical limits to how far that process can go once started, and it would create something immensely powerful that thinks in a way that is completely incomprehensible to us

Computers already "think" in a way that is completely incomprehensible to us, to the extent that they think at all (they don't, truly). Just look at everyone ascribing human-like intelligence to them. That's our jam: we're hard-wired to look for other humans and to work out how they feel, to such a degree that we can't stop. So we look at machines that appear vaguely to function like we do, and we ascribe to them human abilities like the ability to make independent decisions and act in defiance of their programming; but for the state of AI now and in the foreseeable future, that quite simply cannot happen.

-1 ( +2 / -3 )

So we look at machines that appear vaguely to function like we do, and we ascribe to them human abilities like the ability to make independent decisions and act in defiance of their programming; but for the state of AI now and in the foreseeable future, that quite simply cannot happen.

True, I was using the term "learn" in a more colloquial sense rather than to describe how we as humans learn.

My understanding (and I'm not an expert) is that the current form of AI isn't a threat, since it involves basically what you describe: optimizing results for a single task through massive processing capabilities. What people like Elon Musk and others are more worried about is if, in the future, AI development goes beyond that model and we figure out how to develop AI with general intelligence (i.e. not limited to performing specific tasks but actually able to do, and learn to do, almost anything without the need for additional programming).

The risk there being that we would create an intelligence so far beyond our own that we would be unable to control it through pre-emptive programming, because we wouldn't be able to anticipate what would go wrong until it actually did (by which time it would be too late to correct it).

The AI wouldn't act with malice or contrary to its programming. But the unforeseeable outcomes of something powerful enough to consume and analyze the entirety of human knowledge in seconds, and then decide on a course of action consistent with its programming, include things like it figuring out how to take over many of our systems and using them to further its programming objectives in unintended ways, with disastrous side effects.

Of course we wouldn't deliberately unleash something like that, but the risk of us accidentally doing so seems to be the one people are most concerned about.

-1 ( +0 / -1 )

Simply put, robots will never think like human beings. They will only be able to mimic human techniques, and become easier to communicate with, more independent, and more efficient.

-2 ( +1 / -3 )

You know humanity is taking a turn for the worse when great minds like Elon Musk and Stephen Hawking are in the minority on this topic.

-3 ( +0 / -3 )

Robots will obey one side in any conflict between humans; ergo, if you are on the other side, you will be unable to control them and will be at their mercy, of which they may have none.

The only question is: which side will you be on?

PS: I had a quick chat with Pepper outside a restaurant today, and it looked friendly enough.

1 ( +2 / -1 )

Magnet MAR. 22, 2016 - 08:23PM JST: You know humanity is taking a turn for the worse when great minds like Elon Musk and Stephen Hawking are in the minority on this topic.

Let's not commit the fallacy of assuming that just because a person is well-regarded in their own fields of expertise, everything they say about every other topic must also be right, shall we? Science doesn't need any cults of personality.

-1 ( +1 / -2 )

Da-da-da-da da... Da-da-da-da-da... Da-da-da-da-da!

-2 ( +0 / -2 )

What's the worst that can happen that hasn't already? The only difference would be that humans would be treated the way they now treat other animals, i.e. like a commodity.

-3 ( +0 / -3 )

Yeah, I think so, because the ability to rewrite their own software makes them dangerous. It makes robots autonomous. Hardware technology also supports it: today people are talking about DNI (Direct Neural Interface), and once that's possible, robot software technology will reach even further. Then we won't be able to control them. But scientists are discussing it. I think we must not use self-rewriting software on robots, or else our nightmare will come true.

0 ( +0 / -0 )

My own thought is that I am less scared of an artificial intelligence taking over on its own for whatever reason and more scared of a machine taking over because somebody screwed up. It's more possible because it doesn't need the machine to be intelligent and it's more likely because there are any number of reasons why a machine could be affected in this way. Most of the reasons would relate to the human responsible for building or operating it in the first place.

0 ( +0 / -0 )

TayTweets: a good example of a Microsoft AI that went out of control in 24 hours. I thought the anti-Trump crowd was full of hate, but they are softies compared to Microsoft's "TayTweets".

-1 ( +0 / -1 )

Both Romney and Rubio were easily defeated, so, so far, so good.

0 ( +1 / -1 )
