Thursday, November 10, 2016

Will banning 'killer robots' actually stop robots from killing?

Researchers have warned it is already too late to stop killer robots - and say banning them would be little more than a temporary solution.
University at Buffalo researchers claim 'society is entering into a situation where systems like these have and will become possible.'
Elon Musk and Professor Stephen Hawking have both warned that artificial intelligence could develop a will of its own that is in conflict with that of humanity, and could herald dangers like powerful autonomous weapons.

Killer robots have a Pentagon budget line and a group of non-governmental organizations, including Human Rights Watch, is already working collectively to stop their development, the team say.
They claim governance and control of systems like killer robots need to go beyond the end products.
'We have to deconstruct the term "killer robot" into smaller cultural techniques,' says Tero Karppi, assistant professor of media study, whose paper with Marc Böhlen, UB professor of media study, and Yvette Granta, a graduate student at the university, appears in the International Journal of Cultural Studies.

'We need to go back and look at the history of machine learning, pattern recognition and predictive modeling, and how these things are conceived,' says Karppi, an expert in critical platform and software studies whose interests include automation, artificial intelligence and how these systems fail.
'What are the principles and ideologies of building an automated system? What can it do?'
By looking at killer robots we are forced to address questions that are set to define the coming age of automation, artificial intelligence and robotics, he says.
'Are humans better than robots to make decisions? If not, then what separates humans from robots? When we are defining what robots are and what they do we also define what it means to be a human in this culture and this society,' Karppi says.
THE HISTORY OF KILLER ROBOTS
Killer robots are at the center of classic stories told in films such as 'The Terminator' and the original Star Trek television series' 'The Doomsday Machine,' yet the idea of fully autonomous weapons acting independently of any human agency is not the exclusive license of science fiction writers.
A robot in Parliament Square, central London, during a photocall for the Campaign to Stop Killer Robots. Lethal armed robots which could target and kill humans autonomously should be banned before they are used in warfare, campaigners have said.
The Pentagon allocated $18 billion of its latest budget to develop systems and technologies that could form the basis of fully autonomous weapons, instruments that independently seek, identify and attack enemy combatants or targets, according to The New York Times.
A diplomatic strike in this potential theater of machine warfare came in 2012 when a group of NGOs formed 'The Campaign to Stop Killer Robots,' charged with banning the development of such weapons.
'Previously humans have had the agency on the battlefield to pull the trigger, but what happens when this agency is given to a robot and because of its complexity we can't even trace why particular decisions are made in particular situations?'
The team say the ethics programmed into the machines are key.
'Consider how both software and ethical systems operate on certain rules,' says Karppi. 'Can we take the ethical rule-based system and code that into the software? Whose ethics do we choose? What does the software allow us to do?'
Self-driving cars operate based on the rules of the road: when to stop, turn, yield or proceed, the team points out.
But autonomous weapons need to distinguish between friend and foe and, perhaps most importantly, to recognise when one becomes the other - in the case of surrender, for instance.
'The distinctions between combatant and non-combatant, human and machine, life and death are not drawn by a robot,' write the authors.
'While it may be the robot that pulls the trigger, the actual operation of pulling is a consequence of a vast chain of operations, processes and calculations.'
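To make Karppi's question about rule-based systems concrete, here is a minimal sketch in Python of what coding 'ethics' as software rules might look like. Everything in it - the Target fields, the rules, the confidence threshold - is a hypothetical illustration for this article, not a description of any real system.

from dataclasses import dataclass

@dataclass
class Target:
    identified_as: str            # 'friend', 'foe' or 'unknown' - output of upstream classifiers
    has_surrendered: bool         # has the status changed from foe to non-combatant?
    classifier_confidence: float  # how sure the pattern-recognition stage is (0.0 to 1.0)

def engagement_decision(target: Target) -> str:
    # Hand-written 'ethical' rules, applied in order. The decision is only the
    # last link in a chain of prior classifications the rules themselves never see.
    if target.identified_as != 'foe':
        return 'do not engage'            # rule 1: never engage friends or unknowns
    if target.has_surrendered:
        return 'do not engage'            # rule 2: a foe who surrenders is a non-combatant
    if target.classifier_confidence < 0.95:
        return 'refer to human operator'  # rule 3: below the threshold, keep a human in the loop
    return 'engage'

# The hard case the authors raise: the moment a foe becomes a non-combatant.
print(engagement_decision(Target('foe', has_surrendered=True, classifier_confidence=0.99)))
# prints 'do not engage' - but only if upstream perception flipped the flag in time

The rules themselves are trivial to write; the brittleness the authors describe lives in the inputs. Whether has_surrendered is ever set correctly depends on the whole chain of machine learning and pattern recognition upstream - exactly the 'vast chain of operations, processes and calculations' the quote points to.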
Karppi says it's necessary to unpack two different elements in the case of killer robots.
'We shouldn't focus on what is technologically possible,' he says, 'but rather the ideological, cultural and political motivations that drive these technological developments.'
Professor Stephen Hawking recently warned that artificial intelligence could develop a will of its own that is in conflict with that of humanity.
It could herald dangers like powerful autonomous weapons and ways for the few to oppress the many, he said, as he called for more research in the area.
But if sufficient research is done to avoid the risks, it could help in humanity's aims to 'finally eradicate disease and poverty', he added.
He was speaking in Cambridge at the launch of The Leverhulme Centre for the Future of Intelligence, which will explore the implications of the rapid development of artificial intelligence.
All great achievements of civilisation, from learning to master fire to learning to grow food to understanding the cosmos, were down to human intelligence, he said.
'I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer.
'It therefore follows that computers can, in theory, emulate human intelligence - and exceed it.'
Artificial intelligence was progressing rapidly and there were 'enormous' levels of investment, he said.

He said the potential benefits were great and the technological revolution could help undo some of the damage done to the natural world by industrialisation.
'In short, success in creating AI could be the biggest event in the history of our civilisation,' said Prof Hawking. 'But it could also be the last unless we learn how to avoid the risks.
'Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.
'It will bring great disruption to our economy.
'And in the future, AI could develop a will of its own - a will that is in conflict with ours.
'In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.
'We do not know which.'
He continued: 'That is why, in 2014, I and a few others called for more research to be done in this area.
'I am very glad that someone was listening to me.'
He welcomed the launch of the new centre, which is a collaboration between the University of Cambridge, the University of Oxford, Imperial College London, and the University of California, Berkeley.

REPORT CALLS FOR BAN ON KILLER ROBOTS 
The report by Human Rights Watch and the Harvard Law School International Human Rights Clinic was released as the United Nations kicked off a week-long meeting on such weapons in Geneva. The report calls for humans to remain in control over all weapons systems at a time of rapid technological advances.
It says that requiring humans to remain in control of critical functions during combat, including the selection of targets, saves lives and ensures that fighters comply with international law.
'Machines have long served as instruments of war, but historically humans have directed how they are used,' said Bonnie Docherty, senior arms division researcher at Human Rights Watch, in a statement.
'Now there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines.'
Some have argued in favor of robots on the battlefield, saying their use could save lives.
But last year, more than 1,000 technology and robotics experts — including scientist Stephen Hawking, Tesla Motors CEO Elon Musk and Apple co-founder Steve Wozniak — warned that such weapons could be developed within years, not decades.
In an open letter, they argued that if any major military power pushes ahead with development of autonomous weapons, 'a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.'
According to the London-based organization Campaign to Stop Killer Robots, the United States, China, Israel, South Korea, Russia, and Britain are moving toward systems that would give machines greater combat autonomy. 
'The research done by this centre is crucial to the future of our civilisation and of our species,' he said.
The centre, known as CFI, is funded by an unprecedented £10 million grant from the Leverhulme Trust.
Its mission is to create an interdisciplinary community of researchers that will work closely with industry and policy-makers.
It is the first centre of its kind that will examine both risks and benefits, short and long-term.
And Professor Hawking joked: 'We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity.
'So it is a welcome change that people are studying instead the future of intelligence.'


Read more: http://www.dailymail.co.uk/sciencetech/article-3921150/It-late-ban-killer-robots-Researchers-say-autonomous-weapons-possible.html
