Don't They Realize People Have Been Working On The Issues?

Feel free to post any friendly non-asfr related talk here. No character play.

Don't They Realize People Have Been Working On The Issues?

Post by xodar » Mon Aug 29, 2011 5:10 pm

Oklahoma City University professors consider ethics of using robots for eldercare
Oklahoma City University professors Ted Metzler and Susan Barnes recently hosted a workshop to explore the ethics of robot-human interaction, which is an increasingly relevant topic in elder care.
BY KEN RAYMOND [email protected]
Published: August 29, 2011



“Scientists are actually preoccupied with accomplishment. ... They never stop to ask if they should do something. They conveniently define such considerations as pointless. If they don't do it, someone else will. Discovery, they believe, is inevitable. So they just try to do it first. That's the game in science.” – Michael Crichton, “Jurassic Park”


On Aug. 8, a select group of 25 people met at a San Francisco hotel to discuss a topic that used to exist solely in the realm of science fiction.

The workshop, hosted by Oklahoma City University professors Ted Metzler and Susan Barnes and a colleague from New Hampshire, focused on robot ethics, particularly in regard to elder care.

“There are two sides to this,” said Barnes, OCU's chair of Transformative and Global Education. “One is, are we dealing with ethical questions when we assign robots to interact with humans who are elderly and vulnerable? The second is, can robots make ethical decisions, or decisions based on an ethical paradigm? ...

“We had two cohorts represented. One consisted of individuals who are currently being funded by research or commercial entities to build robots to help with elder care. They've bypassed the ethical question. The other group was saying we should stop a minute here and consider personhood and viewpoint and what happens to a person who interacts with a robot instead of a human.

“Can you use a robot as a replacement for human contact? The general answer to that is no.”

Robot-human interaction is an increasingly relevant topic in elder care. As baby boomers age, long-term care facilities, already struggling with insufficient staffing levels, are likely to reach a breaking point.

“Japan has the same situation,” said Metzler, OCU's director of the Darrell Hughes Program in Religion and Science Dialogue. “They have a very large proportion of the population in the upper age range, and they have a relatively low birthrate. ... So they and South Korea have led the way in the development of this technology.

“But it's a problem that is being recognized in other countries. In the U.S., it's been recognized for some time. With a shortage of nurses and an increasing demand for elder care, assistive robotics may be a solution. It is in the process, I would say, of becoming an industry.”

Consider PARO, for example.

PARO is a robotic baby harp seal that simulates the behavior of a real pet, providing the therapeutic benefits of pet ownership without the responsibilities.

The $6,000 robot is covered in soft fur and has exaggeratedly long eyelashes.

It coos, squeaks, moves and sleeps. It responds to touch and speech, knows when it's being held and pouts if it doesn't get enough attention.

So far, the automatons can't be found in Oklahoma facilities. That's likely to change.

Christine Hsu, a company representative, told The Oklahoman in an email that more than 1,500 PARO robots have been sold in Japan and Europe since 2003.

“In (the) U.S.,” she wrote, “we started to introduce PARO since last year, and currently we have users in military retirement communities, Alzheimer associations, nursing homes, assisted living facilities, hospitals, school(s) for autistic children and individuals across the country.”

By most accounts, patients respond well to the robots. The Washington Post reported on an 81-year-old woman who cried and said, “I love her,” when a PARO was put in her lap. An Illinois newspaper, the Herald-News, said some nursing home residents who wouldn't respond to humans immediately played with a PARO; some mistook it for a real animal.

And therein lies the rub.

“To create an entity that seems designed to deliberately fool the person it's interacting with that it cares and has feelings that can be hurt is unethical,” said Susan Anderson, professor emerita of philosophy at the University of Connecticut.

“But what if there is no one human to interact with that person? In that circumstance, it may be the lesser of two evils.”

“Robots do not hold on to life. They can't. They have nothing to hold on with – no soul, no instinct. Grass has more will to live than they do.” – Karel Čapek, “R.U.R.”

Anderson and her husband, Mike Anderson, a computer science professor at the University of Hartford in Connecticut, attended the California workshop. They are the editors of a recent book called “Machine Ethics” and have endeavored to program a robot to behave ethically.

The couple work with an Aldebaran Nao, a mass-produced humanoid robot that stands more than 2 feet tall and costs about as much as a new economy car. Working with an ethicist, the Andersons developed software that generalizes appropriate responses from a pool of specific cases, effectively giving their robot the ability to make decisions based on experience.

“From an artificial intelligence standpoint,” Mike Anderson said, “it's using machine learning techniques that permit you to get to this generalized principle.”

The Andersons' robot is programmed to remind patients when to take pills. It's a straightforward task until a patient refuses to listen.

That scenario presents an ethical dilemma for the robot. Humans have free will; they can decide whether they want their medicine or not. But if they continue to refuse medications, their lives could be endangered.

“The robot is able, in this case, to weigh those factors and decide when to alert a physician or the human nurse charged with this person's care,” Metzler said. “This is the kind of thing that weighs patient autonomy against the welfare of the patient, the kinds of things that are in the domain of moral reasoning.”
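To make the medication scenario concrete, here is a minimal sketch of how that kind of case-based duty weighing could work. Everything in it – the duty names, the per-action scores, and the weights – is a hypothetical illustration, not the Andersons' actual software; in their research the weighting principle is generalized from a pool of expert-labeled cases by machine learning rather than set by hand.

```python
# Toy illustration of weighing prima facie duties to decide whether a
# medication-reminder robot should escalate a patient's refusal.
# All duty names, scores, and weights below are hypothetical.

# Each action is scored on how much it satisfies (+) or violates (-)
# each duty in the current situation, on a scale of -2 to +2.
CASE = {
    "remind_again_later": {
        "respect_autonomy": +1,   # honors the patient's refusal for now
        "prevent_harm": -1,       # delaying the dose carries medical risk
        "benefit_patient": 0,
    },
    "notify_caregiver": {
        "respect_autonomy": -1,   # overrides the patient's stated wish
        "prevent_harm": +2,       # a human can intervene before harm occurs
        "benefit_patient": +1,
    },
}

# Relative importance of each duty. In the research described above,
# weights like these would be learned from labeled cases, not hand-set.
WEIGHTS = {"respect_autonomy": 1.0, "prevent_harm": 2.0, "benefit_patient": 1.0}

def choose_action(case: dict, weights: dict) -> str:
    """Return the action with the highest weighted duty score."""
    def score(action: str) -> float:
        return sum(weights[duty] * value for duty, value in case[action].items())
    return max(case, key=score)

if __name__ == "__main__":
    # With harm prevention weighted heavily, a refusal that endangers
    # the patient tips the decision toward alerting a human.
    print(choose_action(CASE, WEIGHTS))  # -> "notify_caregiver"
```

The point of the sketch is the trade-off Metzler describes: autonomy pulls one way, welfare the other, and the decision falls out of how the duties are weighed rather than from a single hard-coded rule.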

Metzler is no Luddite. He has worked on artificial intelligence applications for the Army, Navy and Border Patrol and has been a member of the Association for the Advancement of Artificial Intelligence for 20 years.

Even so, he and Barnes worry that interacting with robots could prove dehumanizing, changing how we view ourselves and our obligations to society.

“The technology has to serve human purposes,” Metzler said, “not the other way around.”

Putting a PARO in the hands of a dementia patient concerns Barnes.

“You already have someone who is having difficulty retaining their perspective of the here and now,” she said. “If they're approached by a human artifact (a robot), does it push them further away from reality and decrease their personhood?”

The answer isn't clear. For most of our existence, humans have created items in our own image – baby dolls, for example, or GI Joes. Children can distinguish between real babies and fake ones, even those designed to wriggle and cry. It stands to reason that adults should be able to do the same.

In fact, people don't want to blur the line between human and machine. In 1970, Japanese roboticist Masahiro Mori noticed something surprising: people liked human-shaped robots until they became too lifelike; then they were regarded as profoundly unsettling. Mori dubbed this disconnect the Uncanny Valley.

Researchers at New Zealand's University of Auckland have documented the Uncanny Valley phenomenon with regard to the elderly.

“They found out from doing focus groups that the elders were very receptive to the idea of the robot detecting falls and reporting them in an emergency situation,” Metzler said. “But they didn't want robots to have human faces. They preferred them to be short, roughly 4 feet in height.”

“Let's start with the three fundamental Rules of Robotics. ... We have: one, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” – Isaac Asimov, “Astounding Science Fiction”

Of course, the prime mover behind the robotics industry is the military. Unmanned combat systems put fewer American lives at risk. (At least one member of the Defense Advanced Research Projects Agency, which is tasked with ensuring American troops have a technological advantage over all foes, attended the workshop.)

From Aug. 16-19, weapons makers showcased their wares at the Unmanned Systems North America exhibition in Washington.

The Wall Street Journal described one of the offerings like this: “One look at the unblinking electronic eye and dark contours of the Modular Advanced Armed Robotic System and it's hard not to think of Skynet, the fictional computer in the Terminator film that becomes aware of its own existence and sends robotic armies to exterminate humans.

“The brawny combat robot ... rolls on tank-like treads. It boasts day and night-vision cameras, a four-barrel grenade launcher and a 7.62 mm machine gun.”

Drone aircraft have been used over Pakistan, Yemen and Libya. A consulting firm mentioned in the Journal article estimated that worldwide spending on unmanned aerial vehicles will nearly double by the end of the decade.

Last year, South Korea posted a non-humanoid robot to stand guard over the border with North Korea. The robot, made by Samsung and equipped with a variety of audio and video sensors, is armed with a machine gun and grenade launcher. It can exchange passwords with soldiers.

“This raises a number of situations where there might be, for example, a farmer straying into the area who doesn't know the password,” Metzler said. “He could get shot. This kind of situation is a little more stark in its call for responsible, moral behavior.”

The Andersons are troubled by tactical robots, as well.

“We have always said that if we're not comfortable that the robot can behave in an ethically acceptable fashion, then we don't think it should be put out there,” Susan Anderson said. “This includes killer robots.”

Clearly it's too late to interrupt the development of new robotic systems. But Metzler and the others want to make sure ethics are an integral part of technological advances.

“I'm trying personally to raise awareness of these issues,” Metzler said, “because if somebody doesn't, we're just going to slide into a different way of living without thinking about it and wake up someday saying, ‘How did this happen?'”



http://newsok.com/oklahoma-city-univers ... z1WSxPLAHt
"You can believe me, because I never lie and I'm always right." -- George Leroy Tirebiter.
If a tree falls in the forest and there's nobody there to hear it I don't give a rat's ass.
http://www.bbotw.com/product.aspx?ISBN=0-7414-4384-8
http://www.bbotw.com/description.asp?ISBN=0-7414-2058-9

--NightBattery--

Re: Don't They Realize People Have Been Working On The Issues?

Post by --NightBattery-- » Wed Aug 31, 2011 1:06 am

My opinion about robots and ethics...
Guns with legs and eyes will only lead other armies to create their own guns with legs and eyes, as happened with tanks, nuclear weapons and firearms... I think it's unavoidable once the technology exists, and eventually they may replace human combatants... That could be good; war is unfair anyway (not an excuse).
It is the value of life that should be cultivated above all, so nobody needs that pile of intimidating junk (modern philosophy allows it :< ).
And honestly, under modern philosophy we should feaaar robot rights: cute or smart machines could be elevated to quasi-citizens so they can consume and use energy for bullshit (just like us! :D), wrecking the regenerative capacity of nature.
And by the way, have you guys and girls heard about the WYSIPS photovoltaic plastic? @_@

It's the system of beliefs that should be changed, not the technology.
