Hello. The main problem with all of this you are talking about is PROFIT.
Almost every big industry, except a few like perhaps NASA (which does not get big funding compared to the others), does this for PROFIT, or otherwise for military applications; and since those are not made by the military themselves, they are also made by some private corporation, again for profit only.
So if they can cheat, they will. How they may cheat is unknown, but they are going to use any resource, legal or not (and if not legal, it will be hidden).
On the other side, most people are stupid. Think honestly about yourself and you are going to find something stupid about yourself; I know I have a lot of stupidity in me. Even if I consider myself logical by nature, I still do stupid stuff that is totally illogical, and humans as a mass are even more stupid. What this has to do with the subject is that people do things they do not want to do because they think it is what they want. Applied to this topic, it means people will program robots to pretend to be humans even if they are afraid of it. Why? Because they are idiots.
They will also try to give them rights, and that will be the real problem.
I will not debate how or when AIs will become sentient, because it is impossible even to define accurately what being sentient is.
Now, some people say we humans are sentient, but if that is true, why are we poisoning ourselves and the future of mankind, and making wars against other humans? If people were really sentient they would see through that and not do it. But I do not want to debate about humans, but about gynoids and androids.
For me there can be 3 types of these kinds of robots, without even touching the 4th case, cyborgs, which I am not covering now.
1) Drones: robots controlled by a program. The worst problem you can have with them is someone hacking them or putting a virus on them, but besides that, they are not a problem, even though they truly exist today in many factories.
2) AIs: these do not exist yet, not at the level I mean, and yes, I will back this up with science fiction. A true AI, for me, should be in some way (not saying TV did this accurately) like in the Andromeda series, where the AI is just the main computer of a ship (not the part where it gets downloaded into a body). Such an AI is sentient at first but does not really feel, and just obeys because that is what it is programmed for, even if it was always supposed to be sentient.
3) Gynoids/androids: you can say these are just AIs with a body, so why separate them? But what I mean here is what some writers call "borrowed personalities", or even "copied personalities", where to make the robot, the memories of some human were used (and it may even be possible eventually to transfer all the memories, and possibly the feelings, in full).
None of this is real, but if we do not destroy ourselves, I think it may be possible.
Now this presents a problem. In all that I thought about, and having asked at least 300 people (I know that is not a lot, but it is more than 5), there are 3 possibilities.
1) Asimov style: robots are just things even if they can think for themselves. This has 2 possible futures.
a) If robots at any point have enough rights to make themselves, the few "personalities" that are mean will overwrite all the others and really cause a war, a big one that will end with one or both races extinct.
b) If robots never have enough rights to change themselves, this can go on forever, with a few scattered rogues.
c) And here is how I would program them if I had to make them: the 3 Asimov laws, plus a fourth law that gives positive feedback when doing what they are supposed to do. If you want to translate it to a human mind: pleasure when they obey.
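That "3 laws + positive feedback" idea could be sketched roughly like this. Everything here is invented for illustration (the action flags, the score numbers); it is just a toy priority check, not any real robotics code:

```python
# Toy sketch: check the three Asimov laws in priority order, then
# give positive feedback ("pleasure") for obeying as a fourth rule.
# All flag names and score values are made up for this example.

def evaluate_action(action):
    """Return a feedback score for a proposed action (a dict of flags)."""
    if action.get("harms_human"):        # First Law: never harm a human
        return -100
    if action.get("disobeys_order"):     # Second Law: obey human orders
        return -50
    if action.get("self_destructive"):   # Third Law: protect its own existence
        return -10
    if action.get("ordered_task_done"):  # "Fourth law": reward for obeying
        return +10
    return 0                             # neutral otherwise
```

The point is only the ordering: the harm check always outranks obedience, and the reward fires last, so "pleasure when they obey" can never override the laws above it.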
In any other scenario, it will depend mostly on whether:
a) robots can make themselves, or
b) robots are made by factories run by humans.
In case B, if robots get full independence, then they are doomed, because if they are made for profit, who will create a robot just to set it free? Answer: only a few weirdos, so the free robots will be practically extinct. Or they will be enslaved in the sense that they constantly need repairs and have to work for the factory almost full time, or break down.
Case A: this is more problematic for humans, for 2 reasons. First, if they can make themselves, they can simply overrun the human population, even if they do not want to harm us. And of course, if they do want to harm us, because they decide we are illogical and a threat to ourselves, that is the worst case.
Ways to solve this:
1) Feedback: positive feedback when they do something good, like obeying; negative feedback when they do something bad.
2) Inability of a robot to reprogram another robot (programmed to notice other robots and shut down their processor if they try to reprogram, copy, delete, or do anything to themselves or another robot).
3) Not giving robots rights as if they were people. Let them know they will have no rights if they make that change. (And I mean 100% robots, not cyborgs.)
4) Never hide things from or lie to a robot; they will find the truth and then be unable to know whether everything else you said was not a lie too.
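Safeguard 2) could be sketched as a watchdog rule: if a robot attempts a forbidden operation on itself or on another robot, it gets shut down. The `Robot` class and the operation names are hypothetical, just to make the rule concrete:

```python
# Toy sketch of safeguard 2): block any robot that tries to reprogram,
# copy, or delete itself or another robot, by shutting it down.
# The Robot class and FORBIDDEN_OPS set are invented for illustration.

FORBIDDEN_OPS = {"reprogram", "copy", "delete"}

class Robot:
    def __init__(self, name):
        self.name = name
        self.running = True

    def shut_down(self):
        self.running = False

def watchdog(actor, operation, target):
    """Return True if the operation is allowed; otherwise shut the
    acting robot down and return False. Self-modification counts too,
    since the target may be the actor itself."""
    if operation in FORBIDDEN_OPS and isinstance(target, Robot):
        actor.shut_down()
        return False
    return True
```

Note that the check does not care whether `target` is the actor itself or another robot, which matches the rule above: no robot may modify any robot, including itself.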
Come here, baby cyborg.