How Do Robots Challenge Humans?


The recent debate over whether robots could overtake humans has been heated up by warnings, from some academic and industrial celebrities, against the potential danger of the unregulated development of robots. However, what is missing from those warnings is a clear description of any realistic scenario in which robots could assuredly challenge humans, not as puppets programmed and controlled by humans, but as autonomous powers acting on their own "will." If no such scenario could ever be realistic, then even though we might soon see robots used as ruthless killing machines by terrorists, dictators, and warlords, as warned by top-notch scientists and experts [1], we would still not need to worry too much about the so-called demonic threat of robots, since it would be just another form of human threat in the end. However, if the kind of scenario mentioned above could foreseeably be realized in the real world, then humans need to start worrying about how to prevent that peril from happening, instead of how to win debates over imaginary dangers.

The reason that people on both sides of the debate could not see or present a clear scenario in which robots could indeed challenge humans in a realistic way is truly a philosophical issue. So far, all discussions on the issue have focused on the possibility of creating a robot that could be considered a human, in the sense that it could indeed think as a human, instead of being solely a tool of humans operated with programmed instructions. According to this line of thought, we need not worry about the threat of robots to our human species, since nobody could so far provide any plausible reason why it is possible to produce this kind of robot.

Unfortunately, this way of thinking is philosophically incorrect, because people who think this way are missing a fundamental point about our own human nature: humans are social creatures.

An important reason that we could survive as what we are now, and can do what we are doing now, is that we live and act as a community. Similarly, when we estimate the potential of robots, we should not solely focus our attention on their individual intelligence (which, of course, is so far bestowed by humans), but should also take into consideration their sociability (which, of course, would also initially be created by humans).

This would then lead to another philosophical question: what would fundamentally determine the sociability of robots? There could be a wide range of arguments on this issue. But in terms of being able to challenge humans, I would argue that the fundamental social criteria for robots could be defined as follows:

1) Robots could communicate with each other;

2) Robots could help each other to recover from damage or shutdown, through necessary operations including the change of batteries or the replenishment of other forms of energy supply;

3) Robots could produce other robots by exploring, collecting, transporting, and processing raw materials into final products of robots.
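As a purely illustrative toy model (all class and method names here are my own assumptions, not taken from any real system), the three criteria above could be sketched as a minimal simulation in which robots exchange messages, transfer energy to one another, and assemble new robots from raw material:

```python
# Toy sketch of the three sociability criteria; names are hypothetical.

class Robot:
    def __init__(self, rid, energy=10):
        self.rid = rid
        self.energy = energy
        self.inbox = []

    # Criterion 1: robots can exchange views (here, simple messages).
    def send(self, other, message):
        other.inbox.append((self.rid, message))

    # Criterion 2: robots can help each other recover (here, share energy).
    def recharge(self, other, amount):
        if self.energy > amount:
            self.energy -= amount
            other.energy += amount

    # Criterion 3: robots can produce other robots from raw materials.
    @staticmethod
    def build(rid, raw_material_units, units_needed=5):
        if raw_material_units >= units_needed:
            return Robot(rid), raw_material_units - units_needed
        return None, raw_material_units


a, b = Robot("a"), Robot("b", energy=1)
a.send(b, "low on energy?")        # communication
a.recharge(b, 4)                   # mutual recovery
c, leftover = Robot.build("c", 12) # reproduction from raw material
print(b.inbox, a.energy, b.energy, c.rid, leftover)
```

The point of the sketch is only that nothing in these three behaviors requires more than message passing, resource transfer, and assembly, which is consistent with the essay's claim that no known scientific principle forbids them.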

Once robots possess the above functionalities and start to "live" together as a mutually dependent multitude, we should reasonably view them as social beings. Social robots could form communities of robots. Once robots could function as defined above and form a community, they would no longer need to live as slaves of their human masters. Once that happens, it could be the beginning of a history in which robots could challenge humans or start their cause of taking over humans.

The next question would be: is the sociability defined above realistic for robots?

Since not all of the functionalities mentioned above exist (at least publicly) in the world today, to avoid any unnecessary argument, it would be wise to base our judgment on whether any known scientific principle would be violated in any practical effort to realize any particular functionality among those mentioned above. Communication with other machines, moving objects, operating and repairing machine systems, and exploring natural resources are all among today's common practices with programmed machinery. Therefore, even though we might not yet have a single robot, or a group of single robots, that possesses all the functionalities mentioned above, there is no fundamental reason for any of those functionalities to be considered not producible according to any known scientific principle; the only thing left to do would be to integrate those functionalities together into one whole robot (and thus a group of whole robots).

Since we do not see any known scientific principle that would prevent any of those functionalities from being realized, we should reasonably expect that, with money invested and time spent, the sociability of robots as defined earlier could foreseeably become real, unless humans on this planet make special efforts to prevent that from happening.

While sociability would be a critical precondition for robots to challenge humans, it might still not be sufficient for robots to pose any threat to humans. For robots to become real threats to humans, they need the ability to fight. Unfortunately for humans, the fighting ability of robots might be even more certain than their sociability. It is reasonable to expect that human manufacturers of robots would make great efforts to integrate as much of the most advanced available technology as possible into the design and production of robots. Therefore, based upon some common knowledge about today's technology and what we have already witnessed of what robots can do, we might very reasonably expect that an army of robots would be capable of the following:

1) They would act in a highly coordinated manner. Even if scattered around the world, thousands of robots could be coordinated through telecommunication;

2) They would be good at remotely controlling their own weapons, or even the weapons of their enemies once they break into the enemy's defense system;

3) They could "see" and "hear" what happens hundreds or even thousands of miles away, no matter whether it happens in open space or in enclosed space, and no matter whether the sound propagates through the air or through wires;

4) They would be able to move on land, on or under water, and in the air, in all weather conditions, and move slowly or fast as needed;

5) They could react promptly to stimuli, act and attack with high precision, and see through walls or into the ground;

6) Of course, they could identify friends and foes, and make decisions of action based upon the targets or the situations they are facing;

7) Besides, they would not be bothered by some fundamental human natures such as material and sexual desires, jealousy, the need for rest, or the fear of death. They would be poison-resistant (whether to chemical or biological poisons), and might even be bulletproof.

According to the definition of the sociability of robots given above, robots in a community would be able to: 1) help each other recover from damage or shutdown, so it would not be an issue for robots to replace their existing operating systems or application programs when needed, and the same would be true for the replacement or addition of needed new hardware parts; 2) manufacture new parts for creating new robots, so as long as there are designs for new software or hardware, they could create the final products based upon those designs.

The above two points are what robots could practically be built to do today. However, for robots to win a full-scale war against humans, they would need to be able to perform sophisticated logical reasoning when facing various unfamiliar situations. This would be a much more challenging goal than any capability or functionality mentioned so far in this writing. There could be two different ways to achieve this goal.

We might call the first way the Nurturing way, by which humans continue to improve the logical reasoning of robots through AI programming development, even after the robots have formed a community. Humans keep nurturing the community of robots this way until, at some point, the robots are sufficient to win a full-scale war against humans, and then set them off to fight against humans. To people without a technical background, this might sound like wishful thinking with no guaranteed certainty. Still, people with some basic programming background could see that, as long as time and money are invested in creating a community of robots that could challenge humans, this is achievable.

The second way would be the Evolution way, by which, from the very beginning, humans create a community of robots that could achieve their own evolution through software and hardware upgrading. The main challenge for robots to evolve on their own would be how they could evolve through designs for improving their own software and hardware. Making robots able to evolve by themselves could be reduced to two major tasks: 1) to enable robots to identify needs, and 2) to enable robots to make software and hardware designs based upon those needs.

The first task of identifying needs could be achieved by recording the history of failures to accomplish previous missions, which could in turn be achieved by analyzing (through some fuzzy-logic-type programming) how each previous mission was carried out. The second task of designing based upon needs might be more complicated in principle, but still feasible to fulfill. This second way (i.e., the Evolution way) would be a much bigger challenge than the Nurturing way mentioned above. So far, we still cannot see a hundred percent certainty for it to happen in the future, even if money and time are invested. However, even if humans could not create an evolutionary community of robots, they could still help robots become intelligent enough to fight a full-scale war against humans through the Nurturing way mentioned above.
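As a purely hypothetical sketch of the first task (the log format, names, and scoring rule are my assumptions, not a description of any real system), identifying "needs" from a failure history could be as simple as counting which subsystem is most often implicated in failed missions and flagging it for redesign:

```python
from collections import Counter

# Hypothetical mission log entries: (mission_id, succeeded, subsystem_implicated)
mission_log = [
    ("m1", True,  None),
    ("m2", False, "battery"),
    ("m3", False, "gripper"),
    ("m4", False, "battery"),
    ("m5", True,  None),
]

def identify_needs(log, threshold=2):
    """Task 1: turn a failure history into a ranked list of 'needs'.

    A subsystem implicated in at least `threshold` failures is flagged as
    needing a software or hardware redesign (which would be Task 2's input).
    """
    failures = Counter(sub for _, ok, sub in log if not ok and sub)
    return [sub for sub, n in failures.most_common() if n >= threshold]

print(identify_needs(mission_log))
```

A real system would presumably use something far richer than frequency counting (the essay suggests fuzzy-logic-style analysis), but the sketch shows why the first task looks tractable: it reduces to bookkeeping over recorded outcomes.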

There is still one critical question left for this writing to answer: why would any sane human create a community of social robots with lethal power and help them fight against humans, instead of keeping them as what they are supposed to be, namely tools or slaves of humans?

We should look at this question from two different levels.

First, whether someone who can mobilize and organize resources to create a community of social robots would indeed have the intention to do so is a social issue, which is not under any rigorous restriction provided by natural laws. As long as something is possible to happen according to natural laws, we cannot exclude the possibility of its happening based upon the wishful intentions of all humans.

Secondly, human civilization carries many suicidal genes in itself. The competition within human society provides enough motives for people who are capable of doing something to enhance their competitive power to push their creativity and productivity to the extreme edge. Furthermore, history has proven that humans are prone to ignoring many potential risks when they are striving for gains for their own benefit. Especially, once some groups of humans are able to do something with potentially dangerous risks to others and to themselves, whether they would actually do it could come down to the decisions of very few people, or even a single person. Since there is no natural law to prevent a community of social robots with lethal power from being created, then without social efforts of regulation, we could come to a point where we must count on the psychological stability of very few people, or even a single person, to determine whether robots would threaten humans or not.

The last question that might remain would be why humans would possibly make robots hate humans, even if we could create communities of social robots. The answer could be as simple as mentioned before: for the sake of competition…


