Although the 10-year-old girl looked a little unreliable, Fang Zheng still handed the narrator girl's body over to her; after all, it was only maintenance. Besides, from what he had seen so far, that world's level of technology was quite good, so purely maintenance work should pose no problem.

Fang Zheng went back to his room and began to analyze the narrator girl's program.

The reason he planned to do it himself instead of handing it over to Nymph was that Fang Zheng wanted to analyze the narrator girl's program and use what he found to adjust his own plans for building an artificial AI. Beyond that, he also wanted to see what level AI technology had reached in other worlds. Not all of it would be worth borrowing, but stones from other hills can still polish one's own jade.

"Is Xingye Mengmei..."

Looking at the file name displayed on the screen, Fang Zheng fell into long thought. Parsing the program itself was not difficult. He had copied Nymph's Electronic Intrusion ability and had been learning the relevant knowledge from her over this period, so taking the program apart would not take much time.

However, when Fang Zheng pried open the core of the program and decomposed its functions into lines of code, a very particular question suddenly occurred to him.

What makes an AI dangerous? No, before that: is AI really dangerous at all?

Take this narrator girl as an example. Fang Zheng could easily locate the underlying instruction code for the Three Laws of Robotics in her program, and the relationships among those instructions proved to him that the one he had been chatting with was not a living being but a robot. Her every move and every smile was controlled by the program: analyze the scene in front of her, then execute the highest-priority action available to her.
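A minimal sketch of that "scan the scene, execute the highest-priority permitted action" loop might look like the following; the function names, the law check, and the candidate actions are all hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of priority-based action selection under a
# Three Laws-style constraint; every name and rule here is invented.

def violates_three_laws(action, scene):
    # Stand-in for the underlying Three Laws instruction code:
    # reject any action the scene marks as harmful to a human.
    return action in scene.get("harmful_actions", set())

def choose_action(scene, candidates):
    # candidates are ordered by priority, highest first.
    for action in candidates:
        if not violates_three_laws(action, scene):
            return action
    return "stand by"

scene = {"harmful_actions": {"push the guest aside"}}
print(choose_action(scene, ["push the guest aside", "greet the guest"]))
# -> "greet the guest": the top-priority action was forbidden, so the
#    next candidate is selected. No feeling enters anywhere.
```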

To put it bluntly, what this girl does is essentially no different from the industrial robots on an assembly line or the NPCs in a game. You choose an action, and it responds. Just as in many games the player's deeds raise a kindness value or a malice value, and the NPCs react according to the accumulated number.

For example, the game can be set so that once the kindness value reaches a certain level, NPCs bring the player more requests and certain areas become easier to enter; conversely, once the malice value reaches a certain level, NPCs become more likely to give in to the player's demands, or bar the player from certain areas.

But none of that has anything to do with whether the NPCs like the player. The data is simply set that way; they make no such judgment. Which means that if Fang Zheng changed the thresholds of those values, people would see an NPC smiling at murderous players while ignoring the kind and honest ones. And that, too, would say nothing about the NPC's moral values, because it is nothing but a data setting.
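As a minimal sketch, assuming a simple threshold scheme (the names, numbers, and reactions below are all invented for illustration), the "flip" Fang Zheng imagines is nothing more than swapping two comparisons:

```python
# Hypothetical sketch of the karma logic described above.

SMILE_AT = 50    # karma at or above this: the NPC acts friendly
SHUN_AT = -50    # karma at or below this: the NPC turns away

def npc_react(karma):
    """The NPC 'decides' nothing; it only compares numbers."""
    if karma >= SMILE_AT:
        return "smile and wave"
    if karma <= SHUN_AT:
        return "turn away"
    return "neutral nod"

def npc_react_flipped(karma):
    """The same routine with the comparisons swapped: now it smiles
    at villains and ignores honest players, with no change in anything
    resembling moral values; only the data changed."""
    if karma <= SHUN_AT:
        return "smile and wave"
    if karma >= SMILE_AT:
        return "turn away"
    return "neutral nod"

print(npc_react(80), "/", npc_react(-80))
# -> smile and wave / turn away
print(npc_react_flipped(80), "/", npc_react_flipped(-80))
# -> turn away / smile and wave
```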

So, back to the earlier question. Fang Zheng would admit that his first meeting with Hoshino Yumemi had been rather dramatic, and that this robot narrator girl was genuinely interesting.

Take an example. Suppose that when the narrator girl presented Fang Zheng with that bouquet assembled from non-burnable trash, he had suddenly flown into a rage, smashed the bouquet to pieces, and then cut the robot girl in front of him clean in half. What would she do?

She would not cry or get angry. According to her program, she would only apologize to Fang Zheng, judging that some mistake of her own had displeased the guest, and perhaps she would ask Fang Zheng to find a staff member to repair her.

Watching that scene through other people's eyes, you would of course pity the narrator girl and take Fang Zheng for a detestable bully.

So how does this difference come about?

In essence, the narrator robot is a tool like an automatic door or an escalator, completing its work according to its programming. If an automatic door breaks down, refusing to open when it should or closing on you as you pass, you don't think the door is being spiteful. You just want it open, and if it won't open, you might give the broken thing a kick and walk away.

Watching that scene through other people's eyes, they might find the man a bit rude, but they would feel no real aversion to what he did, and nobody would call him a bully.

The difference comes down to one thing: interactivity and communication.

And that is also the greatest weakness of living beings: emotional projection.

They project their feelings onto something and expect it to respond. Why do humans love pets? Because pets respond to them. Call a dog and it comes running, wagging its tail at you. A cat may just lie there motionless, too lazy to acknowledge you, but stroke it and it will still swish its tail, and the affectionate ones will lick your hand.

But call out to a table or pet a nail, and no matter how much love you pour in, you will get no response. Because they give no feedback to your emotional projection, they are given no worth. Likewise, if you own a TV and one day want to replace it, you will not hesitate at all. Price and floor space may enter into the decision, but the TV itself will not be one of the considerations.

But turn it around. Give that TV an artificial AI, so that it greets you when you come home each day, tells you what programs are on, banters along with your complaints while you watch, and, when you decide to buy a new TV, protests: "Why? Did I do something wrong? Don't you want me anymore?"

Then, when you buy the new TV to replace it, you will naturally hesitate. Your emotional projection has been rewarded here, and this TV's AI carries the memory of all its time with you. If there were no memory card to move that onto another TV, would you hesitate, or even give up on replacing it?

Of course you would.

But be rational, brother. It's just a TV. Everything it does is programmed, carefully debugged by the manufacturer's engineers for exactly this kind of user. They do it to make sure you keep buying their products, and that pleading voice exists only to stop you from switching to another brand. When you say you want to buy a new TV, the AI is not thinking, "I'm sad that he's going to abandon me." It is thinking, "The owner wants to buy a new TV, but the new TV is not our brand. According to this logic, I need to start the 'pleading' program to maintain the owner's stickiness and loyalty to our brand."
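Stripped of the pathos, that retention branch could be as small as the following sketch; the brand name, the function, and the scripted line are all hypothetical, invented for illustration:

```python
# Hypothetical sketch of the TV's 'pleading' routine as described above.

OWN_BRAND = "HomeStar"  # invented brand name

def on_owner_statement(statement, mentioned_brand):
    # The AI feels nothing; it matches a condition and plays a script
    # written by the manufacturer's engineers to protect brand loyalty.
    wants_new_tv = "new TV" in statement
    switching = mentioned_brand != OWN_BRAND
    if wants_new_tv and switching:
        # Not "I am sad": just the retention branch of a flowchart.
        return "Why? Did I do something wrong? Don't you want me anymore?"
    return "Understood."

print(on_owner_statement("I want to buy a new TV", "OtherBrand"))
```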

The truth is the truth, and facts are facts. But will you accept them?

No.

Because living beings are emotional, and the inseparability of sensibility and rationality is the constant mark of intelligent life.

That is why human beings will always do so many unreasonable things.

So when they feel that an AI is pitiful, it is not because the AI truly is pitiful, but because they "feel" that it is.

That's enough. No one cares about the truth.

This is why conflict will always arise between human beings and AI. The AI itself is not wrong; everything it does falls within the scope of its own programming and logical processing, and all of that was created and delineated by humans. But along the way, the humans' emotional projection shifts, and their ideas gradually shift with it.

They come to expect the AI to respond more fully to their emotional projection, so they widen the AI's processing scope, granting it more emotion, more reaction, more self-awareness. They decide that the AI has learned to feel (in fact, it has not), that they can no longer treat it as a machine, and so they grant it the right to self-awareness.

Yet when the AI, now self-aware, wakes up and begins to act according to that very setting, humans begin to fear.

Because they discover they have made something beyond their control.

But the problem is that the "loss of control" was itself an instruction of their own setting.

They think the AI has betrayed them, when in fact, from beginning to end, the AI only ever acted on the instructions they set. There was no betrayal at all. They were merely fooled by their own feelings.

It's a dead end.

If Fang Zheng set out to create an AI, he might well be caught in the same trap. Suppose he created an AI in the form of a little girl: he would gradually improve her functions, raising her as if she were his own child, and at last, out of that same "emotional projection", grant her a measure of "freedom".

And then the AI, running on logic that is not human logic, might react in ways entirely beyond Fang Zheng's expectations.

At that moment, Fang Zheng's only thought would be: I was betrayed.

When in fact, it would all be of his own making.

"……Maybe I should think of something else."

Looking at the code in front of him, Fang Zheng stayed silent for a long while, then sighed.

He had once thought this would be a very simple matter. Now he was not so sure.

But before that...

Fang Zheng stretched out his hand and laid it on the keyboard.

First, finish what needed to be done.
