Chapter 497 Software and Hardware
More than once, the new human civilization had entertained an idea: to create the legendary strong artificial intelligence!
Such a strong AI would possess self-awareness; it could optimize itself and refine its own algorithms, iterating on itself without end and raising its intelligence without limit.
A species like that could usher in the "singularity era" of explosive technological growth in short order.
The idea is certainly beautiful, and it has been around since the Earth era. Set aside the question of safety for now and assume, for the moment, that it would be safe.
People fantasized about the arrival of the technological singularity: a strong artificial intelligence that could grant every human eternal life! With infinite IQ, wouldn't immortality be a piece of cake?
Various pundits kept boasting, counting down to that beautiful future: thirty years until the singularity era, then twenty, and so on...
But the reality is terrible.
As science and technology developed, especially after the leap into interstellar civilization and the explosion of knowledge that followed, people grew more and more inclined to believe that such an infinitely self-evolving mechanical civilization simply does not exist...
In other words, you can only create species "stupider" than yourself; you cannot create species "smarter" than yourself!
This statement is easy to explain.
"We are using our brains to study our brains." Yuriko pointed at her head, thought for a long while, and said, "Do you think... our wisdom can fully comprehend our own wisdom?"
It's not a tongue twister, but a... philosophical question.
Our understanding of the brain is built on our thinking, and our thinking is built on the brain itself. Likewise, if we write a program that can write programs, can that program write programs as complex as itself?
If humans are a naturally generated program, can they fully explain themselves?
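The self-referential puzzle above has a small, concrete cousin in real programming: the "quine", a program whose output is its own source code. As a toy illustration (not part of the original text), the sketch below shows that a program can at least *reproduce* something exactly as complex as itself; merely copying oneself, of course, is far easier than understanding or improving oneself.

```python
# A classic Python quine: the two code lines below, when run,
# print themselves verbatim (comments aside).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is self-reference through quoting: the string `s` is a template for the whole program, and `%r` splices in the quoted form of `s` itself, so the output matches the source exactly.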
"To paraphrase a certain scholar: if our minds were simple enough to be understood, we would be too stupid to understand them; conversely, if we were smart enough to understand our brains, they would be too complex for us to unravel."
"And as we improve, our brains improve along with us."
"If we cannot even understand ourselves, how can we create, at the most fundamental level, a stronger species?"
"So I believe the strong barrier may well be an enormous problem, one extremely difficult to crack. Fertility screening can raise both the upper and lower limits of intelligence, but it cannot break through the barrier, because the barrier is not a problem of intelligence at all."
"Even if we ourselves become superhumans, we may still fail to pass it."
Hearing her explanation, Yu Yifeng felt disheartened. The problem had risen to the level of philosophy, which left people feeling utterly helpless.
It was a sense of powerlessness from the bottom of the heart.
No matter how smart or powerful people become, they can only create weaker "artificial intelligence"; infinite iteration is out of reach.
The strength of weak artificial intelligence lies solely in its computing power. If its computing power were capped at the same level as a new human's, it would without question be outclassed in every other respect.
Pushed further, this paradox extends to every species in the universe.
Suppose a species A could easily create a stronger species B; could species B then easily create a stronger species C?
Then C creates a stronger D...
D creates an E...
And so on down the line. By the time we reach species N, how powerful would it be? How smart?
If this were possible, the universe would long since have been full of "god-like" creatures!
But the starry sky remains dim; no "gods" are running about, and no "mother" goddess has come to save everything.
In other words, the proposition of "creating a stronger species" is extremely difficult in itself, perhaps even a false proposition, and hardly achievable within the known scope!
Similarly, this problem can be extended again.
The universe is our research tool: everything we can think about and experiment on rests on the basic laws the universe provides.
At the same time, the universe is also the object of our research. We study its birth and destruction, and why it is the way it is rather than something else.
That is to say... our research, even our imagination, is supplied by the universe itself. Merely being able to approach it is already a remarkable achievement.
"By this theory, is the world unknowable?" Yu Yifeng said, distressed, thinking of all those wish-fulfillment novels where a single punch shatters a star.
Yuriko said softly: "Let's get back to the point and not get tangled up in these grander questions... The strong barrier is only a conjecture; it has never been rigorously proven. Our thinking and self-awareness really are a black-box system: we don't know how to study them, and we don't know where to cut in."
"But I think that, given continuous inputs and outputs, it may still be possible to work out some of its patterns and make small improvements to what is inside."
"Or... we could turn to the unknown, improving things we do not understand."
Seeing Yu Yifeng deep in thought, she smiled slightly and said: "Such a strong barrier is not a topic for an L4 civilization, not even a topic for interstellar civilization... Let's not think too much about it for the time being."
"Any civilization that crosses the strong barrier could already be called a god-level civilization. For now we can only doubt whether such god-level civilizations even exist... the Seeders, perhaps? I don't know."
"What an L4 civilization faces is merely the weak barrier, and that can definitely be solved."
Specifically, the strong barrier involves logical boundaries, paradoxes, mysteries, self-awareness, and the brain's instruction set.
The "weak barrier" is much simpler: it is merely a biological problem.
The basis of wisdom is the brain, but the brain is only the hardware; it is by no means equivalent to wisdom itself.
At the instant a person dies, within that 0.000000001 second, the brain's physical properties do not, in theory, change drastically. So why does the person die? Is human death instantaneous or continuous?
If it is instantaneous, then most likely something "soft", such as self-consciousness, has died, and so the person dies.
The strong barrier mainly describes this kind of "software".
At some moment, the "software" crashes because of hardware errors, and the person dies.
The brain's physical structure, as "hardware", can be optimized biologically; it is visible and tangible, not a black-box system like "self-awareness".
"There is no paradox in using software to transform hardware."
No matter how you look at it, the human brain is a messy, poorly designed, inefficient lump of equipment that is difficult to understand, and yet it works surprisingly well.
From a biological perspective, some of those lumps really can be improved, since evolution is imperfect. This is the origin of the "brain chips" people currently use.
But today's brain chips are far from enough. They can increase the volume of computation, but they cannot make people smarter.
"The brain developed specific solutions to specific problems long ago, and people have kept using them over the years, or repurposed them, forming all kinds of different wisdom. In the words of the molecular biologist François Jacob: evolution is a tinkerer, not an engineer."
Whether by evolution or by technological means, if the brain's own capabilities can be pushed to their maximum, then... the "weak barrier" has been passed.
Fertility screening is one common way of working toward the weak barrier.
"Hardware is the foundation of wisdom, and software defines its boundaries. Software that can play Go cannot play chess. That is an algorithm problem."
"The software of the human brain is more complex than any computer program. It is tremendously powerful, able to do almost anything; it looks perfect, yet it too has boundaries. Things outside those boundaries we cannot consider, cannot imagine, cannot even feel."
"It is precisely because the software is so powerful that the hardware looks so inadequate."
"In other words, the strong barrier is the 'software' of intelligent life, the boundary of thinking, including unsolved mysteries such as self-awareness. There is essentially no possibility of improving it ourselves."
"The weak barrier corresponds only to the hardware... the physical structure of the entire brain. Improving hardware really can be achieved by various means."
As long as there is a way to make the hardware more powerful and better suited to the software, gradually drawing out the software's maximum performance, the weak barrier can be said to have been passed.
If the weak barrier were crossed and the "software's" potential fully developed, how smart would such a life-form be?
"Yet even this threshold, the weak barrier, is as hard as climbing to the sky, and it blocks the vast majority of interstellar civilizations!"
Yu Yifeng sighed: "I don't even know whether we have passed the weak barrier or not... In theory, the superhuman brain is still far from fully developed. Hmm... and has our software been developed to perfection?"
When the discussion ended, the two returned to the office, each lost in thought.
There is still a lot of busy work ahead...