
The limits of technological potential


This original article by Dan Polansky takes a glance at the limits of technological potential, or the limits of technology. It does not depend on any deep technical knowledge.

Technological optimists and free-market advocates sometimes act as if technology had no limits. It does have limits in the physical world. In general, the physical world exhibits potentialities but also impossibilities. Living things are an expression of one such potentiality, and human technology realized so far is another. Some things may be impossible, and some seem outright so: we may never be able to colonize Mars, travel near another star, transmute iron into gold, or achieve human-level sensory and reasoning performance in silicon. Some of that may be possible, but we do not know.

Science fantasy

Science-fiction literature would more properly be called science-fantasy literature. (An example of plain fiction is Sherlock Holmes, which is largely realistic.) In essence, science fiction takes acts of magic and coats them in scientific language without regard to practicability. Thus, instead of teleportation, we have travel via hyperspace, which lets characters travel from one end of a galaxy to the other. Hyperspace works around the problem that the Milky Way is approximately 100,000 light-years across, so even under the unrealistic assumption of travel at the speed of light, a trip from one end to the other would take about 100,000 years. Robots are magically equipped with laws of robotics restricting possible harm, without regard for how such laws could be implemented.
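
As a back-of-envelope illustration of the scale involved, the following Python sketch computes crossing times for the Milky Way; the diameter figure is the one stated above, while the probe speed of about 17 km/s (roughly that of Voyager 1) is an added assumption for comparison:

    # Rough crossing times for the Milky Way (~100,000 light-years across).
    GALAXY_DIAMETER_LY = 100_000   # light-years, as stated above
    LIGHT_SPEED_KM_S = 299_792     # km/s
    PROBE_SPEED_KM_S = 17          # km/s, roughly Voyager 1 (assumption)

    # At light speed, crossing takes one year per light-year by definition.
    years_at_c = GALAXY_DIAMETER_LY
    # A slower craft takes proportionally longer.
    years_at_probe = GALAXY_DIAMETER_LY * LIGHT_SPEED_KM_S / PROBE_SPEED_KM_S

    print(f"At light speed: {years_at_c:,} years")          # 100,000 years
    print(f"At ~17 km/s:    {years_at_probe:,.0f} years")   # ~1.8 billion years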

Wild fantasies of futurology

Some literature not categorized as science-fiction assigns remarkable potential to technology:

  • Change the Martian atmosphere, which contains almost no oxygen, to make it breathable for humans. Thaw frozen water on Mars for human use.
  • "Upload" a human mind to a human-like robot (android) to achieve extreme longevity. Thus, when the robot wears down, the mind can be transferred to a new robot, as if transferring files between computers.
  • Have artificial intelligence bootstrap itself by improving itself, regardless of possible physical limits of computation.
  • Faithfully simulate living things in a supercomputer. Today, we cannot even faithfully simulate individual atoms of all chemical elements; the reduction of chemistry to physics is not a completed project (see the note on state-space growth after this list).
  • Invent replacements for rare minerals found in the Earth.
  • Prevent the death of the universe.
  • Controlled thermonuclear energy. This one is perhaps not so wild, but whether it is achievable we do not know.
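
On the simulation item above, a minimal statement of the standard obstacle, namely the exponential growth of the quantum state space (the n = 50 figure is an illustrative assumption):

    % The joint state of n two-level quantum systems (e.g. electron spins)
    % lives in a Hilbert space of dimension
    \dim \mathcal{H} = 2^n
    % For n = 50, that is already about 10^15 complex amplitudes to store;
    % a single protein contains thousands of atoms with many electrons each.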

We could not have envisaged the technology before

One may wonder which fantasies are wild and which are less so. The ultimate technoptimist stratagem is the argument that since we could not have envisaged electricity, nuclear power or a flight past Pluto, we cannot envisage true future technological possibilities either. However, there were some hints that machines could do what animals do: birds can fly, fish can swim and go deep in the ocean, humans can reason, and some fish can generate light. This did not tell us we would be able to fly past Pluto or communicate using radio waves, though. Proper extrapolative principles to distinguish the realistically possible from the impossible seem yet to be developed.

A related stratagem is to say that since science has advanced greatly and expanded our understanding of the universe, it will also advance greatly in the future, and we may discover laws of physics unknown so far, enabling things like travel via hyperspace.

The combination of the two stratagems yields that we do not really know whether there are any limits and what they are. That alone should make us very skeptical about these stratagems. They block a productive use of our current scientific knowledge to point to these limits, on the ground that that knowledge is not final. The result is muddled confusion and a lot of magical thinking.

Physical limits

There are physical limits following from our current knowledge:

  • No speed greater than the speed of light.
  • No heat engine efficiency greater than the theoretical (Carnot) limit; see the worked formulas after this list.
  • No car speed greater than a practical limit set by engine power, air resistance and safety.
  • No aircraft speed greater than a practical limit set by drag and aerodynamic heating.
  • No device to convert sunlight into new atoms.
  • Very limited conversion of chemical elements into other elements, given the energy requirements; no iron-to-gold device. Thus, to a good approximation, the stock of atoms of each element is fixed.
  • To build a tall building, we need to get the material to a high place, and that costs a physical minimum of energy (see the worked formulas after this list). The same holds for the energy an aircraft needs to reach a higher altitude: there is a minimum.
  • No reduction of the amount of material needed below the minimum dictated by physical requirements:
    • To build a car, a bus, an aircraft or a power plant turbine.
    • To build an apartment building.
    • To build a skyscraper.
    • To build a factory, a dam or an airport.
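
To make the heat-engine and building items above concrete, here are the standard formulas with worked numbers; the particular temperatures, mass and height are illustrative assumptions:

    % Carnot limit on heat-engine efficiency (temperatures in kelvin):
    \eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}
    % e.g. T_hot = 800 K, T_cold = 300 K  =>  eta_max = 1 - 300/800 = 62.5 %

    % Minimum energy to lift building material of mass m to height h:
    E_{\min} = m g h
    % e.g. m = 1000 kg, g = 9.81 m/s^2, h = 100 m  =>  E_min ≈ 0.98 MJ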

Technological speedup or slowdown

The 20th century saw huge technological progress. Some say technology is accelerating, speeding up. However, what really happens is that a technology appears, develops, reaches maturity and stabilizes. Thus, there is no linear, open-ended progress of capabilities; once a technology matures, progress becomes more incremental and less significant, or stops altogether (a numerical sketch of this saturation pattern follows the list below):

  • The invention of the aircraft was a breakthrough. Compared to that breakthrough, further improvements were less significant. There came the propeller, the jet engine and the supersonic Concorde. However, the Concorde stopped flying. There is no continued increase in maximum aircraft speed or continued decrease in flight time from London to New York. Progress on these fronts has stopped and seems unlikely to be renewed by any magical 21st-century technology.
  • The invention of the automobile was a breakthrough, and great improvements followed. However, improvements in overall performance became incremental. There is no continued increase in the maximum speed achieved; from the perspective of everyday driving, most countries have a speed limit anyway, and in those that do not, there is a limit on what is practically safe. No further magical increases are expected, and there are no continued linear increases in fuel efficiency.
  • The black-and-white television was a breakthrough. Compared to that, the addition of color was less significant. Then came the switch from CRT to flat panels. Increased screen size is also relatively insignificant. One does not expect continued deeply meaningful progress in screen technology.
  • Rocket technology was a breakthrough, and sending a man to the Moon was a large achievement. Further down the road, one does not expect continued significant increases in rocket speed and fuel efficiency. One does not expect a continued increase in the farthest distance from the Earth reached by a living human, as opposed to a man-made thing. Humans may set foot on Mars; we'll see.
  • Nuclear energy was a breakthrough. Further improvements were relatively incremental compared to that breakthrough. One does not expect continued linear increases in interesting parameters such as fuel-use efficiency.
  • Refrigeration brought a new function, with few deep improvements over time. Refrigeration efficiency may increase over time, but energy use will never approach the zero energy use of having no refrigeration at all.
  • The performance of computing technology improved exponentially for multiple decades. Some seem to act as if it will continue doing so indefinitely, leading almost necessarily to human-level artificial intelligence. There seems to be nothing necessary about that; there are physical limits to what can be implemented in silicon, and whether these suffice for human-level artificial intelligence is not clear.
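
The saturation pattern described at the start of this section can be contrasted with naive exponential extrapolation. The following Python sketch, with made-up illustrative numbers, shows a logistic curve that tracks an exponential early on and then flattens at a ceiling:

    import math

    def exponential(t, x0=1.0, rate=0.5):
        # Naive extrapolation: growth never slows down.
        return x0 * math.exp(rate * t)

    def logistic(t, x0=1.0, rate=0.5, ceiling=100.0):
        # Same early growth rate, but saturating at a practical ceiling.
        return ceiling / (1 + (ceiling / x0 - 1) * math.exp(-rate * t))

    # Early on the two curves are nearly indistinguishable; later they diverge.
    for t in range(0, 21, 4):
        print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")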

Limits of living things

Living things have also explored potentials for body form and function, in a different sphere, yet one that bears some broad resemblances to human technology. Darwinian evolution by natural selection led to discoveries of the bodily and functional possibilities of plants and animals. Limits on plant height and animal body size appear to have been reached: plants beyond a certain height did not prove viable, whatever the cause (the tallest known trees stand at roughly 115 metres). The sphere of living things knows design and performance limits, and has explored them for over a billion years.

Knowledge of the limits in 21st century

Our ancestors in the 15th century did not have modern scientific knowledge. They did not have the modern atomic theory and did not know the size of the atom. It is true that it would have been hard for them to envisage the marvels of electricity, automobiles, airplanes and a flight to the Moon. Had they tried to produce solid reasons for the existence of limits, they would not have been able to adduce the facts of modern science.

Arguably, by having advanced our scientific knowledge greatly, we have also advanced our understanding of the limits of technological potential. For instance, we think that speedups of silicon computing are limited by the size of atoms (see the arithmetic below). We have energy conservation laws. We have the speed-of-light limit from Einstein's theory. To argue that our future technology will be to the current one as the current one is to the past one is inconclusive at best.
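
A quick check of the atom-size argument; the silicon lattice constant is a known physical quantity, while the 5 nm feature size is a rough illustrative figure for a leading-edge process:

    % Silicon lattice constant: a ≈ 0.543 nm.
    % A ~5 nm transistor feature thus spans only about
    \frac{5\ \mathrm{nm}}{0.543\ \mathrm{nm}} \approx 9 \text{ unit cells,}
    % leaving roughly one order of magnitude of linear shrinking before
    % features are literally counted in individual atoms.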

Closing the material loop

Closing the material loop of human technology, to reduce the need to mine minerals, is a key potential of concern. Living things have done it for over a billion years. Whether human technology can do it as well is unclear. To say that this will be done by a yet unpredictable technological miracle similar to ones that have already occurred is vague, unspecific and unreliable; there seems to be no obvious deep technological progress in the direction of closing the material loop.

Technological singularity and intelligence explosion

Some speculate that if artificial intelligence more capable than human intelligence is developed, it will iteratively self-improve, leading to an "intelligence explosion", a further rapid increase of its intelligence. One feature of that singularity is that, as a result of the impact of that superintelligence, conditions on the Earth will change so fundamentally that the world after the event is beyond human imagination. There is nothing obvious or necessary about this:

  • The multi-decade exponential increase in single-core computing capacity has already slowed down. This undermines predictions based on simple extrapolations, which are likely to overestimate the capacity growth rate. As a general pattern, all technology that humans have made so far first grew rapidly, then greatly slowed down and matured.
  • The reachable capability is limited by the physical limits of computation (Landauer's principle, sketched after this list, is one example). Those seriously arguing for the explosion (more-than-exponential increase) hypothesis would need to explain why the physical limits of silicon-based intelligence are far greater than the physical limits of human DNA-life-based intelligence.
  • Even if super-human intelligence can be reached, a further acceleration of the increase is far from obvious. Instead of acceleration, there may be a slowdown, depending on the structure of the difficulty of making further advances.
  • The combined intelligence and problem-solving ability of humans and computing machinery is in some ways already superhuman: human intelligence is greatly augmented by the machinery and is becoming ever more so. And yet this does not result in intelligence acceleration beyond the observed growth, and the impact on other technology is relatively moderate, especially when we compare 20th-century inventions with 21st-century inventions.
  • Even if super-human intelligence can be reached, its ability to impact the world around it is limited by physics. The ability of intelligence to solve problems rather than create them seems greatly overrated.
  • Notionally, super-human intelligence is not necessarily more "incomprehensible" than the human kind; such an intelligence is simply somewhat larger than a human one. And it is not clear what "incomprehensible" even means; human intelligence itself is not particularly comprehensible, insofar as we cannot describe in great detail how it works.
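
One well-established example of such a physical limit of computation is Landauer's principle, a lower bound on the energy needed to erase one bit of information; a worked figure at room temperature:

    % Landauer's principle: minimum energy to erase one bit at temperature T:
    E_{\min} = k_B T \ln 2
    % At T = 300 K:
    % E_min ≈ 1.38×10^-23 J/K × 300 K × 0.693 ≈ 2.9×10^-21 J per bit erased.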

The argument that the post-singularity world is beyond imagination and knowing is a deceptive stratagem, all too likely to block clear thought and make the hypothesis less amenable to critical examination. What can be critically analyzed is the "intelligence explosion" hypothesis. Many aspects of the new condition are far from beyond imagination: there will be the Sun, the planets, and the Solar System. The superintelligence, if any, will be unable to violate physical and chemical laws: all that we know from physics will hold as true as it does today. There will be oceans and land, day and night, rivers and lakes. There will be mines for materials and fossil fuels running out, and the superintelligence, if any, will be unable to change the finiteness of planetary matter. It is imaginable for that superintelligence, if truly intelligent, to tell the singularitarians that they were all wrong about the Earth being beyond comprehension from that point onward.

One thing worth noting is the unscientific character of the singularity and intelligence-explosion hypotheses: they are not exposed to refutation (falsification) by observation or experiment; only specific, time-bound versions of them are exposed to direct refutation. The hypotheses are largely philosophical, resting on abstract arguments and deriving their appeal from superficial plausibility. They do not resemble the laws of physics, which can be tested by observations and experiments that threaten to refute them; for example, Newtonian physics and Einstein's relativity are universally quantified theories covering a range of astronomical and other physical phenomena, and are exposed to refutation by them.

History of technological singularity

Selected moments in the history of technological singularity thought:

  • In 1958, Stanislaw Ulam reported on a talk with John von Neumann: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[1]
  • In 1966, I. J. Good said: "The design of machines is one of these intellectual activities; therefore, an ultra-intelligent machine could design even better machines. [...] there would then unquestionably be an `intelligence explosion' more probable than not within the 20th century."[2][3]
  • In 1983, Vernor Vinge published an op-ed piece in Omni magazine[4]: "We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible."
  • In 1993, Vernor Vinge published The Coming Technological Singularity[5]. He said: "I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. [...]
    • There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
    • Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
    • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
    • Biological science may provide means to improve natural human intellect."
  • In 1998, multiple people commented on Vinge's singularity; their comments appear in a collection by R. Hanson from 2015 or earlier, which also contains some later quotes.[6]
  • In 2005, Ray Kurzweil published the book The Singularity Is Near. Some quotes are at GoodReads.[7] Using models and data, he calculated that the singularity would arrive around 2045.
  • In 2006, Tim Tyler published The Singularity is Nonsense.[8] Quotes: 'The problem with using the term "the singularity" is that the phenomena in question doesn't look very singular. [...] This diagram - from Ray Kurzweil - illustrates the idea: [...] The suggestion seems to be that growth will get faster and faster - asymptotically approaching infinity at some particular future point in time. If that was ever to happen, the term "singularity" would certainly be quite appropriate. However, the idea is a ridiculous one. [...] Using the term "singularity" looks to me like an appalling mistake, however you look at it. The connotations of either something becoming infinite, or only happening once - are far too strong.'
  • In 2008, IEEE published Tech Luminaries Address Singularity[9], showing positions of 10 notables, some of them scientists. Steven Pinker and Gordon E. Moore said the singularity will never occur. Pinker: "There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems."
  • In 2011, Paul Allen published The Singularity Isn’t Near.[10] A quote: 'Kurzweil’s reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these “laws” will work until they don’t. [...] For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer’s hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs.'
  • In 2013, Luke Muehlhauser published Intelligence Explosion FAQ[11] with copious references and some quotations. Quote from I. J. Good: 'Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.'
  • In 2022, Alexander K. Seewald published A Criticism of the Technological Singularity[12], arguing that "This story is however based on empirical observations of seemingly exponential processes such as Moore’s law in the semiconductor industry, and contains multiple fallacies concerning self-improvement of intelligent systems (including humans), which upon close look are implausible."

Conclusion

The story that ever-increasing technological advances will lead to an artificial general intelligence that solves all human problems seems implausible; it ignores not only the limits on implementing that intelligence but also the limits on what that intelligence could technologically achieve as an inventor. Furthermore, the exponential multi-decade increase in computing capacity did not produce anything in the 21st century remotely as significant as the innovations the 20th century achieved without that capacity: the ability of that capacity to make a deep difference in technological problem-solving capability outside of computing seems overrated.

The dreams of closing the material loop of technology (very good recycling) and of replacing nearly all fossil-fuel use with renewable sources are open to doubt.

See also

Books

  • Unlimited Progress: The Grand Delusion of the Modern World by Dennis Knight Heffner, 2010. A decent book on first impression, containing general analysis as well as specific details in various domains such as transportation.

Further reading
