Thoughts on economics and liberty

Artificial intelligence can never threaten humans. Artificial life can.

There is great enthusiasm in the technical community about AI and other technologies potentially leading to a singularity, when machines become like us and we therefore start replacing our bodies with machines.

There is even talk about how the first computer that becomes as smart as us could threaten our existence (James D. Miller).

I don't have time to elaborate on this, but by now it should be very clear that I consider LIFE to be DRAMATICALLY different from intelligence. Life is the superset of which intelligence is a mere tiny subset.

Life involves those properties which are SUPPORTED by intelligence but which are INDEPENDENT of intelligence, things like the “will to live”, but not in any conscious sense. LIFE functions continue regardless of whether we are conscious, and regardless of whether a dog is an automaton without free will or chooses when to bark (I believe the latter).

So when AI is “perfected”, it will merely be like gaining access to the BRAIN of another human, not to that human's BODY, nor to the human's innate will to exist.

Without such a fundamental set of properties, the smartest AI will always be a tool for humans, never a threat.

However, the smallest life form we create COULD threaten our existence. That's an entirely different ball game.

I don't think ANY scientist has made any progress on creating even the most basic form of life.

Now, if AI could help us do that, then … AI could become a threat (through the life so created).

ADDENDUM

Many computer programs are sensitive to the smallest errors: a semicolon in the wrong spot is a disaster, and even an extra space can break an entire program.
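To see how literally this holds, here is a minimal C sketch of my own (the variable names and numbers are purely illustrative, not from the post): a single stray semicolon still compiles, yet it silently changes what the program computes.

```c
#include <stdio.h>

int main(void) {
    int sum = 0;
    int i;

    /* Intended: add the numbers 1 to 5 into sum. The stray semicolon at
       the end of the for-line makes the loop body an empty statement, so
       the braced block below runs only once, after the loop has finished. */
    for (i = 1; i <= 5; i++);
    {
        sum += i;                /* runs once, with i already equal to 6 */
    }

    printf("sum = %d\n", sum);   /* prints 6, not the intended 15 */
    return 0;
}
```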

Given this extreme sensitivity of computers to the smallest detail, there is little or no chance that AI will somehow “emerge” from human efforts. Even if some AI emerges from basic ‘modules’ (self-learning programs), the problem is that the computer-housed AI system then has to write a further program, and that requirement is essentially self-limiting: it constrains what computers can do on their own.

On the other hand, even the smallest life form has an innate capacity to self-correct because its programs are not hard-coded but represent "soft" learning. 
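As a rough illustration of that contrast, here is a toy C sketch of my own; the “target”, “estimate” and correction rate are invented for the example and merely stand in for whatever feedback a living system actually uses. The hard-coded response never changes, while the “soft” parameter corrects itself from its own errors.

```c
#include <stdio.h>

/* A hard-coded rule: it always answers with the same fixed value,
   however wrong that value turns out to be. */
double hard_coded_response(void) {
    return 5.0;
}

int main(void) {
    double target   = 8.0;   /* the "environment" to be matched          */
    double estimate = 5.0;   /* the soft parameter, adjusted by feedback */
    double rate     = 0.3;   /* how strongly each error corrects it      */

    for (int step = 0; step < 10; step++) {
        double error = target - estimate;   /* feedback signal */
        estimate += rate * error;           /* self-correction */
        printf("step %2d: hard = %.2f  soft = %.2f\n",
               step, hard_coded_response(), estimate);
    }

    /* The hard-coded rule never improves; the soft parameter converges
       on the target no matter where it started. */
    return 0;
}
```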


Sanjeev Sabhlok

3 thoughts on “Artificial intelligence can never threaten humans. Artificial life can.”
  1. anupam

    Hi sir, I agree that AI or robots, no matter how sophisticated, can never replace humans, but I don't agree with your definition of intelligence and life. Mechanical functions like digestion of food, movement and vision can be implemented artificially, therefore life functions (not life) can be developed for robots. But Life is “Intelligence”. By the word intelligence I don't mean intellect (i.e. the ability to calculate, plan, etc.); intelligence is “WILL” and is present in every living being (although the WILL of a non-human is fixed and very small), and more intelligence means more adaptability. That is why humans are the most enduring creatures on earth, despite there being animals that run faster or survive long periods of hunger. You must have noticed that some days we feel very drained of energy and feel like giving up, and sometimes we are almost superhuman; the reason is that our will fluctuates. Therefore WILL is the same as life, since a person who has no will instantly commits suicide no matter how healthy his/her body is.
    Clearly speaking, there are three facts:
    1. Mechanical functions which can be physically measured (life functions or robotic movements) are not life.
    2. The brain works in an infinitely complicated manner, but it is still a computer and has no intelligence but has intellect.
    3. Computers can't have “Will”, therefore can't “Do” anything; only a man with will is able to “do” (of course he stops as soon as his will expires). Will/Life/Intelligence is the driver and our body is a vehicle with instruments like feeling, digestion, intellect, vision, etc.

     
  2. Saransh

    1. Democracies are short-sighted. Should there be 10-20% of seats reserved for libertarians (whose research papers are peer reviewed) to preserve liberalism in the manner the following article proposes? Can the state be paternalistic and take recourse to affirmative action to preserve freedom, because not doing so seems to be intellectual arrogance on the part of ivory-tower liberal jholawallas?
    http://aeon.co/magazine/living-together/a-mechanism-to-give-future-citizens-the-vote/

    2. In the present times, as well as for all times to come, can the following be the apt definition of a national terrorist (for people residing in a country) if viewed from the libertarian straitjacket:

    “For all the people not working for the government of the day, an individual/group whom the state fails to punish according to the law of the land for committing an illegal activity is not a terrorist unless he did so with the aim of establishing a liberally progressive regime, and has the potential and capability of doing so, as satyameva jayate.”

     
  3. Sanjeev Sabhlok

    Anupam

    Here’s a problem with your point No. 3: “Computers can't have ‘Will’, therefore can't ‘Do’ anything.”

    Your point No. 2 states that the brain is a computer.

    Accordingly, the brain can't have “will” and therefore can't do anything.

    This brings us back to the dualistic theory, in which life/spirit is qualitatively DIFFERENT from the brain by an unfathomable order of magnitude. That is what Ramesh has been harping on (although in a slightly different way). The only difference is that you are calling it “intelligence”, not “consciousness”.

    I’m actually saying the following: the brain is a mechanical computer and life drives it, but LIFE IS MECHANICAL, at a different order of complexity. If the brain is complex at an order of 10, life is complex at an order of 1000. But it is not fundamentally different (e.g. spiritual, or derived from “consciousness”/intelligence in some way).

    My problem with Kurzweil is not that he is wrong in principle, but that he is excessively optimistic about AI/robotics achieving the complexity of the human brain by 2045, let alone the “will” to live and other qualities essential for life. In other words, I'm not saying that what Kurzweil says won't come true. I'm saying it will take longer, FAR longer than he anticipates. There are wheels within wheels in this thing called “life” that are not even defined, let alone understood.

    Unless one masters the mechanics of life, there can be NO threat from mere AI. People need not be ‘scared’ of our puny robots or AI. We'll “exploit” them, not the other way around.

     