Tuesday, June 17

AI and Linked Ideas.

When I think of the complexities of creating an artificial intelligence, it seems to me that the most unnerving and pivotal part of the process would be defining the structure of the ideas that the intelligence would perceive and log. There are so many ways that ideas can be linked. As an example, let's look at two sequentially linked ideas. Here are two sentences in sequence:

The horse ran. It became tired.

In these sentences, I see a lot of different perceivable and logical concepts. I will list them:

1. Horses, as a concept.
2. Tiredness, as a concept.
3. Running, as a concept.
4. The idea of running leading to tiredness.
5. The idea of a specific horse. THE horse.
6. The idea that horses can run.
7. The idea of it happening in the past.
8. The idea of tiredness being a thing that happens somewhat gradually.
9. The idea that all these things can happen in the past.
10. The idea that the horse's current condition is, as yet, unknown.

This is everything I can think of, off the top of my head. For fun, let's attempt to classify these into categories of ideas.

1, 2, and 3 seem similar. They are definition ideas, related to meaning.
4 and 8 are consequential ideas, related to cause and effect between the concepts in the sentence.
5 and 7 are circumstantial ideas, related to the specific situation mentioned.
6 and 9 are ideas about potential. These things can happen in the future.
10 is a speculative question, which arises because we know what happened before, but not what is true in the present.
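For concreteness, the grouping above can be written down as a tiny lookup table. This is just a sketch of the classification as stated in this post; the category names and function are my own labels, not any established taxonomy.

```python
# A hypothetical tagging of the ten ideas above into the five categories.
# The names ("definition", "consequential", ...) are just the labels
# used in this post.
CATEGORIES = {
    "definition": [1, 2, 3],     # meaning of horse, tiredness, running
    "consequential": [4, 8],     # cause and effect
    "circumstantial": [5, 7],    # the specific situation
    "potential": [6, 9],         # what can happen
    "speculative": [10],         # open questions
}

def category_of(idea_number):
    """Look up which category an idea number falls into."""
    for name, members in CATEGORIES.items():
        if idea_number in members:
            return name
    return None

print(category_of(4))   # consequential
print(category_of(10))  # speculative
```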

Any good AI would take each of these factors into account, and more. That, I think, is the challenge of creating a flexible system of ideas which could be called an intelligence. For me, the next question is not how to tackle each of these problems one by one, but how to design a system that inherently addresses these factors just by growing and expanding itself. To investigate this question further, I'll consider the relationship between each of the types of ideas that I found. Perhaps each of these types of ideas can be seen in a structured way:

Definition ideas are ideas of equivalence. The more equivalences a system understands, the more consequential ideas it can have. For example, for a system to understand that a person putting their hand in a fire causes pain, the computer must understand that fire + hand = pain.
In other words, fire + hand = burning = pain. This, then, is really two relationships that must be known in order to understand that hand + fire = pain.
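A minimal sketch of that two-step chain, assuming a system that simply stores equivalences and follows them: the link from fire + hand to pain is only derivable because two simpler relationships are stored. The table and function names here are invented for illustration.

```python
# Stored equivalences: fire + hand = burning, and burning = pain.
# frozenset keys make fire + hand the same as hand + fire.
EQUIVALENCES = {
    frozenset(["fire", "hand"]): "burning",
    frozenset(["burning"]): "pain",
}

def consequence_of(*concepts):
    """Chain through stored equivalences until none apply."""
    state = frozenset(concepts)
    while state in EQUIVALENCES:
        state = frozenset([EQUIVALENCES[state]])
    return sorted(state)

print(consequence_of("hand", "fire"))  # ['pain'], via 'burning'
```

Note that the intermediate concept "burning" never appears in the answer, yet the answer is unreachable without it: two relationships, one consequence.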

Experience with circumstances can lead to speculation. Confirmed speculation results in facts of potential. A system that has somehow been aware of other situations where y always follows x (circumstances) can then attempt to predict, or at least pose the question of, whether y is present after x has occurred. If it can then be determined that y follows x 67% of the time, we could say that there is a 67% potential for y to follow x.
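This counting step can be sketched directly: tally how often y appeared in remembered episodes that contained x. The episode data below is invented; it just reproduces the two-out-of-three (about 67%) case from the text.

```python
def potential(episodes, x, y):
    """Fraction of episodes containing x in which y also occurred."""
    with_x = [e for e in episodes if x in e]
    if not with_x:
        return None  # no experience with x at all; no basis to speculate
    return sum(1 for e in with_x if y in e) / len(with_x)

# Three remembered episodes: y followed x in two of them.
episodes = [{"x", "y"}, {"x", "y"}, {"x"}]
print(potential(episodes, "x", "y"))  # 0.666... (about 67%)
```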

Knowledge of potential (prediction) is a bit more complicated, however. It is a quantification, a useful thing. But broken down, potential is related to both consequence and definitions. For example, a woman who has good shoes has the potential to run fast, as a consequence of having the shoes. So we could say that a system informed by the definitions and consequential ideas about women and good shoes could extrapolate that the potential to run quickly is present. People or things with good shoes have also moved quickly in the past, and this strengthens the potential of the woman running fast.

As a last thought about the links between these kinds of ideas, it appears that there are two main concepts at work in the organisation of the ideas that came from our two original sentences. The first is that ideas can have a simple form -- where x = y. But, second, x is itself an idea composed of another equivalence, b = c. So we may say that idea x must be two things simultaneously: it is a composition when we look at it as a summation, and conversely it is also an element forming a composition with something else. It is a web and a part of a web at the same time.

Therefore, the structure of any idea-object in an intelligent system must be constantly in a computable (summable) state, as well as a searchable (or break-down-able) state. It must be more than flexable or flexitive. It must be fluxitive, or fluxable.
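One way to sketch such a "fluxable" idea-object, assuming a simple recursive structure: the same object can be collapsed to a single resulting concept (summed), or opened up into the web of concepts it is made of (broken down). All class and field names here are invented for illustration.

```python
class Idea:
    """An idea that is both a composition and an element of compositions."""

    def __init__(self, result, parts=()):
        self.result = result      # what this idea amounts to (x)
        self.parts = list(parts)  # the sub-ideas it is composed of (b = c)

    def summed(self):
        """View the idea as one thing: its composition collapsed."""
        return self.result

    def broken_down(self):
        """View the idea as a web: every concept reachable inside it."""
        found = [self.result]
        for part in self.parts:
            found.extend(part.broken_down())
        return found

# The fire + hand = burning = pain example, as nested ideas.
burning = Idea("burning", [Idea("fire"), Idea("hand")])
pain = Idea("pain", [burning])

print(pain.summed())       # pain
print(pain.broken_down())  # ['pain', 'burning', 'fire', 'hand']
```

The point of the sketch is that `pain` is computable as a single answer while remaining searchable down to its elements, and `burning` is at once a composition of its own parts and an element of `pain`.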

1 comment:

Marc Braunstein said...

Hey what's so artificial about this intelligence anyway? Heard Stephen Hawking speak the other day about how AI should be our greatest fear.