building a self-aware device – part 2

After about 15 months, here’s part 2. You can find part 1 here: building a self-aware device – part 1

So now we define self-awareness as:

  1. Knows its own properties and boundary;
  2. Is able to learn its own identity through self-initiated training;

Can we build a device that is self-aware? After some thought, we have to admit that achieving the most general sense of item 1 is far beyond our reach. An animal learns its own properties and boundary through, again, learning. That learning process correlates the visual signals from the eyes, the touch signals from sensors covering the whole body, the signals from motor neurons, and perhaps more. We might be able to build an artificial eye, but currently no technology comes close to a distributed sensing system like the skin and fur, or a distributed control system like the muscles.

But if we limit our device to a rigid body, the problem becomes something we can handle, at least to a certain extent.

Suppose the 3D model (the shape and size) of the rigid body is known. Then, with a GPS receiver installed at a fixed point within the rigid body and a gyroscope to report the orientation of the device, we basically have a device that knows its own properties and boundary. (In our simplified case, both the properties and the boundary are static: the properties are whatever is inside the rigid body, and the boundary is the boundary of the rigid body.)
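As a toy illustration, here is a minimal sketch of that idea, assuming the 3D model is given as a set of boundary vertices in the body frame (relative to the GPS mount point) and that the gyroscope reports roll/pitch/yaw angles; all names and numbers below are made up for the example:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Build a body-to-world rotation matrix from roll/pitch/yaw (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def world_boundary(body_vertices, gps_position, roll, pitch, yaw):
    """Map boundary vertices from the body frame to world coordinates.

    body_vertices: (N, 3) points relative to the GPS mount point.
    gps_position:  (3,) position of the GPS receiver in world coordinates.
    """
    r = rotation_matrix(roll, pitch, yaw)
    return body_vertices @ r.T + gps_position

# Hypothetical example: a 2 m x 1 m x 1 m box with the GPS at its center.
box = np.array([[x, y, z] for x in (-1, 1)
                          for y in (-0.5, 0.5)
                          for z in (-0.5, 0.5)])
corners = world_boundary(box, gps_position=np.array([100.0, 200.0, 5.0]),
                         roll=0.0, pitch=0.0, yaw=np.pi / 4)
print(corners)  # the device's boundary, expressed in world coordinates
```

With the boundary available in world coordinates at all times, “is this point part of me?” becomes a simple geometric query.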

Following this approach, we might gradually be able to add some moving parts to this rigid body.

<Here further expansion is needed>

Then let’s move to the next step and put the device in front of a mirror. It has to start some random movements and then correlate them with the movements it sees in the mirror.

For that we need a neural network. Its input would be, on the one hand, the instructions for the random movements and, on the other hand, the actual movements it sees in the mirror. So this becomes a supervised learning problem.
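Here is a minimal sketch of what such a network could look like, assuming the motor command is a small vector and the movement seen in the mirror is summarized as a feature vector by some vision pipeline; the dimensions and the stand-in training data below are placeholders, not a real design:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: a motor command is a 4-d vector, the movement
# observed in the mirror is summarized as an 8-d feature vector.
CMD_DIM, OBS_DIM = 4, 8

model = nn.Sequential(
    nn.Linear(CMD_DIM, 32), nn.ReLU(),
    nn.Linear(32, OBS_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(commands, observed):
    """One supervised step: predict the observed motion from the command."""
    optimizer.zero_grad()
    loss = loss_fn(model(commands), observed)
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in for the real mirror observations: in a real device these would
# come from the camera; here we fake them with a fixed random mapping.
true_map = torch.randn(CMD_DIM, OBS_DIM)
commands = torch.randn(256, CMD_DIM)
observed = commands @ true_map + 0.01 * torch.randn(256, OBS_DIM)

for _ in range(200):
    train_step(commands, observed)

# A low prediction error on fresh random commands then suggests that the
# thing moving in the mirror is the device itself.
def looks_like_me(command, observed_motion, threshold=0.05):
    with torch.no_grad():
        return loss_fn(model(command), observed_motion).item() < threshold
```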

So we see that building a self-aware device is still a long way off. However, I believe we should be able to experiment with it in controlled scenarios, such as fully automated driving, and try to push the limits to see how far we can go.

Some thoughts on Vehicle Telematics & Big Data applications in CRM

From the OEMs’ perspective, every new technology has to serve the ultimate goal: selling more cars.

Traditionally, just like other business departments, CRM could only base its decision making on sales-related data. In comparison, data collected directly from the vehicle via telematics has the following advantages:

  • much better data – OEMs get data from customers directly instead of through third parties, which improves both the accuracy and the completeness of the data;
  • much higher frequency – instead of hearing from customers once in a while, we get updates from customers/vehicles constantly;
  • higher flexibility – OEMs can change the data collection policy and setup with minimal management overhead;

The same can be said for the other direction – delivering information from OEMs to customers. The advantages of precise targeting are so well established that I don’t even need to list them here.

The only side effect of doing CRM through vehicle telematics is that it accumulates a huge amount of data – Big Data.

Interestingly, for OEMs the motivation to employ Big Data wasn’t strong enough until recently. That’s because OEMs already have established practices in traditional business intelligence, and the added value of switching to a new technology hasn’t been obvious or significant.

It’s true that Big Data, as a field of special expertise, stemmed from the real-world need to store and process ever-increasing amounts of data, and a whole set of technologies was developed to deal with the challenges posed by that volume. Along the way, however, the focus of Big Data has shifted from volume and velocity towards data mining, artificial intelligence and machine learning. It is in these aspects that Big Data may provide unique value to CRM compared to traditional BI.

Take the sales pipeline for example:

  1. Campaign
  2. Leads
  3. Opportunities
  4. Sales
  5. Client
  6. Retention

Traditional wisdom dictates that the data in each stage has to be complete, reliable and continuous; otherwise the model won’t make much sense. That, however, is actually a limitation of the tools and the model: traditional BI is incapable of dealing with incomplete data, and the model it is based on cannot handle fuzziness.

In the real world, however, incomplete data is the norm, and fuzziness is simply the nature of humans as opposed to machines. Luckily, techniques have been developed in Big Data to deal with incomplete data and fuzziness. With these techniques, a CRM system will behave more like a human, making predictions from incomplete data with probabilities in mind.
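As a rough sketch of what that could look like: gradient-boosted trees in scikit-learn accept missing values natively, so a lead with half its fields empty can still receive a conversion probability instead of being thrown away. The features and records below are invented purely for illustration:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

# Hypothetical lead records: [days_since_campaign, test_drives, avg_weekly_km]
# NaN marks fields we simply don't have -- the norm, not the exception.
X = np.array([
    [3.0,    1.0,    250.0],
    [40.0,   0.0,    np.nan],
    [np.nan, 2.0,    400.0],
    [15.0,   np.nan, 120.0],
    [60.0,   0.0,    np.nan],
    [5.0,    2.0,    np.nan],
])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = the lead converted into a sale

# Histogram-based gradient boosting handles NaNs natively, so no record
# is discarded just because a field is empty.
model = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)

new_lead = np.array([[np.nan, 1.0, 300.0]])  # a partially known lead
print("P(conversion) =", model.predict_proba(new_lead)[0, 1])
```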

Time for ConnectedFlight?

A black Thursday for airlines? Last Thursday, MH-17 – yes, Malaysia Airlines again – was shot down. This Thursday, AH-5017, operated by Air Algérie, crashed in Mali.

These incidents happened only four months after the disappearance of MH-370, about which no one has a clue yet.

Only after MH-370 did people realize that the modern airplane is not so modern after all. Intercontinental airplanes cruising above the oceans are more like ballistic rockets than guided missiles.

Actually, not all airlines are that primitive. I’ve tried the WiFi service on Emirates’ A380, and it’s rather decent. So technically this shouldn’t be an issue. For tracking the position of an airplane, something like 16 kbps per plane should be enough. Say at any given time there are 50,000 airplanes in the sky; then we need an 800 Mbps satellite link – hmm, that shouldn’t be so difficult to get. If I remember correctly, one satellite can typically provide 1 Gbps.
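A quick back-of-the-envelope check of that estimate (the per-plane rate and fleet size are the assumptions stated above):

```python
# Back-of-the-envelope check of the bandwidth estimate above.
per_plane_kbps = 16        # assumed position-report stream per aircraft
planes_in_air = 50_000     # assumed simultaneous flights worldwide

total_mbps = per_plane_kbps * planes_in_air / 1_000
print(f"total uplink needed: {total_mbps:.0f} Mbps")  # -> 800 Mbps
```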

With this “ConnectedFlight”, at least I’ll know the position if an airplane disappears. Should I register this trademark now? 🙂

building a self-aware device – part 1

The canonical way to tell whether an animal (or anything) is self-aware is the mirror test: put it in front of a mirror, and if the animal recognizes itself, it’s self-aware; otherwise it’s not.

But then what exactly is self-awareness? What exactly does the mirror test test? Can we make self-aware machines? I was pondering this because I recently realized that vehicles (or, in general, any devices) are considered dumb not only because they have no intelligence, but also because they are not self-aware.

Take a car for example: it’s considered dumb not only because it cannot make any intelligent decision on its own, but also because it will do things that are obviously against its own best interest as long as that’s what a human commands. Wouldn’t it be interesting if we could build a car that cares about itself and avoids crashes and collisions out of its own interest?

Back to the mirror test. Essentially, it tests the ability of an animal to recognize itself through the visual signal of an optical reflection. Let’s try to break it down by replacing the non-essential parts.

First of all, there seems to be no obvious reason to limit ourselves to visual signals. Vision is just one form of signal that many animals can sense easily; plenty of animals rely on other senses. Bats, for example, are known to be able to tell their own ultrasound signals from others’. If we’re not limited to visual reflection, then recognizing oneself through a reflected or echoed signal is not that difficult – a device could simply broadcast its own identity through ultrasound, like a bat.

We can build a device that broadcasts its own identity through ultrasound; let’s say the identity takes the form of a GUID. Now our device will be able to tell its own signal apart from other signals. Is that enough to be self-aware?
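A toy sketch of that idea, with a GUID standing in for the payload of the ultrasound pulse (this is illustrative only; there is no real acoustic stack here):

```python
import uuid

class EchoDevice:
    """A device that stamps its broadcasts with a GUID and checks
    incoming signals against its own identity."""

    def __init__(self):
        self.identity = uuid.uuid4()  # the device's GUID

    def broadcast(self) -> bytes:
        # In a real device this payload would modulate an ultrasound pulse.
        return self.identity.bytes

    def is_own_signal(self, payload: bytes) -> bool:
        return payload == self.identity.bytes

me, other = EchoDevice(), EchoDevice()
print(me.is_own_signal(me.broadcast()))     # True  -- my own echo
print(me.is_own_signal(other.broadcast()))  # False -- someone else's signal
```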

Most cars manufactured nowadays have more than one ultrasonic sensor built in, and they beep when they sense the danger of crashing into something. That seems kind of self-aware, but not quite, right?

We have to take a closer look at the mirror test. When a self-aware animal first sees itself in a mirror, it has no prior knowledge of its own appearance. How exactly, then, does it come to the conclusion that the object in the mirror is a visual representation of itself? The only possibility is that the animal actually learns this by experimenting.

Wikipedia actually has a full description of the first time the mirror test was conducted:

In 1970, Gordon Gallup, Jr., experimentally investigated the possibility of self-recognition with two male and two female wild pre-adolescent chimpanzees (Pan troglodytes), none of which had presumably seen a mirror previously. Each chimpanzee was put into a room by itself for two days. Next, a full-length mirror was placed in the room for a total of 80 hours at periodically decreasing distances. A multitude of behaviors were recorded upon introducing the mirrors to the chimpanzees. Initially, the chimpanzees made threatening gestures at their own images, ostensibly seeing their own reflections as threatening. Eventually, the chimps used their own reflections for self-directed responding behaviors, such as grooming parts of their body previously not observed without a mirror, picking their noses, making faces, and blowing bubbles at their own reflections.

From this description it’s obvious that recognition is a learning process. This observation has several implications:

First of all, because it’s a learning process, it’s very flexible and adaptive. The animal doesn’t have to stand still in front of a mirror to recognize itself. Even if its physical appearance later changes dramatically, the animal will be able to recognize itself again very quickly.

In contrast, if a device just broadcasts its own identity, it may have difficulties in a noisy environment. Or, if for some reason we have to change the identity, we also have to change the verification logic.

Secondly, we indeed don’t have to limit ourselves to visual reflection. Voice will also do; touch will also do. In fact, people who are born blind are able to recognize themselves through other senses, and we don’t think they are not self-aware.

Last, a very subtle prerequisite of such a learning process is that the animal has to know its own properties and boundary. Otherwise, the animal won’t be able to tell whether a waving arm is its own or not (with or without a mirror). Every animal knows its own properties and boundary (“this is my fur, this is my claw”, etc.), so if we’re trying to design self-aware devices, we have to build in this capability as well.

So by now I think we can define self-awareness as:

  1. Knows its own properties and boundary;
  2. Is able to learn its own identity from experiments;