building a self-aware device – part 2

After about 15 months, here’s part 2. You can see part 1 here: building a self-aware device – part 1

So now we define self-awareness as:

  1. Knowing one’s own properties and boundary;
  2. Being able to learn one’s own identity through self-initiated training.

Can we build a device that is self-aware? After some thought, we have to say that achieving the most general sense of item 1 is far beyond our reach. An animal learns its own properties and boundary (again) through learning. The learning process correlates the visual signal from the eyes, the touch signals from sensors covering the whole body, signals from motor neurons, and maybe more. We might be able to build an artificial eye, but currently there is no technology that comes close to a distributed sensing system like the skin and the fur, or a distributed control system like the muscles.

But, if we limit our device to have a rigid body, then it becomes something we can handle, at least to a certain extent.

Suppose the 3D model (the shape and size) of the rigid body is known. Then, with a GPS receiver installed at a fixed point within the rigid body and a gyroscope to tell the orientation of the device, we basically have a device that knows its own properties and boundary. (In our simplified case, both the properties and the boundary are static: the properties are whatever is inside the rigid body, and the boundary is the boundary of the rigid body.)
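As a minimal sketch of this idea: if the body-frame boundary is a list of vertices from the 3D model, the GPS fix gives the world position of the reference point, and the gyroscope gives the orientation (here simplified to a single yaw angle in 2D), then the device can compute where its boundary lies in the world at any moment. All names and numbers here are illustrative assumptions, not a real device API.

```python
import math

def body_to_world(vertices, position, yaw):
    """Rotate body-frame boundary vertices by the yaw angle and translate
    them by the GPS position, yielding the boundary in world coordinates."""
    c, s = math.cos(yaw), math.sin(yaw)
    world = []
    for x, y in vertices:
        world.append((position[0] + c * x - s * y,
                      position[1] + s * x + c * y))
    return world

# A hypothetical 2 m x 1 m rectangular body, GPS reference point at its center.
body = [(-1.0, -0.5), (1.0, -0.5), (1.0, 0.5), (-1.0, 0.5)]

# The device reads position (10, 20) and a 90-degree yaw, and now "knows"
# exactly which region of the world it occupies.
boundary = body_to_world(body, position=(10.0, 20.0), yaw=math.pi / 2)
```

A full 3D version would replace the yaw angle with a rotation matrix or quaternion from the gyroscope, but the principle is the same: static model plus pose equals known boundary.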

Following this approach, we might be able to add some moving parts into this rigid body gradually.

<Here further expansion is needed>

Then let’s move to the next step: put the device in front of a mirror. The device has to start some random movement and then correlate that movement with the movement it sees in the mirror.
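The correlation step above can be sketched as a toy experiment: the device issues a random sequence of move/stay commands and checks which of several observed motion tracks agrees best with its own commands. The track that matches is (most likely) its own reflection. The simulated tracks and names below are illustrative assumptions.

```python
import random

random.seed(0)
commands = [random.choice([0, 1]) for _ in range(50)]  # 1 = move, 0 = stay

# Simulated observations: the mirror track follows the commands exactly,
# while another moving object in view produces an unrelated random track.
mirror_track = list(commands)
other_track = [random.choice([0, 1]) for _ in range(50)]

def correlation(a, b):
    """Fraction of time steps on which two binary motion tracks agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

scores = {"mirror": correlation(commands, mirror_track),
          "other": correlation(commands, other_track)}
self_image = max(scores, key=scores.get)  # the best-matching track
```

The randomness of the commands matters: a fixed, predictable movement could be mimicked by coincidence, whereas a long random sequence is essentially impossible for an unrelated object to match.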

For that we need a neural network: the input would be the instructions for the random movement, and the target would be the actual movement the device sees in the mirror. So this becomes a supervised learning problem.
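To make the supervised-learning framing concrete, here is a deliberately tiny stand-in for the neural network: each training pair is (commanded movement magnitude, observed displacement in the mirror), and we fit a one-parameter linear model y = w * x by gradient descent. The data, the true factor 2.0, and the training hyperparameters are all illustrative assumptions; a real device would learn a far richer mapping from motor commands to visual observations.

```python
# Self-generated training data: the mirror shows a displacement roughly
# proportional to the commanded movement (true factor 2.0 in this toy).
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]

w = 0.0    # model parameter (the learned command-to-observation scaling)
lr = 0.05  # learning rate

for _ in range(200):
    # Gradient of the mean squared error (w*x - y)^2 with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# After training, w should be close to the true factor 2.0: the device has
# learned, from self-initiated movement alone, how its commands map to what
# it sees in the mirror.
```

The key point is that the training data is self-generated: no external teacher labels anything, the device’s own motor commands supply the supervision signal.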

So we see that building a self-aware device is still a long way off. However, I believe we should be able to experiment with it in controlled scenarios, such as fully automated driving, and try to push the limit to see how far we can go.