A.I. neural nets have reached such complexity that we no longer fully understand how they work.

The systems are self-taught, and the rules and logic they’ve constructed for themselves are expressed in a private language evolved specifically for the task.

To reduce this opacity, one potential solution is to create a second A.I. model, whose neural net is explicitly tasked with explaining the workings of the first.
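In practice, the simplest version of this idea is a surrogate model: a transparent second model trained only on the first model's inputs and outputs, so that its own readable structure stands in as an explanation. The sketch below is a minimal illustration under that assumption; the "opaque" network is a stand-in random MLP, and the explainer is just a linear surrogate, not a specific published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- The "first" model: an opaque network (a stand-in, fixed random MLP). ---
W1 = rng.normal(size=(5, 16))
W2 = rng.normal(size=(16, 1))

def black_box(x: np.ndarray) -> np.ndarray:
    """We can query this model, but its internal 'private language' is not readable."""
    return np.tanh(x @ W1) @ W2

# --- The "second" model: a transparent surrogate whose only job is to explain the first. ---
# Sample the black box's behaviour...
X = rng.normal(size=(2000, 5))
y = black_box(X)

# ...and fit a linear model to approximate it. Its coefficients act as a crude
# global explanation: how much each input feature drives the opaque model's output.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print("Surrogate feature attributions:", coef[:-1].ravel().round(3))
print("Surrogate bias:", round(float(coef[-1]), 3))
```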

In humans, mirror neurons in our brains are thought to give us an understanding of another’s actions.

We gain a capacity for simulation that enables us to interpret others’ external actions.

But it likely provides internal understanding too.

A form of self-reflection that quite possibly leads to our self-awareness.

A building block of meta-cognition that might be key to consciousness itself.

In machines, when we create a second A.I. and task it with analysing and interpreting the first, what could this form of meta-cognition give rise to?

Two separate entities, however, one thinking about the other, can hardly constitute self-reflection. Surely.

Would it be different, though, if we put their hardware in the same chassis instead?

Would it help if we drew the arbitrary boundaries in a way that implied one system?