We have some pieces of voice intelligence via Google services such as Google Now, but still no Dynabook, and it's 50 or so years later. Part of the problem is the computer language tools we use.
Java is not a homoiconic computer language; in homoiconic languages such as LISP, both data and code are treated uniformly. In the Dynabook case of a full AI system, what is missing is the crowd-sourcing of AI components, that is, user-generated AI components.
If we had that, then the bootstrapping of the system toward a Dynabook could begin: all the user-generated AI components could be strung together into something new whose sum is greater than its parts.
But how do we do that with the Java limitations in place on Android, where we cannot easily generate new bytecode on the fly? Remember, it would have to be small and lightweight.
We also have this neat thing in Android: if I set the permissions on an application-generated file to something other than world-readable, it gets stored in the application's data folder, and you can write code so that both the application and the user can change that file.
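A minimal sketch of that read-modify-write cycle, in plain Java so it runs outside Android. On Android the private file would instead come from `Context.openFileOutput("rules.lisp", Context.MODE_PRIVATE)`, which stores it under the app's data directory; the file name `rules.lisp` and the class name here are my own hypothetical choices.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: an app-private data file that the application (and, through the
// app's UI, the user) can rewrite at any time. Because code and data would
// live in the same file, "changing the program" is just rewriting the file.
public class PrivateDataFile {
    private final Path file;

    public PrivateDataFile(Path dir) throws IOException {
        this.file = dir.resolve("rules.lisp"); // hypothetical file name
        if (!Files.exists(file)) Files.writeString(file, "");
    }

    public String read() throws IOException {
        return Files.readString(file);
    }

    // Append one user-supplied expression to the stored program.
    public void append(String line) throws IOException {
        Files.writeString(file, read() + line + "\n");
    }

    public static void main(String[] args) throws IOException {
        PrivateDataFile f =
            new PrivateDataFile(Files.createTempDirectory("appdata"));
        f.append("(define square (lambda (x) (* x x)))");
        System.out.println(f.read().trim());
    }
}
```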
It would seem that my task of completing a symbolic library for my scientific Android calculator could be extended into a simplistic LISP engine that runs within the Android application context and allows AI construction, since both the data and the code in the file can be modified.
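To make the idea concrete, here is a deliberately tiny Lisp sketch in plain Java, not the calculator's actual engine. The point it illustrates is that the interpreter's "program" is ordinary list data, so user-written expressions can be stored in the app's data file and evaluated without generating any new bytecode. Only integers, symbols, `+`, `*`, and `define` are supported; a real engine would need lambdas, conditionals, and so on.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal Lisp: tokenize -> parse into nested Lists (code as data) -> eval.
public class TinyLisp {
    final Map<String, Object> env = new HashMap<>();

    // Split "(+ 1 2)" into tokens: "(", "+", "1", "2", ")".
    static List<String> tokenize(String src) {
        return new ArrayList<>(Arrays.asList(
            src.replace("(", " ( ").replace(")", " ) ").trim().split("\\s+")));
    }

    // Build nested Lists from the token stream; numbers become Long,
    // everything else stays a String symbol.
    static Object parse(List<String> toks) {
        String t = toks.remove(0);
        if (t.equals("(")) {
            List<Object> list = new ArrayList<>();
            while (!toks.get(0).equals(")")) list.add(parse(toks));
            toks.remove(0); // drop the closing ")"
            return list;
        }
        try { return Long.parseLong(t); }
        catch (NumberFormatException e) { return t; }
    }

    Object eval(Object x) {
        if (x instanceof Long) return x;            // number: self-evaluating
        if (x instanceof String) return env.get(x); // symbol: variable lookup
        List<?> list = (List<?>) x;
        String op = (String) list.get(0);
        if (op.equals("define")) {                  // (define name expr)
            env.put((String) list.get(1), eval(list.get(2)));
            return null;
        }
        long a = (Long) eval(list.get(1));
        long b = (Long) eval(list.get(2));
        switch (op) {
            case "+": return a + b;
            case "*": return a * b;
            default:  throw new IllegalArgumentException("unknown op " + op);
        }
    }

    Object run(String src) {
        return eval(parse(tokenize(src)));
    }

    public static void main(String[] args) {
        TinyLisp lisp = new TinyLisp();
        lisp.run("(define x 4)");
        System.out.println(lisp.run("(+ x (* 2 3))")); // prints 10
    }
}
```

Because the expressions are plain text lists, the same file can hold both the user's data and the user's extensions to the calculator, which is exactly the code-is-data property Java itself lacks.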
I am not talking about Lua; I am talking about the calculator application being extended with new AI features by the end user, under their control.
In the long term, this could extend beyond calculator functions to become a full Android component with a basic understanding of all the Android machinery: intents and so on.