Interacting With Intelligent Software
As our software embeds more human intelligence and behavior, we can expect to interact with it in a more human fashion. In the early days of computing, we wrote programs on media such as punch cards or magnetic tape, fed that media into computer memory, and then waited for a line printer to begin chattering out the results. Today we use a keyboard, a mouse, and sometimes a touchscreen.
Voice interfaces are beginning to function well but are still primitive and not widely used. Since speech is how we communicate most easily with other humans, it seems likely that voice will also become a primary way to interact with software. Subvocal versions of these interfaces will provide privacy.
As augmented information displays become widely used, it will be possible to accomplish some software tasks through menu interactions that are not apparent to anyone other than the user. Menu items can be selected with eye tracking and blink-to-click interfaces, as sketched below.
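To make the blink-to-click idea concrete, here is a minimal sketch in Python. It assumes a hypothetical stream of gaze samples and a simple rectangular menu; the GazeSample and MenuItem types, the blink-length threshold, and the select_by_blink function are illustrative inventions, not the API of any real eye tracker.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class GazeSample:
    x: float            # gaze point in screen coordinates (hypothetical tracker output)
    y: float
    eyes_closed: bool   # True while the tracker reports the eyes are shut


@dataclass
class MenuItem:
    label: str
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom


def select_by_blink(samples: List[GazeSample], menu: List[MenuItem],
                    blink_frames: int = 3) -> Optional[str]:
    """Return the label of the item the user was looking at when a deliberate
    blink (eyes closed for at least `blink_frames` samples) ends, or None."""
    gazed_item: Optional[str] = None
    closed_run = 0
    for s in samples:
        if s.eyes_closed:
            closed_run += 1
        else:
            # Eyes reopened: a long enough closure counts as a "click"
            # on whatever item was being looked at before the blink.
            if closed_run >= blink_frames and gazed_item is not None:
                return gazed_item
            closed_run = 0
            gazed_item = next(
                (item.label for item in menu if item.contains(s.x, s.y)),
                gazed_item)
    return None


if __name__ == "__main__":
    menu = [MenuItem("Open", 0, 0, 100, 40), MenuItem("Reply", 0, 50, 100, 90)]
    # Simulated gaze: dwell on "Reply", then blink for four samples.
    samples = ([GazeSample(50, 70, False)] * 10 +
               [GazeSample(50, 70, True)] * 4 +
               [GazeSample(50, 70, False)])
    print(select_by_blink(samples, menu))   # -> "Reply"
```

A real system would also need dwell-time filtering and calibration to distinguish deliberate blinks from reflexive ones, but the selection logic would follow this general shape.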
The primary response from software is also likely to be audio, with the software talking to us through an earpiece or audio implant. Visual information can also be displayed on contact lenses, visors, glasses, or any medium that provides the right contrast ratio.
Direct system-to-brain interfaces are in the primitive stages of development but show great potential. They may encounter unanticipated difficulties and limitations, but if they become widely useful, they will make computers and networked information seem like an annex to our minds.
It is likely that we will use a blend of all of these forms, according to the situation and the usefulness of each particular interface.