After software engineering is correctly applied to automate a user's working environment, the bottleneck of their tasks often shifts away from the repetitive work to the decision-making parts, or to the interaction with the newly automated information system.
The application of software to aid users in their decision-making processes is the subject of the branch of Artificial Intelligence known as Knowledge Engineering.
The methodologies used to reduce the time a user spends driving the interface of the information system are studied in the branch of Software Engineering known as User Interfaces. Evidently, most of the work and progress is made specifically for Graphical User Interfaces, although the basics are common to all kinds of interfaces.
A good user interface optimizes for readiness, intuitiveness, and accessibility.
Readiness means the interface must present the information the user currently needs as readily as possible. This usually involves placing several panels in the same window, each presenting a different set of data (some as lists, others as trees, others as icons, and so on). The effort the interface makes to guess which information a user will most probably need, depending on the task they are performing, is analogous to the way cache memories bring close to the CPU the data it will most probably need.
Intuitiveness means the user will find out how to perform an action for the first time as quickly as possible. This demands, for example, that a command to zoom in be placed in the "View" menu rather than the "File" menu. Another consequence is the need to name things (objects, tasks, ...) in the interface the same way they are called in the application's specific domain. Mimicking the domain must, indeed, go beyond naming: the UI should follow the flows of actions documented in the use cases model. A third consequence of the need for intuitiveness is the need for coherence: once the user learns something about the behaviour of the GUI, that knowledge should be exploited by the GUI designers to flatten the learning curve for the rest of the GUI. That means exploiting the standard meanings of existing UI elements, like assigning Ctrl-S to save, or using the usual names for the menus. Furthermore, lack of coherence usually results in unrest in the user (as Joel brilliantly explains in [www.joelonsoftware.com his blog]).
Accessibility in this context means the operations the UI makes most readily available should be the ones the user has to perform most often. In other words, take into account the time penalty needed to reach each command, and assign those penalties the way a Huffman code assigns shorter codes to more frequent symbols. For example, in a text editor, the command to turn text bold should be located in the toolbar (the bar with icons), while the command to insert a custom OLE object should be buried in the menus of the menubar. This can be further optimized, in a way analogous to market segmentation, if the set of commands presented to the user changes according to the state of the application (from which one can infer the task the user is performing). A straightforward example of this happens when you click "Print Preview" in any MS Office application.