In Rogers, Sharp, and Preece, the authors discuss how the best interfaces translate conceptual models into possible actions while using a product. Conceptual models are made up of: 1. “metaphors and analogies that help people understand what a product is used for and how,” 2. “the concepts that people create and manipulate through the product,” 3. “the relationship between those concepts,” and 4. “the mappings between the concepts and the user experience that the product is designed for” (p. 41). Conceptual models are better when they are more intuitive, and many interfaces are based on conceptual models that mimic real-life situations. For example, the shopping cart in online stores works like a real-life shopping cart, and the calculator on a computer may look like a physical calculator. Other examples, like the desktop metaphor in operating systems, are ubiquitous.
The tendency for graphical interfaces to resemble real-life objects is called skeuomorphism, and it has come under scrutiny because such interfaces can waste screen space and constrain what a design can do. What happens when a conceptual model in an interface goes too far? One example is Microsoft Bob, software released in 1995 that tried to make Windows more user friendly. Its interface mimics the rooms of a house, with different applications in each room and a dog named Rover who guides the user through possible tasks. Microsoft Bob never caught on; Rover, however, lived on as the helper in the search feature of Windows XP.
The point is that interfaces can go too far in mimicking conceptual models drawn from real life. Digital forms of information allow new ways of organizing that do not always translate to objects in physical space. Even so, we may be stuck navigating and seeing digital interfaces as if they were real-life objects. As virtual reality technology improves, will interface design continue in the skeuomorphic direction? I wonder what the future holds for interface design, and what new conceptual models are possible.