Explain the Fundamental Principles of HCI Design
Perception is how the developer and the user each see an interface. What they perceive can differ: the user may have only a basic understanding of how interfaces are developed, without knowing the specific underlying details a developer takes into account.
The way colours are used is a very important aspect that the designer must consider. Colours should be attractive to the audience; plain black-and-white text is unengaging, and users would not feel drawn into the page at all. Colours should also be suitable for colour-blind users, who have trouble distinguishing and identifying certain colour combinations, so these need to be taken into consideration too. Finally, colours should not be too bright, as bright colours strain the user's eyes and make the interface harder to look at.
Luminance is the photometric measure of the intensity of light reflected off (or emitted by) a surface, and it is an important detail for the designer to keep in mind. The interface should not be too bright for the user when reading the screen, as this causes unnecessary eye strain; the designer should make it relaxing on the eye.
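Relative luminance is also what accessibility guidelines use to quantify how readable text is against its background. A minimal Python sketch of the WCAG 2.x contrast-ratio calculation (the formulas come from the guidelines; the colour values below are just examples):

```python
# Sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas,
# which a designer could use to check that text is readable against its
# background. Colours are given as 0-255 sRGB tuples.

def _linearise(channel):
    # Convert an sRGB channel (0-255) to its linear-light value.
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    # Ratio of the lighter luminance to the darker, each offset by 0.05.
    l1, l2 = sorted((relative_luminance(rgb1), relative_luminance(rgb2)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum ratio of 21:1;
# WCAG recommends at least 4.5:1 for normal body text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

This is why a designer can check colour choices numerically rather than by eye alone.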
Pop Out effect
The pop-out effect is a technique designers use to shift the user's focus to a specific part of the interface. The interface may have many different icons and menus, but the pop-out effect draws the user's attention to the part the designer wants them to focus on. This is achieved through a mixture of techniques: using colours brighter than their surroundings, making a shape larger, or aligning an object in a way that makes it more noticeable.
In HCI, patterns are templates that should be used consistently to keep the user familiar with the interface they are using, which simplifies it for them. There are six pattern principles that are usually followed:
- Proximity – the sense of how far apart or close together objects are. If objects are an equal distance from each other, the user may perceive them as the same type of object.
- Continuity – how continuous a pattern is. The human brain interprets and processes patterns with continuity more easily than patterns without it, for example a straight line as opposed to a jagged, broken line.
- Symmetry – objects should be designed as symmetrical shapes because the brain is drawn more to them than to asymmetrical shapes; their consistency makes them easier to process.
- Similarity – a group of similar objects share the same size, shape and other features such as colour. For example, on an iPhone all app icons are an equal distance apart and share the same square shape and size.
- Common Groupings – Creating different groups of objects that are the same type.
- Connectedness – Objects that are related have some form of connection, e.g. by a straight line.
Geons are simple 2D or 3D shapes, for example a sphere or a cube, and they are the building-block components in the perception of more complex objects. The brain finds it easier to interpret a complex object by breaking it down into these simpler shapes.
Gross 3D Shapes
These are 2D shapes drawn to give the illusion of depth so that they appear 3D. This is done through cues such as shading and positioning parts of the shape slightly lower or higher.
Response time is how fast a computer responds to input from a user: for example, moving your mouse so the cursor moves on screen, or typing on your keyboard so the text appears.
Keystroke Level Model
This is a way of predicting how long it will take a user to complete a task through input such as typing on the keyboard or clicking with the mouse. It breaks an operation down into individual steps and measures how long each one takes. With this information, parts of the operation can be removed or rearranged to make it more efficient.
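The breakdown above can be sketched with the commonly quoted average operator times from the Keystroke-Level Model literature (the example task and its operator sequence are an assumption for illustration):

```python
# A minimal Keystroke-Level Model estimate using commonly quoted average
# operator times (in seconds). The task below is an illustrative example.
OPERATORS = {
    "K": 0.20,   # press a key or button
    "P": 1.10,   # point with the mouse at a target
    "H": 0.40,   # move hands between keyboard and mouse
    "M": 1.35,   # mentally prepare for the next action
}

def klm_time(sequence):
    """Estimate total time for a sequence of KLM operators, e.g. 'MPKPK'."""
    return sum(OPERATORS[op] for op in sequence)

# Saving a file via the menu: think (M), point at "File" (P), click (K),
# point at "Save" (P), click (K).
print(round(klm_time("MPKPK"), 2))  # 3.95 seconds
```

Rearranging or removing steps (for example, replacing the whole menu sequence with a single shortcut keypress) lowers the predicted total, which is exactly how the model is used to compare designs.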
Load time is how fast an application loads once a user has clicked on it. This of course depends on the computer's components, how much memory is currently in use and so on, but with today's technology it usually takes up to a second, for example when opening an app on your phone.
Fitts's law describes how much time it takes for a user to reach a target with an input device: for example, how long it takes me to move my mouse, use the scroll wheel and click to reach a certain object on a webpage. The farther away and the smaller the target, the longer it takes to reach.
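The relationship is usually written as MT = a + b·log2(D/W + 1), where D is the distance to the target, W is its width, and a and b are device-specific constants found by experiment. A small sketch (the constant values here are illustrative assumptions, not measured):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Predicted movement time (seconds) for a target of the given width
    at the given distance, using the Shannon formulation of Fitts's law.
    a and b are device-specific constants; these values are illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A big, close button is faster to hit than a small, distant one.
print(fitts_time(100, 100) < fitts_time(800, 20))  # True
```

This is why designers make frequently used targets large and place them close to where the pointer usually is.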
This model describes the keyboard and its layout, with keys grouped by function. In the centre are the alphanumeric keys for typing text and numbers; at the top are the function (F) keys, which can act as shortcuts for changing settings; and on the right is the numeric keypad, along with keys such as Num Lock, Scroll Lock and Print Screen for capturing the screen.
Buxton Three State Model
This model describes the states an input device such as a mouse or stylus moves between when interacting with an interface: out of range (state 0), tracking, where moving the device moves the pointer (state 1), and dragging, where the device is moved with a button held down (state 2). Designers use the model to reason about which interactions a given device can support and to make those interactions easy and responsive to use.
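The three states and the transitions between them can be sketched as a tiny state machine (the event names here are my own labels, chosen for illustration):

```python
# Sketch of Buxton's three-state model as a small state machine:
# state 0 = out of range, state 1 = tracking, state 2 = dragging.
TRANSITIONS = {
    (0, "in_range"):     1,  # device comes into sensing range
    (1, "out_of_range"): 0,  # e.g. a stylus lifted off a tablet
    (1, "button_down"):  2,  # start dragging
    (2, "button_up"):    1,  # release: back to tracking
}

def next_state(state, event):
    # Stay in the current state if the event doesn't apply to it.
    return TRANSITIONS.get((state, event), state)

state = 1                                 # mouse is tracking
state = next_state(state, "button_down")  # press button -> dragging (2)
state = next_state(state, "button_up")    # release      -> tracking (1)
print(state)  # 1
```

A standard mouse is always in sensing range, so it only ever uses states 1 and 2; a stylus on a tablet can also enter state 0 when lifted away.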
This model describes how an interface should be designed with all the different types of users in mind, so that it is comfortable for each of them to interact with. For example, even though most people use the mouse right-handed, the interface should still be designed so that a left-handed mouse user finds it comfortable and easy to use.
Human as component
The main component of Human Computer Interaction is the human. In the end, whatever interface is designed, it will be used by humans, so designers must always remember that an interface needs to be optimised for use by us humans. The designer should also take humans with disabilities into consideration and design the interface so it suits their needs as well. For example, voice recognition was designed with people in mind who cannot use their hands or arms; it lets them interact with the interface by using their voice instead.
Human Information Processing
This is the theory that humans process information in a similar way to computers. The human brain is in a way like the PC hardware, for example the CPU, as it takes in data and processes it. This processed data is then passed to the mind, which is like the software that understands the information. We also store data short-term and long-term in our brains, just as computers do with RAM and hard disk drives. This theory helps us decide how to design certain aspects of an interface.
GOMS is an acronym for Goals, Operators, Methods and Selection rules. It is a model that describes how a user interacts with an interface to perform a task. 'Goals' refers to the end goal the user wants to achieve on the interface. 'Operators' are the individual steps and actions the user must complete to reach that goal. 'Methods' are the different sequences of operators the user could use to reach the goal. 'Selection rules' decide which method the user will choose, usually the most efficient one. For example, instead of saving a file through the interface's menus, a user can use the keyboard shortcut, which saves time and is less confusing.
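One way to picture a GOMS analysis is as plain data, with a selection rule choosing between competing methods. A hedged sketch for the save-file example (the method names, steps and the "fewest operators" rule are illustrative assumptions, not output from any GOMS tool):

```python
# A sketch of a GOMS task analysis as plain data, for the goal of
# saving a document. Names and steps are illustrative.
goms = {
    "goal": "save the current document",
    "methods": {
        "menu":     ["point at File menu", "click", "point at Save", "click"],
        "shortcut": ["press Ctrl+S"],
    },
    # Selection rule: prefer the method with the fewest operators.
    "selection": lambda methods: min(methods, key=lambda m: len(methods[m])),
}

chosen = goms["selection"](goms["methods"])
print(chosen)  # shortcut
```

The selection rule here encodes exactly the point made above: the shortcut wins because it needs fewer operators than the menu route.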
Interfaces need to be designed for people with special circumstances, for example people who are visually challenged. People who cannot see due to blindness can have interfaces specially designed for them using voice recognition and text-to-speech. A visually challenged person can use voice recognition to perform actions on the interface and text-to-speech to read the on-screen text aloud for them to understand and interpret.
Design input and output HCIs to meet given specifications
Explain how an HCI could be adjusted for specialist needs
The visually challenged could have assistance in the form of a zoom tool. This tool could track where the user is looking with some form of eye tracker and, after a short delay, magnify that area. This would let visually challenged users read text without straining their eyes as much. For example, older users who read lots of articles and newspapers could use this functionality: instead of straining to read small text, the tool would detect the spot they are currently looking at and magnify it, making it easier for them to read.
Orally challenged users, such as those with a speech impediment or those who cannot speak at all, could have assistance in the form of a text-to-speech tool. Instead of talking, they can simply type what they want to say and a generated voice will say it aloud for them. With a generated voice speaking in their place, they may feel more comfortable, and listeners may find the speech easier to understand.
The aurally challenged could have assistance in the form of text captions and transcripts. When aurally challenged people are listening to audio or watching a video clip, they could have the transcript or text captions alongside the video or audio so they can read along whilst listening. For example, YouTube has the option to show text captions of what is currently being said while you watch a video, although these can be inaccurate at times.
Those who are physically challenged, for example through paralysis or missing limbs, can be assisted by using voice recognition. The physically challenged can use voice recognition to perform all sorts of tasks: navigating the desktop, browsing the web, writing documents and much more. For example, a user could use a voice assistant such as Cortana to open the web browser, navigate to Word Online, dictate the text they want to write and change its size, font and so on. Voice recognition software can typically perform the same actions that standard users can with a mouse and keyboard.
Explain the fundamental principles which have been applied to the designs
The colours I used for my HCI are just different shades of blue. I designed it like this to keep the interface simple for the user and not to complicate things. I also took into account that colour-blind people may have trouble looking at the application if I used colours that are known to be less than ideal, such as red. Furthermore, I chose blue as it is quite a relaxing colour on the eyes; it does not strain them as much as brighter colours like yellow do.
Buxton Three State Model
The HCI I designed works within the Buxton three-state model: the user moves the pointer to track over items and presses a button to select them, and the interface responds to both kinds of input. My HCI is friendly to use, interactive and responsive. When you load the app you are presented with the main screen, where you can search for songs, go to your playlists or find songs through genres. These features make it very easy to navigate around the app, and you can always return to the main screen with the back arrow at the top left.
In my HCI I used proximity by keeping objects an equal distance from each other. For example, when you click on the Home button you are presented with all the different genres; these genres are buttons that are equally spaced and aligned, which makes the screen more attractive for the user.
I made sure to use shapes that are symmetrical throughout my HCI, as the brain is more drawn to symmetrical shapes, making them faster and easier to process. For example, when you click on the Your Library button it takes you to all your playlists, which all use the same symmetrical shapes.
There is lots of similarity throughout my HCI; similarity makes the brain recognise that objects are alike because they have the same function. For example, when you click on the Home button you are presented with all the different genres; these genres are all similar in shape, size and colour because they all do the same thing: present you with songs of the genre you clicked on.
In my HCI I use common groupings in many places to keep the interface easy for the user to understand. For example, when you click on the Your Library button it takes you to all your playlists, where all the shapes have the same size, colour and shape and are ordered and aligned the same way. This forms them into common groups, which makes the interface less complicated for the user to interact with.