Abstract

One potential barrier to the development of educational hypermedia is the design of current hypermedia authoring tools (HATs), which unfortunately require higher levels of knowledge and skill than most academics possess. Whilst the usability of hypermedia has been extensively researched, the usability of the tools required to build hypermedia has not. Learnability of HATs, an associated factor of usability, has been similarly neglected. Analysing contemporary approaches to the study of human computer interaction, this paper concludes that they do not support the kind of ‘theory building’ required to study and describe the learnability of HATs. Grounded theory (GT) is posited as an alternative approach which, if applied correctly, can provide explanatory theory to inform HAT design. The paper describes the application of GT to two studies of the ease-of-learning of HATs. The first study uses quantitative and qualitative data to explore the experiences of 16 subjects learning to use HATs. In the second study, key HATs are demonstrated to a focus group of IT trainers to analyse their observations of users learning HATs. From these studies a causal model of the learnability of HATs was developed that is more detailed and complete than that offered by other contemporary theories of learnability. The paper concludes that applying a GT approach can enhance HCI research through the development of explanatory, extensible and evolutionary theory to inform HAT design.

1 Introduction

Interest in the usability of hypermedia has grown proportionately with the growth of the World Wide Web (Web) (Smith, 1996; Buckingham Shum and McKnight, 1997). Despite this, Nielsen's (1990) observation of a lack of analysis of the learnability/usability of hypermedia authoring tools (HATs) is still true and raises important questions about the most appropriate design metaphor (e.g. book, network, control flow), interface metaphor and underlying operational paradigms. Lack of interest in the learnability/usability of HATs may contribute to the paucity of purposely authored educational hypermedia that are more than repositories for course notes. If easy-to-learn, easy-to-use HATs, designed specifically for educational purposes, were available then perhaps the development of educational hypermedia would gather momentum. Although the resultant hypermedia might not be the most pedagogically sound (see Laurillard, 1993, for a critique of educational hypermedia), the increased amount of hypermedia being used in the teaching–learning process would prompt research into its design for educational efficacy. The study of the learnability of HATs is therefore required in order to produce better tools for educators.

This paper recounts two different studies designed to elicit what happens during the learning of HATs. Specifically, the studies were designed to identify which factors are important in learning HATs and how these factors interact. In the first study, 16 subjects attempted to learn two HATs whilst quantitative data relating to selected independent variables and qualitative data on the subjects' experiences were collected. The qualitative data from the first study were analysed using a grounded theory (GT) (Glaser and Strauss, 1967) approach and informed the design of the second study. The data collected from the second study were incorporated with the data from the first study during the analysis.
In the second study, a number of key HATs were demonstrated to IT trainers of academic staff in a focus group. The design of the structure and the question-prompts used in the focus group was largely based on the results of the first study. By exploring both the experiences of users learning HATs and the experiences of IT trainers observing users learning HATs, a model of the process of learning a HAT could be constructed.

1.1 Approaches to the study of human computer interaction

Sasse (1997) reviews contemporary approaches to the study of human computer interaction (HCI) and places HCI research in one of three approaches: science, design science and engineering. In ‘HCI as a science’, general psychological theory is applied using scientific methods of investigation, i.e. hypothesis testing under controlled conditions. Designers complained that the scientific method was difficult to apply and did not yield usable or generalisable results. Newell and Card (1985) advocated revising this approach by adopting a more ‘engineering style’ of investigation and theory development. Unfortunately, such approaches, epitomised by methods like GOMS (Card, Moran and Newell, 1983), did not yield results that could be generalised. The ‘HCI as design science’ approach also attempts to address the complaints of designers and the problems of applying mainstream psychology (see Carroll and Campbell, 1986, for a discussion of this problem). Design science, as epitomised by artefact theory (Carroll and Campbell, 1989), suggests that research proceeds from more informal methods of collecting qualitative data on the use of artefacts, such as programs, to generate wider theories. The design science approach is criticised for lacking appropriate methods and for the difficulty of producing legitimate theory from data that may be biased by other considerations, e.g. profit. The third approach, ‘HCI as an engineering discipline’, sits between the science and design science approaches. It relies on the establishment of commonly agreed principles which can be made operational and therefore practical to apply, e.g. discount heuristic evaluation (Nielsen, 1994). Problems with the engineering approach are the lack of consensus on what the principles should be and difficulties in establishing legitimate measurements of performance.

Sasse concludes her review of HCI research with a list of pointers on how and why HCI research should proceed. She notes that it is important that research produces useful results for practitioners, that the results should be theory building and that they should be integrated into an accepted wider body of HCI knowledge. Sasse also suggests that GT (Glaser and Strauss, 1967) could be used as a framework in HCI research to help generate theory in specific HCI research projects and to integrate other research projects into a meta-theory of HCI. GT is a strategy that can be adopted in the analysis of primary qualitative data, but also of quantitative data. GT commences with the collection of data without a prior theoretical framework and then proceeds to explore the data to see what patterns, themes or issues can be exposed. Analysis continues by identifying additional evidence in the data that supports the patterns, issues or themes and concludes by looking for evidence to connect them together. The advantage of GT is that it is theory generating, not theory testing, so it should be possible to produce a theory that fits the data, albeit only for the study in question.
GT has been criticised for failing to acknowledge that researchers start GT analysis with assumptions and interests, not as blank slates (Blumer, 1979). However, Strauss and Corbin (1990) have gone some way to address these concerns and have attempted to systematise the influence of prior assumptions and interests on the research process. It is unlikely that the approaches of HCI research as science, design science or engineering would lead to an account of the learnability of HATs. HCI as science requires prior theory to predict performance and is not theory generating. HCI as design science lacks the rigour to satisfactorily generate an account beyond producing guidelines for the next implementation. HCI as engineering relies on the consensus of the research and practitioner community and hence again is unlikely to produce an account of the learnability of HATs.

1.2 Hypermedia authoring tools

There is a distinction between multimedia and hypermedia authoring programs, although in essence most contemporary multimedia authoring tools enable hyperlinks to be added and could therefore be classified as HATs. Another distinction is made between tools which generate links dynamically on the basis of criteria from a content-base (e.g. Microcosm) and tools which enable the purposeful and deliberate linking of nodes together in semantic, procedural or navigational relationships (e.g. ToolBook, HyperCard). HATs can be categorised by the design metaphor used to create the hypermedia: control flow (e.g. Macromedia Authorware); book (e.g. Apple HyperCard and Asymetrix ToolBook); music score (e.g. Macromedia Director); and directory and file structure (e.g. Microsoft FrontPage). In addition to the design metaphor, the interface metaphor should make the HAT as ‘transparent’ as possible to the user (Shneiderman, 1998). Typically, MacOS and Windows-based tools all use a standard ‘WIMP’ plus floating palettes approach, with applications adopting varying degrees of direct manipulation (Shneiderman, 1998). HATs often incorporate a scripting language so that any required interactivity or functionality unavailable through standard facilities can be programmed. Some HATs provide overviews so that the structure of the hypermedia can be seen as it evolves, e.g. Microsoft FrontPage or Macromedia Dreamweaver. The overview facility may be an implicit part of the metaphor; for example, in the control flow metaphor it is possible to see the flow of the whole application with ‘subsections’ encapsulated as single icons. Another important factor is the degree to which the HAT is Web-compatible, i.e. to what degree applications can be integrated with the Web. The key distinguishing feature explored in this paper is the learnability and usability of the HAT.

1.3 Usability

Nielsen (1993) presents a high-level model of system acceptability encompassing usability and learnability. The model shows that usability is an interplay of a wide range of factors, some intrinsic to a program, such as ‘easy to remember’, and some external to the program, such as ‘social acceptability’. ‘Ease-of-learning’ and ‘ease-of-use’ are widely accepted terms; others, like efficiency and memorability, are overlapping concepts and include guessability (Jordan et al., 1991), visibility (Gilmore, 1991), acceptability (Nielsen, 1993; Gilmore, 1991; Davis et al., 1989), transparency (Maas, 1983), utility or effectiveness (Sutcliffe, 1995) and task match (Eason, 1984).
There are other factors described in the literature, including Green's (1989) cognitive dimensions. Reliable and predictive usability evaluation is the recurring focus of research, with many methods and techniques proposed. The main problems with usability evaluation methods are the subjectivity of the evaluation and the interplay of a legion of factors, including the characteristics of the users, the environment and the sample size of the user group, leading to problems in isolating the individual factors under examination. As Nielsen (1993) points out, the ‘Holy Grail’ for usability engineers would be the development of analytical methods that allow designers to predict the usability of an application without testing. This goal is concomitant with cognitive psychologists' goal of a predictive model of users and their interactions with computers.

1.4 Learnability

The process of learning and the learnability of computer programs have been studied (Napier et al., 1989; Allwood, 1990; Wilson et al., 1990; Davis and Wiedenbeck, 1998), although there does not seem to be any generally accepted model of the dynamics of learning computer programs in the literature. There is also an enormous body of research dedicated to human learning, but its relationship with the learning of computer programs seems under-explored, with Davis and Wiedenbeck's (1998) references to assimilation learning theory (Ausubel, 1968) a rare exception. Many authors (Preece, 1994; Shneiderman, 1998; Faulkner, 1998) simply equate learnability with the time required to learn some aspect of an interaction with a computer. Roberts and Moran (1983) and Whiteside et al. (1985) found that ease-of-use and ease-of-learning are strongly related and even congruent. Davis et al. (1989) similarly suggest that learning and using are not separate, disjoint activities. Hence, in the absence of a concrete model of learnability it is necessary to consider those elements of usability models that are pertinent to learnability.

Many models of usability identify the factors that affect usability but pay scant attention to the interaction of factors, e.g. Green's (1989) cognitive dimensions and Dix et al.'s (1998) framework. Shackel's (1986) four defining criteria of usability (effectiveness, learnability, flexibility and attitude) include widely accepted concepts, although these concepts are not further deconstructed. Kellogg (1987) and Tullis (1986) introduced the idea of ‘consistency’ as an attribute of usability, and there are references to more generalist concepts like ease-of-use and ease-of-learning (Nielsen, 1993; Eason and Damodaran, 1981). Sutcliffe (1995) uses the idea of ‘utility’, which is similar to Davis et al.'s (1989) ‘perceived usefulness’. However, these again do not probe any deeper into the concept of learnability. Green's (1989) cognitive dimensions can be shown to correspond to each of Shackel's more generic criteria but provide a finer-grained analysis of the phenomena of usability. Dix et al. (1998) present a framework for learnability based on a series of system descriptors. Unfortunately, Dix et al. provide no empirical evidence to support the framework, although it does have some correspondence to Green's cognitive dimensions. There seem to be direct correspondences between some of the factors: Green's closeness of mapping and Dix et al.'s familiarity; consistency, which is the same in both models; secondary notation and substitutivity; visibility and observability. Some of the factors are corollaries of the dimensions, e.g. Green's abstraction gradient to Dix et al.'s predictability, and Green's error-proneness to Dix et al.'s recoverability. Carroll (1998) has studied the principles of the design of instructions that accompany computer programs, but not the principles of usability and learnability. A causal model would give designers a better idea of the effects that changing one sub-factor/factor has on other factors and on the learner.

1.5 Learning theories

There are many contemporary learning theories but no single accepted theory. Theories often ‘borrow’ ideas from each other, so the distinctions between classes of theory become blurred. Contemporary reviews of learning theory (Bigge, 1982; Slavin, 1991) suggest that theories can be ascribed to one of two paradigms: behavioural or cognitive. Behavioural theories focus on external changes in learner behaviour as a result of external stimuli. Cognitive theories are more concerned with the internal cognitive processes of knowledge acquisition; thus Bruner and Anglin's (1974) simultaneous processes of acquisition, transformation and checking of the pertinence of knowledge correspond with Lindsay and Norman's (1977) processes of accretion, restructuring and tuning of knowledge. Accretion is the process of filling existing mental schemata, restructuring is the process of creating new schemata to accommodate new knowledge and tuning is the process of making minor adaptations to schemata. Other theories (Bandura, 1971; Gagne, 1977) are eclectic in that they contain elements of both behaviourist and cognitive paradigms. Gagne (1977) has extended the information-processing model of learning and memory into a theory of the events of learning. In Gagne's model, the external phases of learning are paired with the cognitive processes taking place in a learner's mind, plus the external instructional events for each phase. Recently, interest has been growing in constructivist learning theory (Vygotsky, 1962; von Glasersfeld, 1984), which starts from the principle that deep learning only takes place when carrying out ‘authentic’ tasks and which emphasises the social context of learning (Bandura, 1971). Contemporary HCI research has tended to adopt a cognitivist stance to learning.

2 The studies

2.1 Study 1: comparing the learnability of two HATs

The purpose of this study was to identify the issues affecting the learnability of HATs when used by academics. Two HATs were specially developed with minimal functionality and minimal detail in their interfaces. One HAT, called TBM, was based on the book design metaphor; the other, called SHAPE (Elliott et al., 1996), was based on the concept-map design metaphor. The concept-map design metaphor was hypothesised as being easier to learn. The book design metaphor was chosen because of its familiarity. Fig. 1 shows the concept-map based HAT being used to develop a concept-map of the subject ‘Information Technology’. The shaded concept-map nodes in Fig. 1 represent links to further sub-maps which deconstruct concepts further. Nodes with no sub-maps link to the contents for that node, as shown in Fig. 2. The book-based HAT was a simplified version of Asymetrix ToolBook which allowed pages like those shown in Fig. 2 to be added using a simple dialogue box, and which provided an easier mechanism for creating hyperlinks than the normal ToolBook mechanism, as shown in Fig. 3. A group of 16 HE teaching staff agreed to participate in the study.
Fig. 1. The concept-map based design metaphor HAT being used to develop a concept-map for the subject-domain ‘IT’.

Fig. 2. Contents for a node called ‘definition’ using the concept-map based design metaphor HAT.

Fig. 3. A page in the book-based design metaphor HAT.

A pilot study was conducted with five subjects in order to refine the method and verify the face validity of the measures used (Elliott, 1999). Subjects in the full study were teaching staff at a higher education institution in South Wales, none of whom had previously constructed hypermedia. The experiment involved subjects using both HATs to construct a simple but clearly useful hypermedia application. A simple application was chosen so that the focus was on the learning of the HATs rather than on issues relating to the complexity of the knowledge domain. Subjects were not allowed to note anything on paper prior to using either HAT, so that the intrinsic learnability of the tools was being examined rather than the note-making skills of the users. Along with gender, age and previous experience of concept-maps, the other independent variables measured were: ‘Prior Computing Skill’; ‘Spatial Ability’; ‘Base Motivation’; and ‘Order of Use of Packages’. The dependent variables were: ‘Ease-of-Learning’; ‘Ease-of-Use’; ‘Task Match’; and ‘Motivation-to-continue’, measured using Likert scale-based questions.

An objective measure of ease-of-learning and ease-of-use with which to compare the two metaphors was required. There are many references to measures of ease-of-learning and ease-of-use for computer interfaces/programs in standard textbooks on usability (Davis et al., 1989; Napier et al., 1989). This study adapted the measure developed by Molnar and Kletke (1996) and distinguished ease-of-learning from ease-of-use, using a bank of 26 Likert statements to which subjects had to state their level of agreement. This statement bank contained four statements reflecting how subjects rated each HAT on its capabilities in enabling the task to be completed (Task Match). The validity of the adapted Molnar and Kletke measure was checked during the pilot study. After each of the subjects in the pilot had been through the session, the values for ease-of-learning, ease-of-use and task match calculated from the subjects' answers to the Likert statements were presented to the subjects, who were then invited to comment on whether the values matched their perceptions of the two HATs in terms of ease-of-learning, ease-of-use and task match. Subjects were happy with how the values calculated from their answers reflected their perceptions and experiences of the two HATs, so no further adjustments were made to the instruments.
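The 26 statements and the exact scoring formula are not reproduced here, so the following is only a minimal sketch of how Likert responses might be aggregated into the three per-HAT values; the statement groupings, the 5-point scale and the mean-based score are all assumptions for illustration, not the actual Molnar and Kletke adaptation.

```python
from statistics import mean

# Hypothetical grouping of the 26-statement bank into the three measures.
GROUPS = {
    "ease_of_learning": range(0, 11),   # statements 1-11 (assumed)
    "ease_of_use":      range(11, 22),  # statements 12-22 (assumed)
    "task_match":       range(22, 26),  # the four Task Match statements
}

def score_hat(responses):
    """responses: 26 agreement ratings on a 1-5 Likert scale, one per statement."""
    assert len(responses) == 26 and all(1 <= r <= 5 for r in responses)
    return {measure: mean(responses[i] for i in idx) for measure, idx in GROUPS.items()}

# Example: one subject's (invented) ratings for one HAT.
print(score_hat([4] * 11 + [3] * 11 + [5] * 4))
# -> ease_of_learning 4, ease_of_use 3, task_match 5
```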
More pertinent to the investigation of learnability was the qualitative data collected by audiocassette recorder during the sessions and subsequently transcribed for analysis. During the sessions subjects were invited to make comments and to ask any questions they wished, corresponding to question-asking protocols (Kato, 1986). Prompted recall (Barnard et al., 1986; Briggs, 1988) was also employed at the end of each session to try to capture the subjects' experiences during the sessions. Subjects were shown a number of screen dumps of key stages in their use of the two HATs and invited to comment on what was happening and what their experiences were at each stage. After this, subjects were invited to answer a number of open questions to elicit their overall feelings about, and experiences of, learning the HATs.

The technique of ‘hierarchical categories’ (Richards and Richards, 1994), embodied in NUD.IST (Richards and Richards, 1991) and compatible with the principles of GT, was used to analyse the qualitative data contained within the transcripts. Hierarchical categories offer a systematic approach to analysing unstructured qualitative data. Hierarchical categories have been used extensively in the analysis of other types of study and were considered appropriate for the analysis of similarly complex HCI data. The process of analysis with NUD.IST involves systematic examination of the qualitative data using the method of constant comparison. If a fragment of data (a data bit: a phrase, sentence, paragraph or even a comment or annotation made by the researcher) merits it, a new category is created and the corresponding data indexed to it. Alternatively, a data bit might be indexed to a previously created category, or even to several categories. NUD.IST has a number of utilities allowing sophisticated searches to be conducted; for instance, comments on ease-of-learning can be cross-referenced with the subjects' computer skill and the results indexed to a new category. In GT, the process of categorising the data is called ‘open coding’. The process of categorising the data is problematic: should one start with pre-defined categories (Jones, 1985) or adopt a GT approach (Strauss and Corbin, 1990) and identify categories during the analysis? Often a middle ground is adopted (Dey, 1993), with some pre-defined categories and further categories produced as they emerge from the data. After each session, certain passages in each transcript were automatically indexed to pre-defined categories corresponding to the independent variables using NUD.IST's command file utility. Some of the independent variables had to be recoded from interval to ordinal data to make them appropriate for NUD.IST. This initial categorisation was done to provide a useful structure with which to explore the data rather than as a serious method of categorisation. Following this process, each transcript was repeatedly examined and data bits were indexed to categories in the manner described above (a minimal sketch of this kind of indexing is given below).
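To make the indexing scheme concrete, here is a small sketch of the kind of category index described above: data bits indexed to one or more hierarchically named categories, with a cross-referencing search whose result is itself indexed to a new category. It illustrates the idea only; it is not NUD.IST's actual implementation or API, and all identifiers are invented.

```python
from collections import defaultdict

# index: hierarchical category name -> set of data-bit ids
index = defaultdict(set)

def code(bit_id, *categories):
    """Open coding: index one data bit to one or several categories."""
    for cat in categories:
        index[cat].add(bit_id)

# A few (invented) data bits from transcripts.
code("s3_t012", "ease-of-learning/positive", "hat/SHAPE")
code("s3_t047", "ease-of-learning/negative", "hat/TBM", "computer-skill/low")
code("s7_t101", "ease-of-learning/negative", "computer-skill/low")

def cross_reference(cat_a, cat_b, new_cat):
    """Index the bits appearing under both categories to a new category."""
    index[new_cat] = index[cat_a] & index[cat_b]
    return index[new_cat]

# e.g. ease-of-learning comments from low-computer-skill subjects:
print(cross_reference("ease-of-learning/negative", "computer-skill/low",
                      "eol-negative-by-low-skill"))
# {'s3_t047', 's7_t101'} (set order may vary)
```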
It was important that the knowledge domain was equally familiar to each subject, so that the task would be as similar as possible for everyone. The task chosen was to construct a skeletal hypermedia, containing at least three hyperlinks, describing the faculty, its courses and its staff to potential students, i.e. an on-line prospectus. All subjects were situated in the same faculty, so their understanding of the term ‘faculty’ would be similar, hence making the task equitable for all subjects. It was not necessary to add content in the form of text and other media, only the overall structure, as these techniques are the same for both HATs.

2.1.1 Quantitative results

The details of the quantitative analysis can be found in Elliott et al. (1997) and in depth in Elliott (1999). Overall, the statistically significant conclusions were that:

- subjects rated the diagrammatic nature of the concept-map HAT higher for learnability and usability than the book-based metaphor;
- the higher rating of the ‘learnability’ of the concept-map based HAT was independent of the subject's computing skill, whereas the learnability rating of the book-based HAT positively correlated with the computing skill of the subject;
- subjects created more links and nodes with the concept-map based HAT, and a subsequent content analysis indicated that the structure of the links and nodes created with the concept-map based HAT matched the task well;
- the subjects' declared understanding of the concept-map based HAT was significantly higher than of the book-based HAT;
- the subjects' spatial relations score correlated with the number of nodes and links produced with the concept-map based HAT but not with the book-based metaphor;
- subjects were more motivated to continue with the concept-map based HAT, but there was no relationship between their extrinsic and intrinsic motivation scores and the dependent variables.

The qualitative results help to affirm the hypothesis that concept-maps are easier to understand and use than typical WIMP-based authoring tools and reduce the cognitive load associated with learning HATs.
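The statistical test behind the correlations reported above is not specified here; since the ratings are ordinal, a rank correlation such as Spearman's is one plausible choice. The sketch below uses invented data purely to illustrate the form such an analysis might take.

```python
from scipy.stats import spearmanr

# Invented illustrative data for 8 of the 16 subjects:
computing_skill    = [1, 2, 2, 3, 3, 4, 4, 5]                  # prior skill (ordinal)
tbm_learnability   = [1.8, 2.1, 2.4, 2.9, 3.1, 3.6, 3.9, 4.2]  # book-based HAT rating
shape_learnability = [4.1, 3.8, 4.3, 4.0, 4.2, 3.9, 4.4, 4.1]  # concept-map HAT rating

# Book-based HAT: rating rises with skill (strong positive rank correlation).
print(spearmanr(computing_skill, tbm_learnability))
# Concept-map HAT: rating roughly flat, i.e. independent of skill.
print(spearmanr(computing_skill, shape_learnability))
```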
2.1.2 Qualitative results

Qualitative data analysis led to the identification of the four factors defined in Table 1: transparency of operation, transparency of purpose, accommodation and accomplishment. Further analysis indicated that transparency of operation could be further sub-divided into operational momentum, logic of operation, noise/economy of dialogue, mental model match, external consistency and internal consistency, and transparency of purpose into task match and instantaneity, as described in Table 2 and evidenced by the examples of data bits in Table 3. Tables 1 and 2 also indicate the impact of each factor; for example, poor operational momentum results in longer completion times. Table 4 includes evidential data bits supporting the relationships between the factors. The meaningfulness of these factors and their relationships to each other are discussed in Section 3.2 below.

Table 1. Factors identified in the qualitative data analysis

Transparency of operation
  Description: The summative aspects of a HAT that allow users to find, understand and then rapidly and easily use the functions of the HAT to achieve a task or sub-task.
  Observation: Subjects learnt and worked faster with SHAPE but tended to stall when using TBM. SHAPE was rated more positively than TBM on the transparency of operation sub-factors.
  Impact: Poor transparency of operation leads to longer completion times for tasks.

Transparency of purpose
  Description: The summative aspects of a HAT that support a user's ‘image’ of the end-product at any point during its use.
  Observation: Neither HAT enabled subjects to see the end-product straightforwardly.
  Impact: Poor transparency of purpose leads to increased effort and time required to learn and use HATs.

Accommodation
  Description: The degree to which a HAT puts a user at ease.
  Observation: Subjects wanted to give up learning and using TBM when they became frustrated with its operation; SHAPE was seen to be more accommodating.
  Impact: Poor accommodation leads to frustration and to failure to continue to use a HAT.

Accomplishment
  Description: The sense imbued in the learner that the HAT has enabled them to author something useful. This is very similar to Shackel's (1986) idea of ‘satisfaction’.
  Observation: Subjects wanted to continue to use SHAPE and were ‘pleased’ that they had created a detailed multi-layered concept-map and corresponding hypermedia.
  Impact: Poor accomplishment leads to a decreased chance of re-use of the HAT.

Table 2. Description of the transparency of operation and transparency of purpose sub-factors identified in the qualitative data analysis

Transparency of operation sub-factors:

Operational momentum
  Description: The summative aspects of a HAT that lead the user on to the next stage, iteratively if necessary.
  Observation: When inputting a new concept-map node in SHAPE, subjects are not led to create links to other nodes.
  Impact: Poor operational momentum leads to longer completion times.

Logic of operation
  Description: The degree to which the completion of tasks and sub-tasks whilst using the HAT has an internal logic, so that users can ‘guess’ what action they need to take next.
  Observation: Some subjects said that they found TBM confusing and not easy to remember.
  Impact: Poor logic of operation leads to longer completion times and a tendency not to re-use the HAT.

Noise/economy of dialogue
  Description: The perceived level of unnecessary information or complexity in completing a task or sub-task. The identification of this factor corroborates Kozma's (1992) assertion that the complexity of a tool must be kept to a minimum.
  Observation: TBM displayed unnecessary tool bars and palettes. Some of these could be turned off, but compared with the minimalism of SHAPE some subjects still said that they were distracted.
  Impact: High noise leads to errors and slowness in completing tasks.

Mental model match
  Description: The degree to which the HAT can accommodate a range of user models of what a HAT is and how a HAT should operate.
  Observation: Some subjects saw TBM as a word processor, but clearly its functioning was not like a word processor.
  Impact: Poor mental model match leads to longer learning times.

External consistency
  Description: The degree to which a HAT is consistent with other HATs.
  Observation: Standard menu items like ‘File’ and ‘Edit’ facilitated subject navigation.
  Impact: Poor external consistency leads to longer learning times.

Internal consistency
  Description: The degree to which the internal operations/functions are consistent with each other.
  Observation: Subjects could create sub-maps in SHAPE once they had created the top-level map, i.e. they could begin to work with a HAT if all operations follow a similar pattern and successfully guess what to do next.
  Impact: Good internal consistency results in efficient use of the HAT.

Transparency of purpose sub-factors:

Task match
  Description: The degree to which a HAT is able to produce something that matches the users' expectations. Task match is an accepted concept (Eason, 1984).
  Observation: Although subjects could create concept-maps, they could only partially see how they related to the task, i.e. they could create maps easily enough but were not sure what use they were.
  Impact: Poor task match leads to confusion and slower learning and completion times.

Instantaneity
  Description: The property of a HAT that gives the user an immediate sense of what the end-product will be like. (Most web-authoring tools are extremely bad at this.)
  Observation: When creating links in TBM, users could not use them without changing the screen mode and then became frustrated.
  Impact: Poor instantaneity leads to frustration and a tendency not to re-use the HAT.

Table 3. Evidential data bits supporting the factors transparency of operation and transparency of purpose and their sub-factors

Transparency of operation
  Operational momentum: “What I quite liked was when I typed the words in it came up automatically with a series of boxes—you could see what was going on. I wasn't quite sure how I was going to move them around but I felt I had to move them around so I could see the next logical stage.” “It's straightforward, it's knowing what to do next.”
  Logic of operation: “Straightforward to follow.” “the confusion I had was the operations—what I had to press.”
  Noise/economy of dialogue: “I understand the concept but I don't like the screens, I don't like the way that it says I am on page 3, What's page 3? What's page 1 and 2?”
  Mental model match: “It's the way I think I suppose when I'm creating structure—I like to keep the structure in my mind and this is a structure I'm creating.” “I automatically wanted to structure it with main headings and subheadings but nothing allowed me to do that.”
  External consistency: “Consistent with other programs.”
  Internal consistency: “A program that once you've learned the fundamentals, the bells and whistles follow along the same pattern.”

Transparency of purpose
  Task match: “The result of the prototype didn't give me what I wanted.”
  Instantaneity: “It doesn't seem to offer so much so quickly as SHAPE. It got there eventually.”

Table 4. Evidential data bits for the factors

Transparency of operation, transparency of purpose and accommodation:
  Researcher: “Was there anything confusing about using TBM?” Subject 1: “Having to come to it cold and quickly getting up and running with it there was a little confusion but that was just the learning process and getting used to the package.”
  Researcher: “How well do you think TBM would allow you to produce an application for your own teaching?” Subject 2: “It seems a bit early to say, you seem to need to do a lot more—I don't feel as confident.”
  Researcher: “Was there anything about TBM that made it easy to use?” Subject 3: “…I was rather frightened of it to start with—if you hit F3 there's nothing in my brain that tells me that F3 is a key thing to hit.”

Accommodation and accomplishment:
  Researcher: “After having learnt to use TBM do you have any general thoughts on what makes a program easy to learn?” Subject 1: “If you knew you were going to get some output from it and had a requirement to use a particular package—you were going to get the results reasonably quickly and easily, you're going to continue.”
  Researcher: “How do you think SHAPE and TBM compare with each other?” Subject 2: “With SHAPE you see the results straight away and you can amend it…The end product of SHAPE will be more appealing because you can build up the maps.”

2.1.3 Evidence of causality

There are a number of strategies for investigating the relationships between categories: searching the index system for data bits which are indexed to two or more categories; identifying instances in the data where indexes to particular categories are in close proximity; and inference/observation of links between data bits. In GT, this process is called ‘axial coding’.
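The first two axial-coding strategies just listed (co-indexing and proximity of indexes) are mechanical enough to sketch. The following illustration, again with invented identifiers rather than NUD.IST's real facilities, shows both kinds of query over a category index in which each indexed data bit carries its position in its transcript.

```python
# Category index: category -> {data-bit id: position of the bit in its transcript}.
# Categories, ids and positions are invented for illustration.
index = {
    "transparency-of-operation": {"s1_t03": 3, "s2_t11": 11},
    "accommodation":             {"s1_t04": 4, "s2_t40": 40},
    "accomplishment":            {"s1_t05": 5, "s2_t41": 41},
}

def co_indexed(cat_a, cat_b):
    """Data bits indexed to both categories (first axial-coding strategy)."""
    return index[cat_a].keys() & index[cat_b].keys()

def in_proximity(cat_a, cat_b, window=3):
    """Pairs of bits from the two categories lying close together in the same
    transcript (second strategy), taking the subject prefix as transcript id."""
    return [(a, b) for a, pa in index[cat_a].items()
                   for b, pb in index[cat_b].items()
                   if a.split("_")[0] == b.split("_")[0] and abs(pa - pb) <= window]

print(in_proximity("transparency-of-operation", "accommodation"))
# [('s1_t03', 's1_t04')], only the s1 pair lies within the window
```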
Tentatively, the data suggested that there were causal links between transparency of operation, accommodation and accomplishment, in that the transparency of operation of a HAT leads to a sense of accommodation, and a user sensing accommodation will develop a sense of accomplishment. This causal relationship is indicated in Fig. 4 and defines the learnability of the HAT. Table 5 contains some examples of data bits supporting the existence of these causalities.

Table 5. Evidential data bits of the sub-factors from study 1 suggesting causality (factor: examples of annotations supporting the category)

Transparency of operation: no new evidence found.
  Operational momentum: “Variety of prompt types helps users to proceed.” “Some programs do have a sense of momentum.”
  Logic of operation: “Logical consistency of the HAT may not correspond to the users' sense of logic.”
  Noise/economy of dialogue: “Complexity reduces momentum.” “It is difficult to make complex applications transparent.”
  Mental model match: “Users seek for a ‘handle’ to understand.” “The need to provide a handle.”
  External consistency: “Finding a handle—looking for an equivalent idea to understand the new HAT.” “Connectedness to other tools.”
  Internal consistency: no new evidence found.
  Hidden structure: “The danger of hidden information.” “Hidden conditions.”
Transparency of purpose: “The speed of visibility of result is important.” “Any activity undertaken must reflect in the final product.”
  Task match (relabelled as utility): “Proficiency precedes perceived usefulness.” “Variety of final products expected of HAT.”
  Instantaneity: no new evidence found.
Accommodation: “HATs must induce confidence.” “Confidence partially dependent on familiarity.”
Accomplishment: “Motivation to learn driven by accomplishment/utility—Computer Skill dependent.”
Motivation: “Motivation to learn driven by accomplishment/utility—Computer Skill dependent.” “Motivation and accomplishment are interrelated.”

Fig. 4. Causal model of learnability.

2.2 Study 2: focus group study of the learnability/usability of HATs

The value of focus groups in the evaluation of HCIs has been recognised (O'Donnell et al., 1991). O'Donnell et al.'s argument for using focus groups is that they avoid the dangers of introspection, i.e. significant personal idiosyncrasies may colour the accounts given by individual subjects and hence no real insight into the cognitive processes taking place may be elucidated.
In O'Donnell's words, focus groups “allow subjects to remind one another of events; they encourage subjects to reconstruct processes; they explore gaps in the subject's thinking; they overcome the ‘not worth mentioning’ problem, and importantly for HCI, they allow new solutions to emerge”. The objective of the focus group study was to ‘triangulate’ the results of the qualitative analysis of the learnability/usability comparison described above and to identify any other factors important in learnability. The four focus group participants were HE staff responsible for training academic staff in the use of IT packages, including HATs. The participants had all learnt and used at least one HAT beforehand. The overall procedure of the focus group was for the co-ordinator to explain and demonstrate a range of HATs to the participants; the co-ordinator then invited the participants to respond to a series of open questions or ‘cues’. The cues were developed from the analysis of the data from the first study and included references to some of the key categories derived. The HATs used in the focus group reflected each of the main design metaphors:

- a book/card metaphor (Asymetrix ToolBook);
- two map-based metaphors (Webmapper (Freeman and Ryan, 1997) and SHAPE (Elliott et al., 1996));
- a control flow-based HAT (Macromedia Authorware);
- a file directory-based HAT (Microsoft FrontPage).

The focus group session was recorded on audiotape and transcribed. The data was analysed on paper print-outs of the transcripts and was therefore kept separate from the data of the first study. The paper transcripts were scanned and annotated either with reference to pre-existing categories or, where it was merited, with a new category or idea.

2.2.1 Results

The results of the focus group reinforced the tentative causalities suggested in the first study, together with the relationship between transparency of purpose, motivation and accomplishment. Table 6 gives examples of evidential data bits. In the light of this analysis, the model of learnability was modified to include hidden structure and motivation, as shown in Fig. 4. The model suggests that the sub-factors of transparency of operation and transparency of purpose contribute to the emergence of these holistic properties. In turn, transparency of operation and transparency of purpose promote a sense of accommodation, and transparency of operation, transparency of purpose and accommodation lead to a final sense of accomplishment. The sub-factors are more closely associated with properties of the HATs themselves, whereas transparency of operation, transparency of purpose, motivation, accommodation and accomplishment are emergent properties, being part of a dynamic between user and HAT.
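The causal structure just described can be summarised compactly as a directed graph. The sketch below is one possible encoding of the revised model, with each edge read as ‘promotes’; the edge set follows the prose description of Fig. 4 and the grouping of sub-factors in Table 5, and is illustrative rather than a reproduction of the published figure.

```python
# Illustrative encoding of the revised causal model of learnability (Fig. 4).
# Groupings of sub-factors follow Table 5; edges are read as "promotes".
SUB_FACTORS = {
    "transparency of operation": [
        "operational momentum", "logic of operation",
        "noise/economy of dialogue", "mental model match",
        "external consistency", "internal consistency", "hidden structure",
    ],
    "transparency of purpose": ["utility (task match)", "instantaneity"],
}

# Sub-factors feed the two transparency factors ...
EDGES = [(sub, factor) for factor, subs in SUB_FACTORS.items() for sub in subs]
# ... which in turn promote the emergent properties.
EDGES += [
    ("transparency of operation", "accommodation"),
    ("transparency of purpose", "accommodation"),
    ("transparency of purpose", "motivation"),
    ("transparency of operation", "accomplishment"),
    ("transparency of purpose", "accomplishment"),
    ("accommodation", "accomplishment"),
    ("motivation", "accomplishment"),
]

def causes_of(node, edges=EDGES):
    """Return the direct antecedents of a factor in the model."""
    return [src for src, dst in edges if dst == node]

print(causes_of("accomplishment"))
```

Walking such a graph backwards from accomplishment recovers the chain of antecedents that a HAT designer would need to support.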
Table 6. Further evidential data bits suggesting causality from study 2 (relationship: examples of evidential data bits)

Transparency of operation, transparency of purpose and accommodation:
Researcher: “How can you get staff to create at the screen?” Participant 1: “Confidence and value—we can all see the value in using a tool, the students I teach can't see the value in using a word-processor.”
Researcher: “How at ease are you with this program?” Participant 2: “When I first came to it I found it frightening—well not frightening—that's the wrong word—it's a bit—I was thinking ‘oh my how am I going to get my head around this.’”
Researcher: “How at ease are you with this program?” Participant 1: “When you were demonstrating I felt quite comfortable with what you were doing.” Participant 3: “It is easier because you're coming with an idea of what a book is.”

Accommodation and accomplishment:
Researcher: “…PowerPoint has a high utility because people can see the benefit in using it quickly.” Participant 1: “That's what SHAPE is good at because I can see what I want to produce very quickly. [As for] Authorware—I'm not sure about its utility but I would enjoy using it.”

Transparency of purpose, motivation and accomplishment:
Researcher: “Do you think this program gives you a sense of accomplishment?” Participant 1: “I think you'd get a good one.” Participant 2: “From my point of view as a learner I would be very motivated by this because of the quality of the final product.”

In summary, the results:

- provide further support for transparency of operation, transparency of purpose, accommodation and accomplishment;
- provide further support for the sub-factors operational momentum, task match (relabelled as utility), noise/economy of dialogue and mental model match, and hence for transparency of purpose and transparency of operation;
- identify hidden structure as a new factor;
- isolate motivation as an important new factor, as indicated in the example data bits supporting accomplishment in Table 5;
- emphasise the importance of separating a knowledge-defining phase from a function-defining phase for the ease of learning of HATs.
It is interesting to observe that the new factors, hidden structure and motivation, emerged from the analysis of the focus group data but were not apparent in the data from the first study. This may be due to the different perspectives of HAT trainers in comparison with HAT users and learners. Trainers would be more likely to observe the effect of hidden structure on learner errors. Similarly, trainers would be more likely to see the effects of different levels of learner motivation on their efforts to master a HAT.

3 Discussion

This section examines how the two studies illustrate the way GT can facilitate the development of well-grounded HCI theory. It then argues that GT-derived theories have greater validity than those derived from other approaches to HCI research. Finally, the grounded model of learnability is compared with theories of learning.

3.1 The value of GT in HCI research

The application of GT in the two studies described has led to a model explaining the dynamics of learning for the HATs used in the studies. Clearly the model's validity and usefulness need to be tested in further studies. The question is: how would the explanatory model resulting from the application of GT differ from those resulting from the application of the HCI as science, design science or engineering approaches? The analysis of the quantitative data collected during the studies produced some interesting results that informed the structuring of the qualitative data collection exercise. Nevertheless, it is hard to see how designing and executing a study based on the scientific method would result in anything beyond detailed observations. Newell and Card's (1985) revision of the scientific method, embodied in GOMS and task analysis and based on the theories of cognitive psychology, may provide better insight into learnability than the classic scientific method of experimentation. However, a task analysis of the two HATs used in this study might result in a low-level model of the process of authoring hypermedia and a comparison of the efficiency of each HAT, but it would not provide much insight into how humans actually learn HATs.

HCI as ‘design science’ refers to a widening body of methods best epitomised by Carroll and Campbell's (1989) task-artefact cycle. The task-artefact cycle commences with a design rationale compiled from psychological claims about what would make an application more learnable or usable. The claims are based on prior experience of building similar or earlier versions of the same application. This initial step is, in principle, the same as the approach taken in the design of the two HATs used in the studies, for example the hypothesis that using concept-maps makes authoring easier. These specific claims are added to a database (referred to by Carroll as the ‘psychology of tasks’) that includes details of all the other experiences and phenomena of using earlier versions of the application. It is from exploring this database that the design for the new system is developed. The new design represents user scenarios embodying the chosen design claims. The new application or ‘artefact’ then becomes the case-material for the next design iteration. The problem with the task-artefact cycle and other design science approaches to HCI is that they do not produce anything generalisable: the psychology of tasks, for example, is only really pertinent to the specific application in question.
If this study had adopted the task-artefact cycle as an approach, the result would have been a collection of ‘experiences’ pertinent to the next generation of HATs without any deeper understanding of learnability. More recently, Carroll (2000) expanded upon the task-artefact cycle, describing how it can be used to generate theory. Carroll uses examples to show that theoretical ‘claims’ can be evaluated as part of the task-artefact cycle, with the conclusions helping to enrich wider theory. In this sense, design science at best only modifies extant theory, i.e. it is theory building but not theory generating. This contrasts with GT, which specifically attempts to generate theory to explain the phenomena to which it has been applied.

‘HCI as an engineering discipline’ is manifest in the wide range of user inspection methods (UIMs) developed to help HCI professionals develop and evaluate interfaces. Gray and Salzman (1998) classified UIMs (which they refer to as usability evaluation methods, UEMs) as expert review, heuristic evaluation, guidelines, expert walkthrough, heuristic walkthrough, guidelines walkthrough and cognitive walkthrough. Two main groups of UIMs can be identified (Gray and Salzman, 1998):

- Analytic UIMs examine the intrinsic features of an interface in an attempt to identify those that affect usability (the payoff) in some way: errors, speed of use, difficulty of learning and so forth. The desired mapping for analytic UIMs is from features to problems.
- Empirical UIMs typically begin with payoff measures and attempt to relate them to intrinsic features of the interface that can be changed to eliminate the payoff problem. The desired mapping for empirical UIMs is from problems to features.

Although it has been shown that performance (payoffs) can be linked to design (or intrinsic) features, this correspondence cannot be assumed and the links must be carefully forged. These definitions imply that applying empirical UIMs to the HATs used in this study would result in a measurement of their efficacy but not an understanding of the features causing the problems; empirical UIMs do not provide any explanatory methodology. Analytic UIMs, on the other hand, may provide some insight into features that might cause problems, but the inference to real problems cannot be made reliably or consistently. Apart from expert review, analytic UIMs apply prescribed guidelines to the conduct of evaluations, so the opportunities to observe phenomena outside the guidelines are limited. This problem has been identified in the application of the cognitive walkthrough (Wharton et al., 1992), where evaluators were observed finding problems beyond the limits of their walkthrough agenda. In general, UIMs are designed to identify problems, not to develop a deeper understanding of phenomena, and are therefore suitable for HCI professionals but perhaps not as research tools. Again this contrasts with GT, which specifically aims to develop a deeper understanding and is not limited to following a particular investigative agenda, and is therefore more suitable for HCI research.

So the application of GT leads to explanatory concepts, which are fundamentally different from the predictive concepts on which the other main traditions of HCI investigation focus. HCI as science, based on hypothetico-deductive methodology, leads to fine distinctions or observations which may not be as generalisable as desired.
Gray and Salzman's (1998) critique of four scientific ‘experiments’ attempting to compare UIMs highlights the severe problems of applying the scientific method in HCI research. HCI as design science may lead to a body of knowledge about the artefact under investigation, albeit not necessarily in a coherent form; at best the knowledge builds up extant theory. HCI as an engineering discipline enables the identification of problems but does not add to the development of a deeper understanding of phenomena. The advantage of GT is that it generates theory explaining the phenomena under investigation. GT may therefore be a better methodology for HCI researchers, whereas the other approaches may be better suited to HCI practitioners. The grounded theories generated from the use of GT methods may not be immediately practical, but they can inform the development of practical tools such as UIMs and make them better predictors of performance. GT also has the advantage of not being limited to the scope of the study in hand but being able to integrate other relevant material. GT could therefore become a mechanism for creating a unified approach to general HCI theory.

Although GT is promising as a generic approach in HCI research, a number of criticisms have been made. GT has been criticised for being naïve in the sense that it takes a tabula rasa view of inquiry, i.e. one without reference to prior theory or concepts. However, Glaser and Strauss (1967) note that the researcher does not approach reality as a tabula rasa but must have a perspective in order to see relevant data and abstract significant categories. Even so, GT is further criticised (Haig, 1995) for the lack of clarity in its inductive process of constant comparison: is it enumerative, eliminative, abductive or of some other form? However, prior to Haig's criticism, Strauss and Corbin (1990) had gone some way to addressing how prior theory and concepts are included in the GT research process.

3.2 Relationship of the model of learnability to other learnability models

Fig. 5 attempts to map the factors found in this study onto Green's cognitive dimensions. The correspondence with Green's dimensions was found to be surprisingly poor, and most of the correspondences shown in Fig. 5 are quite tenuous. Only hidden structure and Green's hidden dependencies, and external and internal consistency and Green's consistency, correspond reasonably well. It is probably fair to say that, excluding hidden structure and external and internal consistency, the factors found in this study are different from Green's cognitive dimensions. They differ in how they aggregate phenomena into factors; e.g. operational momentum incorporates Green's ‘error-proneness’ but embraces significantly more of the characteristics of a program. The factors also differ in the phenomena they describe; e.g. Green's dimensions do not seem to have an equivalent for the emergent property of accommodation, and although there is some correspondence between mental model match and closeness of mapping, they mostly address different phenomena. Green's dimensions focus on intrinsic characteristics of the artefact, whereas the learnability factors found in this study involve a dynamic between the user and the HAT. The problem with cognitive dimensions is that they are no more than a growing collection of factors originating from separate cognitive science research activities; hence the relation of one dimension to the next is never articulated.
The number of dimensions has grown from a few in 1990 to at least 13 today, but still without any kind of theory binding them together; hence their meaningfulness is suspect. Green argues that each dimension is orthogonal to the others; however, it is hard to see how this can be entirely true, e.g. can error-proneness and consistency really vary independently of one another?

Fig. 5. Mapping Green's (1989) cognitive dimensions to the factors found in this study.

The factors described by Dix et al. (1998) correspond better to the factors in this study, to the extent that this study provides supporting empirical evidence missing from Dix et al. Jordan et al. (1991) refer to guessability, which roughly corresponds to operational momentum, but in Jordan et al.'s definition it denotes the time taken to “get going with the system”, which is vague. Seely Brown (1986) refers to system opacity, the opposite of transparency of operation; however, he does not attempt to break his idea down in any greater depth. Shneiderman (1998) refers to the ‘transparency principle’, whereby the “user is able to apply intellect directly to the task; the tool itself seems to disappear”. The transparency principle thus has much in common with the transparency of operation found in this study, but Shneiderman's ‘transparency’ is defined holistically, with no reference to empirical data. The transparency of operation found in this study emerged from a bottom-up analysis of the data and is therefore significantly more substantive. There is also some resonance between the factors discovered in this study and Nielsen's (1993) ‘system acceptability model’. However, Nielsen's model consists of fairly high-level concepts, representing learnability simply as ‘easy to learn’, and hence does not offer the same level of detail. A more useful comparison is with Green's cognitive dimensions, since each dimension is described in detail and represents lower-level concepts commensurate with the level of detail contained in the model of learnability discussed in this paper.

Carroll (1998) has recently reviewed the work contained in his influential book on the minimalist design of instructions for computer programs, ‘The Nurnberg Funnel’ (Carroll, 1990). Carroll's work is primarily concerned with the principles of the design of instructions that accompany computer programs rather than a study of the principles of usability. In spite of this, his work resonates with the model of learnability. Carroll reflects that:

Our interpretation of our subjects' struggles was that they were actually making rather systematic attempts to think and reason, to engage their prior knowledge and skills, to get something meaningful accomplished.

Carroll concludes that systems need to evoke positive motivation toward meaningful activities, and reorients his focus to better supporting self-initiated sense-making by the user. Hence, encapsulated in his reflections are some of the concepts embodied in the model of learnability: transparency of operation and transparency of purpose, accomplishment and motivation, and the causal effect of instruction on a successful outcome. However, Carroll's work was not manifest in an explicit model; rather, it was a set of empirically derived principles of design, not a causal model.
The set of learnability attributes described by Garzotto and Matera (1997), although derived not from precise empirical work but from 10 years' experience, also has many parallels with the learnability model presented here.

3.3 Relationship of the model of learnability to learning theories

An attempt is made here to relate the causal model of learnability to established learning theories in order to underpin it from a theoretical perspective. The model of learnability incorporates two influences on the learnability of HATs: the intrinsic properties (transparency of operation and transparency of purpose) and the effects these properties have on the user's perceptions of the HATs (accommodation and accomplishment). Motivation, a ‘property’ of the user, is a confounding factor that is affected by the characteristics of HATs but also influences the sense of accomplishment. It is important to note that the model of learnability derived from the two studies was influenced by the instructional strategy used to support the subjects in learning how to use the HATs. The instructional strategy, as noted in Section 2.1 above, involved (i) the researcher demonstrating each HAT first, (ii) the researcher answering questions during the construction of the simple hypermedia application without directing the use of the HATs, and (iii) preventing subjects from making notes. If a more interventionist instructional strategy had been employed, it would have militated against examining the learnability of the HATs.

In reference to behaviourist theories, transparency of operation and transparency of purpose and their encompassed sub-factors can be viewed as complex ‘stimuli’; accommodation and accomplishment can be construed as ‘responses’. However, in terms of instrumental or operant conditioning (Skinner, 1938), there is no explicit reinforcement or feedback to the user, i.e. HATs are not intrinsically instructional. The other problem with behaviourism is its focus on observable behaviour: when learning a HAT there is little observable behaviour from which to infer the learning taking place, since the kind of learning involved is conceptual and oriented towards problem solving. Hence behaviourist learning theory does not offer clear correspondences with the model of learnability presented here.

Gagne's (1977) theory is quite pertinent to the model of learnability in that it includes the external conditions necessary for learning to take place. Clearly the model of learnability does not incorporate deliberate instructional events, and HATs are not generally designed to be instructional. However, there is no reason why HATs should not be imbued with characteristics that can promote instructional events, and in this sense there are some direct correspondences with the model of learnability. Fig. 6 shows the correspondences between Gagne's model and the model of learnability. There is clear correspondence between transparency of operation and transparency of purpose and the first five phases and associated instructional events of Gagne's model. The diagram shows where the individual sub-factors are most salient in each of the phases, though there is overlap and the sub-factors act over the other learning phases encompassed by each grey block. Operational momentum implies a kind of cadence in the use of a HAT, corresponding to the instructional event of directing attention. Ensuring that there is a consistent logic to the operation of a HAT corresponds to providing learning guidance.
The instructional event of enhancing retention can be promoted if the HAT matches the users' mental models, hidden structures are minimised and users do not perceive the HAT as complex. In the model of learnability, transparency of purpose promotes motivation in the same way that Gagne suggests that ‘activating motivation’ is an instructional event promoting the motivation learning phase. There is no direct correspondence with accommodation; however, accommodation is the emergent property of transparency of operation and transparency of purpose and therefore acts over the first five learning phases. Accomplishment corresponds to the performance phase and could be construed as supporting the instructional event of ‘eliciting performance’. The comparison with Gagne's model therefore indicates a significant correspondence with the model of learnability.

Fig. 6. Relation between Gagne's phases of learning and the model of learnability.

In addition to Carroll (1990), exponents of constructivist learning theory have criticised the ‘planned’ approach implied by Gagne's theory of learning for being prescriptive and not representative of the way knowledge and skills are actually acquired. Constructivism is pertinent to the model of learnability because it advocates designing the learning setting but not the precise steps by which learning will take place. This approach is reflected in the model of learnability in that the model defines the characteristics of a HAT that promote learning. However, the model of learnability does not address the important constructivist principle of the social context of learning (Vygotskii, 1962; Bandura, 1971). The social context is clearly important and influences motivation, accommodation and accomplishment, and perhaps the model needs revising to reflect this.

A complementary idea to the constructivist approach is the concept of ‘scaffolding’, first described by Bruner (1975) and developed by others (Jonassen, 1996; Jackson et al., 1996). Scaffolding is defined by Jonassen (1996) as:

…a cognitive apprenticeship technique in which the instructor performs or supports the performance of parts of the task that the learner is not yet able to perform. When the students are stuck, suggesting a solution or performing that step for them before they give up will help them complete the task and gain confidence to try other problems.

There are clear parallels here with the model of learnability. The factors of operational momentum and instantaneity, together with the minimisation of hidden structure, all support performance of the task and lead the user to a better sense of accommodation. In the definition above, the instructor can be replaced by the HAT and the student by the user.

There are no direct correspondences between the cognitive processes identified by Lindsay and Norman (1977) and the factors in the model of learnability. However, in order to facilitate these processes when learning a HAT, the details of the HAT must correspond with the user's existing schema (mental model match). Where restructuring is necessary, i.e. where the user has no pre-existing schema with which to understand the HAT (a novice, for instance), the HAT must have a high level of ‘intrinsic’ transparency of operation so that the user can develop a new schema easily.
4 Conclusion

The application of GT in the two studies has resulted in the emergence of a number of factors, which were compared primarily with the cognitive dimensions of Green (1989), who to date has posited the clearest and most defensible model. Whilst the factors identified using GT in this paper may not be entirely new, the interpretation of the way in which they interact is. This interpretation contrasts with Green's cognitive dimensions, which comprise a selection of apparently unrelated (orthogonal) factors. Green's a priori assumption of orthogonality is difficult to understand and contrasts with another strength of the GT approach, namely that it builds towards a kind of ‘concept-nexus’ from a detailed analysis of the data. The approach of building a ‘theory-nexus’ was first explored by Carroll and Kellog (1989), albeit without the benefit of GT. The factors found here interact in a causal model of learnability (Fig. 4) grounded in the qualitative data collected during the two studies. Contemporary models do not provide a theoretical framework for learnability, referring to it simply in terms of the time required to learn some aspect of an interaction with a computer (Preece, 1994; Shneiderman, 1998; Faulkner, 1998). Compared with other key contemporary models of learnability/usability, the model presented here provides a more complete picture of the dynamics of learning a HAT, although its general validity needs to be investigated further in new studies.

This paper has also compared the model of learnability with contemporary theories of learning. This comparison has provided an underpinning theory of learning for the model, particularly in relation to Gagne's (1977) model of the learning process and the concept of scaffolding. Correctly applied, GT leads to explanatory concepts and theories of HCI that differ significantly from the predictive outcomes of the other main approaches to HCI research. The application of GT is also evolutionary and integrative, in that early theories can be subsumed into new grounded theories, whereas the other HCI approaches are bound to extant theory and tend not to evolve beyond its bounds.

References

Allwood, C., 1990. Cognitive Ergonomics: Understanding, Learning and Designing Human–Computer Interaction. Academic Press, London.
Ausubel, D., 1968. Educational Psychology: A Cognitive View. Holt, Rinehart and Winston, New York.
Bandura, A., 1971. Social Learning Theory. General Learning Press, New York.
Barnard, P., Wilson, M., Maclean, A., 1986. In: Proceedings of CHI '86 Human Factors in Computing Systems, Boston.
Bigge, M., 1982. Learning Theories for Teachers. Harper and Row, New York.
Blumer, M., 1979. Concepts in the analysis of qualitative data. Social Review 27, 651–677.
Briggs, P., 1988. What we know and what we need to know: the user model versus the user's model in human–computer interaction. Behaviour and Information Technology 7 (4), 431–442.
Bruner, J., 1975. The ontogenesis of speech acts. Journal of Child Language 2, 1–19.
Bruner, J., Anglin, J., 1974. Beyond the Information Given: Studies in the Psychology of Knowing. Norton, New York.
Buckingham Shum, S., McKnight, C., 1997. International Journal of Human Computer Studies 47 (1).
Card, S., Moran, T., Newell, A., 1983. The Psychology of Human–Computer Interaction. LEA, Hillsdale, NJ.
Carroll, J., 1990. The Nurnberg Funnel. MIT Press, Cambridge, MA.
Carroll, J., 1998. Reconstructing minimalism. In: Carroll, J. (Ed.), Minimalism Beyond the Nurnberg Funnel. MIT Press, Cambridge, MA, pp. 1–18.
Carroll, J., 2000. Making Use. MIT Press, Cambridge, MA.
Carroll, J., Campbell, R., 1986. Softening up hard science: reply to Newell and Card. Human Computer Interaction 2, 227–249.
Carroll, J., Campbell, R., 1989. Artifacts as psychological theories: the case of human–computer interaction. Behaviour and Information Technology 8, 247–256.
Carroll, J., Kellog, W., 1989. In: Proceedings of ACM CHI '89 Conference on Human Factors in Computing Systems. ACM, New York, pp. 7–14.
Davis, S., Wiedenbeck, S., 1998. The effect of interaction style and training method on end user learning of software packages. Interacting with Computers 11, 147–172.
Davis, F., Bagozzi, P., Warshaw, P., 1989. User acceptance of computer technology: a comparison of two theoretical models. Management Science 35 (8), 982–1003.
Dey, I., 1993. Qualitative Data Analysis. Routledge, London.
Dix, A., Finlay, J., Abowd, G., Beale, R., 1998. Human–Computer Interaction. Prentice Hall, Europe.
Eason, K., 1984. Towards the experimental study of usability. Behaviour and Information Technology 3 (2), 133–143.
Eason, K., Damordaran, L., 1981. The needs of the commercial user. In: Coombs, M., Alty, J. (Eds.), Computer Skills and the User Interface. Academic Press, London.
Elliott, G., 1999. Towards Diagrammatic Hypermedia Authoring: Cognition and Usability Issues in Higher Education. PhD Thesis, Open University.
Elliott, G.J., Jones, E., Barker, P., 1996. In: Proceedings of ED-MEDIA '96, Boston, USA.
Elliott, G., Jones, E., Barker, P., 1997. Supporting the paradigm shift: hypermedia construction with concept maps—the easy way forward. Innovations in Education and Training International 34 (4), 294–298.
Faulkner, C., 1998. The Essence of Human–Computer Interaction. Prentice Hall, London.
Freeman, H., Ryan, S., 1997. Webmapper: a tool for planning, structuring and delivering courseware on the Internet. In: Proceedings of ED-MEDIA '97—World Conference on Educational Multimedia and Hypermedia, Calgary, Canada, pp. 372–377.
Gagne, R., 1977. The Conditions of Learning. Holt, Rinehart and Winston, New York.
Garzotto, F., Matera, M., 1997. A systematic method for hypermedia usability evaluation. The New Review of Hypermedia and Multimedia 3.
Gilmore, D., 1991. Visibility: a dimensional analysis. In: Diaper, D., Hammond, N. (Eds.), People and Computers VI. Cambridge University Press, Cambridge.
Glaser, B., Strauss, A., 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Publishing Co., Chicago, IL.
Gray, W., Salzman, M., 1998. Damaged merchandise? A review of experiments that compare usability evaluation methods. Human–Computer Interaction 13, 209–242.
Green, T., 1989. Cognitive dimensions of notations. In: Sutcliffe, A., Macaulay, L. (Eds.), People and Computers V. Cambridge University Press, Cambridge, pp. 443–460.
Haig, B., 1995. Grounded theory as scientific method. In: Nieman, A. (Ed.), Philosophy of Education Yearbook 1995.
Jackson, S., Stratford, S., Krajcik, J., Soloway, E., 1996. A learner-centred tool for students building models. Communications of the ACM 39 (4), 48–49.
Jonassen, D., 1996. Computers in the Classroom. Prentice Hall, New Jersey.
Jones, S., 1985. The analysis of depth interviews. In: Walker, R. (Ed.), Applied Qualitative Research. Gower, London.
Jordan, P., Draper, S., MacFarlane, K., McNulty, S., 1991. Guessability, learnability and experienced user performance. In: Diaper, D., Hammond, N. (Eds.), People and Computers VI. Cambridge University Press, Cambridge, pp. 237–248.
Kato, T., 1986. What ‘question-asking protocols’ can say about the user interface. International Journal of Man–Machine Studies 25, 659–673.
Kellog, W., 1987. Conceptual consistency in the user interface: effects on user performance. In: Bullinger, H., Shackel, B. (Eds.), Proceedings of the Second IFIP Conference on Human–Computer Interaction—INTERACT '87, Stuttgart, Germany, pp. 389–394.
Kozma, R., 1992. Constructing knowledge with learning tool. In: Kommers, P., Jonassen, D., Mayes, T. (Eds.), Cognitive Tools for Learning. Springer, Berlin, pp. 23–32.
Laurillard, D., 1993. Rethinking University Teaching, second ed. Routledge, London.
Lindsay, P., Norman, D., 1977. Human Information Processing: An Introduction to Psychology. Academic Press, New York.
Maas, S., 1983. Why systems transparency? In: Green, T., Payne, S., Van de Veer, G. (Eds.), The Psychology of Computer Use. Academic Press, London.
Molnar, K., Kletke, M., 1996. The impacts on the user performance and satisfaction of a voice-based front-end interface for a standard software tool. International Journal of Human–Computer Studies 45, 287–303.
Napier, H., Lane, D., Batsell, R., 1989. Impact of a restricted natural language interface on ease of learning and production. Communications of the ACM 32 (10), 1190–1197.
Newell, A., Card, S., 1985. The prospects for psychological science in human–computer interaction. Human–Computer Interaction 1, 209–242.
Nielsen, J., 1990. Hypertext and Hypermedia. Academic Press, London.
Nielsen, J., 1993. Usability Engineering. Academic Press, London.
Nielsen, J., 1994. Using discount usability engineering to penetrate the intimidation barrier. In: Bias, R.G., Mayhew, D.J. (Eds.), Cost-Justifying Usability. Academic Press, pp. 270–273 (Chapter 11).
O'Donnell, P., Scobie, G., Baxter, I., 1991. The use of focus groups as an evaluation technique in HCI. In: Diaper, D., Hammond, N. (Eds.), People and Computers VI. Cambridge University Press, Cambridge.
Preece, J., 1994. Human–Computer Interaction. Addison-Wesley, Harlow, England.
Richards, T., Richards, L., 1991. The NUD.IST qualitative data analysis system. Qualitative Sociology 14.
Richards, T., Richards, L., 1994. Using hierarchical categories in qualitative data analysis. In: Kelle, U. (Ed.), Computers and Qualitative Methodology. Sage, Beverley Hills, CA.
Roberts, T., Moran, T., 1983. The evaluation of text editors: methodology and empirical results. Communications of the ACM 26 (4), 265–283.
Sasse, A., 1997. Eliciting and Describing Users' Models of Computer Systems. PhD Thesis, University of Birmingham.
Seely Brown, J., 1986. From cognitive to social ergonomics and beyond. In: Norman, D., Draper, S. (Eds.), User Centred Systems Design. Lawrence Erlbaum, London.
Shackel, B., 1986. Ergonomics in design for usability. In: Harrison, M., Monk, A. (Eds.), Proceedings of the British Computer Society Human–Computer Interaction Specialist Group, pp. 44–64.
Shneiderman, B., 1998. Designing the User Interface. Addison-Wesley, Reading, MA.
Skinner, B., 1938. The Behaviour of Organisms. Appleton, New York.
Slavin, R., 1991. Educational Psychology: Theory into Practice. Prentice Hall, NJ.
Smith, P., 1996. Towards a practical measure of hypertext usability. Interacting with Computers 8 (4), 365–381.
Strauss, A., Corbin, J., 1990. Basics of Qualitative Data. Sage, London.
Sutcliffe, A., 1995. Human–Computer Interface Design. MacMillan, Basingstoke.
Tullis, T., 1986. In: Proceedings of the British Computer Society Human–Computer Interaction Specialist Group, pp. 604–614.
Von Glasenfeld, E., 1984. Radical constructivism. In: Watzlawick, P. (Ed.), The Invented Reality. Harvard University, Cambridge, MA.
Vygotskii, L., 1962. Thought and Language. MIT Press, Cambridge, MA.
Wharton, C., Bradford, J., Jeffries, R., Franzke, M., 1992. In: Proceedings of the ACM CHI '92 Conference on Human Factors in Computing Systems. ACM, New York, pp. 381–388.
Whiteside, J., Jones, S., Levy, P., Wixon, D., 1985. In: Proceedings of CHI '85, San Francisco, pp. 185–191.
Wilson, M., Barnard, P., MacLean, A., 1990. In: Cognitive Ergonomics: Understanding, Learning and Designing Human–Computer Interaction. Academic Press, London, pp. 151–172.

Elsevier Science B.V.
TI - A grounded theory approach to modelling learnability of hypermedia authoring tools
JF - Interacting with Computers
DO - 10.1016/S0953-5438(02)00021-8
DA - 2002-10-01
UR - https://www.deepdyve.com/lp/oxford-university-press/a-grounded-theory-approach-to-modelling-learnability-of-hypermedia-uGRReZ76ET
SP - 547
EP - 574
VL - 14
IS - 5
DP - DeepDyve
ER -