Like me! Analyzing the 2012 presidential candidates’ Facebook pages
Jenny Bronstein
2013 Online Information Review
doi: 10.1108/OIR-01-2013-0002
Purpose – The present study aims to report the findings of a qualitative and quantitative content analysis of the Facebook pages of the two presidential candidates. Design/methodology/approach – The sample contained 513 posts collected during the last three months of the 2012 US presidential election. The analysis of the candidates’ pages consisted of three phases: the identification of the different elements of the Aristotelian language of persuasion, the identification of the subjects that appear in the posts, and the identification of additional roles that the Facebook pages play in the campaigns. Findings – Findings show that both candidates used an emotional and motivational appeal to create social capital and to present a personal image that revealed very little of their personal lives. Statistical analysis shows that the numbers of comments and likes given to the posts were influenced by the element of persuasion used in the posts. Results show that campaigns wanted to retain control of the message displayed on the pages by posting information on a small number of non‐controversial subjects. Finally, the content analysis revealed that the Facebook pages were used for fund‐raising purposes and for the mobilization of supporters. The Facebook pages of both candidates present an alternative way to do politics, called fandom politics, that is based not on logic or reason but on the affective sensibility of the audiences, discouraging dissent and encouraging affective allegiances between the candidate and his supporters. Originality/value – This study presents an innovative way of analyzing the use of social media sites as a tool for the dissemination of political information and reveals the utilization of these media for the creation of social and economic capital by politicians.
Social media and scholarly reading
Carol Tenopir; Rachel Volentine; Donald W. King
2013 Online Information Review
doi: 10.1108/OIR-04-2012-0062
Purpose – The purpose of this paper is to examine how often university academic staff members use and create various forms of social media for their work and how that use influences their use of traditional scholarly information sources. Design/methodology/approach – This article is based on a 2011 academic reading study conducted at six higher learning institutions in the United Kingdom. Approximately 2,000 respondents completed the web‐based survey. The study used the critical incident of academics’ last reading to gather information on the purpose, outcomes, and values of scholarly readings and access to library collections. In addition, academics were asked about their use and creation of social media as part of their work activities. The authors looked at six categories of social media (blogs, videos/YouTube, RSS feeds, Twitter feeds, user comments in articles, and podcasts), plus an ‘other’ category. This article focuses on the influence of social media on scholarly reading patterns. Findings – Most UK academics use one or more forms of social media for work‐related purposes, but creation is less common. Frequency of use and creation is not as high as might be expected, with academics using or creating social media occasionally rather than regularly. There are some differences in use or creation based on demographic factors, including discipline and age. The use and creation of social media does not adversely affect the use of traditional scholarly material, and high‐frequency users or creators of social media read more scholarly material than others. Originality/value – This paper illustrates that academics who are engaged with traditional materials for their scholarly work are also embracing various forms of social media to a higher degree than their colleagues. This suggests that social media tools could be a good addition to traditional forms of scholarly content as a way to promote academic growth. Social media is not replacing traditional scholarly material, but rather is enhancing its use.
A comparative analysis of the search feature effectiveness of the major English and Chinese search engines
Jin Zhang; Wei Fei; Taowen Le
2013 Online Information Review
doi: 10.1108/OIR-07-2011-0099
Purpose – The purpose of this paper is to investigate the effectiveness of selected search features in the major English and Chinese search engines and to compare the search engines’ retrieval effectiveness. Design/methodology/approach – The search engines Google, Google China, and Baidu were selected for this study. Common search features such as title search, basic search, exact phrase search, PDF search, and URL search were identified and used. Search results from using the five features in the search engines were collected and compared. One‐way ANOVA and regression analysis were used to compare the retrieval effectiveness of the search engines. Findings – It was found that Google achieved the best retrieval performance with all five search features among the three search engines. Moreover, Google achieved the best webpage ranking performance. Practical implications – The findings of this study improve the understanding of English and Chinese search engines and the differences between them in terms of search features, and can be used to assist users in choosing appropriate and effective search strategies when they search for information on the internet. Originality/value – The original contributions of this paper are that English and Chinese search engines are compared for retrieval effectiveness in the two different language environments, and that five search features were evaluated, compared, and analysed using the discounted cumulative gain method.
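A note on the ranking measure: the discounted cumulative gain (DCG) method named above scores a ranked results list by summing graded relevance values discounted by rank, so highly relevant pages placed near the top earn more credit. The following is a minimal Python sketch of the standard DCG formula; the function name and the example relevance judgements are illustrative assumptions, not data from the study.

    import math

    def dcg(relevances, k=None):
        # Discounted cumulative gain for a ranked list of graded
        # relevance scores; rank i (1-based) is discounted by log2(i + 1).
        if k is not None:
            relevances = relevances[:k]
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

    # Hypothetical graded judgements (3 = highly relevant .. 0 = not relevant)
    # for the first ten results from two engines; the values are made up.
    engine_a = [3, 3, 2, 2, 1, 0, 1, 2, 0, 1]
    engine_b = [3, 2, 2, 1, 1, 1, 0, 0, 1, 0]

    print(f"DCG@10 engine A: {dcg(engine_a, 10):.2f}")
    print(f"DCG@10 engine B: {dcg(engine_b, 10):.2f}")

Because the discount penalizes relevant documents that appear lower in the list, a measure of this kind lets the study compare both retrieval effectiveness and webpage ranking performance.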
E‐commerce websites for developing countries – a usability evaluation framework
Layla Hasan; Anne Morris; Steve Probets
2013 Online Information Review
doi: 10.1108/OIR-10-2011-0166
Purpose – The purpose of this paper is to develop a methodological usability evaluation approach for e‐commerce websites in developing countries. Design/methodology/approach – A multi‐faceted usability evaluation of three Jordanian e‐commerce websites was used, where three usability methods (user testing, heuristic evaluation and web analytics) were applied to the sites. Findings – A four‐step approach was developed to facilitate the evaluation of e‐commerce sites, mindful of the advantages and disadvantages of the methods used in identifying specific usability problems. Research limitations/implications – The approach was developed and tested using Jordanian users, experts and e‐commerce sites. The study compared the ability of the methods to detect problems that were present; however, usability issues not present on any of the sites could not be considered when creating the approach. Practical implications – The approach helps e‐commerce retailers evaluate the usability of their websites and understand which usability method(s) best match their needs. Originality/value – This research proposes a new approach for evaluating the usability of e‐commerce sites. A novel aspect is the use of web analytics (Google Analytics software) as a component in the usability evaluation in conjunction with heuristics and user testing.
A longitudinal study of HotMap web search
Orland Hoeber
2013 Online Information Review
doi: 10.1108/OIR-09-2011-0153
Purpose – HotMap web search was designed to support exploratory search tasks by adding lightweight visual and interactive features to the commonly used list‐based representation of web search results. Although laboratory user studies are the most common method for empirically validating the utility of information visualization and information retrieval systems such as this, it is difficult to determine whether such studies accurately reflect the tasks of real users. This paper aims to address these issues. Design/methodology/approach – A longitudinal user evaluation was conducted in two phases over a ten‐week period to determine how this novel web search interface was being used and accepted in real‐world settings. Findings – Although the interactive features were not used as extensively as expected, there is evidence that the participants did find them useful. Participants were able to refine their queries easily, although most did so manually. Those who used the interactive exploration features were able to effectively discover potentially relevant documents buried deep in the search results list. Subjective reactions regarding the usefulness and ease of use of the system were positive, and more than half of the participants continued to use the system even after the study ended. Originality/value – As a result of conducting this longitudinal study, the author has gained a deeper understanding of how a particular visual and interactive web search interface is used in the real world, as well as of issues associated with resistance to change. These findings may provide guidance for the design, development, and study of next‐generation interfaces for online information retrieval.
Keyword stuffing and the big three search engines
Herbert Zuze; Melius Weideman
2013 Online Information Review
doi: 10.1108/OIR-11-2011-0193
Purpose – The purpose of this research project was to determine how the three biggest search engines interpret keyword stuffing as a negative design element. Design/methodology/approach – This research was based on triangulation between scholarly reporting, search engine claims, SEO practitioners and empirical evidence on the interpretation of keyword stuffing. Five websites with varying keyword densities were designed and submitted to Google, Yahoo! and Bing. Two phases of the experiment were conducted and the response of the search engines was recorded. Findings – Scholars have expressed differing views on spamdexing, characterised by different keyword density measurements in the body text of a webpage. During both phases almost all the test webpages, including the one with a 97.3 per cent keyword density, were indexed. Research limitations/implications – Only the three biggest search engines were considered, and monitoring was done for a set time only. The claims that high keyword densities will lead to blacklisting have been refuted. Originality/value – Websites should be designed with high‐quality, well‐written content. Even though keyword stuffing is unlikely to lead to search engine penalties, it could deter human visitors and reduce website value.
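Keyword density, the variable the test pages manipulated, is simply the share of words in a page's body text that match the target keyword. A minimal Python sketch of this common measure follows; the function and the sample snippet are illustrative assumptions, not the study's actual test pages.

    import re

    def keyword_density(body_text, keyword):
        # Per cent of words in the body text equal to the keyword,
        # the usual density measure discussed in the SEO literature.
        words = re.findall(r"[\w']+", body_text.lower())
        if not words:
            return 0.0
        hits = sum(1 for word in words if word == keyword.lower())
        return 100.0 * hits / len(words)

    # Illustrative stuffed snippet: 4 of 10 words are "cheap" -> 40%.
    stuffed = "cheap flights cheap flights book cheap flights cheap flights now"
    print(f"{keyword_density(stuffed, 'cheap'):.1f}% density for 'cheap'")

A page approaching the study's 97.3 per cent density would consist almost entirely of repetitions of the keyword itself, which makes its indexing by all three engines the notable result.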
Do natural language search engines really understand what users want? A comparative study on three natural language search engines and Google
Nadjla Hariri
2013 Online Information Review
doi: 10.1108/OIR-12-2011-0210
Purpose – The main purpose of this research is to determine whether the performance of natural language (NL) search engines in retrieving exact answers to NL queries differs from that of keyword‐searching search engines. Design/methodology/approach – A total of 40 natural language queries were posed to Google and three NL search engines: Ask.com, Hakia and Bing. The first results pages were compared in terms of whether they retrieved exact answer documents, whether those documents appeared at the top of the results, and the precision of exact answer and relevant documents. Findings – Ask.com retrieved exact answer document descriptions at the top of the results list in 60 percent of searches, which was better than the other search engines, but the mean number of exact answer top‐of‐list documents for the three NL search engines (20.67) was slightly less than Google's (21). There was no significant difference between the precision of Google and the three NL search engines in retrieving exact answer documents for NL queries. Practical implications – The results imply that all the NL and keyword‐searching search engines studied in this research mostly employ similar techniques based on the keywords of the NL queries, which is far from semantic searching and understanding what the user wants when searching with NL queries. Originality/value – The results shed light on the claims of NL search engines regarding semantic searching of NL queries.
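Precision here is the standard retrieval measure: the proportion of documents on the first results page judged relevant, or judged to contain the exact answer. A minimal sketch follows, with hypothetical counts rather than the study's data.

    def precision(relevant_retrieved, total_retrieved):
        # Fraction of retrieved documents that were judged relevant.
        return relevant_retrieved / total_retrieved if total_retrieved else 0.0

    # Hypothetical first-page judgements for one NL query (10 results shown).
    print(f"Exact-answer precision: {precision(3, 10):.2f}")  # 3 of 10 exact
    print(f"Relevant precision:     {precision(7, 10):.2f}")  # 7 of 10 relevant

Averaging such per-query figures over the 40 queries yields the engine-level precision values the study compares.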
Users’ experiences and perceptions on using two wiki platforms for collaborative learning and knowledge management
Samuel Kai Wah Chu; Felix Siu; Michael Liang; Catherine M. Capio; Wendy W.Y. Wu
2013 Online Information Review
doi: 10.1108/OIR-03-2011-0043
Purpose – This study aims to examine users’ experiences and perceptions associated with the use of two wiki variants in the context of collaborative learning and knowledge management in higher education. Design/methodology/approach – Participants included two groups of postgraduate students from a university in Hong Kong who used MediaWiki (n=21) and TWiki (n=16) in completing course requirements. Using a multiple case study approach and a mixed methods research design, the authors downloaded data logs from the wiki platforms and analysed their contents. Students’ perceptions were examined through a survey. Findings – The findings indicate that both wikis were regarded as suitable tools for group projects, and that they improved group collaboration and work quality. Both wikis were also viewed as enabling tools for knowledge construction and sharing. Research limitations/implications – This study provides insights that may inform the decisions of educators who are considering the use of wikis in their courses as a platform to enhance collaborative learning and knowledge management. Originality/value – Previous research has shown that wikis can be effectively used in education. However, there are a number of wiki variants and it may be difficult to identify which variant would be the best choice, and there is a dearth of research comparing the effectiveness of different types of wikis. This study compares two wiki variants on a number of outcomes, which may provide some insights to teachers who are in the process of selecting an appropriate wiki for teaching and learning.
ProQuest's Graduate Education Program (GEP) – a powerful, free database and software package for LIS educators and students worldwide
Peter Jacso
2013 Online Information Review
doi: 10.1108/OIR-04-2013-0068
Purpose – The purpose of this paper is to highlight the main features of the components of ProQuest's giga package of databases and software services for LIS faculty and students. These components allow complimentary access to hundreds of indexing/abstracting, directory and full‐text databases, to RefWorks, the most sophisticated reference management program, and to SUMMON, a powerful digital resource discovery program. Design/methodology/approach – This phase of the research focused on evaluating the largest module of GEP, which offers 41 databases with more than 200 million records (half of them full‐text documents) on the new ProQuest software platform. The paper presents the major content and software features of this module. Findings – The single module, GEP‐41, is an important contribution to LIS education, providing LIS faculty and librarians free access to many databases covering the LIS and LIS‐related fields, including the new ProQuest Library and Information Science database with more than 1.2 million items. The other modules of GEP extend the coverage to databases appropriate for LIS faculty and students interested in various tracks of librarianship. This project will certainly benefit ProQuest itself in the long run. From the perspective of the primary beneficiaries, LIS professors and students, the rich infrastructure of this project offers unprecedented opportunities for a digital renaissance in every aspect of LIS education and research. Originality/value – This service, highly relevant for LIS education worldwide, was released in late 2012, and research papers have not yet been published about it. The paper focuses on the measurable, quantitative traits of the largest component of the service.