Editors’ Note: The final assignment for the Editorial Analysis course of the 2013–2014 class of David Yellin College’s Program in Editing and Editorial Analysis entailed the preparation of the following article for publication. In addition to standard content and copy editing, the assignment involved rigorous editorial guidelines, including:
- Reducing the text from 6533 to no more than 4000 words,
- Preparing an Abstract of 250 words or less,
- Minimizing the use of block quotes, which were used extensively in the original,
- Converting footnotes to endnotes,
- Formatting the article according to a specially prepared style sheet,
- Embedding hyperlinks in the titles of periodicals and in the names of institutions, companies, and governmental bodies mentioned in the text, linking to the homepages of these entities.
The submission that best conformed to these guidelines would be the version of the article to be published here.
Accordingly, the faculty editors are pleased to publish “Information Pathologies of the Internet and Web” as researched and written by Yosef Gotlieb and edited by Shiri Forer, a graduate of the 2013–2014 class.
Dr. Yosef Gotlieb
David Yellin College of Education, Jerusalem, Israel
Abstract
With approximately a third of the world’s population currently connected to the Internet and the amount of content available on the WWW steadily increasing, this network plays a central role in today’s information society. However, the Web’s constant expansion and its popularity have resulted in persistent difficulties, referred to as information pathologies. These can be divided into three sets of issues relating to: (a) System Infrastructure, pertaining to hardware malfunctions and connectivity issues; (b) Content, which is of unspecified authorship and characterized by unreliable information; and (c) Human and Institutional Factors, relating to the personal, institutional, management, and economic burdens deriving from the intensive use of digital technologies. Attempts to overcome these pathologies require an understanding of their roots: the Quantitative, relating to the Web’s rampant growth, link decay, and content duplication; and the Structural-Organizational, referring to the lack of structure and organization that produces difficulties in accessing and extracting reliable and relevant information. To avoid stress to people and systems, policymakers will have to favor technologies that enable the effective use of the Web and strengthen human capacity for the judicious and measured use of information.
Keywords: Internet, information society, information workers, information overload, digital environment
Correspondence: Yosef Gotlieb, Text and Publishing Studies, School of Continuing Studies, David Yellin College of Education, PO Box 3578, Beit HaKerem, Jerusalem 9103501.
The Internet’s Hidden Challenge
Between 2005 and 2011, the number of people using the Internet doubled, and approximately a third of the world’s population is currently connected to the network (ITU, 2012). Further, the amount of data available on the World Wide Web (WWW, the Web) is steadily increasing, with over 14 billion pages indexed to date (WorldWideWebSize.com, 2013). The network’s constant expansion has resulted in persistent difficulties, referred to as information pathologies. In what follows, I define the nature of these pathologies and their impact.
The term information pathologies is attributed to H.L. Wilensky (1967) and his work on organizational intelligence. Wolfgang Scholl (1999) defined them as “avoidable failures of distributed information processing,” in which “decision-relevant information” is not properly produced or procured, not accurately transmitted, or not accurately “applied in the decision-making process.” Bawden and Robinson (2009) applied the term with reference to the Internet, specifically to the Web.
Descriptions of one of these pathologies, information overload, go back at least as far as the pre-digital world of the sixties, but it has ballooned with the increasing use of the Web (Edmunds and Morris, 2000; Heylighen, 2002; Shenk, 2003; Bawden and Robinson, 2009; Hemp, 2009; Spira and Burke, 2009; Information Overload Research Group, 2012). According to The Economist (2010), information is growing at a 60 percent compounded annual rate, creating an overwhelming burden on human processors and data management systems. According to Basex Inc., a knowledge economy consultancy, information overload alone is a “massive problem” that costs the US business sector $900 billion per year (Spira and Burke, 2009).
Pathologies and Digital Vulnerabilities
Information overload, although most prevalently described, is not the only pathology that users of the Internet must contend with today. Across the Human-Computer Interface (HCI), three sets of information pathologies can be distinguished: (a) System Infrastructure, (b) Content, and (c) Human and Institutional Factors.
System Infrastructure
These pathologies pertain to hardware malfunctions and connectivity issues, and can be divided into six subtypes: (1) Disruptions owing to natural disasters – e.g., the outages that occurred after Hurricane Sandy struck Greater New York City in October 2012 (Cowie, as quoted in MacManus, 2012); (2) Disruptions owing to hardware or software failures – as occurred on October 26, 2012, when Google App Engine™, Tumblr™, and Dropbox™ reported difficulties and a sharp rise in packet loss was observed in North America (Newton, 2012); (3) Disruptions due to accidents – as in February 2012, when two underwater telecom cables were severed, eliminating connections in Africa and the Middle East (Moore, 2012); (4) Cybercrime, cyberterrorism, and cyberwarfare (Wilson, 2008; Lewis, 2012) – as in the recent attacks on the US banking system (Gorman and Yadron, 2013); (5) The “digital divide” – gaps in Internet accessibility across countries and sectors (Graham, 2011; Warf, 2001; Goodchild, 2000); and (6) Disparities in interconnectedness within the Web deriving from unidirectional or detached linkages from the Web’s “core” to other areas within it.
Content
Henzinger et al. (2003) state that “[t]he Web is full of…unreliable, and indeed contradictory content.” Eppler and Muenzenmayer (2002), Knight and Burn (2005), and Parker et al. (2006) also describe quality-related problems. While there is much anecdotal evidence, it is difficult to find studies documenting these problems (Henzinger et al., 2003).
Among the most frequently raised problems are web spam (Najork, 2009), including content duplication across websites; hyperlink decay (“link rot”), which Bar-Yossef et al. (2004) and Goh and Ng (2007) estimate to be extensive; and the manipulation and inflation of keywords to raise page rank (Beel and Gipp, 2010).
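Link rot of the kind these studies measure can be detected mechanically. As a minimal sketch (the error-status threshold is a simplifying assumption; the cited studies also track redirects and “soft 404” pages, which this ignores), one can probe each URL and flag unreachable hosts and error responses as decayed:

```python
from urllib import request, error

def probe(url, timeout=10):
    """Return the HTTP status code for url, or None if the host is unreachable."""
    try:
        req = request.Request(url, method="HEAD")  # HEAD avoids downloading the body
        with request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except error.HTTPError as e:
        return e.code          # server answered, but with an error status
    except (error.URLError, OSError):
        return None            # DNS failure, refused connection, timeout, etc.

def is_decayed(status):
    """Treat unreachable hosts and 4xx/5xx responses as decayed links."""
    return status is None or status >= 400
```

A survey would then simply map `probe` over a corpus of extracted hyperlinks and report the fraction for which `is_decayed` is true.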
Klein (2002) differentiates between five types of content-related deficiencies: (1) Accuracy (e.g. information discrepancies among sites); (2) Relevance (e.g. irrelevant hits or biased search results); (3) Amount of Data (e.g. too much or too little information); (4) Completeness (e.g. lack of depth or breadth, incomplete data sets); and (5) Timeliness (e.g. information that is not current, data that is reported as available but that is inaccessible).
Parker et al. (2006) have identified a consensus in relevant studies recommending nine variables for measuring content quality: Accessibility, Timeliness, Accuracy, Relevance, Believability, Completeness, Objectivity, Appropriateness, and Representation.
Human and Institutional Factors
These pathologies relate to the damage caused to people and systems, and the institutional and economic burdens deriving from the information society’s intensive use of digital technologies. Two subsets can be distinguished:
Negative Impacts on Individuals
Edmunds and Morris (2000) comment that “an abundance of information, instead of better enabling a person to do their job, threatens to engulf and diminish his or her control over the situation.”
Impacts on individuals include disorientation, which is defined by Sheard and Ceddia (2004) as “the perception of being lost…in electronic environments.” Munzner and Burchard (1995) explain: “The Web is so interconnected and huge that it is difficult to establish a mental model of its structure.”
Firat and Kuzu (2011) describe disorientation as related to cognitive overload, which Kirsh (2000), a cognitive scientist, attributes to “too much information…constant multi-tasking and interruption…” He writes of contemporary work spaces as “ecologies saturated with [information] overload” manifested by interruptions of work processes.
Shenk (2003), Bawden and Robinson (2009), Hemp (2009), and others refer to techno-stress and attention deficits as constants for today’s users of information technology, and there is growing evidence suggesting that these stresses contribute to psychological problems, including depression, anxiety, and addictive behavior (Block, 2008).
Impacts on cognition and memory have also been cited. Carr (2008) speculated that Internet use is eroding memory and other cognitive capacities, and a 2011 study reported in Science (Sparrow, Liu, and Wegner) suggests that processes of human memory are adapting to the advent of new computing and communication technology: we are becoming symbiotic with our computer tools, growing into interconnected systems that remember less by knowing information than by knowing where the information can be found.
Physical health issues surrounding Internet use have been discussed widely in the popular press. The Journal of Medical Internet Research, a scientific publication, publishes “studies evaluating the impact of Internet/social media use…on public health, the health care system and policy” (2013).
The Basex report (Spira and Burke, 2009) notes that 35 percent of information workers suffer physical problems such as carpal tunnel syndrome and eye strain. However, Andersen and Mikkelsen (2010) conclude that computer work “is associated with pain problems now and then” but that “the risk of more persistent or chronic disorders is small…” The US Department of Labor’s Occupational Safety & Health Administration (OSHA, 2012) states that it “has no specific standards that apply to computer workstations…” and makes no reference to known ailments.
Negative Impacts on Organization and Management Systems
The Basex report (Spira and Burke, 2009) indicates that disruptions from online and other sources consume 28 percent of knowledge workers’ time, translating into 36 billion person-hours per year. Writing in the Harvard Business Review, Paul Hemp (2009) notes that delays in decision-making also derive from the increasingly complex system of information exchanges and have major economic implications.
Other problems pertain to information storage and security, since “ensuring data security and protecting privacy is becoming harder as…information multiplies and is shared ever more widely” (The Economist, 2010).
A five-country survey conducted on behalf of LexisNexis (2010) and a study sponsored by Xerox (Gantz, Boyd and Dowling, 2009) show that demoralization, difficulty concentrating, and an increasing amount of time taken to manage information are common complaints among information workers. Executives also suffer from the information flood, which hits them “particularly hard because…[they] so badly need uninterrupted time to synthesize information…and arrive at good decisions” (Dean and Webb, 2011).
In a comprehensive overview of relevant literature, Eppler and Mengis (2003) analyze the negative impacts on management systems as affecting three areas: (a) Information retrieval, organization and analysis; (b) Decision processes; and (c) Communication processes.
In non-business institutional environments, similar effects are observed. Thomas and Rosenman (2006) state that doctors, too, are inundated with data flow, and Green (2011) writes that individuals in the medical system suffer due to “the inability to consume all of the pertinent information related to their field,” and that “organizational productivity is affected due to misdirected personnel resources….”
In an equally critical context, the military, Corrin (2010) writes that the armed forces are “quickly reaching a point of information saturation” due to incoming data from multiple sources and sensors. Bates (2010) writes that this “strain[s] the cognitive abilities and time requirements of commanders and their staffs” and that the time required “to uncover the bits relevant to their mission” leaves commanders “with less and less [time] to make timely decisions.”
The Roots of the Pathologies
Attempts to reduce or overcome these pathologies require an understanding of their roots. Kleinberg (1999) writes that the “millions of on-line participants…continuously creating hyperlinked content” result in a “global organization [that] is utterly unplanned.” As he suggests, the WWW is problem-laden due to both its exponential growth – the Quantitative Root – and its spontaneous, dynamic entanglement – the Structural-Organizational Root; the two are inextricable.
According to the Open Directory Project (2012), as the Web “continues to grow at staggering rates,” the quality of information derived from automated search engines and commercial directory sites will continue to suffer. Indeed, the Web is so big that “even simple statistics about it are unknown” (Henzinger and Lawrence, 2004), and billions of web pages defy even automated attempts to filter them. The Semantic Web project (Berners-Lee, Hendler, and Lassila, 2001), aimed at making human language more intelligible to computers, is one approach to contending with this situation. However, Börner (2007) writes that only “about 80 percent of data integration and linkage identification might be possible by automated means.” Information overload has become so overwhelming that if data “used to be scarce and therefore valuable…[w]hat is now scarce, and therefore valuable, is the user’s attention” (Huberman and Wu, 2008).
In a seminal talk in the early seventies, the late Nobel laureate Herbert Simon stated that “a wealth of information creates a poverty of attention” (1971). Simon anticipated the dire consequences of overload, predicting that in an “information-rich world, most of the cost [will be] incurred by the recipient,” and that progress will lie not in reading, writing, or storing information faster, but in extracting patterns “so that far less information needs to be read, written or stored” (ibid.).
The Web promotes the free flow of information across borders – physical, political, and social. Berners-Lee, the progenitor of the WWW, and his associates (2001) express this creed when they write: “Decentralization requires compromises: the Web had to throw away the ideal of total consistency…allowing unchecked exponential growth.” Yet that very growth fuels the quantitative root of the Web quagmire. The Web’s expansiveness must be addressed along with the second root of information pathologies, its lack of structure and organization.
As portrayed by Berners-Lee, the “essential property of the World Wide Web is its universality” (ibid.). However, as the Members of the Clever Project (1999) asked, if web pages can be written by individuals of any background or motivation, “[h]ow, then, can one extract from this digital morass high-quality, relevant…information?” They recommended a technique that is still considered useful: hyperlink analysis. However, such analysis is conducted using search engines, which are replete with difficulties, such as commercial or popularity bias (Henzinger, 2007), and in which “structures and relationships [within and among documents] are usually completely hidden” (Dodge, 2005).
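The Clever Project’s hyperlink analysis rested on mutually reinforcing “hub” and “authority” scores: a good authority is pointed to by good hubs, and a good hub points to good authorities. A toy sketch of that iteration (the link graph here is hypothetical, not actual Web data, and production systems add many refinements):

```python
def hits(links, iterations=50):
    """Compute hub and authority scores for a directed link graph.
    `links` maps each page to the list of pages it points to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # A page's authority grows with the hub scores of pages linking to it.
        auth = {p: sum(hub[q] for q in links if p in links[q]) for p in pages}
        # A page's hub score grows with the authority of the pages it links to.
        hub = {p: sum(auth[t] for t in links.get(p, ())) for p in pages}
        # Normalize so scores stay bounded across iterations.
        na = sum(auth.values()) or 1.0
        nh = sum(hub.values()) or 1.0
        auth = {p: v / na for p, v in auth.items()}
        hub = {p: v / nh for p, v in hub.items()}
    return hub, auth
```

On a graph where two pages both link to a third, the third emerges with the highest authority score and the two linkers with equal hub scores, which is the intuition the Clever authors describe.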
Levene (2000) describes another major issue, navigation problems: “when following links users tend to become disoriented” as there is no coordinate system by which they can differentiate between sites on the basis of accessibility, timeliness, accuracy, credibility, completeness, or relevance to their search.
Conclusion
The Internet plays a central role in the information society and is regarded by governments as critical infrastructure (Homeland Security, 2013). However, the WWW does not serve as a model for productive digital networking (Scholl, 1999). To avoid stress to people and systems, policymakers must favor technologies that expedite decision-relevant information in a manner that enables its effective use and strengthens human capacity for the judicious and measured use of information.
Note on Contributor
Dr. Yosef Gotlieb directs the Program in Text and Publishing Studies at David Yellin College of Education in Jerusalem. He is the founder and co-editor of The 21st Century Text.
Notes
1. The Web’s structure has been modeled as consisting of strongly connected components (SCCs) and weakly connected components (WCCs) in the CORE, IN, OUT, and Tendril regions of the Web (Easley and Kleinberg, 2010; Donato et al., 2005; Broder et al., 2000; Nature, 2000); there are varying degrees of accessibility and connectivity between these regions, and the directionality of connections between them is frequently asymmetric. While another study (Carmi et al., 2007) suggests that the Web has a less fragmented topology, how these discontinuities and asymmetries affect the searchability, navigability, and accessibility of Web resources has not yet been adequately studied.
2. Soumen Chakrabarti, Byron Dom, David Gibson, Jon M. Kleinberg, S. Ravi Kumar, Prabhakar Raghavan, Sridhar Rajagopalan, and Andrew Tomkins.
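The strongly connected components underlying the bow-tie model above can be identified algorithmically. As a toy illustration (the graph is hypothetical, not actual Web data), Kosaraju’s two-pass depth-first search finds the SCCs of a directed graph: pages in the same component can all reach one another by following links.

```python
def strongly_connected_components(graph):
    """Kosaraju's two-pass algorithm on a dict adjacency list."""
    nodes = set(graph) | {w for vs in graph.values() for w in vs}
    # Pass 1: order nodes by DFS finish time on the original graph.
    seen, order = set(), []
    def visit(v):
        seen.add(v)
        for w in graph.get(v, ()):
            if w not in seen:
                visit(w)
        order.append(v)
    for v in nodes:
        if v not in seen:
            visit(v)
    # Pass 2: DFS on the reversed graph, taking nodes in reverse finish order;
    # each tree found is one strongly connected component.
    rev = {v: [] for v in nodes}
    for v, ws in graph.items():
        for w in ws:
            rev[w].append(v)
    assigned, components = set(), []
    def collect(v, comp):
        assigned.add(v)
        comp.append(v)
        for w in rev[v]:
            if w not in assigned:
                collect(w, comp)
    for v in reversed(order):
        if v not in assigned:
            comp = []
            collect(v, comp)
            components.append(comp)
    return components
```

In a graph where pages “a” and “b” link to each other and “b” also links to “c”, the algorithm returns {a, b} as one component and {c} as another, mirroring the asymmetric CORE-to-OUT connections the note describes.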
List of References
J. H. Andersen and S. Mikkelsen, “Does Computer Use Pose a Hazard for Future Long-Term Sickness Absence?” Journal of Negative Results in BioMedicine 9: 1(2010) <http://www.jnrbm.com/content/9/1/1> Accessed December 20, 2012.
Z. Bar-Yossef, A. Z. Broder, R. Kumar, and A. Tomkins. “Sic Transit Gloria Telae: Towards an Understanding of the Web’s Decay,” paper presented at WWW 2004 (New York, May 17–22, 2004). Accessed January 23, 2013.
C.T. Bates. The Battle of Cognition against the Tyranny of Information Overload, Report of the Joint Military Operations Department (Newport, Rhode Island: Naval War College, 2010) <http://www.dtic.mil/dtic/tr/fulltext/u2/a525227.pdf> Accessed December 20, 2012.
D. Bawden and L. Robinson, “The Dark Side of Information: Overload, Anxiety and Other Paradoxes and Pathologies,” Journal of Information Science 35: 2 (2009) 180–191.
J. Beel and B. Gipp, “Academic Search Engine Spam and Google Scholar’s Resilience Against it,” The Journal of Electronic Publishing 13: 3 (2010) <http://quod.lib.umich.edu/j/jep/3336451.0013.305?rgn=main;view=fulltext> Accessed January 22, 2012.
T. Berners-Lee, J. Hendler, and O. Lassila, “The Semantic Web,” Scientific American (May 17, 2001) 29–37.
J.A. Block, “Issues for DSM-V: Internet Addiction,” American Journal of Psychiatry 165: 3 (2008) 306–307.
K. Börner, “Scholarly Knowledge and Expertise: Collecting, Interlinking, and Organizing What We Know and Different Approaches to Mapping (Network) Science,” Environment and Planning B: Planning and Design 34 (2007) 808–825.
A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, J. Wiener, “Graph Structure in the Web,” Computer Networks 33: 1–6 (2000) 309–320.
S. Carmi, S. Havlin, S. Kirkpatrick, Y. Shavitt, and E. Shir, “A Model of Internet Topology Using K-Shell Decomposition,” Proceedings of the National Academy of Sciences 104: 27 (2007) 11150–11154.
A. Corrin, “Sensory Overload: Military Is Dealing with a Data Deluge,” Defense Systems (February 4, 2010) <http://defensesystems.com/articles/2010/02/08/home-page-defense-military-sensors.aspx> Accessed December 20, 2012.
D. Dean and C. Webb, “Recovering from Information Overload,” McKinsey Quarterly (January 2011) <http://www.mckinseyquarterly.com/Recovering_from_information_overload_2735> Accessed January 23, 2013.
M. Dodge, Information Maps: Tools for Document Exploration. CASA Working Paper 94 (London: The Bartlett Centre for Advanced Spatial Analysis, University College of London, 2005).
D. Donato, S. Leonardi, S. Millozzi, and P. Tsaparas, “Mining the Inner Structure of the Web Graph,” Paper presented at 8th International Workshop on the Web and Databases (WebDB) (Baltimore, June 16–17, 2005).
D. Easley and J. Kleinberg, The Structure of the Web. Networks, Crowds, and Markets: Reasoning about a Highly Connected World (Cambridge, UK: Cambridge University Press, 2010).
Economist, The, “All Too Much,” The Economist (February 25, 2010) <http://www.economist.com/node/15557421> Accessed December 17, 2012.
A. Edmunds and A. Morris, “The Problem of Information Overload in Business Organizations: A Review of the Literature,” International Journal of Information Management 20 (2000) 17–28.
M. Eppler and J. Mengis, A Framework for Information Overload Research in Organizations: Insights from Organization Science, Accounting, Marketing, MIS, and Related Disciplines, Paper No. 1/2003 (September 2003), Facoltà di Scienze della Comunicazione, Istituto per la Comunicazione Aziendale, Università della Svizzera Italiana.
M. J. Eppler and P. Muenzenmayer, “Measuring Information Quality in the Web Context: A Survey of State-of-the-Art Instruments and an Application Method,” Proceedings of the Seventh International Conference on Information Quality (Cambridge, Massachusetts, 2002). <http://mitiq.mit.edu/ICIQ/Documents/IQ%20Conference%202002/Papers/MeasureInfoQualityinTheWebContext.pdf> Accessed December 19, 2012.
M. Firat and A. Kuzu, “Semantic Web for E-Learning Bottlenecks: Disorientation and Cognitive Overload,” International Journal of Web and Semantic Technology 2: 4 (2011) 55–65.
J. Gantz, A. Boyd, and S. Dowling, “Cutting the Clutter: Tackling Information Overload at the Source” (Framingham: Massachusetts: IDC, sponsored by the Xerox Corporation, 2009) <http://www.xerox.com/assets/motion/corporate/pages/programs/information-overload/pdf/Xerox-white-paper-3-25.pdf> Accessed on December 20, 2012.
N. Carr, “Is Google Making Us Stupid? What the Internet Is Doing to Our Brains,” The Atlantic (July–August 2008) <http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/> Accessed December 19, 2012.
D.H. Goh and P.K. Ng, “Link Decay in Leading Information Science Journals,” Journal of the American Society for Information Science and Technology 58: 1 (2007) 15–24.
M. Goodchild, “Communicating Geographic Information in a Digital Age,” Annals of the Association of American Geographers 90: 2 (2000) 344–355.
S. Gorman and D. Yadron, “Banks Seek U.S. Help on Iran Cyberattacks,” The Wall Street Journal Europe Edition (January 16, 2013) <http://online.wsj.com/article/SB10001424127887324734904578244302923178548.html> Accessed January 20, 2013.
M. Graham, “Time Machines and Virtual Portals: The Spatialities of the Digital Divide,” Progress in Development Studies 11 (2011) 211–227.
A. Green, “Information Overload in Healthcare Management: How the READ Portal Is Helping Healthcare Managers,” Journal of the Canadian Health Libraries Association 32 (2011) 173–176 <http://pubs.chla-absc.ca/doi/pdf/10.5596/c11-041> Accessed December 20, 2012.
P. Hemp, “Death by Information Overload,” Harvard Business Review (September 2009) <http://hbr.org/2009/09/death-by-information-overload/ar/1> Accessed January 23, 2013.
M. Henzinger, “Search Technologies for the Internet,” Science 317: 5837 (2007) 468–471.
M. Henzinger and S. Lawrence, “Extracting Knowledge from the World Wide Web,” Proceedings of the National Academy of Sciences 101: Supplement 1 (April 6, 2004) <http://www.pnas.org/content/101/suppl_1/5186.full.pdf>.
M. Henzinger, R. Motwani, and C. Silverstein, “Challenges in Web Search Engines,” Paper presented at the Eighteenth International Joint Conference on Artificial Intelligence (Acapulco, Mexico, August 9–15, 2003).
F. Heylighen, “Complexity and Information Overload in Society: Why Increasing Efficiency Leads to Decreasing Control,” (2002) <http://pespmc1.vub.ac.be/Papers/Info-Overload.pdf> Accessed January 3, 2013.
Homeland Security, US Department of, Information Technology Sector, National Infrastructure Protection Plan (Washington, D.C.: US Department of Homeland Security, 2013) <http://www.dhs.gov/xlibrary/assets/nppd/nppd-ip-information-technology-snapshot-2011.pdf> Accessed January 21, 2013.
B. Huberman and F. Wu, “The Economics of Attention: Maximizing User Value in Information-Rich Environments,” Advances in Complex Systems 11: 4 (2008) 487–496.
International Telecommunication Union (ITU), “Individuals Using the Internet per 100 Inhabitants, 2001–2011 (Table),” ITU World Telecommunication/ICT Indicators Database (2012).
D. Kirsh, “A Few Thoughts on Cognitive Overload,” Intellectica 1: 30 (2000) 19–51 <http://www.interruptions.net/literature/Kirsh-Intellectica00-30.pdf> Accessed December 19, 2012.
B.D. Klein, “When Do Users Detect Information Quality Problems on the World Wide Web?” Paper presented at Human-Computer Interaction Studies in MIS, Eighth Americas Conference on Information Systems (Dallas, Texas, August 9–11, 2002) <http://sighci.org/amcis02/RIP/Klein.pdf> Accessed January 23, 2013.
J. M. Kleinberg, “Authoritative Sources in a Hyperlinked Environment,” Journal of the ACM 46: 5 (1999) 604–632.
S. Knight and J. Burn, “Developing a Framework for Assessing Information Quality on the World Wide Web,” Informing Science Journal 8 (2005) 159–174 <http://inform.nu/Articles/Vol8/v8p159-172Knig.pdf> Accessed December 19, 2012.
M. Levene, “The Navigation Problem in the World-Wide-Web,” Paper presented at the 24th Annual Conference of the German Classification Society (Passau, Germany, March 2000) <http://www.dcs.bbk.ac.uk/~mark/download/gfkl_web.pdf> Accessed on Jan. 3, 2013.
J.A. Lewis, Significant Cyber Incidents Since 2006 (Washington, D.C.: Center for Strategic and International Studies, 2012) <http://csis.org/program/significant-cyber-events> Accessed December 17, 2012.
LexisNexis, “International Workplace Productivity Survey: White Collar Highlights” (LexisNexis, 2010) <http://www.multivu.com/players/English/46619-LexisNexis-International-Workplace-Productivity-Survey/flexSwf/impAsset/document/34ef84f1-beaa-4a48-98c5-0ea93ceae0cb.pdf> Accessed January 3, 2013.
C. MacManus, “See Hurricane Sandy’s Impact on the Internet,” CNET (November 2, 2012) <http://news.cnet.com/8301-17938_105-57544032-1/see-hurricane-sandys-impact-on-the-internet/> Accessed January 22, 2013.
Medical Internet Research, The Journal of, “Focus and Scope,” <http://www.jmir.org/about/editorialPolicies#focusAndScope> Accessed December 19, 2012.
Members of the Clever Project (S. Chakrabarti, B. Dom, S. R. Kumar, P. Raghavan, S. Rajagopalan, A. Tomkins, J. M. Kleinberg, and D. Gibson), “Hypersearching the Web,” Scientific American (June 1999) 54–60.
S. Moore, “Ship Accidents Sever Data Cables off East Africa,” Wall Street Journal (February 28, 2012) <http://online.wsj.com/article/SB10001424052970203833004577249434081658686.html> Accessed November 26, 2012.
T. Munzner and P. Burchard, “Visualizing the Structure of the World Wide Web in 3D Hyperbolic Space,” Proceedings of the Virtual Reality Modelling Language, special issue of Computer Graphics, 33–38 (Los Angeles: August 6–11, 1995).
M. Najork, “Web Spam Detection,” (Mountain View, California: Microsoft Research, 2009). <http://research.microsoft.com/pubs/102938/eds-webspamdetection.pdf> Accessed December 24, 2012.
Nature, “The Web Is a Bow Tie,” Nature 405: 113 (May 11, 2000) <http://www.nature.com/nature/journal/v405/n6783/full/405113a0.html> Accessed December 17, 2012.
C. Newton, “Outages Hit Google App Engine, Dropbox, Tumblr, and More,” CNET (October 26, 2012) <http://news.cnet.com/8301-1023_3-57541195-93/outages-hit-google-app-engine-dropbox-tumblr-and-more/> Accessed December 17, 2012.
Occupational Safety & Health Administration, US Department of Labor, Computer Workstations (Washington, D.C.: Occupational Safety & Health Administration, US Department of Labor, 2012) <http://www.osha.gov/SLTC/computerworkstation/index.html> Accessed December 2012.
M.B. Parker, V. Moleshe, C. De la Harpe, and G.B. Wills, “An Evaluation of Information Quality Frameworks for the World Wide Web,” Paper presented at the 8th Annual Conference on WWW Applications (Bloemfontein, Free State Province, South Africa, September 6–8, 2006). <http://www.researchgate.net/publication/39994329_An_evaluation_of_Information_quality_frameworks_for_the_World_Wide_Web> Accessed December 19, 2012.
W. Scholl, “Restrictive Control and Information Pathologies in Organizations,” Journal of Social Issues 55: 1 (1999) 101–118.
D. Shenk, “Concept of Information Overload,” in D. H. Johnston, ed., Encyclopedia of International Media and Communications, Vol. 2 (Amsterdam: Elsevier Science, 2003) <http://www.academia.edu/1059861/Information_Overload_Concept_of> Accessed January 3, 2013.
J. Sheard and J. Ceddia, “Conceptualization of the Web and Disorientation” (Lismore NSW, Australia: Southern Cross University, 2004). <http://ausweb.scu.edu.au/aw04/papers/refereed/ceddia/paper.html> Accessed January 4, 2013.
H. Simon, “Designing Organizations for an Information-Rich World,” in M. Greenberger, ed., Computers, Communications, and the Public Interest (Baltimore: The Johns Hopkins Press, 1971).
B. Sparrow, J. Liu, and D. M. Wegner, “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips,” Science 333: 6043 (2011) 776–778 <http://www.wjh.harvard.edu/~wegner/pdfs/science.1207745.full.pdf> Accessed January 4, 2013.
J.B. Spira and C. Burke, Intel’s War on Information Overload: A Case Study (New York: Basex, 2009) <http://bsx.stores.yahoo.net/inwaroninov.html> Accessed July 23, 2012.
S. M. Thomas and D. J. Rosenman, “Information Overload,” The Hospitalist (March 2006) <http://www.the-hospitalist.org/details/article/255775/Information_Overload.html> Accessed December 20, 2012.
B. Warf, “Segueways into Cyberspace: Multiple Geographies of the Digital Divide,” Environment and Planning B 28: 1 (2001) 3–19.
H.L. Wilensky, Organizational Intelligence: Knowledge and Policy in Government and Industry (New York: Basic Books, 1967).
C. Wilson, Botnets, Cybercrime, and Cyberterrorism: Vulnerabilities and Policy Issues for Congress, Report for Congress (Washington, D.C.: Congressional Research Service, 2008) <http://www.dtic.mil/dtic/tr/fulltext/u2/a477642.pdf> Accessed December 17, 2012.
WorldWideWebSize.com (2013) <http://www.worldwidewebsize.com/> Accessed January 21, 2013.