ResearchScapes

Discussions on the art and craft of research


The Rise and Fall of Authority, or, is Wikipedia an Encyclopedia, or Stone Soup?

Despite the fact that Wikipedia was born almost two decades ago, despite the fact that many libraries (mine included) have cancelled all other print and digital general encyclopedias and use it by preference, despite the fact that an increasing number of academics have actually found interesting uses for it within their classrooms – Wikipedia remains controversial. There are of course questions about bias and accuracy in any crowd-sourced site. But a short look into the history of encyclopedic works should alleviate some fears.

Wikipedia first came into being in 2001. The Internet itself had already grown beyond the “primordial swamp” that Paul Evans Peters called it in 1990 (discussion at the Institute on Collection Development for the Electronic Library, April 29–May 2, 1990), but it was still a place that held a wild mix of legitimate, questionable, and not-so-legitimate sources. Graphical user interfaces were relatively new, search engines were unsophisticated, and there was little consistency in who was making digital materials available and what it was they were offering the public.
To complicate things, the wiki platform confused many people in the academic world. Wikipedia was created by what seemed to be a world-wide group of interested readers, readers that might or might not have any recognized authority about what they wrote. This made Wikipedia seem amateurish and intellectually suspect.

To put it very simply, Wikipedia seemed to have little claim to any intellectual authority. The term “crowdsourcing” had not yet been coined; to the serious eye, Wikipedia was based on unvetted volunteerism. It was a kind of “stone soup,” where people were adding, trading off, editing each other, reporting inappropriate posts, always creating something with no obvious recipe.

Wikipedia’s main competition, of course, was the venerable Encyclopedia Britannica.

Photo by Valentin on Unsplash

Between 1768 and 1771, the first edition of the Encyclopedia Britannica was compiled in Edinburgh and published in three volumes. As an early English-language encyclopedia, it quickly became an important title in the ever-increasing number of published reference works. It was heavily edited, and articles came to be written and signed by well-known scholars. As the scope of scholarship expanded rapidly, so did the Britannica’s size. When the 11th edition was published in 1910, it had increased to a whopping twenty-nine volumes. With that edition, its publication passed to the United States.

Society had come to look at encyclopedias in two ways. First, they were a convenient way of holding large amounts of information, paper cans to put facts and knowledge in. But an equally significant characteristic was that they were also a way of talking about that knowledge in an authoritative way.

So our crowd-sourced, stone-soup encyclopedia, Wikipedia, was born into a world that, on the surface, already had a hugely historic and effective title dominating the encyclopedic landscape.

But did it really?

The value of a reference work lies in its timeliness, its accuracy, and its authority. By 2001, even Britannica’s conservative editorship had allowed digital publication. But they maintained tight control over authorship and editing, leading, of course, to an issue with timeliness. Wikipedia, although its sourcing and authorship were distributed, was able to add, update, and correct entries very quickly, literally on an hourly basis.

And that leads to the second important aspect of the value of a reference work: accuracy. The founders and serious participants of Wikipedia quickly developed mechanisms by which entered articles could be flagged, corrected, and objected to. Pieces of missing information could be added, explanations could be expanded, and articles could be removed. And although all of that remained the basis for the greatest objections to Wikipedia, the organization and its world-wide community soldiered on. Finally, in 2005, the highly respected journal Nature published an article in which the two titles were put head to head on the question of accuracy. And although Wikipedia was found to have a few more errors in the selected articles, it was determined that both Britannica and Wikipedia had errors. (Nature 438, 900–901 (15 December 2005))

There is also the ever-important question of “authority.” For although Britannica’s reputation had been diminished somewhat when its editorship moved to the United States, that could be regarded as an issue of intellectual snobbery. The editors remained committed to finding the best possible authors for articles. Wikipedia, of course, was dependent on the intellectual efforts of unvetted volunteers.

But, against our belief in authority, we must place cultural and temporal bias. In the 11th edition of the Britannica, in the article on “The Negro,” the scholar Thomas Joyce writes, “Mentally the negro is inferior to the white.” Clearly such a statement would never appear in the current edition of any decent encyclopedia. But I put it here to suggest that, at the time anything is published, an author and a few editors may not have the cultural distance to see their own bias.

So what can be our conclusion on Wikipedia?

Crowdsourcing clearly has its dangers, and therefore its detractors. But faith in unseen authority in edited reference works also has its dangers. Both types of sources inevitably reflect cultural biases and, frequently, have factual errors.

How do we teach students to use Wikipedia? We teach it the way we teach them to use any kind of reference work: read entries carefully and critically, examine them for bias. Use their bibliographies and added links to other materials and collections. Use them as jumping off points to more scholarly works. Use them (carefully) for a general orientation to a subject. And, of course, never use them as a citable source.

In short, as we all know, thoughtful, analytic reading of any source, at any time, is central to a researcher’s successful process. And don’t forget: the stone soup of fable turned out to be really tasty.

Harvesting Gov Docs Locally for Preservation and Discovery

On Wednesday January 10, I had the privilege of presenting the following poster at the annual CTW* Retreat:

Harvesting Gov Docs Locally for Preservation & Discovery. Poster presented at CTW Retreat 10 Jan. 2018.

A quick summary of the chart featured prominently in the center of the poster, which is copied from James A. Jacobs’ report “Born-Digital U.S. Federal Government Information: Preservation and Access” and which was re-presented in his October 2017 presentation with James R. Jacobs, “Government Information: Everywhere and Nowhere,” provides an easy way to understand the nature of the problem.

Scope of the Preservation Challenge. Source: Jacobs, 2014.

The first column represents the number of items distributed by the Government Publishing Office (GPO) to Federal Depository Library Program (FDLP) libraries in 2011 (appx. 10,200 items). The second column represents the total number of items distributed by GPO to FDLP libraries over its entire 200-year history (appx. 2-3 million items). The third column is the number of URLs harvested by the 2008 End of Term crawl (appx. 160 million URLs).

Clearly, the scope of government information produced outside of the GPO and FDLP is very large. So large, in fact, that what is produced online each year makes the entire 200-year history of the Depository Library Program look like a drop in the bucket. This vast array of online government information can be called fugitive. No one knows how much born-digital government information has been created or where it all is.

At Connecticut College, Lori Looney and I are exploring ways of being proactive about this situation through our role in the FDLP. While we are unable to participate in large-scale digitization projects, we have nonetheless adopted this proactive stance from some of the ideas sketched out in Peter Hernon and Laura Saunders’ College & Research Libraries article “The Federal Depository Library Program in 2023: One Perspective on the Transition to the Future.” We see their approach as preferable to withdrawing from the program altogether or assuming a more passive role within it that would maintain the status quo. We describe our adoption of this approach in our essay “Experience of a New Government Documents Librarian,” published in Susanne Caro’s book Government Information Essentials.

Our latest activity addressed by the poster consists of several easy steps that librarians everywhere can do in their own libraries:

  • Keep track of your favorite websites and online publications, and make sure their URLs are captured in the Internet Archive’s Wayback Machine (a small script for checking and requesting captures is sketched after this list)
  • Add rare, hard-to-find, and/or local government documents to your library catalog, as well as digitizing those that are not already available online, and upload them to Internet Archive, ideally with as much catalog metadata as possible
  • Advocate for the long-term value of seemingly obscure government information and help spread the word that short-term ease of accessibility actually masks the major problems associated with long-term preservation, access, and usability
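The first of these steps can be partly automated. Below is a minimal sketch, not part of the original poster, of how one might check whether a URL already has a Wayback Machine capture and request one if it does not. It assumes the Internet Archive’s public availability API and the anonymous Save Page Now endpoint behave as described; the example URL is only a placeholder, and the endpoints should be verified against current Internet Archive documentation.

```python
# A minimal sketch, assuming the Internet Archive's public Wayback availability API
# and the anonymous Save Page Now endpoint; the URL below is only a placeholder.
import requests

def find_capture(url):
    """Return the URL of the closest existing Wayback capture of `url`, or None."""
    resp = requests.get("https://archive.org/wayback/available", params={"url": url})
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

def request_capture(url):
    """Ask the Wayback Machine to capture the page now; returns True on success."""
    resp = requests.get("https://web.archive.org/save/" + url)
    return resp.ok

if __name__ == "__main__":
    tracked = ["https://www.example.gov/annual-report.pdf"]  # placeholder list of tracked URLs
    for url in tracked:
        capture = find_capture(url)
        if capture:
            print(f"Already archived: {capture}")
        else:
            print(f"Not archived; capture requested: {request_capture(url)}")
```

Running a list of locally important URLs through a check like this from time to time would flag documents that still need to be captured or uploaded.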

Some of the documents we harvested in this capacity (see a few examples below) are local government publications that may not be easy to find online and which may not be accessible through any other library catalog anywhere. By finding them, adding them to Internet Archive, downloading them, physically adding them to our collection, and adding records to OCLC/WorldCat we are actively supporting preservation and discovery.

[Cover pages of harvested examples: Hodges Square Creative Placemaking Master Plan; 2017 Draft Comprehensive Energy Strategy; New London Downtown Transportation and Parking Study, 2017]

This is a very small way of responding to the very large problem of web preservation in general. However, for a small institution with a selective collection of government publications, it is a practical strategy for contributing to the efforts of larger institutions working on fascinating and complex projects like the End of Term (EOT) Web Archive.

 

—Andrew Lopez

_____

Works Consulted

Hernon, Peter, and Laura Saunders. “The Federal Depository Library Program in 2023: One Perspective on the Transition to the Future.” College and Research Libraries 70, no. 4 (2009): 351–70.

Jacobs, James A. “Born-Digital U.S. Federal Government Information: Preservation and Access.” Center for Research Libraries: Global Resources Collections Forum, 17 Mar. 2014.

Jacobs, James A., and James R. Jacobs. “Government Information: Everywhere and Nowhere.” Livestream web-based presentation to Government Publications Librarians of New England (GPLNE), 24 Oct. 2017.

Lopez, Andrew, and Lori Looney. “Experience of a New Government Documents Librarian.” Government Information Essentials. Ed. Susanne Caro. Chicago: ALA Editions, 2018. 13-20.

Seneca, Tracy, Abbie Grotke, Cathy Nelson Hartman, and Kris Carpenter. “It Takes a Village to Save the Web: The End of Term Web Archive.” DttP: Documents to the People (Spring 2012): 16-23.

_____

*CTW is the library consortium between Connecticut College, Trinity College, and Wesleyan University

Open Data Salon at Hartford Public Library

On Thursday, October 26, I attended the Connecticut Open Data Salon presented in partnership with UCONN Library at the beautiful and spacious Hartford Public Library.

After a brief introduction from Tyler Kleykamp, the chief data officer for the State of Connecticut, I launched into conversation with Steve Batt, the Data Visualization Librarian at UCONN and the Associate Director of the Connecticut State Data Center.

Steve showed me how he uses NHGIS and Tableau Public to make impressive data visualizations using Census data. The visualizations can be seen on the Connecticut State Data Center Tableau dashboard, with one example here:

Data Visualization from CT State Data Center

One can click on the map and zoom in to see an area of specific interest. I was surprised to discover that the median household income in New London County, in the southeast corner of Connecticut, had increased from $59,087 per year in 1979 (adjusted for inflation to 2014 dollars) to $66,693 in 2014.

Next I spoke with Graham Stinnett, Archivist at UCONN, and Anna Lindemann, Assistant Professor of 2D Animation and Motion Graphics, about their collaboration working with the Human Rights Collections at UCONN. Specifically, they showed me how they take photographs from the Romano Archives and transform them digitally to enhance their emotional affect.

A representative from the Korey Stringer Institute talked about the National Center for Catastrophic Sport Injury Research (NCCSIR) and how they collect data nationwide.

It was good to meet with Jennifer Chaput and Renee Walsh, both of UCONN Library and co-organizers of the Open Data Salon along with the Connecticut Data Collaborative. The CT Data Collaborative is open to partnering with area organizations and has two upcoming events.

Disappearing Government Information and the Effort to Preserve It

[Updated 25 October 2017]

The group Government Publications Librarians of New England (GPLNE) has organized a fall webinar on disappearing government information and the effort to preserve it. We are fortunate to be joined by two leading government information advocates, James R. Jacobs (Stanford University) and James A. Jacobs (Emeritus, UC San Diego) who will lead the presentation after a brief introduction.

Disappearing Government Information poster

Link to poster as PDF

Presentation details:

Who: James R. Jacobs (Stanford Univ.) & James A. Jacobs (UC San Diego)
What: Disappearing Government Information and the Effort to Preserve it
When: Tuesday, October 24, at 2pm EST
Where: Live streaming via GPO: http://login.icohere.com/gpo?pnum=QFH57863

A recording of the presentation is now available online at the following URL:

http://login.icohere.com/gpo?pnum=QFH57863

James R. Jacobs provided a link to the slides for download here:

https://freegovinfo.info/node/12422

Please note the name of the presentation was changed to Government Information: Everywhere and Nowhere.

Some recent related articles include the following:

Please direct additional questions or concerns to Andrew Lopez, Research Support Librarian at Connecticut College: andrew.lopez [at] conncoll.edu

 

 

Research and the Information Process

What is the “information creation process,” and what does it have to do with scholarly research? Short answer: a lot.

Longer answer (if you’re still with me!): In a previous post, I wrote about the first of the threshold concepts developed by the Association of College & Research Libraries (ACRL) in its “Framework for Information Literacy for Higher Education.” Threshold concepts are key points that an information-literate student or researcher needs to be able to grasp and utilize. The ACRL’s first threshold concept, “authority is constructed and contextual,” seemed tailor-made for a political moment rife with discussions and anxieties about fake news, post-truth and alternative facts, and indeed we librarians engaged in several robust discussions with faculty about how to approach this topic in the curriculum.

At first glance, the second threshold concept, “Information creation as a process,” might not resonate quite as strongly. But perhaps it should. Because an understanding of the process of how information is created is necessary for discerning how various books, articles, etc., might be useful or not useful — or for figuring out how authoritative they might be.

First, what does that mean, “information creation as a process”? Information objects — resources that are in various containers, including books, newspaper articles, scholarly journal articles, magazine articles, websites, blog posts, tweets, scholarly proceedings and yes, Facebook posts of dubious origin — all are informed by, and in turn inform, other information objects. In other words, the information contained in these various objects is used to create other objects; in turn, as these objects are read and shared, they may themselves serve to generate tweets, news articles, scholarly articles and books. The process is not merely circular, but infinitely weblike in the way that new information shapes, and is shaped by, already existing sources.

What’s more, the kinds of information objects that are available on any given topic depend on several factors: how long the topic has been in discussion, the extent of the discourse surrounding that topic, and the information objects that have already been created on that topic.

Let’s take, for example, a story that’s very much been in the news: Donald Trump’s controversial executive order on immigration from Muslim-majority countries. Although the news of the order hit newspaper websites very quickly following the announcement, even before those stories appeared there were innumerable tweets, Facebook posts and other social media commentary offering even quicker takes.

How useful are these quick takes on the issue? Certainly, they serve as a record of the issue’s explosiveness — its vast potential for altering domestic and global social affairs, politics and business. If one’s project is to document the proliferation of information objects about the issue, then gathering the tweets and quick news pieces would be essential. Similarly, a researcher who sought to document the speculation about the effects of the travel bans would need to look at these early sources.

But the usefulness of any particular information source depends very much on the nature of the project at hand — and that source’s place in the information cycle. What if, instead of looking at the speculation about the executive order, one’s project was about examining its effects? For that, one needs something more analytical — a source that views the immigration orders from a distance. One might first look at any newspaper articles that had appeared since Trump’s announcement — but again, depending on when the research was being conducted (six months after an event? a year? five years?), more detailed, rigorously conducted scholarly sources might be available.

Let’s look at how this plays out in various searches. Searching Google for “Donald Trump executive order immigration” in April — roughly three months after the initial announcement — yielded nearly 3 million results. Closer to home, entering the same keywords into the library’s CrossSearch tool (which searches books as well as numerous article databases) predictably yielded fewer results. (Still quite a few at more than 2,000, but not quite the 3 million that Google unearthed.)

[Screenshot: CrossSearch results for the same keywords]

Of these results, most are news articles:

[Screenshot: CrossSearch results, most of them news articles]

Only a handful appeared in academic journals, and most of these are quick takes of only a page or two, certainly not the detailed, rigorous, analytical pieces one might expect to find in a scholarly publication. To find such an article, it would be necessary to wait until more time had passed — until scholars had been able to perform studies that involved gathering and interpreting survey data, and that carefully surveyed the available literature on the subject.

[Screenshot: CrossSearch results from academic journals]

Note, too, that the search doesn’t yield any books at all. That’s because books take even more time than scholarly journal articles to put together. Books need to be proposed, written, edited, rewritten and then published before they can arrive on the scene. The advantage of books, whether they appear in print or electronic form, is that they typically represent some of the most considered, most rigorous thinking on a particular topic. But they take time, and usually depend on the fact that previous news articles, scholarly articles and other materials have already appeared on the topic.

And so a hypothetical researcher seeking to examine the effects of immigration bans would certainly want to look at detailed scholarly journals and books, if they were available. But to determine the kind of questions that are even possible, it’s necessary to understand what materials might be in the information-creation process, and thus how helpful or authoritative they might be in answering the question one poses.

So when you ask a research question, it’s a good idea to think about where a topic might be in the overall information-creation process. The answers will help not only in guiding you to the best possible sources to answer your question, but also in figuring out what might be available in the first place.

— Fred Folmer

Authority: Who Needs It?

(Note: This is the first in a series of planned blog posts exploring the key concepts of the Association of College and Research Libraries’ [ACRL] Framework for Information Literacy.)

In 2016 ACRL — the national organization for academic librarians — finalized its Framework for Information Literacy, a document that had been a number of years in the making. Simply put, the Framework seeks to define and describe six key points, called threshold concepts, that students seeking to become information literate need to understand.

I thought it might be useful in our blog to explore, point by point, the six key concepts of the framework; part of why I thought so is that the first point, “authority is constructed and contextual,” seems to be such an urgent topic to discuss, given the social and political moment in which we find ourselves.

As readers of this blog may recall, last fall I discussed one of the hot buttons generated by the divisive 2016 presidential election: the issue of media literacy or, if you will, fake news. Put succinctly, the fake news crisis can be thought of as a crisis of authority. In what news, media or other information sources can we trust? If people don’t take the time to discern which sources are trustworthy (in other words, authoritative), what can or will they be led to believe, and what does that do to the state of education, research and democracy itself?

The framework — advancing a view of authority that is constructed and contextual — states: “Authority is constructed in that various communities may recognize different types of authority.” It goes on to recognize the need for acknowledging biases “that privilege some sources of authority over others, especially in terms of others’ worldviews, gender, sexual orientation or cultural orientation.” The traditionally dominant voices of authority, in other words, need to be tempered or even overruled by the inclusion of voices that have been long shut out of the conversation. This, to me, is necessary, and frankly unassailable. Authority must be queried and questioned; researchers and students need to pay attention to the social processes by which authority is constructed, and be ready to challenge those processes.

But at the same time, to what extent does the assigning of context to authority mean that facts themselves can be overruled by asserting an alternative authority that claims to have, as has been famously done recently, “alternative facts”? Are facts — which, after all, are important products of authority — themselves constructed and contextual?

One may be tempted to say no — that a fact is a fact, and we’re not entitled to our own facts in the way we are entitled to opinions. While this isn’t exactly wrong, it’s also true that some facts, such as scientific findings, can be revised as new information comes to light. It’s also true that what counts as a fact can vary from discipline to discipline. So do these qualifications leave us in a world without facts — and, some might say, without moorings? Are all researchers thus in a “post-truth” quandary, between a proverbial rock and hard place?

The way through, I think, is to recognize that there simply isn’t anything like a one-size-fits-all edict that provides all the answers to the issue. Instead, researchers need to adopt a set of practices that help to evaluate materials and claims, and to think of this as a process, rather than as a quick judgment or a foregone conclusion. I offer a few suggestions here; some of these are partially based on some of the recommendations that ACRL provides in its “Dispositions” section of the framework.

First, query everything, constantly asking questions about the information’s provenance, its reason for being, its date of creation and its own sources of information. Remember that facts can sometimes be disproved, and that the questioning of established truth is part of a healthy research community (and, by extension, democracy).

Second, place different sources into relation to one another. This is what journalists do when they determine what should go into a story: Is there a second, third or fourth source that corroborates what this first source is arguing? To what does the preponderance of evidence lead? If multiple sources are pointing in the direction of a fact, it’s more likely to be true; but even then, since such things can themselves be the product of groupthink or a particular way of constructing authority, they often need to be qualified or tempered in their description.

The overarching recommendation of the ACRL framework is, as it states, to “develop and maintain an open mind” when thinking about authority: to recognize that authority is constructed socially, and can be made and unmade; that it may be constructed or interpreted differently in various contexts; and that established authority may be related to power.

These points won’t necessarily solve the conundrum about how to know when something is truly, incontrovertibly, absolutely a fact — particularly in an age when information sources claiming to be authoritative are thrown at us in ever greater volume, and at ever greater speed. But they may serve as good points to keep in mind when one is trying to decide how to think about a given claim as authoritative or not.

— Fred Folmer

 

 

Deadline for Library Research Prize: February 12

As a follow-up to a previous item announcing the second annual Connecticut College Prize for Undergraduate Library Research, here’s a quick reminder that applications for the prize ($500 cash!) are due on Sunday, Feb. 12, at 11:59 p.m.

All currently enrolled Connecticut College undergraduates are eligible. Students, please submit an application! Faculty, please encourage students to submit an application!

You can find all the pertinent information concerning rules for entry, project eligibility and more at the Library Research Prize’s webpage (found at http://conncoll.libguides.com/libprize). Entries must be posted to the prize’s Moodle site, which can be accessed using the aforementioned library prize URL.

We look forward to reading your application!

 

 

The ICPSR Undergraduate Research Paper Competition: An Overview of the last 10 Years’ Winners

The Inter-University Consortium for Political and Social Research (ICPSR) will soon be reviewing submissions for its eleventh annual undergraduate Research Paper Competition (submissions are due by midnight PST on 1/31/2017). ICPSR is a consortium of academic institutions and research organizations that maintains an impressive data archive in the social and behavioral sciences, providing access to rich data sets on aging, arts, attitudes, criminal justice, economics, education, elections, political behavior, psychology, substance abuse, terrorism, and other fields. Part of ICPSR’s role as a data steward includes providing a variety of educational opportunities for students and researchers to learn more about working with data. To mark the occasion of this year’s research paper competition, let’s take a look back at the last ten years’ winners.

As one of two ICPSR representatives at my institution (Connecticut College), my interest in the past winners is to see what kind of work they did so I can promote the competition at my home institution. That means I want to see what data are being used and who is using them. One thing that’s great about ICPSR is that every study has a unique ID number, so it can be easily discovered or shared among researchers. Unfortunately for my purposes, some of the information requested on the Entry Form for the competition, specifically the ICPSR study number, does not appear clearly in the public view of past winners.

This is frustrating because it makes it difficult to determine exactly what data are used in the winning papers. One has to open each paper individually and carefully search it. According to my review of all 26 previous undergraduate winners (there are numerous winners each year), more than a third do not cite the unique ICPSR study number (ID) in their references (e.g., a study number such as ICPSR31521, when searched on the ICPSR site, will take you directly to the data set). Another third of past winners do not cite it accurately (e.g., ICPSR02597 does not seem to exist, and it is not obvious that the leading 0 must be removed in order to find it). My findings indicate that only a minority of winning papers accurately cite the ICPSR study number (ID) for the data they used. A list of data sets used for each paper is documented in my review linked above.
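To make the leading-zero problem concrete, here is a minimal sketch of my own, not part of the competition materials, that pulls ICPSR study numbers out of reference text and normalizes them by dropping leading zeros. The study-page URL pattern shown is an assumption about how ICPSR addresses its study pages and should be checked against the live site.

```python
# A minimal sketch: find "ICPSR#####" tokens in reference text and normalize them
# by dropping leading zeros. The study-page URL pattern below is an assumption.
import re

ICPSR_ID = re.compile(r"ICPSR\s*0*(\d+)", re.IGNORECASE)

def extract_study_ids(text):
    """Return the distinct numeric study IDs cited in `text`, leading zeros dropped."""
    return sorted({int(match.group(1)) for match in ICPSR_ID.finditer(text)})

references = """
Data from ICPSR31521.
See study ICPSR02597 for details.
"""

for study_id in extract_study_ids(references):
    print(study_id, f"https://www.icpsr.umich.edu/web/ICPSR/studies/{study_id}")
# prints:
# 2597 https://www.icpsr.umich.edu/web/ICPSR/studies/2597
# 31521 https://www.icpsr.umich.edu/web/ICPSR/studies/31521
```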


In terms of who was using the data, my main interest is in which department the research was undertaken. Unfortunately again, there is neither class- nor department-specific information provided for the winning papers. Instead, the Entry Form asks applicants for their expected majors and/or minors upon graduation. For the most part, this information carries over to the public view of the winners, as represented in the chart below. But to give an example of when it does not, take a look at the 2012 second place RCMD winning paper, “Black Feminism and Hip Hop: A Cross-Generational Disconnect.”* With this winning paper we cannot tell what the student’s expected major was, and moreover we don’t know in which course or department the work was done, except that it was for Professor R. Khari Brown at Wayne State University. But why does the major/minor matter anyway? And if the work was done early in the undergraduate experience, the major could have changed between winning this award and actually graduating.

Besides, an excellent research paper could have been done in Economics, Sociology, or Political Science, for example, by an English or Religious Studies major. The major could therefore be considered irrelevant. What matters is the course and department in which the winning work was done. Such information would lend itself to thinking about ways of replicating or furthering the research in similar courses or departments at other institutions. It is nonetheless interesting to see that the greatest number of past winners for whom a major is given went on to receive degrees in Economics (28.1%), Sociology (18.8%), and Psychology (15.6%). Perhaps more interesting still is that the next largest group of winners (12.5%) did not list a major at all; that is enough to change the results significantly, depending on what those majors were.

[Chart: past winners by expected major]

Another aspect of who is using the data that matters to me is what sort of institution they come from. Fortunately, this information is clearly provided for all winning papers. It is not surprising that research universities account for half of the winning research. What is somewhat surprising, and a little exciting for me, is that liberal arts colleges make up about a quarter of previous winners. This means that students at my institution should plan to submit their research and expect to do well, since so many of our peers already have.

[Chart: past winners by institution type]

My takeaway from this brief review of past winners is that the winning work is impressive and exciting. I want my institution to focus on submitting papers to this competition in the years ahead. However, as a liaison librarian, I wish there were clearer information about the classes and departments in which the winners did their work. I also think it is not sufficiently clear exactly what data were used for many of the winning papers. Moving forward, I recommend that the Research Paper Competition Winners website clearly indicate each of the following for all winning papers:

  • ICPSR study number (ID) used
  • Class in which the research was undertaken
  • Department in which the research was undertaken

While collecting and sharing this kind of information about applicants could help attract interest in the ICPSR Research Paper Competition moving forward, I certainly hope researchers everywhere will take the occasion of this review to spread the word and get submissions ready for the upcoming deadline on January 31, 2017.

—Andrew Lopez

 

*The RCMD competition is for papers written on data held within the Resource Center for Minority Data archive and/or on a topic relevant to the focus of that archive.

Manage Your Citations with RefWorks to Save Time and Keep Organized

Per the request of seniors writing honors theses this year, the librarians offered an advanced workshop on citation style. In the workshop, we took a close look at how the three main style guides (APA, Chicago, and MLA) handle translated sources, and we did some exercises with the additional tools in RefWorks (the citation manager of choice available to all at Connecticut College):

  • Save references on the Web
  • Cite in Microsoft Word
  • Cite in Google Docs

Pros & Cons of Citation Managers

It is important to be clear up front about the advantages/disadvantages of using a citation manager. Now that we can export citations for virtually everything in a library catalog or database, as well as anything listed in Google Scholar, citation managers promise to substantially cut down the number of keystrokes required to compose a list of citations. As the number of citations grows, we are quickly talking about hours of typing labor that can be saved by using a citation manager. Organizationally, it’s a tremendous help to keep all citations stored in one place. The major drawback is that all sorts of typographical errors creep into the citations, whether from the exporting source such as a database, or from within RefWorks itself. It does not take long, however, to recognize the pattern of typos that occur. Just keep an eye on them and make sure to edit them as you go along, or at the end of a project.

Citing Translated Sources

For translated sources, the main style guides do not have a whole lot to say. But they provide just enough guidance that we should be able to document non-English language sources clearly and consistently. That guidance is reflected on these slides, which spell out the main rules for translated sources:


Click here to view slides on style guidelines for translated sources

New Tools in RefWorks

In the new RefWorks interface, click on the three dots located on the top white ribbon to access the tools.

Click here for the Tools in RefWorks

Screenshot of RefWorks

The “Save references on the Web” tool can be dragged to your browser and used to capture information from Web pages for composing a citation. It works perfectly on a site like PubMed, which must have really good metadata; less so on the New York Times and other sites, but worth a shot.

The “Cite in Microsoft Word” tool needs to be downloaded and installed according to your operating system and version of MS Word. While there can be as much as a two-hour learning curve in getting this tool up and running, it ultimately promises to be a major time saver. With this tool activated, one can seamlessly insert formatted citations into a Word document, whether as parenthetical citations or footnotes, as well as insert a bibliography or list of works cited. If at a later date you need to change the citation style of your paper, you can do so with the click of a button and watch your entire document reformat to the designated citation style.

The “Cite in Google Docs” tool is an Add-on that’s easy to implement. If you need footnotes instead of parenthetical citations, simply manually insert the footnote in Google Docs, then select a reference from the RefWorks sidebar.

Conclusion

As far as citations go, we know there are a thousand exceptions and a million quirky sources that don’t seem to fit the rules laid out in style guides. That’s why we encourage anyone with questions to contact one of our librarians for assistance.

Additional Resources

The Purdue OWL (Online Writing Lab) has long provided a very useful overview of the main rules from the three major citation styles. It’s easy to cross-check what you’re doing with the rules laid out on Purdue OWL.

Our own succinct Citation Guide for Print & Electronic Sources provides links to RefWorks and other leading citation managers, as well as to the major style guides, which are all available at the Reference Desk in Shain Library, and to additional online help.

Additional support from RefWorks:

—Andrew Lopez

 

 

 

 

On Fake News and Research Skills

In light of the emergence of fake news as one of the key stories following the 2016 presidential election, it’s worth (re-)considering the importance of evaluating information to any research process—whether that process involves writing a paper or gathering information about a candidate for office.

Although developing evaluation skills has always been integral to any research process, it’s arguably even more urgently needed now. That’s because libraries are no longer the sole gatekeepers of information, and it’s now possible to simply do a quick search on the web, find something that appears to relate to the topic at hand, and either forward it to someone else or incorporate it into a paper or other piece of research.

As has been widely reported, a great deal of the fake news now circulates on social media networks. In this New York Times op-ed written by Zeynep Tufekci, a professor of library and information science at the University of North Carolina, the author takes Facebook to task for becoming a platform for misinformation campaigns (the pope endorses Donald Trump! An FBI agent who leaked Hillary Clinton’s emails found dead!).

Part of the problem, Tufekci argues, is Facebook’s algorithmic system, which promotes updates based on whether users find them “comforting.” But research isn’t supposed to be comforting; neither, correspondingly, is the moral and ethical work of citizenship. And helping students learn the moral and ethical work of citizenship is—or should be—in large part why we teach research skills on a college campus.

There have been signs that Facebook is taking steps to limit the fake news stories that are shared on its servers, but researchers—that is, those doing a paper or those simply gathering information to make an informed choice on an election—need to ask themselves a set of questions about every source they’re using, no matter how much the source may support one’s thesis or existing worldview, and no matter how much that source has been useful in the past.

First, who is responsible for the piece? A name isn’t enough; one needs to ask about the author’s credentials or authority to have written something on a particular topic. If it’s a news story, does it come from a reputable service—one that checks its facts, verifies its sources and provides multiple perspectives? Some of the fake Facebook posts came from the “Denver Guardian,” which sounds great until one realizes that no such news source exists. (Go ahead, Google it.)

Second, when was the piece written? In this election season, I saw articles forwarded and shared on social media that had been created months and even years earlier, making it seem as though they had just appeared. But facts and situations can change quickly, and in many research or fact-finding situations, it’s important to have current information, or at least to be aware of when an article appeared so that its date of creation can factor into one’s judgment about it.

Third, why was the piece written? To report the news, or to advance knowledge in a particular field? To get someone elected to an office? To spread fear, or to propagandize an issue? To make money? This question is often entangled with who wrote the story, but it’s equally important. (To think about the ways in which who wrote a piece can be bound up with why he or she wrote it, I suggest checking out this self-exculpatory New York Times op-ed written by someone who works for WikiLeaks.)

Fourth, how and where did the author(s) get their information? In scholarly writing, this is precisely why citations must be provided — so that authors cannot simply assert something without some kind of backup. We need to be able to believe what authors are saying; it’s equally important to be able to verify their sources.

I’ve been trying to share the above questions with the first-year seminars with whom I’ve worked this past semester. We’ve looked at sources we found on the web and tried to think about evaluating them based on the above questions, rather than applying abstract, blanket maxims such as “sites that come from a .edu or .org address are okay.” That’s not necessarily true; it’s always necessary to look closer at each article or book.

One of the first-year seminars I worked with was entitled “Performing Citizenship.” It was striking to me that the course focus and our work with evaluating sources were in particular alignment—and, similarly, that the task of critically evaluating research information and that of truly becoming an informed, participating citizen are one and the same. Whenever we undertake or assign research—and learn or teach the requisite skills to perform this research—we would do well to keep the responsibilities and imperatives of citizenship in full view.

— Fred Folmer

