This week’s post is not going to capture my ability to be productive while white supremacists appeared to be ushered in and out of the US Capitol building by complicit police and COVID-19 continued to ravage my community because our provincial government doesn’t want to spend money on the most vulnerable.
Instead, I’m just going to share what I’ve learned this week that might prove useful to others.
One of these professors works in the field of Psychology, and I found the most works for that researcher using BASE (Bielefeld Academic Search Engine), including APA datasets not found elsewhere. Similarly, I found obscure ERIC documents using The Lens (Lens.org). Unfortunately, you can’t directly import records from The Lens into an ORCiD profile unless you create a Lens profile for yourself.
I’ve added The Lens to my list of free resources to consult when looking for research. This list already includes Google Scholar and Dimensions.ai.
Blogging is dead. Blogging as an ecosystem of blogrolls, blog rings, blog planets, RSS readers, and writers who link and respond to each other… it is long gone. Most people don’t even know that this network once existed, once thrived, and then was lost.
That being said, I still believe blogging is good. Blogging can be personally meaningful and professionally useful and blogging can still be powerful. Small communities of bloggers still exist in niches, like food blogs.
But in many ways, the once mighty blog post has been reduced to being a fall-back longer form entry that is meant to be carried and shared by social media. Most of my own traffic comes indirectly. Last month a post of mine received over 1000 reads in a day – with almost all traffic coming from Facebook. But as I can’t follow back the trail, I have no idea who shared the link to my blog or why.
I have also seen blog posts being shared from author to reader to reader-once-removed via newsletter. When a particular article resonates, you can sometimes see it appear in a new newsletter every week, each recommendation like a ripple in a pond — a little bit of text pushing the readership of a piece of writing just a bit wider than the original audience.
While I get a rush of serotonin every time something I write resonates with readers who share my writing, I still want to write work that decidedly isn’t meant to resonate with a wide audience. I still want to have a place where I can write and share posts that might be useful to some readers.
What I’m trying to say is, I want to share a boring bit of writing now and I know it’s boring and I want you to know that I’m aware that it’s boring.
I have two recommended practices that I would like to share with those who might find them useful, as many of us are now working in an always-online environment. These practices have worked for me and they might work for you. (Your mileage may vary. All advice is autobiographical.)
The first practice is one that I saw recommended by Dave Cormier, and I was so pleased to see his recommendation, because I do that thing and it felt very validating. That suggested practice is to always keep a document open – for you it might be a Word document, but for me, it’s a Google Doc – available for any time you need to drop in a note or a link or an idea to return to later.
There are many people who have amazing systems to manage their online ‘to do’ lists, but I have found that creating a next action for every interest and facet of my person (as a librarian, as a mom, as a reader, as someone trying to eat healthier, as a gardener…) is too much for me. Instead, I have found sustained success in the much more low-key logbook. I have one for work and one for home.
On February 19, 2019, I created a Work Log Google Doc. I know this because I started with an H2 heading of February 19, 2019 and then added a series of bullet points of what I had done that day. Sometimes I drop in links to matters that I need to read or follow up on. And when there’s something that I need to do and I don’t want to forget it, I add three asterisks *** so I can go back and Control-F my log into a to-do list. The next day, I add the new date at the top of the page and begin again. And that’s it. That’s my system. It’s like I’m perpetually stuck on step one of proper bullet journaling.
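As a small aside for the technically inclined: the Control-F step can be sketched in a few lines of Python. This is a hypothetical helper (the function name and sample log entries are invented for illustration), assuming the log is exported as plain text:

```python
def extract_todos(log_text: str) -> list[str]:
    """Return every line flagged with the *** marker, with the marker
    and any leading bullet characters stripped away."""
    todos = []
    for line in log_text.splitlines():
        if "***" in line:
            todos.append(line.replace("***", "").strip("-• ").strip())
    return todos

# An invented sample in the logbook's format: newest date on top,
# bullet points below, *** marking the items still to be done.
sample = """February 20, 2019
- emailed vendor about renewal
- *** follow up on ILL request
February 19, 2019
- *** read OA policy draft
- met with department head"""

print(extract_todos(sample))  # ['follow up on ILL request', 'read OA policy draft']
```

Nothing here is essential to the system, of course; the whole point of the logbook is that Control-F alone is enough.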
The second suggestion is a practice that I’m setting up right now, which is why I was inspired to write this blog post in the first place.
On July 1st, my workplace transitions to the next working year. For the last ten years now, I have used the year’s rollover as an opportunity to create a new folder in my Inbox for the upcoming year’s work. This year the folder reads .2020-2021
I learned this technique when I accidentally saw a colleague’s screen and noticed how she organized her email. I have to admit, at first I was sort of shocked by this approach. Why create nesting folders of email by year? Why not create folders by subject? ARE WE NOT LIBRARIANS?
But this is the thing. Even librarians cannot know a priori what categories are going to be useful in the future. Rather than create a file system that works for you for a while but then slowly, slowly grows, over the years, into a misshapen file tree of deep sub-folders and dead main branches… consider starting anew. Consider starting a new inbox from scratch every calendar year. And don’t create a single sub-folder within that folder until you receive an email that needs to be put away; if it doesn’t already have a place that makes sense, create a place for that kind of email.
At the very least, for a few short months, everything will feel findable and understandable and it will feel wonderful. That is, if you live a life as boring as mine.
Maybe this is the real feature that separates blogging from social media: it’s the place where we can be boring.
On Saturday night I had a Zoom call with a friend of mine from high school. My friend prefaced our chat with a warning that she was going to keep the conversation short because video calls are exhausting. I heartily agreed. During this call, my daughter and her son graced our screens and excitedly shared the game-spaces in Roblox where they go to play and hang out with their friends.
This difference between exhaustion and joy struck me. I didn’t think it was because of any particular characteristic of our respective generations, but I couldn’t entirely place why the reactions were so very different. But then on Sunday morning, during the time I dedicate to my longreads collected from a week’s worth of tweets and newsletters, I found an answer that made a lot of sense to me.
That essay was Home Screens by Drew Austin, from the web publication Real Life. After I finished, I promptly took to Twitter to recommend that everyone read it. Here’s a passage from it, dedicated to Zoom:
Pure economic exchanges can relocate to screen interactions with a minimal loss of fidelity, but encounters meant to be less instrumental are proving harder to sustain without the texture of physical space. Most of the apps we use for interaction simply unbundle an informational component from the scene of social contact. This was sufficient under ordinary circumstances, when messaging and video conferencing apps merely complemented in-person exchanges. But now those tools leave users wanting more, failing to substitute the richness and depth that interaction in physical space could otherwise provide.
Consider, for example, the video-conferencing platform Zoom. During the quarantine’s first few weeks, it emerged as a flexible (albeit insecure) tool for conducting interactions that could no longer happen face to face, rapidly expanding beyond its established domain of business meetings to accommodate gatherings ranging from happy hours to dinner parties to dates. But rather than providing support for adjacent activities, as an app like Slack does for office work, Zoom replaces those activities altogether. In other words, users experience Zoom more as a stultified form of virtual reality than an augmented one, because it feels as though there is very little off-screen reality available to augment right now.
I’m writing about this essay on this blog rather than my more technology-focused outlet because I want to start exploring this understanding that there is something fundamentally different between ‘virtual libraries’ and ‘augmented libraries’.
In Home Screens, Austin draws on one of my favourite written works from last year:
In How to Do Nothing, Jenny Odell makes an eloquent case for the importance of place as a site of non-transactional human relations. As an example, she describes how, for many, public transportation is “the last non-transactional space in which we are regularly thrown together with a diverse set of strangers, all of whom have different destinations for different reasons.” She goes on to summarize Louis Althusser’s contention that true societies can emerge only within spatial constraints, where individuals live in bounded proximity without the ability to easily disperse. In such settings, individuals have no choice but to encounter one another repeatedly and establish durable connections based upon a firmer foundation than the exchange value those relationships promise. This represents a quite different logic than that of an app that enables hiring random (and often unseen) strangers to perform tasks for us at a social distance.
Another non-transactional space in which residents are regularly thrown together with a diverse set of strangers, all of whom have different ‘destinations’ for different reasons, is the library.
Librarians are what the internet is aching for — people on task to care about the past, with respect to the past and also to what it shall bequeath to the future. There needs to be rituals in place online to treat people — users — with dignity, both for the living and the dead. For to speak of the humanity of internet users is to recognize the impermanence, the mortality of that humanity.
Everyone is welcome in a library just for being. A person in a library is a person: homeless or not, hurting or not. My dream for the internet, as a final form, is a civic and independent body, where all people are welcomed and respected, guided by principles of justice, rights, and human dignity. For this, users would express care in return, with a sense of purpose and responsibility to the digital spaces organized with these values. With the internet routing through a planet that is the origin of more than a hundred billion lives, such a project means information in abundance. Segmenting and clustering users and history into communities, rather than mass-purpose platforms, would be an integral component to this ideal internet in its cycles of maintenance and renewal.
I haven’t been a public librarian in over twenty years now, so I am going to limit the following thoughts on augmented vs virtual library space to an academic library context.
First, let us consider that more students come to the library to study rather than to actively engage with library-provided materials, print or otherwise. Does this suggest that the academic library has a responsibility to provide online study space for students?
With our ability to roam the physical environment necessarily compromised, our platforms – Netflix, Instagram, Twitter, Spotify, etc. etc. – have taken on an even greater significance as the sites of our work and leisure. But how do we inhabit them in psychogeographic terms, as virtual spaces that shape our behaviours and emotions? Is it possible to find alternative paths to the passive consumption modalities that a data-driven culture industry expects of us? Can we amble through our platforms in ways unforeseen by their designers? And understand their infrastructures better through our experiments and investigations?
Ergo, a psychogeographical approach to platform studies as a means to engage with these infrastructures in novel ways (please note: I am not a licensed psychogeographer).
It delights me to no end that Devon published the above as I was writing the draft of this post because I also want to speculate that perhaps we should investigate sound as a platform (please note: I am a licensed psychogeographer).
When I was a child, the walls of books in the adult section of our modest public library always filled me with unease and even dread. So many books that I would never read. So many books I suspected – even then – were never read. I was under the impression that all the books were so old that the authors must all be dead. Unlike my refuge – the children’s section of the library, partitioned by a glass door set in a glass wall – this section of the library was dark and largely silent. The books were ghosts.
I am imagining a library that is made up of two distinct sections. These sections may be on separate floors. They may be in separate buildings. But these sections must be separated and distinct.
One of these sections would be ‘The Library of the Living’. It would be composed of works by authors who still walked on the earth, somewhere, among us. The other section would be ‘The Library of the Dead’.
When an author passes from the earthly realm, a librarian takes their work from the Library of the Living and brings it, silently, to the Library of the Dead.
And at the end of this text was this:
“We don’t have much time, you know. We need to find the others. We need to find mentors. We need to be mentors. We don’t have much time.”
Introduction: Secret Feminist Agenda & Masters of Text
I am an academic librarian who has earned permanence – which is the word we use at the University of Windsor to describe the librarian-version of tenure. When I was hired, there was no explicit requirement for librarians to publish in peer-reviewed journals. Nowadays, newly hired librarians at my place of work are expected to produce peer-reviewed scholarship, although the understanding of how much and what kinds of scholarship count has not been strictly defined.
On my official CV that I update and submit to my institution every year, these peer-reviewed articles are listed individually. Under “Non-refereed publications”, I have a single line for each of my blogs. And yet, I have done so much more writing on blogging platforms than in my peer-reviewed work (over 194K words from 2006-2016 alone). And my public writing has been shared, saved, and read many, many times more than my peer-reviewed scholarship.
Now, as I have previously stated, I already have permanence. So why should I care if my blog writing counts in my work as an academic librarian?
That was my thinking, so I didn’t care. That is, until a couple of weeks ago when a podcast changed my mind.
McGregor’s podcast is part of a larger SSHRC-funded partnership called Spoken Web that “aims to develop a coordinated and collaborative approach to literary historical study, digital development, and critical and pedagogical engagement with diverse collections of spoken recordings from across Canada and beyond”.
What did we learn about scholarly podcasting… How and when and where we create new knowledge, that’s what we call scholarship, generally, right?
Secret Feminist Agenda, 3.26
Their conversation about what counts as scholarship and how it can be valued is a great listen. And it opened the possibility in my mind to consider this writing a form of creative, critical work.
While most of my public writing is explanatory or persuasive in nature, there is definitely a subset of my work that I would consider a form of creative practice. I know that these works are creative because when I sit down to write them, I don’t have an idea of the final form of the text until it is finished. I am compelled to work through ideas that I feel might have something to them, but the only way to tell is to get closer.
The second passage that struck me comes in at the 52:26 mark, when Hannah tells this story:
Hannah: I met a prof at the Modernist Studies Association Conference a few years ago who was telling me that he does a comic book podcast with a friend of his and they’ve been doing it for years and it has quite a popular following, and I was like, “Oh, awesome! Do you count that as your scholarly output?” and he said “No, I don’t need to. I have tenure.” And I was like, “Well, but, couldn’t you use tenure as a way to break space open for those who don’t but want to be doing that kind of work? Isn’t there another way to think about what it means to have security as a position from which you can radicalize?”, but that so often doesn’t seem to prove to be the case.
Ames: “Well, and now we’re back to that’s feminist thinking – what you said there and what that person is illustrating is not feminist thinking…”
Secret Feminist Agenda, 3.26
Oof. Hearing that bit was a bit of a gut-punch.
I can and will do better.
That being said, I’m not entirely sure how my corpus of public writing should be accounted for. Obviously, the volume of words produced is not an appropriate measure. Citation counts from scholarly works might be deemed a valuable measure, but as many scholars deliberately exclude public writing from their bibliographies, I feel this metric systematically undervalues this type of writing. And while page views and social media counts should stand for something, I don’t think you can make the case that popularity is an equivalent of quality.
And here is the script and the slides that I presented:
Good afternoon. Thank you for the opportunity to introduce you to OpenRefine.
Even if you have already heard of OpenRefine before, I hope you will still find this session useful, as I have tried to make an argument for why librarians should investigate technologies like OpenRefine for scholarly research purposes.
This talk has three parts.
I like to call OpenRefine the most popular library tool that you’ve never heard of.
After this introduction, I hope that this statement will become just a little less true.
OpenRefine began as Google Refine and before that it was Freebase Gridworks, created by David François Huynh in January 2010 while he was working at Metaweb, the company that was responsible for Freebase.
In July 2010, Google acquired Metaweb, adopted the technology, and rebranded Freebase Gridworks as Google Refine. While the code was always open source, Google supported the project until 2012. From that point on, the project became a community-supported open source product and as such was renamed OpenRefine.
As an aside, Freebase was officially shut down by Google in 2016 and the data from that project was transferred to Wikidata, Wikimedia’s structured data project.
OpenRefine is software written in Java. The program is downloaded onto your computer and accessed through your browser. OpenRefine includes its own web server software, so it is not necessary to be connected to the internet in order to make use of OpenRefine.
At the top of the slide is a screen capture of what you first see when you start the program. The dark black window is what opens behind the scenes if you are interested in monitoring the various processes that you are putting OpenRefine through. And in the corner of the slide, you can see a typical view of OpenRefine with data in it.
OpenRefine has been described by its creator as “a power tool for working with messy data”.
Wikipedia calls OpenRefine “a standalone open source desktop application for data cleanup and transformation to other formats, the activity known as data wrangling”.
OpenRefine is used to standardize and clean data across your file or spreadsheet.
That being said, once you know the power of OpenRefine, you will, like me, see all these other potential uses for the tool outside of metadata cleanup. In August of this year, I read this tweet from fellow Canadian scholarly communications librarian Ryan Reiger and sent some links with instructions that illustrated how OpenRefine could help with his research question.
When introducing new technology to others, it’s very important not to oversell it and to manage expectations.
But I’m not the only one who feels strongly about the power of OpenRefine – for good reasons, which we will explore in the second section of this talk.
If you asked me what is the most popular technology used by librarians in their work and support of scholarship, I would say that one answer could be Microsoft Excel. Many librarians I know do their collections work, their electronic resources work, and their data work in Excel and they are very good at it.
But there are some very good reasons to reconsider using Excel for our work.
This slide outlines what I consider some of the strongest reasons to consider using OpenRefine. First, the software is able to handle more types of data than Excel can. Excel can handle rows of data. OpenRefine can handle rows and records of data.
For many day-to-day uses of Excel it is unlikely you will run into the maximum capacity of the software, but for those who work with large data sets, a limit of a million-and-change rows can be a problem.
But the most important reason why we should consider OpenRefine is the same reason why it’s fundamentally different from Excel. Unlike spreadsheet software like Excel, no formulas are stored in the cells of OpenRefine.
Instead formulas are used to transform the data and these formulas are tracked as scripts.
Not only do the cells of Excel contain formulas that transform the data presented in ways that are not always clear, Excel sometimes transforms your data without clearly demonstrating that it is doing so. According to a paper from 2016, one fifth of genomics journals had datasets with errors from Excel transforming gene names such as SEPT10 into dates.
I want to be clear, I am not saying that Excel is bad and people who use Excel are also bad.
We can all employ good data practices whether we use spreadsheets or data wrangling tools such as OpenRefine. I believe we have to meet people where they are with their data tool choices. This is part of the approach taken by the good people responsible for this series of lessons as part of the Ecology Curriculum of Data Carpentry.
And with that, I just want to take the briefest of moments to thank the good people behind Software Carpentry and Data Carpentry – collectively now known as The Carpentries – as I am pretty sure it was their work that introduced me to the world of OpenRefine.
This slide is taken from the Library Carpentry OpenRefine lesson. There is too much text on the slide to read, but the gist of the message is this: OpenRefine saves every change you make to your dataset and these changes are saved as scripts. After you clean up a set of messy data, you can export your script of the transformations you made and then apply that script to other, similarly messy data sets.
Not only does this ability save the time of the wrangler, the ability to save scripts separately from the data itself lends itself to reproducible science.
Here is a screenshot of a script captured in OpenRefine in both English and in JSON.
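To give a feel for the JSON side, here is a sketch of what a small exported operation history can look like. The column name and expression here are invented for illustration; the shape follows the JSON that OpenRefine exports from its Undo/Redo history:

```json
[
  {
    "op": "core/text-transform",
    "engineConfig": { "facets": [], "mode": "row-based" },
    "columnName": "Journal Title",
    "expression": "grel:value.trim()",
    "onError": "keep-original",
    "repeat": false,
    "repeatCount": 10,
    "description": "Text transform on cells in column Journal Title using expression grel:value.trim()"
  }
]
```

Pasting a saved script like this into the Apply dialog of another project replays the same transformations on that project’s data.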
It is difficult for me to express how important and how useful it is for OpenRefine to separate the work from the data in this way.
This is the means by which librarians can share workflows outside of their organizations without worrying about accidentally sharing private data.
As more librarians start using more complicated data sets and data tools for supporting research and their own research, the more opportunities there will be for embodying, demonstrating, and teaching good data practices.
I remember the instance in which I personally benefited from someone sharing their work with OpenRefine. It was this blog post from Angela Galvan, which walked me through the process of taking a list of ISSNs, running it through the Sherpa Romeo API, and using the formula on the screen to quickly and clearly show whether a particular journal allowed publisher PDFs to be added to the institutional repository or not.
And with that, here’s a bit of a tour of how libraries are using OpenRefine in their work.
I haven’t spent much time highlighting it, but one of the most appreciated features of OpenRefine is its data visualizations, which allow the wrangler to find differences that make a difference in the data.
The slide features two screen captures. In the lower screen, OpenRefine has used fuzzy matching algorithms to discover variations of entries that are statistically likely meant to be the same.
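For the curious, the default key-collision method behind this kind of clustering (OpenRefine calls it the “fingerprint” method) can be sketched in a few lines of Python. This is an illustrative approximation, not OpenRefine’s exact code, and the sample names are invented:

```python
import re
import unicodedata

def fingerprint(value: str) -> str:
    """Key-collision fingerprint, modeled on OpenRefine's default
    clustering method: trim, lowercase, fold accents to ASCII,
    strip punctuation, then sort and de-duplicate the tokens."""
    value = value.strip().lower()
    # Normalize accented characters to their ASCII equivalents.
    value = unicodedata.normalize("NFKD", value)
    value = value.encode("ascii", "ignore").decode("ascii")
    # Remove punctuation; keep word characters and whitespace.
    value = re.sub(r"[^\w\s]", "", value)
    tokens = sorted(set(value.split()))
    return " ".join(tokens)

# Variant spellings collapse to the same key, so they cluster together.
names = ["Tournier, Michel", "Michel Tournier", "tournier, michel."]
print({fingerprint(n) for n in names})  # {'michel tournier'}
```

Entries that share a fingerprint land in the same cluster, which is why “Tournier, Michel” and “Michel Tournier” are offered up together as probable duplicates.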
I mentioned previously that I had used OpenRefine to work with the Sherpa Romeo API. This ability of OpenRefine to give access to APIs to users who may not be entirely comfortable with command-line scripting or programming should not be understated. That’s why lesson plans that use OpenRefine to perform such tasks as web scraping, as pictured here, are appreciated.
With OpenRefine, libraries are finding ways to use reconciliation services for local projects. I am just going to read the last bit of the last line of this abstract for emphasis: a hack using OpenRefine yielded a 99.6% authority reconciliation and a stable process for monthly data verification. And as you now know, this was likely done through OpenRefine scripts.
OpenRefine has proved useful in preparing linked data…
And if staff feel more comfortable using spreadsheets, OpenRefine can be used to convert those spreadsheets into forms such as MODS XML.
Knowing the history of OpenRefine, you might not be surprised to learn that it has built in capabilities to reconcile data to controlled vocabularies…
But you might be pleasantly surprised to learn that OpenRefine can reconcile data from VIVO, JournalTOC, VIAF, and FAST from OCLC.
But the data reconciliation service that I’m particularly following is the one from Wikidata.
In this video example, John Little uses data from VIAF and Wikidata to gather authoritative versions of author names plus additional information including their place of birth.
I think it’s only appropriate that OpenRefine connects to Wikidata when you remember that both projects had their origins in the Freebase project.
Wikidata is worthy of its own talk – and maybe even its own conference – but since we are very close to the end of this presentation, let me introduce you to Wikidata as structured linked data that anyone can use and improve.
I was introduced to the power of Wikidata and how its work could extend our library work by librarians such as Dan Scott and Stacy Allison-Cassin. In this slide, you can see a screen capture from Dan’s presentation that highlights that the power of Wikidata is that it doesn’t just collect formal institutional identifiers, such as those from LC or ISNI, but also identifiers from sources such as AllMusic.
And this is the example that I would like to end my presentation on. The combination of OpenRefine and Wikidata – working together – allows the librarian not only to explore, clean up, and normalize their data sets, but also to extend that data and connect it to the world.
The trouble is mine: I am not interested in a measured account of the lives of coders in America. I think the status quo for computing is dismal.
The way that we require people to think like a computer in order to make use of computers is many things. It is dehumanizing. It is unnecessary hardship. It feels wrong.
This is why Bret Victor’s Inventing on Principle (2012) presentation was (and remains) so incredible to me. Victor sets out to demonstrate that creators need (computer) tools that provide them with the most direct and immediately responsive connection to their creation as possible:
If we look to the past, we can find alternative futures to computing that might have served us better, if we had only gone down a different path. Here’s Bret Victor’s The Future of Programming 1973 (2013), which you should watch a few minutes of, if just to appreciate his alternative to PowerPoint:
At around the 11-minute mark, Evans sets the scene for the first unveiling of Tim Berners-Lee’s World Wide Web, and it’s a great story because when Berners-Lee first demonstrated his work at Hypertext ’91, the other attendees were not particularly impressed. Evans explains why.
So why am I telling you all about the history of computing on my library-themed blog? Well, one reason is that our profession has not done a great job of knowing our own (female) history of computing.
(There is a now-deleted post from a librarian blog from 2012 that comes to mind. I’m not entirely sure of the etiquette of quoting deleted posts, so I will paraphrase the post as the following text…)
Despite librarianship being a feminized and predominantly female profession, [the author of the aforementioned blog post] remarked that she was never introduced to the following women in library school despite their accomplishments: Suzanne Briet, Karen Spärck Jones, Henriette Avram, and Elaine Svenonius. And if my memory can be trusted, I believe the same was true for myself.
Is there a connection between the more human(e) type of computing that Bret Victor advocates for, the computing innovations from women that Claire Evans wants us to learn from, and these lesser-known women of librarianship and its adjacent fields in computing? I think there might be.
When most scientists were trying to make people use code to talk to computers, Karen Sparck Jones taught computers to understand human language instead.
In so doing, her technology established the basis of search engines like Google.
A self-taught programmer with a focus on natural language processing, and an advocate for women in the field, Sparck Jones also foreshadowed by decades Silicon Valley’s current reckoning, warning about the risks of technology being led by computer scientists who were not attuned to its social implications
“A lot of the stuff she was working on until five or 10 years ago seemed like mad nonsense, and now we take it for granted,” said John Tait, a longtime friend who works with the British Computer Society.
I have already given my fair share of future-of-the-library talks, so I think it is for the best if someone else takes up the challenge of looking into the work of librarians past to see if we can’t refactor our present into a better future.
Libraries are haunted houses. As our patrons move through scenes and illusions that took years of labor to build and maintain, we workers are hidden, erasing ourselves in the hopes of providing a seamless user experience, in the hopes that these patrons will help defend Libraries against claims of death or obsolescence. However, ‘death of libraries’ arguments that equate death with irrelevance are fundamentally mistaken. If we imagine that a collective fear has come true and libraries are dead, it stands to reason that library workers are ghosts. Ghosts have considerable power and ubiquity in the popular imagination, making death a site of creative possibility. Using the scholarly lens of haunting, I argue that we can experience time creatively, better positioning ourselves to resist the demands of neoliberalism by imagining and enacting positive futurities.
I also think libraries can be described as haunted but for other reasons than Settoducato suggests. That doesn’t mean I think Settoducato is wrong or the article is bad. On the contrary – I found the article delightful and I learned a lot from it. For example, having not read Foucault myself, this was new to me:
In such examples, books are a necessary component of the aesthetic of librarianship, juxtaposing the material (books and physical space) with the immaterial (ghosts). Juxtaposition is central to Michel Foucault’s concept of heterotopias, places he describes as “capable of juxtaposing in a single real place several spaces, several sites that are in themselves incompatible” (1984, 6). Foucault identifies cemeteries, libraries, and museums among his examples of heterotopias, as they are linked by unique relationships to time and memory. Cemeteries juxtapose life and death, loss (of life) and creation (of monuments), history and modernity as their grounds become increasingly populated. Similarly, libraries and museums embody “a sort of perpetual and indefinite accumulation of time in an immobile place,” organizing and enclosing representations of memory and knowledge (Foucault 1984, 7).
There are other passages in Intersubjectivity that I think could be expanded upon. For example, while I completely agree with its claim that the labour of library staff is largely invisible, I believe that invisibility was prevalent long before neoliberalism. The librarian has been subservient to those who endow the books for hundreds of years.
Richard Bentley, for his part, continued to run into problems with libraries. Long after the quarrel of the ancients and moderns had fizzled, he installed a young cousin, Thomas Bentley, as keeper of the library of Cambridge’s Trinity College. At Richard’s urging, the young librarian followed the path of a professional, pursuing a doctoral degree and taking long trips to the Continent in search of new books for the library. The college officers, however, did not approve of his activities. The library had been endowed by Sir Edward Stanhope, whose own ideas about librarianship were decidedly more modest than those of the Bentleys. In 1728, a move was made to remove the younger Bentley, on the ground that his long absence, studying and acquiring books in Rome and elsewhere, among other things, disqualified him from the post. In his characteristically bullish fashion, Richard Bentley rode to his nephew’s defense. In a letter, he admits that “the keeper [has not] observed all the conditions expressed in Sir Edward Stanhope’s will,” which had imposed a strict definition of the role of librarian. Bentley enumerates Sir Edward’s stipulations, thereby illuminating the sorry state of librarianship in the eighteenth century. The librarian is not to teach or hold office in the college; he shall not be absent from his appointed place in the library more than forty days out of the year; he cannot hold a degree above that of master of arts; he is to watch each library reader, and never let one out of his sight.
“He is to watch each library reader” is a key phrase here. From the beginning, librarians and library staff were installed as instruments of surveillance, there to protect property.
Even to this day, I hear of university departments that wish to make a collection of material available to faculty and students and are so committed to this end that they secure a room for it, which is no small feat on a campus nowadays. But then the faculty or students refuse to share their most precious works, because they realize that materials kept in an open and accessible room will be subject to theft or vandalism.
Same as it ever was.
Presently, a handful of municipal libraries in Denmark operate with open service models. These open libraries rely on the self-service of patrons and have no library staff present—loans, returns, admittance and departing the physical library space are regulated through automated access points. Many public library users are familiar with self-check out kiosks and access to the collections database through a personal computing station, but few patrons have ever been in a public library without librarians, staff workers or security personnel. Libraries that rely on self-service operation models represent a new kind of enclosed environment in societies of control. Such automated interior spaces correspond to a crisis in libraries and other institutions of memory like museums or archives. Under the guise of reform, longer service hours, and cost-saving measures, libraries with rationalized operating models conscript their users into a new kind of surveillance….
The open library disciplines and controls the user by eliminating the librarian, enrolling the user into a compulsory self-service to engage with the automated space. The power of this engagement is derived from a regime of panoptic access points that visualize, capture and document the user’s path and her ability to regulate herself during every movement and transition in the library—from entering, searching the catalog, browsing the web, borrowing information resources, to exiting the building.
Because of these technologies, many, many spaces are going to feel haunted. Not just libraries:
The other day, after watching Crimson Peak for the first time, I woke up with a fully-fleshed idea for a Gothic horror story about experience design. And while the story would take place in the past, it would really be about the future. Why? Because the future itself is Gothic.
First, what is Gothic? Gothic (or “the Gothic” if you’re in academia) is a Romantic mode of literature and art. It’s a backlash against the Enlightenment obsession with order and taxonomy. It’s a radical imposition of mystery on an increasingly mundane landscape. It’s the anticipatory dread of irrational behaviour in a seemingly rational world. But it’s also a mode that places significant weight on secrets — which, in an era of diminished privacy and ubiquitous surveillance, resonates ever more strongly….
… Consider the disappearance of the interface. As our devices become smaller and more intuitive, our need to see how they work in order to work them goes away. Buttons have transformed into icons, and icons into gestures. Soon gestures will likely transform into thoughts, with brainwave-triggers and implants quietly automating certain functions in the background of our lives. Once upon a time, we valued big hulking chunks of technology: rockets, cars, huge brushed-steel hi-fis set in ornate wood cabinets, thrumming computers whose output could heat an office, even odd little single-purpose kitchen widgets. Now what we want is to be Beauty in the Beast’s castle: making our wishes known to the household gods, and watching as the “automagic” takes care of us. From Siri to Cortana to Alexa, we are allowing our lives and livelihoods to become haunted by ghosts without shells.
How can we resist this future that is being made for us but not with us? One of my favourite passages of Intersubjectivity suggests a rich field of possibility that I can’t wait to explore further:
However, it does not have to be this way. David Mitchell and Sharon Snyder also take up the questions of embodiment and productivity, examining through a disability studies lens the ways in which disabled people have historically been positioned as outside the laboring masses due to their “non-productive bodies” (2010, 186). They posit that this distinction transforms as the landscape of labor shifts toward digital and immaterial outputs from work in virtual or remote contexts, establishing the disabled body as a site of radical possibility. Alison Kafer’s crip time is similarly engaged in radical re-imagining, challenging the ways in which “‘the future’ has been deployed in the service of compulsory able-bodiedness and able-mindedness” (2013, 26-27). That is, one’s ability to exist in the future, or live in a positive version of the future is informed by the precarity of their social position. The work of theorists like Mitchell, Snyder, and Kafer is significant because it insists on a future in which disabled people not only exist, but also thrive despite the pressures of capitalism.
It appears that I haven’t written a single post on this blog since July of 2018. Perhaps it is all the talk of resolutions around me, but I sincerely would like to write more in this space in 2019. And the best way to do that is to just start.
This week on Function, we take a look at the rising labor movement in tech, hearing from those whose advocacy was instrumental in laying the foundation for the tech worker dissent we see today.
Anil talks to Leigh Honeywell, CEO and founder of Tall Poppy and creator of the Never Again pledge, about how her early work, along with others, helped galvanize tech workers to connect the dots between different issues in tech.
I thought I was familiar with most of Leigh’s work, but I realized that wasn’t the case: her involvement with the Never Again pledge had somehow escaped my attention.
Here’s the pledge’s Introduction:
We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration’s proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs. We refuse to facilitate mass deportations of people the government believes to be undesirable.
We have educated ourselves on the history of threats like these, and on the roles that technology and technologists played in carrying them out. We see how IBM collaborated to digitize and streamline the Holocaust, contributing to the deaths of six million Jews and millions of others. We recall the internment of Japanese Americans during the Second World War. We recognize that mass deportations precipitated the very atrocity the word genocide was created to describe: the murder of 1.5 million Armenians in Turkey. We acknowledge that genocides are not merely a relic of the distant past—among others, Tutsi Rwandans and Bosnian Muslims have been victims in our lifetimes.
Today we stand together to say: not on our watch, and never again.
The episode reminded me that while I am not an employee in the United States directly complicit in facilitating deportation, as a Canadian academic librarian I am not entirely free of complicity either: I am employed at a university that subscribes to WESTLAW.
The Intercept is reporting on Thomson Reuters’ response to Privacy International’s letter to TRI CEO Jim Smith, which expressed the watchdog group’s “concern” over the company’s involvement with ICE. According to The Intercept, “Thomson Reuters Special Services sells ICE ‘a continuous monitoring and alert service that provides real-time jail booking data to support the identification and location of aliens’ as part of a $6.7 million contract, and West Publishing, another subsidiary, provides ICE’s ‘Detention Compliance and Removals’ office with access to a vast license-plate scanning database, along with agency access to the Consolidated Lead Evaluation and Reporting, or CLEAR, system.” The two contracts together are worth $26 million. The article observes that “the company is ready to defend at least one of those contracts while remaining silent on the rest.”
I also work at a library that subscribes to products provided by Elsevier, whose parent company is the RELX Group.
In 2015, Reed Elsevier rebranded itself as RELX and moved further away from traditional academic and professional publishing. This year, the company purchased ThreatMetrix, a cybersecurity company that specializes in tracking and authenticating people’s online activities, which even tech reporters saw as a notable departure from the company’s prior academic publishing role.
In some libraries, there are particular collections in which the objects are organized by the order in which they were acquired (at my place of work, our relatively small collection of movies on DVD is ordered this way). This practice makes it easy to see at a glance what has most recently been received or newly published. Such collections are easy to start and maintain, as you just have to sort them by ‘acquisition number’.
But you would be hard pressed to find a good reason to organize a large amount of material this way. Eventually a collection grows too large to browse in its entirety, and people start telling you that they would rather browse the collection by author name, or by publication year, or by subject. But to allow for this means organizing the collection, and let me tell you, my non-library-staff friends, such organization is a lot of bother: it takes time, thought, and consistent diligence.
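The difference is easy to see in a small sketch (hypothetical records, not any real catalogue schema): ordering by acquisition number needs nothing but an ever-increasing counter, while every other browse order depends on metadata that someone has to record consistently for every item.

```python
# A minimal sketch with made-up catalogue records. The field names
# (acq_no, author, year, title) are illustrative only.
records = [
    {"acq_no": 3, "author": "Varda", "year": 1962, "title": "Cleo from 5 to 7"},
    {"acq_no": 1, "author": "Kubrick", "year": 1975, "title": "Barry Lyndon"},
    {"acq_no": 2, "author": "Akerman", "year": 1975, "title": "Jeanne Dielman"},
]

# Shelving by acquisition number: one automatic key, no cataloguing effort.
by_acquisition = sorted(records, key=lambda r: r["acq_no"])

# Browsing by author, or by year then author, only works if each record
# already carries that metadata -- the "bother" of consistent cataloguing.
by_author = sorted(records, key=lambda r: r["author"])
by_year_then_author = sorted(records, key=lambda r: (r["year"], r["author"]))
```

The acquisition-number ordering is self-maintaining; the others break down the moment a record arrives with a missing or inconsistently entered field.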
Which is why we are where we are with today’s state of the web.
Early homepages were like little libraries…
A well-organized homepage was a sign of personal and professional pride — even if it was nothing but a collection of fun gifs, or instructions on how to make the best potato guns, or homebrew research on gerbil genetics.
Dates didn’t matter all that much. Content lasted longer; there was less of it. Older content remained in view, too, because the dominant metaphor was table of contents rather than diary entry.
Everyone with a homepage became a de facto amateur reference librarian.
Movable Type didn’t just kill off blog customization.
It (and its competitors) actively killed other forms of web production.
Non-diarists — those folks with the old school librarian-style homepages — wanted those super-cool sidebar calendars just like the bloggers did. They were lured by the siren of easy use. So despite the fact that they weren’t writing daily diaries, they invested time and effort into migrating to this new platform.
They soon learned the chronostream was a decent servant, but a terrible master.
We no longer build sites. We generate streams.
All because building and maintaining a library is hard work.