It appears that I haven’t written a single post on this blog since July of 2018. Perhaps it is all the talk of resolutions around me but I sincerely would like to write more in this space in 2019. And the best way to do that is to just start.
This week on Function, we take a look at the rising labor movement in tech by hearing from those whose advocacy was instrumental in laying the foundation for the dissent we see from tech workers today.
Anil talks to Leigh Honeywell, CEO and founder of Tall Poppy and creator of the Never Again pledge, about how her early work, along with others, helped galvanize tech workers to connect the dots between different issues in tech.
I thought I was familiar with most of Leigh’s work but I realized that wasn’t the case because somehow her involvement with the Never Again pledge escaped my attention.
Here’s the pledge’s Introduction:
We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration’s proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs. We refuse to facilitate mass deportations of people the government believes to be undesirable.
We have educated ourselves on the history of threats like these, and on the roles that technology and technologists played in carrying them out. We see how IBM collaborated to digitize and streamline the Holocaust, contributing to the deaths of six million Jews and millions of others. We recall the internment of Japanese Americans during the Second World War. We recognize that mass deportations precipitated the very atrocity the word genocide was created to describe: the murder of 1.5 million Armenians in Turkey. We acknowledge that genocides are not merely a relic of the distant past—among others, Tutsi Rwandans and Bosnian Muslims have been victims in our lifetimes.
Today we stand together to say: not on our watch, and never again.
The episode reminded me that while I am not an employee in the United States who is directly complicit in the facilitation of deportation, as a Canadian academic librarian I am not entirely free from some degree of complicity, as I am employed at a university that subscribes to WESTLAW.
The Intercept is reporting on Thomson Reuters’ response to Privacy International’s letter to TRI CEO Jim Smith expressing the watchdog group’s “concern” over the company’s involvement with ICE. According to The Intercept, “Thomson Reuters Special Services sells ICE ‘a continuous monitoring and alert service that provides real-time jail booking data to support the identification and location of aliens’ as part of a $6.7 million contract, and West Publishing, another subsidiary, provides ICE’s ‘Detention Compliance and Removals’ office with access to a vast license-plate scanning database, along with agency access to the Consolidated Lead Evaluation and Reporting, or CLEAR, system.” The two contracts together are worth $26 million. The article observes that “the company is ready to defend at least one of those contracts while remaining silent on the rest.”
I also work at a library that subscribes to products that are provided by Elsevier and whose parent company is the RELX Group.
In 2015, Reed Elsevier rebranded itself as RELX and moved further away from traditional academic and professional publishing. This year, the company purchased ThreatMetrix, a cybersecurity company that specializes in tracking and authenticating people’s online activities, which even tech reporters saw as a notable departure from the company’s prior academic publishing role.
In some libraries, there are particular collections in which the objects are organized by the order in which they were acquired (at my place of work, our relatively small collection of movies on DVD is ordered this way). This practice makes it easy for a person to quickly see what has been most recently received or what’s been newly published. Such collections are easy to start and maintain, as you just have to sort them by ‘acquisition number’.
But you would be hard pressed to find a good reason to organize a large amount of material this way. Eventually a collection grows too large to browse in its entirety, and you have people telling you that they would rather browse the collection by author name, or by publication year, or by subject. But to allow for this means organizing the collection, and let me tell you, my non-library staff friends, such organization is a lot of bother — it takes time, thought, and consistent diligence.
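To make the distinction concrete, here is a small Python sketch (the records are invented for illustration): a collection kept in acquisition order needs no upkeep at all, while offering browsing by author or year means building and maintaining additional orderings.

```python
# Invented example records: a small collection kept in acquisition order,
# versus the sorted views that patrons actually ask for.
collection = [
    {"acq": 1, "author": "Varda", "year": 2000, "title": "The Gleaners and I"},
    {"acq": 2, "author": "Marker", "year": 1983, "title": "Sans Soleil"},
    {"acq": 3, "author": "Akerman", "year": 1975, "title": "Jeanne Dielman"},
]

# Acquisition order is free: it is simply the order in which items arrived.
newest_first = sorted(collection, key=lambda r: r["acq"], reverse=True)

# Browsing by author or by year means building (and forever maintaining)
# other orderings of the same material.
by_author = sorted(collection, key=lambda r: r["author"])
by_year = sorted(collection, key=lambda r: r["year"])
```

The sorting itself is cheap; what is costly, as any cataloguer will tell you, is keeping the author names and subject terms consistent enough for such orderings to mean anything.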
Which is why we are where we are with today’s state of the web.
Early homepages were like little libraries…
A well-organized homepage was a sign of personal and professional pride — even if it was nothing but a collection of fun gifs, or instructions on how to make the best potato guns, or homebrew research on gerbil genetics.
Dates didn’t matter all that much. Content lasted longer; there was less of it. Older content remained in view, too, because the dominant metaphor was table of contents rather than diary entry.
Everyone with a homepage became a de facto amateur reference librarian.
Movable Type didn’t just kill off blog customization.
It (and its competitors) actively killed other forms of web production.
Non-diarists — those folks with the old school librarian-style homepages — wanted those super-cool sidebar calendars just like the bloggers did. They were lured by the siren of easy use. So despite the fact that they weren’t writing daily diaries, they invested time and effort into migrating to this new platform.
They soon learned the chronostream was a decent servant, but a terrible master.
We no longer build sites. We generate streams.
All because building and maintaining a library is hard work.
Some idle googling on my part makes me think that the documentary might have been ‘Nice Guys Finish First’ by not-so-nice guy Richard Dawkins but I am more inclined to think it was a PBS documentary.
What I can say with much more confidence is that whatever documentary I happened to watch, combined with my subscription to The Whole Earth Review, primed me for a future interest in population biology that I pursued at university until I switched from a degree in biology to one in Geography and Environmental Science.
I have much more specific knowledge of when I first became interested in the theory of games.
Years ago, I bought the September 2003 issue of Games Magazine off the newsstand, despite the fact that the magazine was clearly more about puzzles than games. From that issue I discovered that the puzzles it contained were all way above my ability, but there was one article that caught my attention: Metagaming 101 by W. Eric Martin. The article begins:
Games without change, like War and Chutes & Ladders, are games without choices; they incorporate change only in the smallest, most random ways. Other than choosing to play or quit, players of these games can do nothing more than follow fate’s fickle finger until a winner emerges. Only children have patience for such games; more experienced players yearn for a higher level of change and the choices that accompany it.
At the other end of the change continuum lies chaos, a swirling mass of rules and playing pieces that survive only on whim. The perfect example: Calvinball. Again, only children can tolerate such games; other players require a structured set of rules for change that they can refer to as needed.
But there are game designers who encourage rule-breaking via the concept of *meta-rules* — that is, rules within a game that change the rules of the game itself. With meta-rules, players can explore any point they wish on a change continuum simply by altering the rules of a game.
Since September 2003, I have read several books on the theory of games, including A Theory of Fun for Game Design, The Art of Game Design: A Book of Lenses, Rules of Play: Game Design Fundamentals, How to Do Things with Videogames, What Video Games Have to Teach Us about Learning and Literacy, Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games, and Minds on Fire: How Role-Immersion Games Transform College.
Now, the reading of books does not make one an expert and I don’t consider myself an expert on the theory of games. I have approximate knowledge of the theory of games.
I sometimes joke that the true purpose of metrics within scholarly communication is to avoid reading.
This is an allusion to the common practice of many tenure and promotion committees whose members don’t read the research of the scholar they are assessing. Instead, they tally up the number of prominent journals that the scholar has published in. The perceived quality of the journal is transmuted into the perceived quality of the work that the scholar has produced.
And so, as the smallest gesture against this state of affairs, I have decided to celebrate the reading of scholarship. Well, I’m going to try to read more of it.
This article proposes the “rules mutable game” as a metaphor for understanding the operation of copyright reform. Using the game of Calvinball (created by artist Bill Watterson in his long-running comic strip Calvin & Hobbes) as an illustrative device, and drawing on public choice theory’s account of how political change is effected by privileged interests, the article explores how the notion of a game in which players can modify the rules of the game while it is being played accounts for how users are often disadvantaged in copyright reform processes. The game metaphor also introduces a normative metric of fairness into the heart of the assessment of the copyright reform process from the standpoint of the user. The notion of a rules mutable game tells us something important about the kinds of stories we should be telling about copyright and copyright reform. The narrative power of the “fair play” norm embedded in the concept of the game can facilitate rhetoric which does not just doom users to dwell on their political losses, but empowers them to strategize for future victories.
I enjoyed the article but I would like to spend a little time on Tarantino’s assertion that a “game metaphor contains an inherent ethical vision.” While I take his point that most of us assume that all games are fair, I don’t think Calvinball is the game metaphor that one should first reach for, especially as law itself is already a rules-mutable system.
I would suggest instead to consider the concept of the infinite game.
Finite games are the familiar contests of everyday life; they are played in order to be won, which is when they end. But infinite games are more mysterious. Their object is not winning, but ensuring the continuation of play. The rules may change, the boundaries may change, even the participants may change—as long as the game is never allowed to come to an end.
The goal of the infinite game is to keep playing — to explore every way to play the game, to include all games, all possible players, to widen what is meant by playing, to spend all, to hoard nothing, to seed the universe with improbable plays, and if possible to surpass everything that has come before.
Nearly every game discussed thus far, no matter how successful on its own, owes a debt to Nomic, a rule-changing game that has spawned hundreds of variations over the past two decades.
Nomic was created in 1982 by Peter Suber, a professor of philosophy at Earlham College, as an appendix to his book The Paradox of Self-Amendment. This book explored the possible complications of a government system (such as that of the U.S.) in which a constitution includes rules for self-amendment. As Suber wrote, “While self-amendment appears to be an esoteric feature of law, capturing it in a game creates a remarkably complete microcosm of a functional legal system.”
As created, Nomic consists of a two-tiered system of 16 “immutable” and 13 “mutable” rules. Players take turns proposing rule changes and new amendments, and earn points by voting and throwing a die. The first player to achieve 100 points wins.
As dry as this sounds, games of Nomic can quickly explode in unimaginable directions. Perhaps the winner must now achieve 1,000 points — make that 1,000 points and the title “Supreme Overlord.” How does a player become titled? Propose a rule. On second thought, forget points; let’s give every rule a color and now someone wins by passing proposals that are colored green, red, and brown. “The ability of Nomic to change itself is a wonderful thing,” says Kevan Davis. “If the game ever starts to become boring, it changes to whatever people think is less boring. If it’s going too fast, it can be slowed down; if it’s going too slowly, it can be speeded up. If people think it could use fewer dice and more rubber-band firing, then it gets fewer dice and more rubber-band firing.”
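Purely as an illustration (this is my own toy Python, not anything from Martin’s article or Suber’s rules), Nomic’s two-tiered rule book can be sketched as a small data structure: passed proposals may amend mutable rules, and a rule can be transmuted from the immutable tier to the mutable one before it is changed.

```python
# A toy sketch of Nomic's two-tiered rule book. Immutable rules cannot
# be amended directly; mutable rules can be changed by a passed proposal.
class RuleBook:
    def __init__(self, immutable, mutable):
        self.immutable = dict(immutable)  # rule number -> rule text
        self.mutable = dict(mutable)

    def amend(self, number, new_text):
        """Apply a passed proposal to a mutable rule."""
        if number in self.immutable:
            raise ValueError(f"rule {number} is immutable; transmute it first")
        self.mutable[number] = new_text

    def transmute(self, number):
        """Move a rule from the immutable tier to the mutable tier."""
        if number in self.immutable:
            self.mutable[number] = self.immutable.pop(number)

rules = RuleBook(
    immutable={101: "All players must abide by the rules."},
    mutable={201: "Players earn points by die roll."},
)
rules.amend(201, "The winner needs 1,000 points and the title Supreme Overlord.")
rules.transmute(101)  # now even rule 101 is open to amendment
```

The whole character of the game lives in that `transmute` step: once the rules about changing rules are themselves changeable, Davis’s “wonderful thing” follows.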
Here’s a book that I haven’t read: The Glass Bead Game by Hermann Hesse. I am going to trust Wikipedia that the description of the book is accurate:
The Glass Bead Game takes place at an unspecified date centuries into the future. Hesse suggested that he imagined the book’s narrator writing around the start of the 25th century. The setting is a fictional province of central Europe called Castalia, which was reserved by political decision for the life of the mind; technology and economic life are kept to a strict minimum. Castalia is home to an austere order of intellectuals with a twofold mission: to run boarding schools for boys, and to cultivate and play the Glass Bead Game, whose exact nature remains elusive and whose devotees occupy a special school within Castalia known as Waldzell. The rules of the game are only alluded to—they are so sophisticated that they are not easy to imagine. Playing the game well requires years of hard study of music, mathematics, and cultural history. The game is essentially an abstract synthesis of all arts and sciences. It proceeds by players making deep connections between seemingly unrelated topics… The plot chronicles Knecht’s education as a youth, his decision to join the order, his mastery of the Game, and his advancement in the order’s hierarchy to eventually become Magister Ludi, the executive officer of the Castalian Order’s game administrators.
This is not the only time I have witnessed academia being understood as a game.
I read Scott Nicholson’s delightful Quest for Tenure: A Choose-Your-Own Adventure when I visited the Rare Books Room of the Stephen A. Schwarzman Building of the New York Public Library. Scott was one of many contributors to a book written in a single night called 100 ways to make history.
Chess is a deep and important human artifact, about which much of value has been written. But some philosophical research projects are more like working out the truths of chmess. Chmess is just like chess except that the king can move two squares in any direction, not one. I just invented it—though no doubt others have explored it in depth to see if it is worth playing. Probably it isn’t. It probably has other names. I didn’t bother investigating these questions because although they have true answers, they just aren’t worth my time and energy to discover. Or so I think. There are just as many a priori truths of chmess as there are of chess (an infinity), and they are just as hard to discover. And that means that if people actually did get involved in investigating the truths of chmess, they would make mistakes, which would need to be corrected, and this opens up a whole new field of a priori investigation, the higher-order truths of chmess, such as the following:
1. Jones’ (1989) proof that p is a truth of chmess is flawed: he overlooks the following possibility …
2. Smith’s (2002) claim that Jones’ (1989) proof is flawed presupposes the truth of Brown’s lemma (1975), which has recently been challenged by …
Dennett holds that the playing of chmess is much more of a concern for philosophy than for other disciplines because:
Philosophy is an a priori discipline, like mathematics, or at least it has an a priori methodology at its core, and this fact cuts two ways. On the one hand, it excuses philosophers from spending tedious hours in the lab or the field, and from learning data-gathering techniques, statistical methods, geography, history, foreign languages …, empirical science, so they have plenty of time for honing their philosophical skills. On the other hand, as is often noted, you can make philosophy out of just about anything, and this is not always a blessing.
The following is an excerpt from my contribution to “Librarian Origin Story” in Schroeder, R., Deitering, AM, Stoddart, R., The Self as Subject: Autoethnographic Research into Identity, Culture, and Academic Librarianship, Association of College and Research Libraries, 2017.
In 2010, Jane McGonigal had a public conversation with Stewart Brand as part of an event called The Long Conversation that was put on by The Long Now Foundation. Jane McGonigal started the conversation by bringing up Stewart Brand’s past experience with game design as part of the “New Games Movement” in the late 1970s. McGonigal asked Brand if the New Games movement was designed to “change the world”; Brand said yes, and told her his game-design origin story.
During the late 70s, he and friends were talking about how the Cold War was being played out by “rules” that would only result in bad endings for everyone and as such, the rules of the Cold War needed to change. And Brand thought about when he was a kid, when he and his friends changed the rules all the time. For example, kids would change the rules of the game of stickball that they were playing to accommodate any new kids who arrived to play. And so he and his friends started creating New Games for adults to explore and play in a world that they would rather live in.
Also in 2010, I was invited to be a participant in the Evoke Summit held at the World Bank headquarters in Washington DC, where I had the chance to meet and thank Jane McGonigal in person. The summit was a reward for the winners of the game who had come up with their winning proposals for social entrepreneurial projects, and the two days were filled with activities geared to making those proposals a reality. One of the activities was to work on a short memorable tagline for one’s work that would distill the essence of who you are and what you want to achieve. Eventually I came up with this phrase for myself that I still feature on my professional portfolio: Changing the rules so more can win.
I know that this article had already made the rounds among some library people because I saw the piece being recommended and retweeted online. Chris Bourg, Director of Libraries at MIT, chose not to read this article.
What did I learn from this exchange?
That some men think an individual choice to not read something w/ 0 women cited or quoted, noting that “I feel like maybe I’ve already heard men talk about scientific papers” is more sexist than not citing women + https://t.co/5EWR8vuQB0
Not to be presumptuous, but I like to think that I understand her reasons and her reaction. I say this because whenever I read a list – especially a list that promises some form of universal canon (oh, say for a manual for civilization) and there are few to no women or non-white people (or non-white women), more often than not that list registers to me as deficient.
Are Bret’s reading choices any of our business? Maybe not.
Although… they might be if they are the same books that are being used to form the canon of DynamicLand.
Enough with my moral reproach, scolding and lecturing! Let me tell you about DynamicLand! Because gender representation of its bookshelf notwithstanding, I think it’s an absolutely remarkable endeavour.
I have found the idea of situated knowledge very useful as both a feminist and someone who has a degree in science. This work has helped me reconcile these two selves. I also have found the concept useful in some of my own thinking in librarianship (see post: The Observer or Seeing What You Mean).
I like this definition of Situated Knowledge from The Oxford Dictionary of Human Geography:
The idea that all forms of knowledge reflect the particular conditions in which they are produced, and at some level reflect the social identities and social locations of knowledge producers. The term was coined by historian of science Donna Haraway in Simians, Cyborgs, and Women: the Reinvention of Nature (1991) to question what she regarded as two dangerous myths in Western societies. The first was that it is possible to be epistemologically objective, to somehow be a neutral mouthpiece for the world’s truths if one adopts the ‘right’ method of inquiry. The second myth was that science and scientists are uniquely and exclusively equipped to be objective. Haraway was not advocating relativism. Instead, she was calling for all knowledge producers to take full responsibility for their epistemic claims rather than pretending that ‘reality’ has definitively grounded these claims.
We can and should take full responsibility for what we see and to recognize that what we see is what we choose to see.
It’s an admitted generalization but let me suggest that when academics compose a scholarly article they tend to introduce their work with a positioning statement that expresses the tradition of thought that their work extends. This might be done explicitly like Dr. Jhangiani did in his keynote or quietly through the careful choice of whose definitions were used to set the table for the work.
The adjective ‘scientific’ is not attributed to isolated texts that are able to oppose the opinion of the multitude by virtue of some mysterious faculty. A document becomes scientific when its claims stop being isolated and when the number of people engaged in publishing it are many and explicitly indicated in the text. When reading it, it is on the contrary the reader who becomes isolated. The careful marking of the allies’ presence is the first sign that the controversy is now heated enough to generate technical documents.
Latour B. Science in action: how to follow scientists and engineers through society. Cambridge: Harvard University Press; 2005. p. 33.
As someone who is only five months into my position of Scholarly Communications Librarian, I’ve been particularly mindful of how people talk about scholarship and the various measures and analytics we use to describe scholarly work.
I try to keep in mind that metrics are just a shadow of an object. You can make a shadow larger by trying to angle yourself in various ways towards the sun but you shouldn’t forget that when you make your shadow larger this way, the object casting the shadow does not change.
When we only think of metrics, we end up chasing shadows.
When a faculty member assesses the quality of a peer’s work, they take the publication source as a measure of the quality of that work. The unsaid rule is that each scholar, if they could, would always publish in the highest-ranked journal in their field, and any choice to publish anywhere else must be because the work in question was not good enough. Any article published in a higher-ranked journal is better than any article in a lower-ranked journal.
And yet it’s easy to forget that the rankings behind ‘highly ranked journals’ are calculated using formulas that process the collected sum and speed of citations. In the end, journal ranking can also be reconsidered as a measure of readership.
Instead of positioning our work as ‘how to increase your h-index’, we should not forget that each citation is an author who we can also consider (perhaps charitably) a reader.
When I was the lead of Open Data Windsor Essex, we hosted a wonderful talk from Detroiter Alex Hill called Giving Data Empathy. What he reminded us in his talk was that behind each data point in his work was a person and that it was essential to remember how diminished that person is when they are reduced to a ‘count’.
Occasionally I have the opportunity to give high school students an introduction to research in a university context. During this introduction I show them an example of a ‘scholarly paper’ so they can take in the visual cues that might help them recognize other scholarly papers in their future.
After I point out the important features, I take the time to point out this piece of dynamic text on the page:
I know these citation counts come from CrossRef because I have an old screen capture that shows that the citation count section used to look like this:
It is an accepted notion that the normative view of science expounded by Merton provided a sociological interpretation of citation analysis in the late 1960s and 70s. According to his theory, recognition of the previous work of scientists and of the originality of their work is an institutional form of awarding rewards for efforts. Citations are a means of providing such recognition and reward.
Latour’s views of citations are part of his research on the social construction of scientific facts and laboratories, science in the making as contrasted with ready made science, that is beliefs which are treated as scientific facts and are not questioned… In this phase, according to Latour, references in articles are among the resources that are under author’s command in their efforts at trying to “make their point firm” and to lend support to their knowledge claims. Other “allies” or resources are, for example, the editors of the journals which publish the articles, the referees of the journals, and the research funds which finance the pieces of research…
Latour’s theory has an advantage over that of Merton’s in that it can explain many of the findings made in the so-called citation content and context studies mentioned. These findings relate to the contents of citations, which are vastly different and vary from one situation to another; also the fact that the surrounding textual contexts in which they are used differ greatly. Such differences include whether citations are positive or negational, essential to the references text or perfunctory, whether they concern concepts or techniques or neither, whether they provide background reading, alert readers to new work, provide leads, etc.
The above passage is from page 29 of the article.
On page 31, you can find this passage:
The Latourian views have been largely ignored by the bibliographic community in their discussions about citations. The reasons why this is so are intriguing. An important conceptual reason is presumably the fact that in Latourian theory, the major function of references is to support the knowledge claims of the citing author. This explanation does not legitimate major uses of citation indexing: its use as a performance measure – as in the use of citation counts, which presupposes that references indicate a positive assessment of the cited document — or as an indication of the development of specialties – as in co-citation analysis.
What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”
Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)
The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.
Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself.
The article goes on to present two software-driven alternatives to the PDF paper-mimicking practices of academia: notebooks from Wolfram’s proprietary Mathematica and open-source Jupyter notebooks.
The more people believe in a statement and use it as an unquestioned fact, as a black box, the more it undergoes transformations. It may even undergo a process which Latour calls stylisation or erosion, but which Garfield called obliteration by incorporation, that is, a scientist’s work becomes so generic to the field, so integrated into its body of knowledge, that people neglect to cite it explicitly.
One example will illustrate what I mean. La Pérouse travels through the Pacific for Louis XVI with the explicit mission of bringing back a better map. One day, landing on what he calls Sakhalin he meets with Chinese and tries to learn from them whether Sakhalin is an island or a peninsula. To his great surprise the Chinese understand geography quite well. An older man stands up and draws a map of his island on the sand with the scale and the details needed by La Pérouse. Another, who is younger, sees that the rising tide will soon erase the map and picks up one of La Pérouse’s notebooks to draw the map again with a pencil . . .
What are the differences between the savage geography and the civilized one? There is no need to bring a prescientific mind into the picture, nor any distinction between the closed and open predicaments (Horton, 1977), nor primary and secondary theories (Horton, 1982), nor divisions between implicit and explicit, or concrete and abstract geography. The Chinese are quite able to think in terms of a map but also to talk about navigation on an equal footing with La Pérouse. Strictly speaking, the ability to draw and to visualize does not really make a difference either, since they all draw maps more or less based on the same principle of projection, first on sand, then on paper. So perhaps there is no difference after all and, geographies being equal, relativism is right. This, however, cannot be, because La Pérouse does something that is going to create an enormous difference between the Chinese and the European. What is, for the former, a drawing of no importance that the tide may erase, is for the latter the single object of his mission. What should be brought into the picture is how the picture is brought back. The Chinese does not have to keep track, since he can generate many maps at will, being born on this island and fated to die on it. La Pérouse is not going to stay for more than a night; he is not born here and will die far away. What is he doing, then? He is passing through all these places, in order to take something back to Versailles where many people expect his map to determine who was right and wrong about whether Sakhalin was an island, who will own this and that part of the world, and along which routes the next ships should sail.
Science requires a paper to be brought back from our endeavours.
Pérez told me stories of scientists who sacrificed their academic careers to build software, because building software counted for so little in their field: The creator of matplotlib, probably the most widely used tool for generating plots in scientific papers, was a postdoc in neuroscience but had to leave academia for industry. The same thing happened to the creator of NumPy, a now-ubiquitous tool for numerical computing. Pérez himself said, “I did get straight-out blunt comments from many, many colleagues, and from senior people and mentors who said: Stop doing this, you’re wasting your career, you’re wasting your talent.” Unabashedly, he said, they’d tell him to “go back to physics and mathematics and writing papers.”
What else is software but writing on sand?
I wanted to highlight Bret Victor’s work to my fellow library workers for what I thought were several reasons. But the more I thought about it, the more reasons came to mind. Rather than try your patience any longer, consider this the potential beginning of a short series of blog posts.
It’s only now that I am following up on an exploration of the potential of linked data through Wikidata, an idea first planted in my head by the incomparable Dan Scott in his presentation Wicked data with Wikidata from waaaay back in February of this year. Dan Scott did not steer me wrong. The potential power of Wikidata is very impressive.
The timeline above is a live query to Wikidata using this code. I could have added other search parameters, such as which wrestlers have won the Tom Longboat Award, or which women have, but that really doesn’t express the power of searching Wikidata. Wikidata, if the data is available, allows you to find painters who were the sons of painters, the current women mayors of cities with over a million residents, and the authors who wrote their first published work after the age of 45. It’s a fundamentally different type of searching, one that allows for language normalization and data reuse at scale.
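To illustrate, here is a minimal sketch in Python of the kind of SPARQL query that makes “painters who were the sons of painters” answerable. The identifiers used (P106 for occupation, P22 for father, Q1028181 for painter) are standard Wikidata properties and items, and the endpoint is the public Wikidata Query Service; treat this as a sketch rather than the query behind the timeline above.

```python
# Sketch: a SPARQL query for "painters who were the sons of painters",
# runnable against the public Wikidata Query Service.
import json
import urllib.parse
import urllib.request

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"


def painters_sons_of_painters_query(limit=10):
    """Return a SPARQL query using Wikidata's P106 (occupation),
    P22 (father), and Q1028181 (painter) identifiers."""
    return f"""
    SELECT ?person ?personLabel ?father ?fatherLabel WHERE {{
      ?person wdt:P106 wd:Q1028181 .   # the person's occupation is painter
      ?person wdt:P22 ?father .        # the person's father
      ?father wdt:P106 wd:Q1028181 .   # the father's occupation is painter
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT {limit}
    """


def run_query(query):
    """POST a query to the endpoint and return the parsed JSON bindings."""
    data = urllib.parse.urlencode({"query": query, "format": "json"}).encode()
    req = urllib.request.Request(
        WDQS_ENDPOINT, data=data, headers={"User-Agent": "wikidata-sketch/0.1"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]

# Usage (requires network access):
#   for row in run_query(painters_sons_of_painters_query(limit=5)):
#       print(row["personLabel"]["value"], "son of", row["fatherLabel"]["value"])
```

Swapping one occupation or one property for another is all it takes to turn this into the mayors or the late-debut-authors query, which is the sense in which this searching is fundamentally different.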
It’s not often that I recommend watching a three hour video, but I am going to recommend watching this three hour video: A Gentle Introduction to Wikidata for Absolute Beginners [including non-techies!]. Presenter Asaf Bartov does an exceptional job of slowly building understanding and bringing the viewer with him at a pace that doesn’t feel rushed but, like a good three hour walk, leaves you amazed at what ground you have managed to cover when you are finished.
For those of you who are reluctant to watch a three hour video before you know that you can apply it to your library work, please consider getting to the 2018 OLA Superconference to see Dan Scott, Stacy Allison-Cassin, Monica Fazekas, and Carolyn Doi present on Wikimedia Edit-A-Thon: Get Your Library on Wikidata, Wikipedia, and Wikimedia Commons. Like me, you might not see the immediate reason to incorporate Wikidata into your work. But give it time.
When I first started working at the University of Windsor in July of 1999, the first floor of the Leddy Library was largely taken up by stacks of reference books. The largest collection of the library’s private and semi-private study carrels were on the second floor.
I think there must have been a variety of reasons why the study carrels were placed on the second floor, the most obvious being that the library was designed to keep activities requiring concentration away from the distraction of people walking into the library and through its space.
But there’s another reason why such a separation existed, which is suggested by the fact that you can find an electrical outlet in every single study carrel on the second floor, even though the building came to be decades before laptops were available.
The answer is typewriters. Noisy, clattering typewriters.
While there is a rich conversation to be had about how some information literacy practices that treat research and writing as separate processes may be vestiges of avoiding typewriter noise, I’m more interested in exploring what the affordance of laptops might mean for the shape of the spaces of and within the library today.
Our print reference collection is now in the basement of the West Building of the Leddy Library. Much of the space on the first floor of the Main Building is filled with banks of computer workstations that we used to call our Learning Commons.
But the perceived need for banks of workstations has waned in libraries. You don’t see as many seas of desktops in newly constructed library buildings. Now the entire library is perceived as a Learning Commons.
(If you are a subscriber to my newsletter: in the above video there’s a reference to that theory I wrote about, which suggests that the most comfortable seating is the kind where you feel protected from behind.)
Of course, this tendency to make all chairs alike is fueled by the demands of prefabrication and the supposed economies of scale. Designers have for years been creating “perfect chairs” — chairs that can be manufactured cheaply en masse. These chairs are made to be comfortable for the average person. And the institutions that buy chairs have been persuaded that buying these chairs in bulk meets all their needs.
In reviewing their talk for this blog post I re-read from their slides this quotation from A Pattern Language:
This is a fundamental view of the world. It says that when you build a thing, you cannot merely build that thing in isolation, but must also repair the world around it, and within it, so that the larger world at one place becomes more coherent, and more whole; and the thing which you make takes its place in the web of nature, as you make it.
I had forgotten that I read that particular phrase – repair the world – in that text.
About three months ago I started a special interest group at Hackforge called Repair the World which is “a monthly meet-up of those who want to learn more about the technologies and policies we can employ in Windsor-Essex to lead us towards a carbon-neutral future and to help our community cope with the effects of global warming”.
For our first meeting, I didn’t do much other than set a time and place, give one suggested reading for potential discussion, and help set up the chairs in the space in a circle for our discussion.
There’s a thoughtlessness in how people consider their audience that’s reflected in how they set up chairs. You can see that thoughtlessness immediately…
… At a conference, if you want to create a discussion group, you can set up the chairs in a circle, and you don’t need a table…
… Setting up chairs takes a lot of time, but anyone can do it. If you’re running a project and you want to get people involved, ask them to set up chairs. People like to set up chairs, and it’s easy work to delegate. It’s even easier to get people to put chairs away.
The article describes how Stanford researcher Patricia Chen improved her class’s performance on a test by sending out a 15-minute pre-survey designed to get them thinking about how they were going to prepare. Chen was applying a metacognition intervention in her teaching practice.
According to the Educational Endowment Foundation (EEF), which performs studies to try and close achievement gaps, metacognition is one of two of the most effective educational interventions it has tested. (Feedback is the other.) Students involved in programs designed to improve how they think about thinking accelerated their learning by an average of eight months’ worth of academic progress. The effect was greatest for low-achieving and older pupils.
This article reminded me that I had unfinished work to do.
And yet I had not yet properly explained the thinking behind my thinking behind this project. I had meant to write up my work but I found I kept putting it off. This is particularly ironic because one of the objects in the box was specifically chosen by me to help students deal with procrastination.
So let me turn the dial of my tomato timer and do the work.
As with Drexel, Dewey, and Otlet before him, [Vannevar] Bush argued that speed of recall was key. Without it, one’s external store of facts would be useless. When he called his invention “an enlarged intimate supplement” to memory, the crucial word wasn’t so much “enlarged” or “supplement”; books had long enlarged and supplemented our minds. No, it was “intimate”—the idea that the memex would be physically and cognitively proximal, in a nearly sensual fashion. That was a key to its power. Indeed, Bush suspected the comparative difficulties of using libraries is what had prevented them from being of widespread use to the public. “Even the modern great library,” he wrote, “is not generally consulted; it is nibbled at by a few.” To truly harness our external knowledge, we needed to bring it closer to our minds.
But the passage from the book that came to mind, prompted by Andrew’s tweet, was its introduction to the science behind the learning technique of “spaced repetition”, which is based on the Ebbinghaus curve of forgetting:
Machines can also remind us of facts precisely when we need reminding. If you’ll recall the Ebbinghaus curve of forgetting from the second chapter, Ebbinghaus found that we forget things in a predictable pattern: More than half our facts are gone in an hour, about two thirds are gone within a day, and within a month we’re down to about 20 percent. Ebbinghaus and his followers theorized that this process could work in reverse. If you reviewed a fact one day after you first encountered it, you’d fight the curve of loss. This process is called “spaced repetition,” and experiments and anecdotes suggest it can work. It explains why students who cram for a test never retain much; the material dissolves because they never repeat it. But though spaced repetition is clever and effective, it has never caught on widely, because ironically, the technique relies on our frail human memories. How would you remember to review something the next day? Then a few days later, a week, and three months?
Machines, however, are superb at following these rote schedules. In the last decade, software programmers began selling tools intended to let you feed in facts, which the computer then reminds you to review on a reverse Ebbinghaus curve. Use of this software has remained a subculture, mostly by people seeking to learn a foreign language, though devout adherents use it to retain everything from recipes to poetry…
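The reverse-Ebbinghaus schedule the passage describes can be sketched in a few lines of Python. The intervals here (one day, three days, a week, three months) are illustrative, taken from the passage’s “the next day, then a few days later, a week, and three months” rather than from any particular piece of flashcard software:

```python
from datetime import date, timedelta

# Illustrative spaced-repetition intervals, in days, echoing the passage:
# review the next day, a few days later, a week later, then three months later.
REVIEW_INTERVALS = [1, 3, 7, 90]


def review_schedule(first_seen, intervals=REVIEW_INTERVALS):
    """Return the dates on which a fact first seen on `first_seen`
    should be reviewed to fight the curve of forgetting."""
    return [first_seen + timedelta(days=d) for d in intervals]


schedule = review_schedule(date(2019, 1, 1))
# A fact learned on New Year's Day 2019 would be due for review on
# Jan 2, Jan 4, Jan 8, and April 1.
```

This is exactly the rote bookkeeping the passage says machines are superb at: the software need only remember the next due date for each fact.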
As librarians, we don’t concern ourselves with the memory work of our readers. Our focus is on the research process of scholarship and not on the learning and recall of said scholarship. And yet arguably more student time is spent studying in the library than researching within it.
For many of our students, much of their time is spent learning material. And despite the fact that students in some of our most prestigious programs need to memorize content (there is a good chance that your doctor, as a medical student, used flash cards or memory palaces to learn the biomedical foundation of their care), educators and librarians frequently choose to focus their teaching on ‘higher level learning’ instead.
Appealing though it might be to offload the responsibility for teaching our students basic knowledge to their elementary school teachers or to Google, the research of cognitive psychologists who study learning and the basic study habits of most students suggest that we cannot do this. One of our first and most important tasks as teachers is to help students develop a rich body of knowledge in our content areas – without doing so, we handicap considerably their ability to engage in cognitive activities like thinking and evaluating and creating. As cognitive psychologist Daniel Willingham argued, you can’t think creatively about information unless you have information in your head that you can think about. “Research from cognitive science has shown,” he explained, “that the sorts of skills that teachers want for their students — such as the ability to analyze and think critically — require extensive factual knowledge” (Willingham 2009, p. 25). We have to know things, in other words, to think critically about them. Without any information readily available to us in our brains, we tend to see new facts (from our Google searches) in isolated, noncontextual ways that lead to shallow thinking. Facts are related to other facts, and the more of those relationships we can see, the more we will prove capable of critical analysis and creative thinking. Students who don’t bother to memorize anything will never get much beyond skating the surface of a topic.
The above passage comes from James M. Lang, the author of Small Teaching: Everyday Lessons from the Science of Learning, which I found an extraordinarily helpful book. I included a passage from his “Small Teaching” in the “Teach me knowledge: Why study the facts” zine that I included in the Learning Objects box.
I also included a separate zine dedicated specifically to the topic of spaced repetition. To accompany the zine, I included a small box of index cards whose cards explained how to create a ‘Leitner Flashcard Game’ for one’s own learning goals.
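The Leitner system itself is simple enough to sketch in a few lines of Python, under its usual rules: a correctly answered card moves up one box, a missed card goes back to box 1, and lower-numbered boxes are reviewed more often. The three-box setup and the review cadence below are common conventions, not a description of the actual index cards in the Learning Objects box:

```python
# Minimal Leitner flashcard system: correct answers promote a card one box,
# wrong answers demote it back to box 1. Box 1 is reviewed every session,
# box 2 every second session, box 3 every fourth session (a common convention).
class LeitnerBox:
    def __init__(self, cards, num_boxes=3):
        self.num_boxes = num_boxes
        self.box = {card: 1 for card in cards}  # every card starts in box 1

    def review(self, card, correct):
        """Move a card up on a correct answer, back to box 1 on a miss."""
        if correct:
            self.box[card] = min(self.box[card] + 1, self.num_boxes)
        else:
            self.box[card] = 1

    def due(self, session):
        """Cards due in session `session`: box n is due every 2**(n-1) sessions."""
        return sorted(c for c, b in self.box.items()
                      if session % (2 ** (b - 1)) == 0)
```

For example, `deck = LeitnerBox(["mitosis", "meiosis"])` followed by `deck.review("mitosis", correct=True)` moves that card to box 2, so it comes up only every other study session while the missed card keeps reappearing.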
It’s one of my theories that when people give you advice, they’re really just talking to themselves in the past. ~Mark Epstein
The zines that accompany the Rubber Duck in the Learning Objects box are really for my past self.
Do you study by reading and re-reading your notes to yourself silently? Stop! I know it feels good, in a monkish, masochistic, pain equals progress sort of way to beat your brains against a book hour after hour, but it’s also a terribly inefficient way to review. Instead, lecture to an imaginary class, out-loud, about the main topics, without reading off your notes. If you can state an idea once, in complete sentences, out-loud, it will stick. You don’t need to re-read it a dozen times. If you can’t capture it out-loud then you don’t understand it yet. Go back. Review. Then try again.
My cataloging professor once quipped in class, “There are two kinds of people: the people who know where things are and the people who know where things should be.”
I used that line on my daughter this morning as she looked for her dress shoes because it was picture day at school. She loves her dress shoes and will wear them sometimes when she plays in her bedroom or in the living room or the TV room and so one can never quite be sure where her dress shoes are at a given time.
After I told her my professor’s passed-down wisdom, I gave her my addendum: “Whenever you share a space with other people you need places where everyone knows particular things belong, because when other people are involved you can never be sure where things are.” I then walked with her to the space under the stairs where I try to keep all our shoes, and we indeed found her prized possession.
I have been thinking about the necessity of communicating ‘where things should be’ in shared space ever since I read Peter Rukavina’s post on the Sachs-Neistat approach to labeling and then watched the video that inspired his post, Ten Bullets by sculptor Tom Sachs. Ten Bullets is part of a series of movies called “Working to Code,” designed for mandatory viewing by members of the Tom Sachs studio team. Like Peter, I have found this video inspiring.
I am on the Board of Directors of a small, community-focused hackerspace called Hackforge. We have occasional ‘organize the space’ days in which the people who know what things are and where they should be are present and able to guide the other members who don’t have this shared understanding.
It is difficult to clean up a work space if you don’t recognize what an object *is*, especially as many vernacular organizational systems group things together by function rather than by type (just like the grocery store, where baking supplies are clustered together in one aisle while dried beans can be found in another).
This is why one of our members designed an inventory system for our space that associates a wiki entry with an ID which is in turn associated with a QR code on a sticker that can be applied to a tool or bag of parts. So, if you are unsure what an object is, you can scan the QR code to read its wiki entry which, ideally, would also tell you where it belongs in the space. Unfortunately the whole system is still in an ‘idealized’ state as we haven’t done the work to inventory and describe most of the objects in our space.
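As a sketch of how such an inventory could work: a QR code need only encode an item ID, and everything else (wiki page, description, home location) hangs off a lookup of that ID. The IDs, fields, and wiki URL pattern below are hypothetical stand-ins, not Hackforge’s actual system:

```python
# Hypothetical inventory: a QR sticker encodes only an item ID; the ID
# resolves to a record holding the wiki page and the item's home location.
INVENTORY = {
    "HF-0042": {  # hypothetical ID
        "name": "bench power supply",
        "wiki": "https://wiki.example.org/items/HF-0042",  # hypothetical URL
        "belongs": "electronics bench, shelf 2",
    },
}


def lookup(qr_payload):
    """Resolve a scanned QR payload (the bare item ID) to a human answer."""
    record = INVENTORY.get(qr_payload)
    if record is None:
        return f"Unknown item {qr_payload!r}: time to catalogue it."
    return f"{record['name']} -> {record['wiki']} (belongs: {record['belongs']})"
```

The design choice worth noting is the indirection: because the sticker carries only an ID, the wiki entry can be corrected or moved without reprinting a single label.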
But now I know, thanks to Tom’s video, that if I don’t know where an object should go, I should A. B. K. – always be knolling.
My own place of work is another shared space of sorts. But as I am no Tom Sachs I am unable to unilaterally decide how we should label our materials within the Leddy Library. So I’ve decided to make small changes under my liaison responsibilities to see if they can make any noticeable differences that I can then share with my peers.
I work in an academic library and in my place of work we do not have a practice of labeling our bookshelf collections beyond providing call number ranges.
I think it’s safe to say that very few people beyond library staff know that call numbers encode the book’s subject and authorship.
The only indication that the Leddy Library is organized by subject are the Find it with LC posters that you can find throughout the library.
One thing I did some time ago was try to provide a better label for myself on my staff profile page, to give some idea of how I might be of use.