If the map becomes the territory then we will be lost

That which computation sets out to map and model it eventually takes over. Google sets out to index all human knowledge and becomes the source and the arbiter of that knowledge: it became what people think. Facebook set out to map the connections between people – the social graph – and became the platform for those connections, irrevocably reshaping societal relationships. Like an air control system mistaking a flock of birds for a fleet of bombers, software is unable to distinguish between the model of the world and reality – and, once conditioned, neither are we.

James Bridle, New Dark Age, p.39.

I am here to bring your attention to two developments that are making me worried:

  1. The Social Graph of Scholarly Communications is becoming more tightly bound into institutional metrics that have an increasing influence on institutional funding
  2. The publishers of the Social Graph of Scholarship are beginning to enclose the Social Graph, excluding the infrastructure of libraries and other independent, non-profit organizations

Normally, I would try to separate these ideas into two dedicated posts, but in this case I want to bring them together in writing because if these two trends converge, things will become very bad, very quickly.

Let me start with the first trend:

1. The social graph that binds

When I am asked to explain how to achieve a particular result within scholarly communication, more often than not, I find myself describing four potential options:

  1. a workflow of Elsevier products (BePress, SSRN, Scopus, SciVal, Pure)
  2. a workflow of Clarivate products (Web of Science, InCites, Endnote, Journal Citation Reports)
  2. a workflow of Springer Nature products (Dimensions, Figshare, Altmetric)
  4. a DIY workflow from a variety of independent sources (the library’s institutional repository, ORCiD, Open Science Framework)

Workflow is the new content.

That line – workflow is the new content – is from Lorcan Dempsey, and it was brought to my attention by Roger Schonfeld. For Open Access Week, I gave a presentation on this idea of being mindful of workflow and tool choices, entitled A Field Guide to Scholarly Communications Ecosystems. The slides are below.

(My apologies for not sharing the text that goes with the slides. Since January of this year, I have been the Head of the Information Services Department at my place of work. In addition to this responsibility, much of my time this year has been spent covering the work of colleagues currently on leave. Finding time to write has been a challenge.)

In Ontario, each institution of higher education must negotiate a ‘Strategic Mandate Agreement’ with its largest funding body, the provincial government. Universities are currently in the second iteration of these agreements and are preparing for the third round. These agreements are considered fraught by many, including Marc Spooner, a professor in the faculty of education at the University of Regina, who wrote the following in an opinion piece in University Affairs:

The agreement is designed to collect quantitative information grouped under the following broad themes: a) student experience; b) innovation in teaching and learning excellence; c) access and equity; d) research excellence and impact; and e) innovation, economic development and community engagement. The collection of system-wide data is not a bad idea on its own. For example, looking at metrics like student retention data between years one and two, proportion of expenditures on student services, graduation rates, data on the number and proportion of Indigenous students, first-generation students and students with disabilities, and graduate employment rates, all can be helpful.

Where the plan goes off-track is with the system-wide metrics used to assess research excellence and impact: 1) Tri-council funding (total and share by council); 2) number of papers (total and per full-time faculty); and 3) number of citations (total and per paper). A tabulation of our worth as scholars is simply not possible through narrowly conceived, quantified metrics that merely total up research grants, peer-reviewed publications and citations. Such an approach perversely de-incentivises time-consuming research, community-based research, Indigenous research, innovative lines of inquiry and alternative forms of scholarship. It effectively displaces research that “matters” with research that “counts” and puts a premium on doing simply what counts as fast as possible…

Even more alarming – and what is hardly being discussed – is how these damaging and limited terms of reference will be amplified when the agreement enters its third phase, SMA3, from 2020 to 2023. In this third phase, the actual funding allotments to universities will be tied to their performance on the agreement’s extremely deficient metrics.

“Ontario university strategic mandate agreements: a train wreck waiting to happen”, Marc Spooner, University Affairs, Jan 23 2018

The measure by which citation counts for each institution will be assessed has already been decided: the Ontario government has stated that it is going to use Elsevier’s Scopus (although I presume they really mean SciVal).

What could possibly go wrong? To answer that question, let’s look at the second trend: enclosure.

2. Enclosing the social graph

The law locks up the man or woman
Who steals the goose from off the common
But leaves the greater villain loose
Who steals the common from off the goose.

Anonymous, “The Goose and the Commons”

As someone who spends a great deal of time ensuring that the scholarship in the University of Windsor’s Institutional Repository meets the stringent restrictions set by publishers, I find it hard not to feel a slap in the face when reading Springer Nature Syndicates Content to ResearchGate.

ResearchGate has been accused of “massive infringement of peer-reviewed, published journal articles.”

They say that the networking site is illegally obtaining and distributing research papers protected by copyright law. They also suggest that the site is deliberately tricking researchers into uploading protected content.

Who is the they of the above quote? Why, they is the publishers, the American Chemical Society and Elsevier.

It is not uncommon to find selective enforcement of copyright within the scholarly communication landscape. Publishers have turned a blind eye to the copyright infringement of ResearchGate and Academia.edu for years, while targeting course reserve systems set up by libraries.

Any commercial system that is part of the scholarly communication workflow can be acquired for strategic purposes.

As I noted in my contribution to Grinding the Gears: Academic Librarians and Civic Responsibility, sometimes companies purchase competing companies as a means to control their development and even to shut their products down.

One of the least understood and thus least appreciated functions of calibre is that it uses the Open Publication Distribution System (OPDS) standard (opds-spec.org) to allow one to easily share e-books (at least those without Digital Rights Management software installed) to e-readers on the same local network. For example, on my iPod Touch, I have the e-reader program Stanza (itunes.apple.com/us/app/stanza/id284956128) installed and from it, I can access the calibre library catalogue on my laptop from within my house, since both are on the same local WiFi network. And so can anyone else in my family from their own mobile device. It’s worth noting that Stanza was bought by Amazon in 2011 and according to those who follow the digital e-reader market, it appears that Amazon may have done so solely for the purpose of stunting its development and sunsetting the software (Hoffelder, 2013).

“Grinding the Gears: Academic Librarians and Civic Responsibility”, Lisa Sloniowski, Mita Williams, Patti Ryan, Urban Library Journal, Vol. 19, No. 1 (2013). Special Issue: Libraries, Information and the Right to the City: Proceedings of the 2013 LACUNY Institute.

And sometimes companies acquire products to provide a tightly integrated suite of services and seamless workflow.


And, indeed, whatever model the university may select, if individual researchers determine that seamlessness is valuable to them, will they in turn license access to a complete end-to-end service for themselves or on behalf of their lab?  So, the university’s efforts to ensure a more competitive overall marketplace through componentization may ultimately serve only to marginalize it.

“Big Deal: Should Universities Outsource More Core Research Infrastructure?”, Roger C. Schonfeld, January 4, 2018

Elsevier bought BePress in August of 2017. In May of 2016, Elsevier acquired SSRN. BePress and SSRN are currently exploring further “potential areas of integration, including developing a single upload experience, conducting expanded research into rankings and download integration, as well as sending content from Digital Commons to SSRN.”

Now, let’s get to the recent development that has me nervous.

10.2 Requirements for Plan S compliant Open Access repositories

The repository must be registered in the Directory of Open Access Repositories (OpenDOAR) or in the process of being registered.

In addition, the following criteria for repositories are required:

  • Automated manuscript ingest facility
  • Full text stored in XML in JATS standard (or equivalent)
  • Quality assured metadata in standard interoperable format, including information on the DOI of the original publication, on the version deposited (AAM/VoR), on the open access status and the license of the deposited version. The metadata must fulfil the same quality criteria as Open Access journals and platforms (see above). In particular, metadata must include complete and reliable information on funding provided by cOAlition S funders. OpenAIRE compliance is strongly recommended.
  • Open API to allow others (including machines) to access the content
  • QA process to integrate full text with core abstract and indexing services (for example PubMed)
  • Continuous availability
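
Most of these criteria describe things repositories already do. To make the “Open API” criterion concrete, here is a minimal sketch of harvesting a repository’s Dublin Core records over OAI-PMH, the protocol that OpenAIRE harvesting is built on. The endpoint URL is hypothetical; substitute your own repository’s OAI base URL.

```python
# Minimal OAI-PMH harvesting sketch. The endpoint below is hypothetical --
# most institutional repositories publish a real one.
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

resp = requests.get(
    "https://ir.example.edu/oai/request",  # hypothetical OAI-PMH endpoint
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
)
resp.raise_for_status()

# Print the title and identifier of each harvested record.
for record in ET.fromstring(resp.content).iter(f"{OAI}record"):
    title = record.find(f".//{DC}title")
    identifier = record.find(f".//{DC}identifier")
    if title is not None:
        print(title.text, "--", identifier.text if identifier is not None else "")
```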

“Automated manuscript ingest facility” probably gives me the most pause. Automated means a direct pipeline from publisher to institutional repository, one that could be based on a publisher’s interpretation of fair use/fair dealing, and we don’t know what the ramifications of that decision-making might be. I’m feeling trepidation because I believe we are already experiencing the effects of a tighter integration between manuscript services and the IR.

Many publishers – including Wiley, Taylor and Francis, IEEE, and IOP – already use a third-party manuscript service called ScholarOne. ScholarOne integrates the iThenticate service, which produces reports of what percentage of a manuscript has already been published. Journal editors have the option to set the extent to which a paper can make use of a researcher’s prior work, including their thesis. Manuscripts that exceed these thresholds can be automatically rejected without human intervention from the editor. We are only just starting to understand how this workflow is going to affect the willingness of young scholars to make their theses and dissertations open access.

It is also worth noting that ScholarOne is owned by Clarivate Analytics, the parent company of Web of Science, InCites, Journal Citation Reports, and others. On one hand, having a non-publisher act as a third party to the publishing process is probably ideal since it reduces the chances of a conflict of interest. On the other hand, I’m very unhappy with Clarivate Analytics’ product Kopernio, which provides “fast, one-click access to millions of research papers” and “integrates with Web of Science, Google Scholar, PubMed” and 20,000 other sites (including ResearchGate and Academia.edu, natch). There are prominent links to Kopernio within Web of Science that essentially position the product as a direct competitor to a university library’s link resolver service and, in doing so, remove the library from the scholarly workflow – other than the fact that the library pays for the product’s placement.

The winner takes it all

The genius — sometimes deliberate, sometimes accidental — of the enterprises now on such a steep ascent is that they have found their way through the looking-glass and emerged as something else. Their models are no longer models. The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game.

Childhood’s End, Edge, George Dyson [1.1.19]

Blogging again and Never again

It appears that I haven’t written a single post on this blog since July of 2018. Perhaps it is all the talk of resolutions around me but I sincerely would like to write more in this space in 2019. And the best way to do that is to just start.

In December of last year I listened to Episode 7 of Anil Dash’s Function Podcast: Fn 7: Behind the Rising Labor Movement in Tech.

This week on Function, we take a look at the rising labor movement in tech by hearing from those whose advocacy was instrumental in setting the foundation for what we see today around the dissent from tech workers.

Anil talks to Leigh Honeywell, CEO and founder of Tall Poppy and creator of the Never Again pledge, about how her early work, along with others, helped galvanize tech workers to connect the dots between different issues in tech.

Fn 7: Behind the Rising Labor Movement in Tech

I thought I was familiar with most of Leigh’s work but I realized that wasn’t the case because somehow her involvement with the Never Again pledge escaped my attention.

Here’s the pledge’s Introduction:

We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration’s proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs. We refuse to facilitate mass deportations of people the government believes to be undesirable.

We have educated ourselves on the history of threats like these, and on the roles that technology and technologists played in carrying them out. We see how IBM collaborated to digitize and streamline the Holocaust, contributing to the deaths of six million Jews and millions of others. We recall the internment of Japanese Americans during the Second World War. We recognize that mass deportations precipitated the very atrocity the word genocide was created to describe: the murder of 1.5 million Armenians in Turkey. We acknowledge that genocides are not merely a relic of the distant past—among others, Tutsi Rwandans and Bosnian Muslims have been victims in our lifetimes.

Today we stand together to say: not on our watch, and never again.

“Our pledge”, Never Again.

The episode reminded me that while I am not an employee in the United States who is directly complicit with the facilitation of deportation, as a Canadian academic librarian I am not entirely free from some degree of complicity, as I am employed at a university that subscribes to WESTLAW.

The Intercept is reporting on Thomson Reuters’ response to Privacy International’s letter to TRI CEO Jim Smith expressing the watchdog group’s “concern” over the company’s involvement with ICE. According to The Intercept article, “Thomson Reuters Special Services sells ICE ‘a continuous monitoring and alert service that provides real-time jail booking data to support the identification and location of aliens’ as part of a $6.7 million contract, and West Publishing, another subsidiary, provides ICE’s ‘Detention Compliance and Removals’ office with access to a vast license-plate scanning database, along with agency access to the Consolidated Lead Evaluation and Reporting, or CLEAR, system.” The two contracts together are worth $26 million. The article observes that “the company is ready to defend at least one of those contracts while remaining silent on the rest.”

“Thomson Reuters defends $26 million contracts with ICE”
by Joe Hodnicki (Law Librarian Blog) on June 28, 2018

I also work at a library that subscribes to products that are provided by Elsevier and whose parent company is the RELX Group.

In 2015, Reed Elsevier rebranded itself as RELX and moved further away from traditional academic and professional publishing. This year [2018], the company purchased ThreatMetrix, a cybersecurity company that specializes in tracking and authenticating people’s online activities, which even tech reporters saw as a notable departure from the company’s prior academic publishing role.

“Surveillance and Legal Research Providers: What You Need to Know”, Sarah Lamdan, Medium, July 6, 2018.

Welcome to 2019. There is work to do and it’s time to start.

What ruined the web was the lack of good library software

In some libraries, there are particular collections in which the objects are organized by the order in which they were acquired (at my place of work, our relatively small collection of movies on DVD is ordered this way). This practice makes it easy for a person to quickly see what has most recently been received or what’s been newly published. Such collections are easy to start and maintain, as you just have to sort them by ‘acquisition number’.

But you would be hard pressed to find a good reason to organize a large amount of material this way. Eventually a collection grows too large to browse in its entirety, and people start telling you that they would rather browse the collection by author name, or by publication year, or by subject. But to allow for this means organizing the collection, and let me tell you, my non-library staff friends, such organization is a lot of bother — it takes time, thought, and consistent diligence.
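
To make the bother concrete, here is a little sketch with invented records. A collection in accession order is append-only; every additional way of browsing it is an index that someone has to build and then maintain forever.

```python
# Invented example records: (accession_number, author, year, subject)
collection = [
    (1, "Atwood", 1985, "fiction"),
    (2, "Jacobs", 1961, "cities"),
    (3, "Haraway", 1991, "science studies"),
]

# Accession order is free: a new item just goes on the end.
collection.append((4, "Bridle", 2018, "technology"))

# Browsing by author, year, or subject means building -- and forever
# maintaining -- a separate index for each way a person might want to browse.
by_author = sorted(collection, key=lambda record: record[1])
by_year = sorted(collection, key=lambda record: record[2])
by_subject = {}
for record in collection:
    by_subject.setdefault(record[3], []).append(record)
```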

Which is why we are where we are with today’s state of the web.

Early homepages were like little libraries…

A well-organized homepage was a sign of personal and professional pride — even if it was nothing but a collection of fun gifs, or instructions on how to make the best potato guns, or homebrew research on gerbil genetics.

Dates didn’t matter all that much. Content lasted longer; there was less of it. Older content remained in view, too, because the dominant metaphor was table of contents rather than diary entry.

Everyone with a homepage became a de facto amateur reference librarian.

Obviously, it didn’t last.

The above is from a short essay by Amy Hoy about Movable Type – one of the first blogging platforms – and how MT and the other blogging platforms that facilitated easy chronological ordering of blog posts may have been the true culprit that ruined the web.

Movable Type didn’t just kill off blog customization.

It (and its competitors) actively killed other forms of web production.

Non-diarists — those folks with the old school librarian-style homepages — wanted those super-cool sidebar calendars just like the bloggers did. They were lured by the siren of easy use. So despite the fact that they weren’t writing daily diaries, they invested time and effort into migrating to this new platform.

They soon learned the chronostream was a decent servant, but a terrible master.

We no longer build sites. We generate streams.

All because building and maintaining a library is hard work.

[The above was first shared in my weekly newsletter University of Winds which provides three links to wonderful and thought-provoking things in the world every Saturday morning].

OK ScholComm – time for some game theory

I have approximate knowledge of when I was first introduced to game theory. It was the late 1980s, and I was in a classroom being shown a documentary that featured The Prisoner’s Dilemma (which is best understood through Nicky Case’s The Evolution of Trust).

Some idle googling on my part makes me think that the documentary might have been ‘Nice Guys Finish First’ by not-so-nice guy Richard Dawkins but I am more inclined to think it was a PBS documentary.

What I can say with much more confidence is that whatever documentary I happened to watch, combined with my subscription to The Whole Earth Review, primed me for a future interest in population biology that I pursued at university until I switched from a degree in biology to one in Geography and Environmental Science.

I have much more specific knowledge of when I first became interested in the theory of games.

Years ago I bought, off the newsstand, the September 2003 issue of Games Magazine, despite the fact that the magazine was clearly more about puzzles than games. The puzzles it contained were all way above my ability, but there was one article that caught my attention: Metagaming 101 by W. Eric Martin. The article begins:

Games without change, like War and Chutes & Ladders, are games without choices; they incorporate change only in the smallest, most random ways. Other than choosing to play or quit, players of these games can do nothing more than follow fate’s fickle finger until a winner emerges. Only children have patience for such games; more experienced players yearn for a higher level of change and the choices that accompany it.

At the other end of the change continuum lies chaos, a swirling mass of rules and playing pieces that survive only on whim. The perfect example: Calvinball. Again, only children can tolerate such games; other players require a structured set of rules for change that they can refer to as needed.

But there are game designers who encourage rule-breaking via the concept of *meta-rules* — that is, rules within a game that change the rules of the game itself. With meta-rules, players can explore any point they wish on a change continuum simply by altering the rules of a game.

from Metagaming 101

Game theory is not the same as the theory of games. Game theory is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.” This means that you can choose to employ a variety of game-theoretic strategies within certain games.

Since September 2003, I have read several books of the theory of games including A Theory of Fun for Game Design, The Art of Game Design: A Book of Lenses, Rules of Play: Game Design Fundamentals, How to Do Things with Videogames, What Video Games Have to Teach Us about Learning and Literacy, Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games, and Minds on Fire: How Role-Immersion Games Transform College.

Now, the reading of books does not make one an expert and I don’t consider myself an expert on the theory of games. I have approximate knowledge of the theory of games.

 

 

I sometimes joke that the true purpose of metrics within scholarly communication is to avoid reading.

This is an allusion to the common practice of many tenure and promotion committees whose members don’t read the research of the scholar who they are assessing. Instead, they tally up the number of prominent journals that the scholar has published in. The perceived quality of the journal is transmuted into the perceived quality of the work that the scholar has produced.

And so, as the smallest gesture against this state of affairs, I have decided to celebrate the reading of scholarship. Well, I’m going to try to read more of it.

Last week I read Calvinball: User’s Rights, Public Choice Theory and Rules Mutable Games by Bob Tarantino in The Windsor Yearbook of Access to Justice. Its abstract:

This article proposes the “rules mutable game” as a metaphor for understanding the operation of copyright reform. Using the game of Calvinball (created by artist  Bill Watterson in his long-running comic strip Calvin & Hobbes) as an illustrative device, and drawing on public choice theory’s account of how political change is effected by privileged interests, the article explores how the notion of a game in which players can modify the rules of the game while it is being played accounts for how users are often disadvantaged in copyright reform processes. The game metaphor also introduces a normative metric of fairness into the heart of the assessment of the copyright reform process from the standpoint of the user. The notion of a rules mutable game tells us something important about the kinds of stories we should be telling about copyright and copyright reform. The narrative power of the “fair play” norm embedded in the concept of the game can facilitate rhetoric which does not just doom users to dwell on their political losses, but empowers them to strategize for future victories.

I enjoyed the article but I would like to spend a little time on Tarantino’s assertion that a “game metaphor contains an inherent ethical vision.” While I take his point that most of us assume that all games are fair, I don’t think Calvinball is the game metaphor that one should first reach for, especially as law itself is already a rules-mutable system.

I would suggest instead to consider the concept of the infinite game.

Here’s the blurb from Finite and Infinite Games:

Finite games are the familiar contests of everyday life; they are played in order to be won, which is when they end. But infinite games are more mysterious. Their object is not winning, but ensuring the continuation of play. The rules may change, the boundaries may change, even the participants may change—as long as the game is never allowed to come to an end.

From Kevin Kelly:

The goal of the infinite game is to keep playing — to explore every way to play the game, to include all games, all possible players, to widen what is meant by playing, to spend all, to hoard nothing, to seed the universe with improbable plays, and if possible to surpass everything that has come before.

Game rules, incidentally, are uncopyrightable, and this holds true for video game rules as well.

 

From Metagaming 101:

THE KING OF CHANGE

Nearly every game discussed thus far, no matter how successful on its own, owes a debt to Nomic, a rule-changing game that has spawned hundreds of variations over the past two decades.

Nomic was created in 1982 by Peter Suber, a professor of philosophy at Earlham College, as an appendix to his book The Paradox of Self-Amendment. This book explored the possible complications of a government system (such as that of the U.S.) in which a constitution includes rules for self-amendment. As Suber wrote, “While self-amendment appears to be an esoteric feature of law, capturing it in a game creates a remarkably complete microcosm of a functional legal system.”

As created, Nomic consists of a two-tiered system of 16 “immutable” and 13 “mutable” rules. Players take turns proposing rule changes and new amendments, and earn points by voting and throwing a die. The first player to achieve 100 points wins.

As dry as this sounds, games of Nomic can quickly explode in unimaginable directions. Perhaps the winner must now achieve 1,000 points — make that 1,000 points and the title “Supreme Overlord.” How does a player become titled? Propose a rule. On second thought, forget points; let’s give every rule a color and now someone wins by passing proposals that are colored green, red, and brown. “The ability of Nomic to change itself is a wonderful thing,” says Kevan Davis. “If the game ever starts to become boring, it changes to whatever people think is less boring. If it’s going too fast, it can be slowed down; if it’s going too slowly, it can be sped up. If people think it could use fewer dice and more rubber-band firing, then it gets fewer dice and more rubber-band firing.”

Is it coincidence that the King of Change is the same Peter Suber who helped define and promote Open Access in academia?
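
The core mechanic is easy to sketch in code, and the sketch below is a toy illustration rather than Suber’s actual ruleset (the player names, point values, and voting scheme are all invented). The point is only that once the rules are data, one of the rules can rewrite the others mid-game.

```python
import random

# A toy Nomic: the rules are just data, and play can amend them.
rules = {"points_to_win": 100, "die_sides": 6}
players = {"A": 0, "B": 0, "C": 0}

def propose(rule, value, votes):
    # A proposal amends the rule set if a majority votes yes.
    if sum(votes.values()) > len(votes) / 2:
        rules[rule] = value

while max(players.values()) < rules["points_to_win"]:
    for player in players:
        players[player] += random.randint(1, rules["die_sides"])
    # Mid-game, the players vote to change what winning means.
    propose("points_to_win", 60, {"A": True, "B": True, "C": False})

winner = max(players, key=players.get)
print(winner, "wins under the amended rules:", rules)
```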

 

 

Here’s a book that I haven’t read: The Glass Bead Game by Hermann Hesse. I am going to trust Wikipedia that the description of the book is accurate:

The Glass Bead Game takes place at an unspecified date centuries into the future. Hesse suggested that he imagined the book’s narrator writing around the start of the 25th century. The setting is a fictional province of central Europe called Castalia, which was reserved by political decision for the life of the mind; technology and economic life are kept to a strict minimum. Castalia is home to an austere order of intellectuals with a twofold mission: to run boarding schools for boys, and to cultivate and play the Glass Bead Game, whose exact nature remains elusive and whose devotees occupy a special school within Castalia known as Waldzell. The rules of the game are only alluded to—they are so sophisticated that they are not easy to imagine. Playing the game well requires years of hard study of music, mathematics, and cultural history. The game is essentially an abstract synthesis of all arts and sciences. It proceeds by players making deep connections between seemingly unrelated topics… The plot chronicles Knecht’s education as a youth, his decision to join the order, his mastery of the Game, and his advancement in the order’s hierarchy to eventually become Magister Ludi, the executive officer of the Castalian Order’s game administrators.

This is not the only time I have witnessed academia being understood as a game.

I read Scott Nicholson’s delightful Quest for Tenure: A Choose-Your-Own Adventure when I visited the Rare Books Room of the Stephen A. Schwarzman Building of the New York Public Library. Scott was one of many contributors to a book written in a single night called 100 ways to make history.

And earlier this week I learned from this video about the concept of chmess, which was coined by philosopher Daniel C. Dennett in the article Higher-order truths about chmess [pdf].

What is chmess you might ask?

Chess is a deep and important human artifact, about which much of value has been written. But some philosophical research projects are more like working out the truths of chmess. Chmess is just like chess except that the king can move two squares in any direction, not one. I just invented it—though no doubt others have explored it in depth to see if it is worth playing. Probably it isn’t. It probably has other names. I didn’t bother investigating these questions because although they have true answers, they just aren’t worth my time and energy to discover. Or so I think. There are just as many a priori truths of chmess as there are of chess (an infinity), and they are just as hard to discover. And that means that if people actually did get involved in investigating the truths of chmess, they would make mistakes, which would need to be corrected, and this opens up a whole new field of a priori investigation, the higher-order truths of chmess, such as the following:
1. Jones’ (1989) proof that p is a truth of chmess is flawed: he overlooks the following possibility …
2. Smith’s (2002) claim that Jones’ (1989) proof is flawed presupposes the truth of Brown’s lemma (1975), which has recently been challenged by Garfinkle (2002)

Dennett holds that the playing of chmess is much more a concern of philosophy than of other disciplines because:

Philosophy is an a priori discipline, like mathematics, or at least it has an a priori methodology at its core, and this fact cuts two ways. On the one hand, it excuses philosophers from spending tedious hours in the lab or the field, and from learning data-gathering techniques, statistical methods, geography, history, foreign languages …, empirical science, so they have plenty of time for honing their philosophical skills. On the other hand, as is often noted, you can make philosophy out of just about anything, and this is not always a blessing.

Knowing this, is it surprising that philosophy journals have some of the lowest acceptance rates in all of scholarship? (ht Ryan Reiger).

There is another written work that really got me thinking about the University not necessarily as a game but as an institution of productive leisure but I cannot cite it or quote from it.

The reasons for this might have something to do with citation counts.

 

 

Please allow me to make a sweeping generalization: reputation is the coin of the realm of academia. Not citation counts.

And yet there are many software platforms currently being sold that present the number of citations as some sort of scoring system.

Who has the high score at your institution? Just check Google Scholar.

I think we should be more mindful of the types of behaviours we are implicitly and explicitly encouraging when we choose to rank scholars, research labs, and institutions by number of citations alone.

If we want to develop better scoring systems, I think we could learn from game designers:

 

 

The following is an excerpt from my contribution to “Librarian Origin Story” in Schroeder, R., Deitering, AM, Stoddart, R., The Self as Subject: Autoethnographic Research into Identity, Culture, and Academic Librarianship, Association of College and Research Libraries, 2017.

In 2010, Jane McGonigal had a public conversation with Stewart Brand as part of an event called The Long Conversation that was put on by The Long Now Foundation. McGonigal started the conversation by bringing up Brand’s past experience with game design as part of the “New Games Movement” in the late 1970s. McGonigal asked Brand if the New Games movement was designed to “change the world”; Brand said yes, and told her of his game-design origin story.

During the late 70s, he and his friends were talking about how the Cold War was being played out by “rules” that would only result in bad endings for everyone and, as such, the rules of the Cold War needed to change. And Brand thought about when he was a kid, when he and his friends changed the rules all the time. For example, kids would change the rules of the game of stickball that they were playing to accommodate any new kids who arrived to play. And so he and his friends started creating New Games for adults to explore and play in a world that they would rather live in.

Also in 2010, I was invited to be a participant in the Evoke Summit held at the World Bank headquarters in Washington, DC, where I had the chance to meet and thank Jane McGonigal in person. The summit was a reward for the winners of the game, who had come up with their winning proposals for social entrepreneurial projects, and the two days were filled with activities geared to making those proposals a reality. One of the activities was to work on a short, memorable tagline for one’s work that would distill the essence of who you are and what you want to achieve. Eventually I came up with this phrase for myself that I still feature on my professional portfolio: Changing the rules so more can win.

Bret Victor’s Bookshelf

A couple of posts ago, I wrote a somewhat unorthodox introduction to the work of Bret Victor. In it, I brought the reader’s attention to a recent article from The Atlantic called The Scientific Paper is Obsolete.

 

I know that this article had already made the rounds among some library people because I saw the piece being recommended and retweeted online. Chris Bourg, Director of Libraries at MIT, chose not to read this article.

Not to be presumptuous, but I like to think that I understand her reasons and her reaction. I say this because whenever I read a list – especially a list that promises some form of universal canon (oh, say, a manual for civilization) – and there are few to no women or non-white people (or non-white women), more often than not that list registers to me as deficient.

You cannot be well-read until you read the work of women.

So what are we to make of the gender balance of the works on Bret Victor’s esteemed bookshelf?

Are Bret’s reading choices any of our business? Maybe not.

Although… they might be if they are the same books that are being used to form the canon of Dynamicland.

 

Enough with my moral reproach, scolding, and lecturing! Let me tell you about Dynamicland! Because, gender representation of its bookshelf notwithstanding, I think it’s an absolutely remarkable endeavour.

Seriously, go to the Dynamicland website, take it in, and consider it. Scroll through the videos on their twitter stream. And then, when you can, go deeper and watch Bret Victor’s videos, The Humane Representation of Thought and Seeing Spaces.

Speaking of seeing spaces

If I could offer one additional book for the shelves of Bret Victor and Dynamicland, it would be The Science Studies Reader, only because I know it contains Donna Haraway’s Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.

I have found the idea of situated knowledge very useful as both a feminist and someone who has a degree in science. This work has helped me reconcile these two selves. I have also found the concept useful in some of my own thinking in librarianship (see post: The Observer or Seeing What You Mean).

I like this definition of Situated Knowledge from The Oxford Dictionary of Human Geography:

The idea that all forms of knowledge reflect the particular conditions in which they are produced, and at some level reflect the social identities and social locations of knowledge producers. The term was coined by historian of science Donna Haraway in Simians, Cyborgs, and Women: the Reinvention of Nature (1991) to question what she regarded as two dangerous myths in Western societies. The first was that it is possible to be epistemologically objective, to somehow be a neutral mouthpiece for the world’s truths if one adopts the ‘right’ method of inquiry. The second myth was that science and scientists are uniquely and exclusively equipped to be objective. Haraway was not advocating relativism. Instead, she was calling for all knowledge producers to take full responsibility for their epistemic claims rather than pretending that ‘reality’ has definitively grounded these claims.

We can and should take full responsibility for what we see and recognize that what we see is what we choose to see.

To not read the works of people who are unlike ourselves is a choice. We can do better. I know I can do better. I’ve made first steps, but I know I could do more. I am considering following the lead of Ed Yong, who actively pursued a better gender balance in his reporting:

We can’t see if we don’t even try to look.

Chasing Shadows

Last Monday when Dr. Rajiv Jhangiani opened his keynote at the 2018 Open Education Summit, one of the first things he did was place his work in the context of bell hooks and Jesse Stommel. And after hearing this my internal voice said to itself, “O.K., now I know where he’s coming from.”

It’s an admitted generalization, but let me suggest that when academics compose a scholarly article they tend to introduce their work with a positioning statement that expresses the tradition of thought their work extends. This might be done explicitly, as Dr. Jhangiani did in his keynote, or quietly, through the careful choice of whose definitions are used to set the table for the work.

The adjective ‘scientific’ is not attributed to isolated texts that are able to oppose the opinion of the multitude by virtue of some mysterious faculty. A document becomes scientific when its claims stop being isolated and when the number of people engaged in publishing it are many and explicitly indicated in the text. When reading it, it is on the contrary the reader who becomes isolated. The careful marking of the allies’ presence is the first sign that the controversy is now heated enough to generate technical documents.

Latour B. Science in action: how to follow scientists and engineers through society. Cambridge: Harvard University Press; 2005. p. 33.

If scholarly communication is a conversation, then we can think of journals as parlors where you can expect certain conversations to be taking place. If your work becomes a frequent touchpoint of these conversations you get… a high h-index?
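
(For readers new to the term: the h-index is the largest number h such that h of your papers have at least h citations each. A quick sketch with invented citation counts:)

```python
def h_index(citations):
    # Largest h such that at least h papers have h or more citations.
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```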

As someone who is only five months into my position of Scholarly Communications Librarian, I’ve been particularly mindful of how people talk about scholarship and the various measures and analytics we use to describe scholarly work.

I try to keep in mind that metrics are just a shadow of an object. You can make a shadow larger by trying to angle yourself in various ways towards the sun but you shouldn’t forget that when you make your shadow larger this way, the object casting the shadow does not change.

I was approached recently by a peer who had a faculty member tell them that they are hesitant to add their work to the university’s repository because they are afraid that it would take away from the linking to their work on SSRN and thus would diminish their Google Scholar ranking.

What should be the response to these concerns? One thing we could do is reassure them that we are doing all we can [ethically] do to maximize the SEO of our IR.

But I believe that it would be better to express our work not in terms of links and citation counts but rather in terms of potential readership.

We could try to reframe the conversation so it doesn’t seem so much like a zero-sum game. There is a set of readers who will discover work as a pre-print on SSRN, and there will be another set of readers who will be interested in the work they discover in an institutional repository. These interested readers could include a potential graduate student who is looking for an advisor to work with. It could be someone who has discovered the work in the IR because we allow other subject-specific sites to index our institutional repository. It could be the local press. And, if the fears of SSRN link-cannibalization are still strong, we can always offer to place the work in the IR under a short-term embargo.

When we only think of metrics, we end up chasing shadows.

When a faculty member assesses the quality of a peer’s work, they take the publication source as a measure of the quality of that work. The unsaid rule is that each scholar, if they could, would always publish in the highest-ranked journal in their field, and any choice to publish anywhere else must be only because the work in question was not good enough. Any article published in a higher-ranked journal is better than any article in a lower-ranked journal.

And yet it’s easy to forget that the rankings behind ‘highly ranked journals’ are calculated using formulas that process the collected sum and speed of citations. In the end, journal ranking can also be reconsidered as a measure of readership.
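
Those formulas are less mysterious than their authority suggests. The best known of them, the two-year journal impact factor, is just a ratio; here it is with invented numbers:

```python
# Two-year journal impact factor: citations received in year Y to items the
# journal published in years Y-1 and Y-2, divided by the number of citable
# items it published in those two years. The counts below are invented.
citations_2018_to_2016_2017 = 1200
citable_items_2016_2017 = 400

impact_factor_2018 = citations_2018_to_2016_2017 / citable_items_2016_2017
print(impact_factor_2018)  # 3.0 -- a citation count, which is to say a count of readers
```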

Instead of positioning our work as ‘how to increase your h-index’, we should not forget that each citation is an author who we can also consider (perhaps charitably) a reader.

When I was the lead of Open Data Windsor Essex, we hosted a wonderful talk from Detroiter Alex Hill called Giving Data Empathy.  What he reminded us in his talk was that behind each data point in his work was a person and that it was essential to remember how diminished that person is when they are reduced to a ‘count’.

Let’s remember this as well.

Every data point, a reader.

Bret Victor, Bruno Latour, the citations that bring them together, and the networks that keep them apart

Occasionally I have the opportunity to give high school students an introduction to research in a university context. During this introduction I show them an example of a ‘scholarly paper’ so they can take in the visual cues that might help them recognize other scholarly papers in their future.

 

After I point out the important features, I take the time to highlight this piece of dynamic text on the page:

I know these citation counts come from CrossRef because I have an old screen capture that shows that the citation count section used to look like this:

I tell the students that this article has a unique identifier number called a DOI and that there is a system called CrossRef that tracks how many bibliographies this number appears in.

And then I scan the faces of the room and if I don’t see sufficient awe, I inform the class that a paper’s ability to express its own impact outside of itself is forking amazing.
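
Where might such a number come from? As a minimal sketch, Crossref’s public REST API exposes an aggregate is-referenced-by-count for any DOI (the DOI below belongs to the Watts-Strogatz paper that comes up later in this post):

```python
import requests

# Look up a work's metadata, including its aggregate citation tally,
# from Crossref's public REST API.
doi = "10.1038/30918"  # Watts & Strogatz, "Collective dynamics of 'small-world' networks"
resp = requests.get(f"https://api.crossref.org/works/{doi}")
resp.raise_for_status()
work = resp.json()["message"]
print(work["title"][0], "--", work.get("is-referenced-by-count"), "citations")
```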

The ability to make use of CrossRef’s Cited-by service – the API that returns the actual citing works behind counts like these – is reserved for CrossRef members with paid memberships or those who pay for access.

This means that individual researchers cannot use it to embed their own citation counts without paying CrossRef.

Not even Bret Victor:

 

The image above is from the end of Bret Victor’s CV.

The image below is from the top of Bret Victor’s CV, which describes him through the words of two notable others:

 

I like to think that the library is a humane medium that helps thinkers see, understand, and create systems. As such, I think librarians have much to learn from Bret Victor.

Bret Victor designs interfaces and his thinking has been very influential to many. How can I express the extent of his influence to you?

Bret Victor chooses not to publish in academic journals but rather opts to publish his essays on his website worrydream.com. The videos of some of his talks are available on Vimeo.

Here are the citation counts to these works, according to Google Scholar:

 

 

It is an accepted notion that the normative view of science expounded by Merton, provided a sociological interpretation of citation analysis in the late 1960s and 70s. According to his theory, a recognition of the previous work of scientists and of the originality of their work is an institutional form of awarding rewards for efforts. Citations are a means of providing such recognition and reward.

The above is the opening paragraph of “Why Has Latour’s Theory of Citations Been Ignored by the Bibliometric Community? Discussion of Sociological Interpretation of Citation Analysis” by Terttu Luukkonen.

Latour’s views of citations are part of his research on the social construction of scientific facts and laboratories, science in the making as contrasted with ready made science, that is beliefs which are treated as scientific facts and are not questioned… In this phase, according to Latour, references in articles are among the resources that are under author’s command in their efforts at trying to “make their point firm” and to lend support to their knowledge claims. Other “allies” or resources are, for example, the editors of the journals which publish the articles, the referees of the journals, and the research funds which finance the pieces of research…

Latour’s theory has an advantage over that of Merton’s in that it can explain many of the findings made in the so-called citation content and context studies mentioned. These findings relate to the contents of citations, which are vastly different and vary from one situation to another; also the fact that the surrounding textual contexts in which they are used differ greatly. Such differences include whether citations are positive or negational, essential to the references text or perfunctory, whether they concern concepts or techniques or neither, whether they provide background reading, alert readers to new work, provide leads, etc.

The above passage is from page 29 of the article.

On page 31, you can find this passage:

The Latourian views have been largely ignored by the bibliometric community in their discussions about citations. The reasons why this is so are intriguing. An important conceptual reason is presumably the fact that in Latourian theory, the major function of references is to support the knowledge claims of the citing author. This explanation does not legitimate major uses of citation indexing: its use as a performance measure – as in the use of citation counts, which presupposes that references indicate a positive assessment of the cited document – or as an indication of the development of specialties – as in co-citation analysis.

You may have heard of Bret Victor just earlier this week. His work is described in an article from The Atlantic called The Scientific Paper is Obsolete. Here’s What’s Next.

 

The article contains this passage:

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself.

The article goes on to present two software-driven alternatives to the paper-mimicking PDF practices of academia: Mathematica notebooks from the private company Wolfram Research and open source Jupyter notebooks.

Perhaps it was for length or other editorial reasons, but the article doesn’t go into Bret Victor’s own work on reactive documents, which are best introduced by his self-published essay ‘Explorable Explanations‘. There is a website dedicated to collecting dynamic works inspired by Bret’s essay, from Nicky Case, who has created some remarkable examples including Parable of the Polygons and The Evolution of Trust.

Or maybe it’s not odd that his work wasn’t mentioned.  From T. Luukkonen’s Latour’s Theory of Citations:

The more people believe in a statement and use it as an unquestioned fact, as a black box, the more it undergoes transformations. It may even undergo a process which Latour calls stylisation or erosion, but which Garfield called obliteration by incorporation; that is, a scientist’s work becomes so generic to the field, so integrated into its body of knowledge, that people neglect to cite it explicitly.

At the end of 2013, Bret Victor published a page of things that ‘Bret fell in love with this year’. The first item on his list was the paper Visualization and Cognition: Drawing Things Together [pdf] from French philosopher, anthropologist and sociologist, Bruno Latour.

On page five of this paper is this passage, which I came across again and again during my sabbatical when I was doing a lot of reading about maps:

One example will illustrate what I mean. La Pérouse travels through the Pacific for Louis XVI with the explicit mission of bringing back a better map. One day, landing on what he calls Sakhalin he meets with Chinese and tries to learn from them whether Sakhalin is an island or a peninsula. To his great surprise the Chinese understand geography quite well. An older man stands up and draws a map of his island on the sand with the scale and the details needed by La Pérouse. Another, who is younger, sees that the rising tide will soon erase the map and picks up one of La Pérouse’s notebooks to draw the map again with a pencil . . .

What are the differences between the savage geography and the civilized one? There is no need to bring a prescientific mind into the picture, nor any distinction between the close and open predicaments (Horton, 1977), nor primary and secondary theories (Horton, 1982), nor divisions between implicit and explicit, or concrete and abstract geography. The Chinese are quite able to think in terms of a map but also to talk about navigation on an equal footing with La Pérouse. Strictly speaking, the ability to draw and to visualize does not really make a difference either, since they all draw maps more or less based on the same principle of projection, first on sand, then on paper. So perhaps there is no difference after all and, geographies being equal, relativism is right. This, however, cannot be, because La Pérouse does something that is going to create an enormous difference between the Chinese and the European. What is, for the former, a drawing of no importance that the tide may erase, is for the latter the single object of his mission. What should be brought into the picture is how the picture is brought back. The Chinese does not have to keep track, since he can generate many maps at will, being born on this island and fated to die on it. La Pérouse is not going to stay for more than a night; he is not born here and will die far away. What is he doing, then? He is passing through all these places, in order to take something back to Versailles where many people expect his map to determine who was right and wrong about whether Sakhalin was an island, who will own this and that part of the world, and along which routes the next ships should sail.

Science requires a paper to be brought back from our endeavours.

I thought of Latour when I read this particular passage from The Atlantic article:

Pérez told me stories of scientists who sacrificed their academic careers to build software, because building software counted for so little in their field: The creator of matplotlib, probably the most widely used tool for generating plots in scientific papers, was a postdoc in neuroscience but had to leave academia for industry. The same thing happened to the creator of NumPy, a now-ubiquitous tool for numerical computing. Pérez himself said, “I did get straight-out blunt comments from many, many colleagues, and from senior people and mentors who said: Stop doing this, you’re wasting your career, you’re wasting your talent.” Unabashedly, he said, they’d tell him to “go back to physics and mathematics and writing papers.”

What else is software but writing on sand?

I wanted to highlight Bret Victor’s work to my fellow library workers for what I thought were several reasons. But the more I thought about it, the more reasons came to mind. I don’t want to try your patience any longer, so consider this the potential beginning of a short series of blog posts.

I’ll end this section with why I wrote about Bret Victor, Bruno Latour, and citations. It has to do with this website, Northwestern University’s Faculty Directory powered by Pure:

 

More and more of our academic institutions are making use of Pure and other similar CRISes (current research information systems) that create profiles of people generated from the texts we write and the citations we make.

Despite Latour, we are still using citations as a performance measure.

 

I think we need a more humane medium that helps thinkers see and understand the systems we work in.

The Tom Longboat Awards as Wikidata

This year I helped out and participated in two Wikipedia ‘editathons.’ In March I assisted in the Art + Feminism Wikipedia Edit-A-Thon hosted at Hackforge and in November I was at the Editathon on Elite Aboriginal Athletes in Canada held on the University of Windsor campus in conjunction with the North American Society for the Sociology of Sport Conference.

It’s only now that I am following up with an exploration of the potential of linked data through Wikidata, an idea first planted in my head by the incomparable Dan Scott in his presentation Wicked data with Wikidata from waaaay back in February of this year. Dan Scott did not steer me wrong. The potential power of Wikidata is very impressive.

The timeline above is a live query to Wikidata using this code. I could have added other search parameters, such as which wrestlers have won the Tom Longboat Award, or which women have, but that really doesn’t express the power of searching Wikidata. Wikidata – if the data is available – allows you to find painters who were the sons of painters, current women mayors of cities with over a million residents, and authors who wrote their first published work after the age of 45. It’s a fundamentally different type of searching that allows for language normalization and data reuse at scale.
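To give a concrete flavour of that kind of searching: below is a minimal sketch in Python (my own illustration, not the live query linked above) that asks Wikidata’s public SPARQL endpoint for painters whose fathers were also painters. It assumes only standard Wikidata identifiers: P106 for occupation, P22 for father, and Q1028181 for painter.

import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?painter ?painterLabel WHERE {
  ?painter wdt:P106 wd:Q1028181 .   # occupation: painter
  ?painter wdt:P22  ?father .       # has a father...
  ?father  wdt:P106 wd:Q1028181 .   # ...whose occupation is also painter
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 25
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "blog-example/0.1"},  # Wikimedia asks clients to identify themselves
)
for row in response.json()["results"]["bindings"]:
    print(row["painterLabel"]["value"])

Swap in different properties and items and the same shape of query yields the mayors and the late-blooming authors.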

Necessary care must be given when working with Indigenous people and subject matter, and I hope that my contributions to the Tom Longboat Award on Wikipedia and Wikidata pass muster. Stacy Allison-Cassin, current W.P. Scott Chair in E-Librarianship at York University, is working on a project entitled Advancing Reconciliation and Social Justice in Libraries through Research Library and Community Collaboration in Wikimedia Projects that I’m following with interest.

It’s not often that I recommend watching a three-hour video, but I am going to recommend watching this three-hour video: A Gentle Introduction to Wikidata for Absolute Beginners [including non-techies!]. Presenter Asaf Bartov does an exceptional job of slowly building understanding and bringing the viewer along with him at a pace that doesn’t feel rushed but, like a good three-hour walk, leaves you amazed at what ground you have managed to cover when you are finished.

For those of you who are reluctant to watch a three hour video before you know that you can apply it to your library work, please consider getting to the 2018 OLA Superconference to see Dan Scott, Stacy Allison-Cassin, Monica Fazekas, and Carolyn Doi present on Wikimedia Edit-A-Thon: Get Your Library on Wikidata, Wikipedia, and Wikimedia Commons. Like me, you might not see the immediate reason to incorporate Wikidata into your work. But give it time.

The Pattern Language of the Library

I am an olds.

When I first started working at the University of Windsor in July of 1999, the first floor of the Leddy Library was largely taken up by stacks of reference books. The largest collection of the library’s private and semi-private study carrels were on the second floor.

Keeping in mind that, ideally, reference materials are close at hand when one is writing, why would our library actively separate reading and writing activities in its architecture?

I think there must have been a variety of reasons for placing the study carrels on the second floor, the most obvious being that the library was designed to keep activities requiring concentration away from the distraction of people walking into the library and through its space.

But there’s another reason why such a separation existed, suggested by the fact that you can find an electrical outlet in every single study carrel on the second floor, even though the building was constructed decades before laptops were available.

The answer is typewriters. Noisy, clattering typewriters.

I didn’t make this connection myself. That insight came from this Twitter conversation from 2014.

While there is a rich conversation to be had about how some information literacy practices that treat research and writing as separate processes may have resulted from vestigial practices built around avoiding typewriter noise, I’m more interested in exploring what the affordances of laptops might mean for the shape of the spaces of and within the library today.

The book did not kill the building.

The laptop will change our chairs.

Our print reference collection is now in the basement of the West Building of the Leddy Library. Much of the space on the first floor of the Main Building is filled with banks of computer workstations that we used to call our Learning Commons.

But the perceived need for banks of workstations has waned in libraries. You don’t see as many seas of desktops in newly constructed library buildings. Now the entire library is perceived as a Learning Commons.

The image above, which references the Learning Commons concept, is from Steelcase, a company that designs furniture for offices and other spaces, including libraries like GVSU’s Mary Idema Pew Library:

I was recently looking through the Steelcase product catalogue, and I was taken by how clearly the company ties the form of its furniture to function.

(If you are a subscriber to my newsletter: in the above video there’s a reference to the theory I wrote about which suggests that the most comfortable seating is seating in which you feel protected from behind.)

When I read about their turnstone Campfire suite of products it reminded me of a book I read some time ago called make space: How to Set the Stage for Creative Collaboration. I found the book on our shelves, took it down, leafed through it, and found this:


While make space makes no specific allusion to A Pattern Language by Christopher Alexander et al., I feel it’s almost impossible not to conclude that it must have provided some inspiration.

An excerpt from A Pattern Language: 251 Different Chairs

From A Pattern Language:

People are different sizes; they sit in different ways. And yet there is a tendency in modern times to make all the chairs alike.


From A Pattern Language:

Of course, this tendency to make all chairs alike is fueled by the demands of prefabrication and the supposed economies of scale. Designers have for years been creating “perfect chairs” — chairs that can be manufactured cheaply en masse. These chairs are made to be comfortable for the average person. And the institutions that buy chairs have been persuaded that buying these chairs in bulk meets all their needs.

I particularly like this excerpt from A Pattern Language because I know an example of this very tension. In 2014 I sat in on the Library Interior Design Award Winners presentation at ALA Annual. There the interior designer being celebrated publicly lamented the fact that the NCSU Library opted for a wide variety of chairs, including many that did not match the larger aesthetic of the space. Then the librarian spoke and told us that those chairs were so loved by students that some of them made a Tumblr in the chairs’ honor.

I think we fundamentally underestimate how much of a difference a variety of chairs can make in the experience of a place.

For example: this is a science classroom.

Kids come early to get the best seats.

Here’s another example. This picture is of a community bench that a neighbour of Dave Meslin made available to others.

My neighbours cut ten feet off their shrub, and replaced it with a community bench! ❤️

A post shared by dave meslin (@davemeslin)

A community bench is what I would consider an example of tactical urbanism, a phrase that I like to think I first heard from Project for Public Spaces. I am looking forward to reading Karen Munro’s Tactical Urbanism for Librarians: Quick, Low-Cost Ways to Make Big Changes.

I should also say that I’m not the first librarian to try to bring Pattern Language thinking into how we design our spaces. In 2009 William Denton and Stacy Allison-Cassin explained their “vision of the One Big Library and how Christopher Alexander’s pattern language idea will help us build it.”

In reviewing their talk for this blog post I re-read from their slides this quotation from A Pattern Language:

This is a fundamental view of the world. It says that when you build a thing, you cannot merely build that thing in isolation, but must also repair the world around it, and within it, so that the larger world at one place becomes more coherent, and more whole; and the thing which you make takes its place in the web of nature, as you make it.

I had forgotten that I read that particular phrase – repair the world – in that text.

About three months ago I started a special interest group at Hackforge called Repair the World which is “a monthly meet-up of those who want to learn more about the technologies and policies we can employ in Windsor-Essex to lead us towards a carbon-neutral future and to help our community cope with the effects of global warming”.

For our first meeting, I didn’t do much other than set a time and place, give one suggested reading for potential discussion, and help set up the chairs in a circle.

In The Chairs Are Where the People Go, Sheila Heti transcribed and edited this advice from Misha Glouberman:

There’s a thoughtlessness in how people consider their audience that’s reflected in how they set up chairs. You can see that thoughtlessness immediately…

… At a conference, if you want to create a discussion group, you can set up the chairs in a circle, and you don’t need a table…

… Setting up chairs takes a lot of time, but anyone can do it. If you’re running a project and you want to get people involved, ask them to set up chairs. People like to set up chairs, and it’s easy work to delegate. It’s even easier to get people to put chairs away.

Everyone should know these things.


Learning Objects: Teach Me Goodness, Discipline and Knowledge

Last week, a tweet pointing to this article “A Stanford researcher’s 15-minute study hack lifts B+ students into the As” caught my attention.

The article describes how Stanford researcher Patricia Chen improved her class’s performance on a test by sending out a 15-minute pre-survey designed to get students thinking about how they were going to prepare. Chen was applying a metacognition intervention in her teaching practice.

According to the Education Endowment Foundation (EEF), which performs studies to try and close achievement gaps, metacognition is one of the two most effective educational interventions it has tested. (Feedback is the other.) Students involved in programs designed to improve how they think about thinking accelerated their learning by an average of eight months’ worth of academic progress. The effect was greatest for low-achieving and older pupils.

This article reminded me that I had unfinished work to do.

Some months ago I quietly launched a project that I designed as a librarian’s “intervention” to help students think about their thinking: a box of objects and zines called ‘Learning Objects,’ made available at the Leddy Library’s Course Reserves Desk.

The west building of the Leddy Library features a cornerstone that bears the motto of the University of Windsor: TEACH ME GOODNESS DISCIPLINE AND KNOWLEDGE.

Learning Objects is a box of objects that you can borrow from the Leddy Library. Each object is accompanied by a booklet that lets you know how these things can teach you GOODNESS, DISCIPLINE, and KNOWLEDGE.

And yet I had not properly explained the thinking behind this project. I had meant to write up my work but I found I kept putting it off. This is particularly ironic because one of the objects in the box was specifically chosen by me to help students deal with procrastination.

Panic Monster and Box

So let me turn the dial of my tomato timer and do the work.

You can re-create this zine by using my production template [.docx] and following this helpful zine making guide.

The text of "The Pomodoro Technique" zine can be found here.
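For the curious, the rhythm the zine teaches fits in a few lines. This toy sketch assumes the classic 25-minutes-on, 5-minutes-off Pomodoro intervals; the zine itself is the authoritative description.

import time

# Classic Pomodoro rhythm assumed here (25 minutes of work, 5 of rest);
# these numbers are the conventional defaults, not taken from the zine.
WORK_MINUTES, BREAK_MINUTES = 25, 5

def pomodoro(cycles=4):
    """Alternate focused work and short breaks for `cycles` rounds."""
    for n in range(1, cycles + 1):
        print(f"Pomodoro {n}: work for {WORK_MINUTES} minutes.")
        time.sleep(WORK_MINUTES * 60)
        print(f"Pomodoro {n}: take a {BREAK_MINUTES}-minute break.")
        time.sleep(BREAK_MINUTES * 60)

pomodoro()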

If I had to define the starting point of my Learning Objects project, I think it would have to be this tweet from McMaster University’s Andrew Colgoni:

In response, I tweeted back:


There are passages in Clive Thompson’s Smarter Than You Think: How Technology Is Changing Our Minds for the Better that have stayed with me since I read the book in 2013.

For example, his book was the first place I ever came across the suggestion that the writer’s desire for a ‘speed of recall’ as close as possible to one’s speed of thought is the core driver behind the reader’s need for convenience. That drive puts library resources (whether in print but across campus, or online but behind a tiresome authentication process) at a permanent disadvantage to any other text closer at hand, even when the reader states that they appreciate the experience of browsing items on library shelves.

As with Drexel, Dewey, and Otlet before him, [Vannevar] Bush argued that speed of recall was key. Without it, one’s external store of facts would be useless. When he called his invention “an enlarged intimate supplement” to memory, the crucial word wasn’t so much “enlarged” or “supplement”; books had long enlarged and supplemented our minds. No, it was “intimate”—the idea that the memex would be physically and cognitively proximal, in a nearly sensual fashion. That was a key to its power. Indeed, Bush suspected the comparative difficulties of using libraries is what had prevented them from being of widespread use to the public. “Even the modern great library,” he wrote, “is not generally consulted; it is nibbled at by a few.” To truly harness our external knowledge, we needed to bring it closer to our minds.

But the passage that Andrew’s tweet brought to mind was the book’s introduction to the science behind the learning technique of “spaced repetition,” which is based on the Ebbinghaus curve of forgetting:

Machines can also remind us of facts precisely when we need reminding. If you’ll recall the Ebbinghaus curve of forgetting from the second chapter, Ebbinghaus found that we forget things in a predictable pattern: More than half our facts are gone in an hour, about two thirds are gone within a day, and within a month we’re down to about 20 percent. Ebbinghaus and his followers theorized that this process could work in reverse. If you reviewed a fact one day after you first encountered it, you’d fight the curve of loss. This process is called “spaced repetition,” and experiments and anecdotes suggest it can work. It explains why students who cram for a test never retain much; the material dissolves because they never repeat it. But though spaced repetition is clever and effective, it has never caught on widely, because ironically, the technique relies on our frail human memories. How would you remember to review something the next day? Then a few days later, a week, and three months?

Machines, however, are superb at following these rote schedules. In the last decade, software programmers began selling tools intended to let you feed in facts, which the computer then reminds you to review on a reverse Ebbinghaus curve. Use of this software has remained a subculture, mostly by people seeking to learn a foreign language, though devout adherents use it to retain everything from recipes to poetry…

screenshot of kindle daily review
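The mechanics behind these reminder tools are simple enough to sketch. Below is a toy illustration (my own, not from Thompson’s book) of a ‘reverse Ebbinghaus’ review schedule using the intervals named in the quotation: the next day, a few days later, a week, and three months. Real tools such as Anki use adaptive spacing rather than a fixed list.

from datetime import date, timedelta

# Review a fact at widening intervals to fight the curve of forgetting.
# These intervals follow the quotation above; they are illustrative, not canonical.
INTERVALS_IN_DAYS = [1, 3, 7, 90]

def review_dates(first_seen):
    """Dates on which a fact first encountered on `first_seen` is due for review."""
    return [first_seen + timedelta(days=d) for d in INTERVALS_IN_DAYS]

for due in review_dates(date.today()):
    print(due.isoformat())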

As librarians, we don’t concern ourselves with the memory work of our readers. Our focus is on the research process of scholarship and not on the learning and recall of said scholarship. And yet arguably more student time is spent studying in the library than researching within it.

For many of our students, much of their time is spent learning material. And despite the fact that some of our most prestigious professions demand memorized content (there is a good chance that your doctor, as a medical student, used flash cards or memory palaces to learn the biomedical foundation of their care), educators and librarians frequently choose to focus their teaching on ‘higher level learning’ instead.

Appealing though it might be to offload the responsibility for teaching our students basic knowledge to their elementary school teachers or to Google, the research of cognitive psychologists who study learning and the basic study habits of most students suggest that we cannot do this. One of our first and most important tasks as teachers is to help students develop a rich body of knowledge in our content areas – without doing so, we handicap considerably their ability to engage in cognitive activities like thinking and evaluating and creating. As cognitive psychologist Daniel Willingham argued, you can’t think creatively about information unless you have information in your head that you can think about. “Research from cognitive science has shown,” he explained, “that the sorts of skills that teachers want for their students — such as the ability to analyze and think critically — require extensive factual knowledge” (Willingham 2009, p. 25). We have to know things, in other words, to think critically about them. Without any information readily available to us in our brains, we tend to see new facts (from our Google searches) in isolated, noncontextual ways that lead to shallow thinking. Facts are related to other facts, and the more of those relationships we can see, the more we will prove capable of critical analysis and creative thinking. Students who don’t bother to memorize anything will never get much beyond skating the surface of a topic.

The above passage comes from James M. Lang, the author of Small Teaching: Everyday Lessons from the Science of Learning, which I found an extraordinarily helpful book. I included a passage from Small Teaching in the “Teach me knowledge: Why study the facts” zine that went into the Learning Objects box.

You can re-create this zine by using my production template [.docx] and following this helpful zine making guide. 

The text of the "Why study the facts" zine can be found here.

I also included a separate zine dedicated specifically to the topic of spaced repetition. To accompany the zine, I included a small box of index cards explaining how to create a ‘Leitner Flashcard Game’ for one’s own learning goals.

Leitner Flashcard Game

You can re-create this zine by using my production template [.docx] and following this helpful zine making guide. 

The text of the "Spaced Repetition" zine can be found here.
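For readers who want the rules without borrowing the box, here is a minimal sketch of the Leitner system’s logic. The three-box layout and its intervals are my assumptions for illustration, not a transcription of the index cards in the kit.

# Leitner system: a correct answer promotes a card to the next, less frequently
# reviewed box; a miss sends it back to box 1 for daily review.
# The box count and intervals below are assumed for illustration.
REVIEW_INTERVAL_DAYS = {1: 1, 2: 3, 3: 7}  # box number -> days between reviews

def next_box(current_box, answered_correctly):
    """Promote on success (capped at the last box); demote to box 1 on failure."""
    if answered_correctly:
        return min(current_box + 1, max(REVIEW_INTERVAL_DAYS))
    return 1

def is_due(box, days_since_last_review):
    """A card is due once its box's interval has elapsed."""
    return days_since_last_review >= REVIEW_INTERVAL_DAYS[box]

print(next_box(2, True))   # a card in box 2, answered correctly, moves to box 3
print(is_due(3, 7))        # ...and becomes due again after seven days: True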

(Did you know that I’m into index cards? I’m really into index cards.)


It’s one of my theories that when people give you advice, they’re really just talking to themselves in the past. ~Mark Epstein

The zines that accompany the Rubber Duck in the Learning Objects box are really for my past self.

rubber duck and zines

Do you study by reading and re-reading your notes to yourself silently? Stop! I know it feels good, in a monkish, masochistic, pain equals progress sort of way to beat your brains against a book hour after hour, but it’s also a terribly inefficient way to review. Instead, lecture to an imaginary class, out-loud, about the main topics, without reading off your notes. If you can state an idea once, in complete sentences, out-loud, it will stick. You don’t need to re-read it a dozen times. If you can’t capture it out-loud then you don’t understand it yet. Go back. Review. Then try again.

That excerpt is from Cal Newport’s Monday Master Class: 5 Bad Study Habits You Should Resolve to Avoid in 2008. It can also be found in the zine, “Teach Me Knowledge: Rubber Duck: Reciting”:

You can re-create this zine by using my production template [.docx] and following this helpful zine making guide.

The text of the "Rubber Duck: Reciting" zine can be found here.

I’m particularly pleased that I found and was able to share an example of why you might want to use a rubber duck to improve both one’s computer debugging…

You can re-create this zine by using my production template [.docx] and following this helpful zine making guide. 

The text of the "Rubber Duck Debugging" zine can be found here.

… as well as why you might want to talk to a rubber duck to improve one’s engineering practice.

You can re-create this zine by using my production template [.docx] and following this helpful zine making guide.

The text of the "Rubber Duck: Problem Solving: Engineering Design" zine can be found here.

tarot card: 3 of wands

If you were asked to fill a box of objects to give to a student to help them in their journey, what would you give to inspire DISCIPLINE, KNOWLEDGE, and GOODNESS?

It’s a bit of a cop-out, but I chose two books as the objects through which I wanted to carry GOODNESS. Well, two books and a deck of cards.

In the box of Learning Objects you can find a deck of Rider-Waite tarot cards and Jessa Crispin’s The Creative Tarot…

book cover: the creative tarot

You can re-create this zine by using my production template [.docx] and following this helpful zine making guide.

The text of the "Jessa Crispin's The Creative Tarot" zine can be found here.

… and Ursula K. Le Guin’s Tao Te Ching.

book cover: Tao Te Ching

You can re-create this zine by using my production template [.docx] and following this helpful zine making guide.

The text of the "Ursula K. Le Guin's Tao Te Ching" zine can be found here.

Now all I have to do is figure out how to get the students to borrow the box 🙂
