Devices for remembering and forgetting

2,400 years ago, a philosopher who never wrote a word condemned writing. It will not make people wiser, he argued; rather—

"It will implant forgetfulness in their souls: they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks." — The Phaedrus

Socrates had a point. Before the advent of tablets and scrolls, humans exercised a far greater capacity to internalize huge volumes of information. (Imagine, for example, reciting ~1,800 pages of scripture from memory, as these priests in Kerala, India, still do.)

As writing overtook oral tradition, and movable type accelerated distribution, people soon forgot what the world was like before books and newspapers.

Today, we forget what the world was like before the Internet. As we find ourselves beyond the event horizon of another information revolution, writer Nicholas Carr laments that the Web has rewired our brains:

"Our growing dependence on the Web’s information stores may in fact be the product of a self-perpetuating, self-amplifying loop. As our use of the Web makes it harder for us to lock information into our biological memory, we’re forced to rely more and more on the Net’s capacious and easily searchable artificial memory, even if it makes us shallower thinkers." — The Shallows

It is this very reliance on outsourced memory that makes our relationship to data increasingly intimate and vital. By keeping the most critical bits of information that we need to remember and hope never to forget on hard drives or remote servers—and not in our heads—have we set ourselves up for a persistent condition of cultural amnesia, or worse, a Digital Dark Age when those systems fail?

At one time, all recorded knowledge in the Western world—all history, science, and literature—was stored in a single building. Our situation might seem less precarious: distributed information networks were originally conceived by DARPA to be disaster-proof, and a cultural cataclysm as total as the burning of Alexandria would seem unlikely. And yet the frightening fact is that much of our data is centralized in physical storage systems, not securely backed up. The average website lasts only about 18 months. Our documents exist in proprietary formats that go obsolete, stored on disposable media designed for short-term use. In the long term, for individuals and societies alike, the steady erosion of information through digital obsolescence could amount to incalculably greater losses.

Three years of news

The Internet Archive announces plans to publish all TV news since 2009 on its servers: 350,000 broadcasts from 20 channels.

“You have to see this service to believe it – and even then, you may not. The Internet Archive has harnessed today’s extraordinary advances in computing power and storage capacity to capture virtually every national U.S. television news program and allow users to find and view short streamed clips on any subject. This easily searchable and sortable database will be a fantastic resource for journalists, researchers, librarians and news junkies alike.”
– Andrew Heyward, former president, CBS News

Internet Archive

A former Christian Science church in San Francisco houses the Internet Archive. The sturdy classical architecture—appropriate for an edifice that is at once a temple of knowledge, a library, and a data vault—contains a greater volume of information than the Library of Congress, all of it kept on a modest array of drives.

Robert Miller, Director of Books, stands next to a petabyte of data (1 million gigabytes), storing a fraction of Archive.org, the Wayback Machine, the Prelinger Film Archive, and Open Library, with mirrors of the collection at the Bibliotheca Alexandrina in Egypt and nearby in Mountain View, California. This collection represents the foundation of Brewster Kahle’s vision to build the Library of Alexandria, version 2.0, providing everyone everywhere access to all the world’s knowledge, including books, movies, music, and websites.


An activity monitor displays a real-time tapestry of URLs as web crawlers, also known as spiders, continuously capture snapshots on an endless traverse of the World Wide Web.
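The traversal those spiders perform is, at its core, a breadth-first walk of the web's link graph, visiting each page once and recording a snapshot of it. Here is a minimal sketch of that idea; to keep it self-contained, a hypothetical in-memory link graph (`PAGES`) stands in for real HTTP fetches, and is not how the Internet Archive's crawlers are actually implemented.

```python
# A minimal sketch of the breadth-first traversal a web crawler performs.
# PAGES is a hypothetical stand-in for the live web: URL -> outgoing links.
from collections import deque

PAGES = {
    "a.org": ["b.org", "c.org"],
    "b.org": ["c.org", "d.org"],
    "c.org": [],
    "d.org": ["a.org"],  # cycle back to the seed
}

def crawl(seed):
    """Visit every page reachable from the seed exactly once, breadth-first."""
    seen = {seed}
    queue = deque([seed])
    snapshots = []
    while queue:
        url = queue.popleft()
        snapshots.append(url)            # "index a snapshot" of this page
        for link in PAGES.get(url, []):
            if link not in seen:         # never re-queue a page already indexed
                seen.add(link)
                queue.append(link)
    return snapshots

print(crawl("a.org"))  # → ['a.org', 'b.org', 'c.org', 'd.org']
```

The `seen` set is what keeps the traverse "endless" rather than infinite: even though the graph contains a cycle (d.org links back to a.org), each page is snapshotted only once per pass.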

Borges’ Library of Babel comes to mind. 

Arctic backup for your 900 million friends and their 140 billion photos

Facebook’s proposed storage center in Luleå, Sweden, 60 miles south of the Arctic Circle, will keep data cool by using the environment as a natural heat sink.

At 78 degrees north, tucked away on the remote island of Spitsbergen, Norway, at a site selected for its stability, the Svalbard Global Seed Vault, a.k.a. the “Doomsday Vault,” stores genetic backups of food crops, with room for 4.5 million varieties.

Venture into the icy tomb of the Cold Coast Archive, a project by Signe Lidén, Annesofie Norn & Steve Rowell.


IBM’s 120-million-gigabyte array, the world’s largest data repository, could hold 60 copies of the Wayback Machine, the Internet Archive’s backup of the entire web.

“This 120 petabyte system is on the lunatic fringe now, but in a few years it may be that all cloud computing systems are like it,” says Bruce Hillsberg, director of IBM storage systems in Almaden, California. 

Archive

Bytes were never built to last. Hard drives inevitably fail; links rot; web services fold. The legacy of our civilization, our shared history and culture, depends upon the endurance of digital collections.

Archive, a compendium of documentaries told from the perspective of archivists and cultural producers, looks at the history of the Internet and at attempts to archive its contents on a massive scale: from the Internet Archive’s Wayback Machine to Amazon Glacier.

The project takes the form of an archive of stories cached online, including a series of Wayback tours of the Internet through time and visits to the physical servers where collections are kept, woven together with information visualizations. The project explores questions of memory and posterity, how we choose what to preserve for future generations, and the risks of digital obsolescence from both a personal and global perspective.