This month, APTrust crossed 700 terabytes of unique preserved content. We are marking the milestone not because the number is exceptional in itself, but because of what the trajectory reveals.
APTrust was founded in 2012. We entered production in December 2014. Growth in those early years was slow by necessity: digital preservation programs were still maturing at most institutions, workflows were largely manual, and the case for community-based preservation infrastructure was still being made. It took four years to reach our first 100 TB.
Then something shifted. Not all at once, but unmistakably.
A major rearchitecture of the APTrust platform, deployed in November 2022 at roughly the 250 TB mark, was a prerequisite for everything that followed. The prior system was not built to sustain ingest at these levels. The investment in rebuilding the infrastructure quietly set the conditions for the acceleration that members have since delivered.
In the last twelve months alone, APTrust absorbed more content than it did in its entire first five years of operation.
What drove this shift? Partly, it is integrations. Members have connected APTrust directly to their local repository systems, so preservation happens continuously rather than in periodic batches. Content flows in because the workflow demands it, not because someone remembered to run a deposit by hand. But integrations are only the technical layer of a bigger change.
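To make that technical layer concrete: in its simplest form, an integration is a small bridge between a repository event and APTrust's BagIt-over-S3 ingest path. The sketch below assumes the Library of Congress bagit-python library and boto3; the bucket name, organization metadata, and the event hook that would call preserve() are hypothetical, and a production integration would add more than this.

```python
import tarfile
from pathlib import Path

import bagit   # Library of Congress bagit-python: pip install bagit
import boto3   # AWS SDK for Python: pip install boto3

# Hypothetical receiving bucket; real names come from an
# institution's APTrust onboarding configuration.
RECEIVING_BUCKET = "aptrust.receiving.example.edu"

def preserve(export_dir: str) -> None:
    """Package a repository export as a BagIt bag and hand it to APTrust.

    A repository event hook (item created, item updated) would call
    this, so preservation runs continuously rather than in batches.
    """
    # 1. Turn the exported directory into a BagIt bag in place,
    #    writing checksum manifests and bag-info.txt metadata.
    bagit.make_bag(export_dir, {"Source-Organization": "Example University"})

    # 2. Serialize the bag as a tarball for upload. A production
    #    integration would also add APTrust's required tag files
    #    (e.g. aptrust-info.txt) and error handling before this step.
    src = Path(export_dir)
    tar_path = src.parent / (src.name + ".tar")
    with tarfile.open(tar_path, "w") as tar:
        tar.add(str(src), arcname=src.name)

    # 3. Drop the tarball into the receiving bucket; ingest,
    #    validation, and replication proceed from there without
    #    further human action.
    boto3.client("s3").upload_file(str(tar_path), RECEIVING_BUCKET, tar_path.name)
```

Once a hook like this runs on every repository event, no one has to remember preservation; the deposit is a side effect of normal work.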
The more significant shift is institutional. Digital preservation has moved from a project to a program at many member institutions. It has budget lines, staff, and executive attention. It is no longer aspirational work deferred to a future grant cycle. It is simply how these institutions operate.
The content itself spans everything libraries collect: digitized special collections, born-digital archives, research data, electronic records, and audiovisual materials. This is not optimized data selected for storage efficiency. It is the full, messy scope of institutional memory, preserved because someone decided it was worth keeping permanently.
That decision is the thing worth marking.
The Next Milestone
At current rates, APTrust will cross one petabyte of preserved content within the next few years. That number will mean something different from 700 TB. A petabyte is a threshold that resonates beyond the digital preservation community: with administrators, with boards, with funders who may not know a terabyte from a gigabyte but understand that a petabyte represents something vast and consequential. It is the kind of number that makes the scale of what is at stake in digital preservation legible to people who have not been following the field for a decade.
More than the number itself, reaching one petabyte would mark a maturation of the community. It would mean that dozens of institutions have not merely committed to digital preservation in principle but have sustained that commitment through budget cycles, staff transitions, shifting priorities, and a decade of technological change. That kind of durability is rarer than any storage benchmark and worth far more.
We are watching the counter.
The Work Does Not Wait
700 TB represents over ten years of institutional commitment, compounding. Every year that a digital preservation program remains on the planning horizon is a year of content created without a preservation strategy behind it. Born-digital materials do not become easier to preserve with time. They become harder, or they disappear.
The community around you has done the hard work of proving that this is achievable. The infrastructure exists. The models exist. Peer institutions have built the programs. What is needed now, at institutions still working toward this commitment, is the decision to treat digital preservation as a core function rather than a future aspiration.
700 TB is what sustained institutional will looks like, measured in terabytes, accumulated over a decade, accelerating.
Learn about joining APTrust →