Brett Hilton
Capstone Research
Last updated 18 December 2024
Foreword—Personal Essay
My earliest memory of demonstrating any interest in or awareness of typography comes from my elementary school days. I remember sitting at the family laptop in the basement of our home in Holladay, carefully formatting a Microsoft Word document that has long since been lost to some forgotten landfill full of hard drives. The document was titled “Who Would Win?” in large, colorful WordArt lettering. This title was followed by several image pairings of fictional characters, each labeled with their name set in a different WordArt style. I remember clicking through the various presets, looking for the one that adequately reflected the characteristics of each person I had identified. I distinctly remember choosing a metallic-colored sans serif for Thor, the Marvel superhero. The text was extruded and featured a gradient that transitioned from a bluish silver to red—I remember feeling that the metallic blue and red were reminiscent of Thor’s red cape, his silver armor, and his iconic hammer, Mjolnir.
When I later entered junior high, beginning with the 7th grade, I remember writing my first-ever essay in my first-ever English class. The assignment was a research report on any animal of our choosing, and the word count requirement was 1000 words. I stayed up until midnight the night before the due date, cramming in as many fluff sentences about the kiwi bird as I could in order to meet this requirement, naively counting the name, date, and title as part of my precise 1000-word measurement. When I received the paper back with a grade, I noticed, to my dismay, that the teacher had marked me down for not actually fulfilling the word count. I remember being devastated, feeling unjustly judged for an “honest” effort. What surprises me more as I recall this experience, however, is how she failed to comment on the fact that I had set each and every one of those 1000 words—and every other writing assignment I completed for that class—in the infamous font Papyrus. What was she thinking? Better yet, what was I thinking? Well, it was undoubtedly an aesthetic choice—a choice made from the underdeveloped frontal lobe of an ignorant yet detail-oriented designer in embryo. I seem to remember, however, that this particular English teacher had a number of signs and posters hung up around the classroom, most of which featured typography set in Papyrus. If memory serves, then perhaps my unusual typographic choice was simply an expression of my nascent appreciation for unified typographic systems.
My fascination with typography seemed to only grow throughout my early teenage years, and it did so alongside my love for the Harry Potter series. I used to spend countless hours during the summer listening to Jim Dale recount the adventures of Harry and his friends on cassette tapes and compact discs. I checked the Pottermore website almost daily, eagerly anticipating the occasional release of new wizarding world lore from J. K. Rowling in the form of character and world-building vignettes. With all of this new material coming directly from the author, I seemed to think it deserved a permanent spot on my bookshelf along with the seven main series installments. So, I spent about three weeks one summer meticulously copying the short stories from the website, pasting them into a Google Doc, correcting punctuation and line breaks, and typesetting the entire file as one cohesive collection—complete with chapter headings, a table of contents, page numbers and footers, and a title page. I then printed and bound the pages into a book using recycled cardboard and duct tape. I made not one, but four of them: one for me and each of my Potterhead friends.
I share these stories almost as a form of self-reflection. As I analyze my own interests, searching for the source of my obsession with typography, these are the experiences that come to mind. I had nearly forgotten some of them, and as you can tell, several of the details have already faded from memory. But at the heart of each of these stories sits an unnatural (as it seems to me) attention to typographic detail and commitment to creation. In other words, I was a weird kid with weird hobbies. But I can see now from where I sit, having nearly earned my undergraduate degree in graphic design, how these experiences speak to an obsession that was always there. It seems that typographic thinking has been a part of my life, even my identity, for far longer than I had imagined.
I honestly believe the same is true of each of us. I don’t mean to suppose that every child of my generation spent as many hours as I did testing all the typefaces in Microsoft Word (though if nostalgia memes are any indication, a large percentage actually did), but I do mean to hypothesize that typography forms a crucial part of not only our individual identity, but also our cultural identity and societal way of life. I’ve spent even more time thinking about this topic in recent years as generative AI has taken the world by storm. My initial reactions to the new technology were admittedly very pessimistic; I spent most of my time migrating between the camps of AI cynics, alarmists, and defeatists. More recently, however, I have found myself agreeing with the claims of the idealists and futurists. I don’t believe that AI will be the death of the design industry, but I can’t help but wonder how it will change, or even advance, the typographic world. It is this curiosity that has led me to wonder: how did people in Gutenberg’s day respond to the first printed Bible? What did the haters fear, and what did the advocates foresee? I imagine that by reviewing not only the tangible effects of historical advances in technology on typography but also the public perception of those advances, we can learn how to think about our moment in typographic history—what we should fear, and what we might foresee.
On a more personal level, I’ve recently been thinking more critically about my relationship with the digital world of design. Beyond generative AI, I’ve become increasingly disillusioned with our general acceptance, or at least allowance, of targeted advertising, trading personal data, misinformation, and many more questionable byproducts of digital media. When I think about my love for typography, I can’t help but balk at some of the abuses of typographic power. These are the kind of feelings that make me want to throw my MacBook Pro into the Provo River and go camping in the Uintas for a decade. In all honesty though, I think there is something to be said for the healing power of analog processes. I don’t know if healing is the right word, but I can’t think of a better one to describe it. The time required to set metal type leaves space for one to be alone with one’s thoughts. The unpredictable nature of printing on an antique press forces one to grapple with crippling perfectionism. The feeling of satisfaction that comes from holding one’s own hand-crafted book is thrilling and singular. The practice of journaling by hand enables one to slow down, process, and make new connections. What do I lose by not engaging in traditional processes? What do I gain by adopting the latest technologies? Is there a balance I must strike in order to not only improve my typographic skill, but also preserve my identity and keep my sanity? These are the questions I ask myself.
Introduction
Last year, American journalist and thinker Jeff Jarvis published these words in his new book, The Gutenberg Parenthesis, in which he conducted a retrospective of the age of print in search of lessons for us in the age of the internet:
We are fortunate to live in a moment of contrast when—by examining what is different, what we might lose, what we are fighting to save, and what we may invent—we can better examine what has been: what it meant to live in the time of text and what freedoms, limitations, and presumptions it implied in our worldviews. We live in a moment of choice, and it is good and necessary that we examine what we might gain and lose.
The book is an exercise in comparing and contrasting the changes, responses, and effects of various printing technologies that developed between Johannes Gutenberg’s day and the internet age. By applying a similar exercise to the current typographic moment—one that is saturated with artificial intelligence technology—I believe that we can discover patterns that can serve to benefit type designers individually and the typographic industry as a whole. Ultimately, I endeavor to demonstrate that an understanding of and engagement with past technological advancements in typography empowers each of us with an improved ability to communicate, greater security within a historical context, and a stronger sense of identity and community in the face of AI typography and design. In order to accomplish this, I have separated the paper into two parts. Part one will look backward, observing the effects and reception of historical developments in lettermaking, while part two will look forward, contemplating the same for AI. Throughout the paper, I will regularly analyze quotes from the past in order to make comparisons with the present typographic moment. This analysis will primarily center on Latin letterforms and Western history.
Part I—Our Typographic History
We will touch on four major innovations in the realm of typography: Gutenberg’s printing press, automated typesetting, phototypesetting, and the digital age. With each of these technological advancements, let us consider three questions: what changed, how did people respond, and what effects did these changes have on letterforms, cultures, and individuals? To introduce this pattern of analysis, let’s look at an example of change that occurred several centuries before Gutenberg’s day.
During the late sixth and early seventh century in the Western world, the primary mode of communication and dissemination of information was oral. Writing simply served to support or record that which was spoken, and it was far from common for any lay person to become literate. The scribes of these days wrote in “scriptura continua, a relentless parade of letters in heavy scribal penmanship without separation, punctuation, or capitalization to delineate words.” What changed? Well, after taking some inspiration from Eastern scripts, Western scribes began to again incorporate spaces between their written words. And how did people respond? Let’s put it this way: there were mixed reviews (a consequence which, as we will quickly discover, is quite the norm among disruptive innovations), and the reason behind this becomes clear when we ask our last question. What effect did this have? Well, for one, letterforms began to change. “As words took on new shape, so did the letters used to build them.” It was not many years later when Emperor Charlemagne commissioned a standard set of lowercase letters, which, when combined with capitals from Rome, ultimately became the Carolingian minuscule, a script that would later inform the roman typefaces we know today. More than just the shape of letters changed, however. In his book, Jarvis notes “the relationship of the simple space with privacy and individuality. In the time of scriptura continua, authors tended to dictate to scribes; thus writing occurred before an audience. When authors began to inscribe their own words, they could do so in privacy. Private reading gave way to private writing, which made way to private thoughts—as well as private heresies, religious skepticism, [and] political subversion.” Individuals began to develop their own thoughts and their own sense of self because of the liberty which literacy provided.
Education became more general, and began to threaten the cultural and hierarchical establishments of the time. In this way, one simple shift in the art of lettering had major societal consequences downstream. Are there perhaps similar consequences we can observe with the aforementioned typographic advancements? Well, buckle up: there is a lot to unpack within our typographic history.
Phototypesetting, the Typographic Middle Child
The next world- and letterform-changing development in typography arrived in the mid-twentieth century in the form of phototypesetting. As disruptive as this technology was, however, many history books and classes skip over the phototype era because “it was just too short a period of time,” and because the process relied on now-antiquated device-specific parts and chemicals; “you can’t use those machines for anything today.” Unlike Linotype machines and other mechanical methods for which independent publishers still have meaningful uses, the majority of phototype machines have been consigned to a life behind display-case glass. Like a typographic middle child, “no one sees them,” and so they are often overlooked. Nevertheless, the mid-twentieth century begat some of the most important and unusual changes, responses, and effects that we have yet seen in our typographic study.
What Changed?
The key difference with phototypesetting was the use of photosensitive paper to produce plastic plates instead of lead castings to produce metal plates; typographers informally called this method “cold type” as opposed to the “hot type” methods it replaced. The machines worked by projecting light through a film negative with type characters onto light-sensitive paper which could then be used to create the plates for printing. This process, and the later incorporation of CRT screens, significantly streamlined printing and enabled greater flexibility in type design. For one, it became much cheaper to store type since printers could “stock, in sheet form, a variety of types which would be a great deal more expensive” stored in metal. This also opened typographers up to a variety of services “which might otherwise [have been] impractical” because it enabled “the uniform perfection of characters” and “precise control of spacing,” according to one author.
Depending on how you spin it, though, phototype may actually have allowed for lower typographic fidelity and less flexibility. “The problem was fonts,” specifically producing large libraries of typefaces in film to meet the demand of designers and printers. As it was, the best collection of typefaces came from Monotype, Linotype, and the other major printers. So, companies like Compugraphic and Photon began to shamelessly copy their characters, and they did so with impunity, thanks to American copyright law, which prohibits the legal protection of specific typefaces. These phototype companies “essentially took hot metal fonts, Linotype fonts, did printouts, then blew them up, [making them] fuzzy,” and then they “photographed them in cameras,” thereby creating new masters of existing typefaces but with a subtly reduced fidelity to the originals. We are still feeling the effects of these changes today; many traditional typefaces have endured multiple digitizations because designers have recognized the discrepancies between their metal type and phototype versions. There are still several widely used typefaces in the current type market that wear the names of the classics but express meaningfully divergent characteristics. And while phototype enabled greater typographic flexibility in theory, the incredible variety of existing machines, each with its own bespoke and constantly evolving chemical process (a necessity in order to avoid infringing on other phototype patents), actually made type more device-dependent. As type designer Matthew Carter explained, “phototypesetter fonts were even more specific, not only to the manufacturer, but to the machine model too;” so, as a designer, if you wanted to use a specific typeface but your chosen printer did not have it in their phototype machine, you would either have to spend more money to employ the services of a second printer or surrender your typographic vision in favor of something cheaper.
Such creative limitations can have broader effects than merely disgruntling type nerds, but we will get to those effects a little bit later.
How Did People Respond?
Turning again to the reactions these technologies elicited, it should now come as no surprise that the most prominent, pessimistic response constituted a fear of losing jobs. In 1978, filmmaker David Loeb Weiss and type historian Carl Schlesinger released a documentary entitled Farewell, Etaoin Shrdlu, in which they observed and interviewed Linotype operators on the last day of hot metal typesetting at The New York Times. When asked, “How do you feel about this changeover?” seventy-five-year-old Linotype operator Albert Tanguin responded, “Well I feel that they call it progress, but as far as I’m concerned, it is not. I would like it to stay the way it was, you know? Keep the old machine running.” Despite the typical hustle and bustle of editors and compositors rushing to complete the next morning’s paper, a certain solemnity rested over the seemingly endless rows of Linotypes and composing tables filling the underground room. And as with the Linotype before it, there was a feeling of inevitability associated with phototypesetting. A later conversation with a Linotype operator reemphasized this point:
‘I find it very sad. Very sad. I’ve learned the new stuff, the new processes and all, but I’ve been a printer now for twenty-six years, and I’ve been [at The Times] for twenty years. … I hate to see it. It’s inevitable that we’re gonna go into computers. All the knowledge that I’ve acquired over these twenty-six years is all locked up in a little box now called a computer, and I think probably most jobs are gonna end up the same way.’
‘Do you think computers are a good idea in general?’
‘Oh there’s no doubt about it, they are going to benefit everybody eventually. How long it will take, I don’t know.’
Additionally, one author from the Inland Printer claimed that “it is certainly to the advantage of the industry that these new methods remain part of the printing service,” that they “eventually will make serious inroads into the present market for composition and typography,” and that “adaptability to change is the most desirable attribute of today’s printer.” Indeed, this feeling of inevitability seems to have become more common as the centuries roll on, perhaps because of the acceleration of technological development during the nineteenth and twentieth centuries up until the present day, a day when the old adage rings truer than ever: “the only constant is change.”
As for the more optimistic responses, yet again we see that “many printers” became “actively engaged in the specialty” and were “extremely enthusiastic about its future” because of how much money phototype would save them. They likewise spent a lot of money, investing time and capital to “test the equipment and train personnel to coordinate new procedures with old, well-established skills.” In fact, the International Typographical Union (ITU) was instrumental in securing a deal with The New York Times that required the paper to retain its Linotype operators and train them on the phototype CRT machines. Thus, the formation of unions, as a quasi-direct result of automated typesetting, apparently helped save future typographers from a legitimate threat of unemployment. Naturally, many typographers were also excited by the new technology simply because of the typographic liberty it provided. Type historian Frank Romano, who transitioned from working at Linotype to working as the first advertising manager for VGC in 1968, said in a lecture at the Cooper Union, “having a machine that allowed me to kern was a revelation, because in hot metal of course I was limited by the matrix. The ability to actually have type spacing … created, I think, a whole new level of typography.” Of course the idea of kerning type was far from new in the mid-twentieth century, but it was definitely less common because of the mechanical restrictions of hot type.
What Effect Did This Have?
One of the more interesting effects of phototype was the influx of more women into the printing industry. Romano illustrated how this happened in the previously referenced lecture, explaining that “all of [the phototype] machines” up until this point “required specialized keyboards.” However, when the Linotype company released their new Linofilm machine in the ’60s, they “used typewriter keyboards. They did not use the ‘etaoin’ approach because the ‘etaoin’ was controlled by the unions, and the unions were very male dominated. When we now started to have typewriter keyboards, now other people could use it—not trained ITU people, but literally everyone,” especially many women who were trained on typewriters. We could accordingly categorize this shift in the workforce under second-wave feminism, a movement from the ’60s through the ’80s which specifically sought equal employment rights for men and women. Behold, yet another cultural revolution with typography at its root.
Hello (and Goodbye) World
Now as we come to the last of our four historical analyses, expect the organization of this section to look a little different from the others. Instead of maintaining a clear sequential structure of what changed, how people responded, and what effects these changes had, our analysis will be a lot more nonlinear and chaotic, not unlike the digital wild west that it chronicles. And expect a lot of quotes from the eleventh edition of Emigre; those thirty-odd pages are chock full of juicy interviews with renowned graphic and type designers, conducted by Rudy VanderLans.
The changes to typography began as early as the first computers—not Apple’s Macintosh, but the giant, metal, cabinet-looking machines that filled large rooms from floor to ceiling. In his article “Computers, Printing and Graphic Design” from Design Quarterly in 1966, Kenneth Scheid gave an overview of how computers were changing and, once again, expediting the work of typographers, this time by automating hyphenation and justification of text blocks. The acceleration continued with the proliferation of GUIs, then personal computing (including the Macintosh), followed closely by desktop publishing, and ultimately digital font formats like PostScript, TrueType, and OpenType. If, as I claimed earlier, the 1880s were an eventful decade for typographers, then the 1980s (and ’90s) were downright monumental; we are still working to understand all of the effects these changes have had on typography, culture, and identity today.
The Return of the Generalist Designer
We might be able to summarize one of the most significant of these effects in the words of Matthew Carter: “The technology, migrating downwards, brought control over the form of documents to those who controlled the content.” Or as April Greiman put it, designers did not “necessarily have to imagine what [their] design [would] look like after it [came] back from the lithographer.” Essentially, the digital age marked the return of the generalist designer, a role which effectively had not existed since Gutenberg’s day. “Like many things that look new in the light of the latest technology,” said Carter, “this turns out to be less a revolution than a reversion to a previous state. The earliest type creators had no alternative but to be both designers and makers of their types.” They had to design, cast, set, and print their own work as a one-man typographic band. Designers today, likewise, “can control all aspects of production and design … bringing together a variety of disciplines and consequently streamlining production.” Simultaneously, however, as VanderLans explained, “computer technology [provided] opportunities for more specialization. … Less peripheral knowledge and skills [were] required to master a particular niche. For instance, a type designer [was] no longer required to be a creative mind as well as a skilled punch cutter.” And as these specialized industries reintegrated (often in the skillset of a single designer), type design steadily returned to a space of childlike exploration and design basics. VanderLans wrote, “This return to our primeval ideas allows us to reconsider the basic assumptions made in the creative design process, bringing excitement and creativity to aspects of design that have been forgotten since the days of letterpress. We are once again faced with evaluating the basic rules of design that we formerly took for granted.”
As exciting as this reintegration was, it did not come without its share of familiar fears. Jeffery Keedy explained it this way:
Design is a business. And just as with any other business, there is a fear about how the computer is going to affect people's jobs. The other fear that designers have is of the computer's autonomous nature, the way that it defines itself—it has so much control. And I think that designers, out of ignorance and arrogance, feel that they have to relinquish control or transform themselves or their vision, which I think is actually not the case at all. And I'm finding it again and again with students and younger designers who are getting into this with no preconceptions or fears.
Curses! The fears of job security and substandard quality rear their ugly heads once again! I tend to agree with Keedy, though, that, as with generations past, we should not fear these changes as desperately as some alarmists do. On the contrary, as Scheid explained at the advent of computers, “the graphic designer may find that these new systems will substantially expand the need for design services in the production of printed communication,” rather than eliminate them. As the technologies change, so do the specializations; and while many specializations become obsolete, they are replaced with newer and additional areas of expertise. Scheid follows this claim with a rallying call to graphic designers, saying, “this is all one can now accomplish: to alert the design profession and to encourage it to explore, whenever possible, the graphic potential of new technologies.” As with the other collections of mixed reactions we’ve examined, a voice always surfaces that portends inevitability and advocates for exploration into uncharted territory. As we will presently see, there was quite a lot of exploration during the digital age, and with it came a multitude of fascinating responses.
The Democratization of Typefaces
Perhaps the wildest explorations came about because of the democratization of typefaces. The ease of transmitting digital files, especially once the internet was created, enabled typeface circulation at an unprecedented level. Forget film strips, let alone lead castings: all a designer needed was a personal computer and a PostScript file. Cyberpl@y: Communicating Online author Brenda Danet described this time period as a “font frenzy” led by “fontaholics,” people “without formal training in design or typography, … collecting and displaying their favorite fonts on the World Wide Web, and even designing their own.” They stretched, squashed, twisted, animated, and colorized these fonts in fascinating ways (or reprehensible ways, depending on whom you ask). “Emailers who had formerly been restricted to black and white plain text began to send each other graphic and sound files as email attachments, and to vary the size and shape of fonts and even font color in messages.” The typographically uninitiated treated type in a way that caused legendary designer Paul Rand to say that he “resents computer graphics—they are an affront to his sensibilities,” an opinion which Keedy shamelessly ripped apart by describing it as “understandable from someone who just can't see beyond the most superficial aspects of new work and new possibilities. His retirement is obviously past due. Computers are for the designers of the present, not designers of the ‘timeless’ past” (mic drop).
The radical accessibility of typefaces had other more practical effects on typography, though. One of those was the challenge of use case described this way by Matthew Carter:
Type has never before had so many output forms—on screens, on laser printers, on typesetters—across which to preserve its personality and its functionality. A device-independent technology requires a corollary in type design. In previous times, designers knew how their faces would be typeset, and could make allowance for that in the design. Typefaces came and went: technology stayed the same. In the current situation of multiple coexisting typesetting methods, designers can no longer predict exactly how a face will be used.
Instead, any designer concerned with consistency or principle in their typeface designs had to create something more robust, a typeface that could withstand the misuse and abuse of ignorance. This challenge, in combination with device-independent standards (like PostScript), subsequently contributed to a flourishing and competitive type design market; the font now existed “independent of the equipment manufacturers.” Digital type foundries began to pop up all over the world, echoing the analogous appearance of print shops all over Europe in the fifteenth century. Suddenly, anyone with a computer and a mind to make type, for better or for worse, could do just that.
The Democratization of Type Design
Thus, along with the democratization of typefaces, the digital age witnessed a democratization of type design itself. In her book, Danet writes about a website called “Fonts Anonymous,” one of the many “virtual gathering [places] for font devotees. … Many enthusiasts proudly displayed their own font designs on handsomely designed websites, offering them to others for free or for a modest price. Basic font design had become so easy that even children could create respectable-looking ones.”
As you can imagine, the reactions from the design community were mixed and extreme. To help us fully appreciate the breadth and depth of these reactions, I’ve opted to include several quotes below, along with a few of my own comments here and there.
One of the more apathetic responses caught my attention because of how similar it sounded to opinions I’ve heard today about AI: “The Macintosh [can’t] do anything more than we could using traditional design tools.” Underlying this apparent indifference toward the innovations, I suspect there festered a fear of the inevitable disruption. Many today who fear the effects of AI design say similar things, postponing their need to face the perceived threat by minimizing its importance.
Others were more forthright with their apprehension. Initially, Matthew Carter responded negatively to the idea of lay type design. He said,
There is orthodox typography, and there is the growing vernacular, in which enthusiasts for the ballpoint, paint brush and spray can, have now been joined by the devotees of page layout programs, word processing software, and Fontographer. This looks more and more like a critical mass. The sense of what is sacred and what is profane about typography is under pressure, if nothing else.
Understandably, his chief concern was that removing barriers to type design would dilute the venerable art. Others were less verbose in their criticism. When VanderLans asked Rick Vermeulen, “Are you disappointed by the results of desktop publishing?” he responded, “Oh yes, very much, and the biggest disaster is everyone is a designer all of a sudden.” Vermeulen’s was a common sentiment, and one I still hear today, especially with the growing popularity of tools like Canva that templatize graphic design. Incidentally, Rick Valicenti of Chicago had strong words to say in response to opinions like Vermeulen’s:
When graphic designers complain and say things like: ‘Now everybody can be a graphic designer,’ I get pissed. Because it's not true. Not everybody can conceive a project, or get it printed, or even fit their solutions into some marketing scheme. Those are the jobs of the graphic designer.
Here, Valicenti is speaking to a sort of resilience that is required to succeed in graphic design. Some would argue that the level of effort Valicenti describes is not essential, but in his interview with VanderLans, Erik Spiekermann offered great insight as to how Valicenti might be right. First, he made it a point to differentiate between font design and type design. According to him, what the “fontaholics” were actually engaged in was more accurately called “font design”—simply packaging letterforms digitally, with or without typographic competence. Type design in its truest sense, he argued, is defined by the meticulous and mathematical principles of the classical masters—principles like optical corrections, counters versus apertures, and weight consistency. He explained that,
There are a lot of little-known rules and details that make a face work, and getting to know them takes a lot of practice. It is also extremely boring to draw almost two hundred characters after you've designed the concept with a few of them. And that's only one weight! Type design is also very time-consuming and the recognition is slight—type designers never become heroes.
At the end of the day, traditional type design requires a mountain of detail-oriented work. And it is not all for nothing; centuries of near-scientific experimentation transformed letters into their eventual twentieth-century forms, capable of accomplishing the grueling work we put them through. They had been forced to perform under the pressure of generations’ worth of philosophical musings, mathematical innovations, scientific discoveries, metropolitan navigation systems, and a hundred other applications. And as the world began to shrink through endlessly connecting internet channels, the need for typographic diversity grew.
Out of both a recognition of and an excitement for this need, several designers had much more optimistic things to say about digital type design. Though initially pessimistic, Matthew Carter ultimately recognized and advocated the benefits of digital type:
Digital technology is a great preservative: the specimen books of extinct type foundries are being ransacked for revivable gems. It is also a stimulant to the creation of new type designs from the long-established type companies with honorable traditions of innovation, from the newer companies … concerned with furthering the progress of the art, and from unaffiliated designers encouraged by accessible tools to try their hand.
April Greiman elaborated on the good that would come from the work of those unaffiliated designers this way:
I think that we as designers are going to learn a lot. We're going to see people empowered with our visual language imitate us, a language that we have spent a lot of time learning and developing. We'll see them do everything from really terrible to very wonderful things and it will be a good learning experience for us. Everybody is visual, it's in the collective soul, and the Mac will empower and help a lot of these people to express themselves.
I find this to be an incredibly inclusive and healthy response. I believe that diversity makes us not only better and more informed designers, but also better and more empathetic people. As Valicenti pointed out, “The typewriter was never assigned to one class of people either.” Carter had a similar vision for how the type design industry should be:
In general, type design is seen as something best left to specialists. I hope this perception will change. In the type design course I teach to the graduate graphic design students at Yale, we use Macs and Fontographer to do practical exercises with the aim of making the whole subject less inhibiting. … Experience like this will, I hope, encourage graphic designers to regard creating or commissioning typefaces as a reasonable part of professional life.
This take is quite at odds with his earlier concern for the sanctity of typography, but as a student designer now sitting three-ish years into the AI boom, I can empathize with oscillating opinions.
Homogenization
We’ve examined a few negative reactions to digital type design, but why did these designers respond this way? What were they afraid of? The answer, overwhelmingly, is homogenization. There was a fear that with greater accessibility to design tools, further removed from traditional processes, amateur designers would create uninformed and uniform design work that did not inspire. Gerard Hadders lamented,
Design students … have almost no notion about graphic techniques. They have a very faint idea about what is possible. I think that computers will add to the distance between them and the final product. … Everytime the technology or the technique becomes dominant, you’ll see mediocre design.
In many ways, Hadders was not wrong; homogenization inevitably follows viral innovation. Keedy agreed that digital type would contribute to homogenization:
I think it will … in a way that any other tool has. Typewriters certainly created a homogenous look to letters and at one time, press type created a certain homogenous look among headlines. Of course the computer will do this, too, in some respects, but I don't know what an alternative to that would be. It would be the case regardless, for any tool that we all use. But as the tool and its user become more sophisticated, the similarities become less apparent.
So while homogenization is a given, Keedy believed that design would ultimately evolve beyond that state. In Valicenti’s words, “time is the issue now.” He continued,
The results will appear very different from what we see now, because [designers] will bring much more past experience to this new process. We have seen this happen before, in situations where the letterpress typographer set type photomechanically. Their expectations of the process were different. Their typographic concerns surfaced. It's all a matter of time.
Future type design might even be better, according to Greiman. In reference to the transition away from photoelectric typesetting, she explained how “graphic designers were slowly removed from their direct contact with typography,” contributing to “a great deal of uninteresting and uninspired typography.” But thanks to the “closer relationship to type” and greater “control [over] every letter and every word” that digital typography provided, the doors of possibility opened much wider, enabling greater variety and perhaps more interesting and inspiring typography.
In the end, I think Glenn Suokko summarized the situation quite well when he said,
There's always been bad design, now there's just a lot more of it. But, hopefully, there is still the need for good design as well. The Macintosh is only a tool, but it is a tool with tremendous possibilities. Everything depends on how this new tool is used.
This is one of the keys we must understand about the computer, artificial intelligence, and every other past or future typographic innovation: “homogenization of design is created by designers and not the tool.” This perspective turns designers into agents of change instead of defining them as subject to change; the responsibility for diverse and functional type design falls to them and how they decide to use the tools available to them.
Decisions, Decisions, Decisions
Rick Valicenti had a funny way of communicating the difference between design decisions and design tools. When asked, “will the Mac play a role in this experimentation?” Valicenti responded, “sure it will. Just as my wife will play a role in it.” In so few words, Valicenti essentially sermonized that everything is a tool in the designer’s toolkit. There are no fixed influences on his or her process, only decisions he or she must make.
This was where the meaningful change in type design really began, according to Keedy:
When you have more choices, it brings you into a realm where you become more sophisticated. The more choices you have, the choosier you become. It's like television. When there were only three TV stations, you maybe weren't all that selective and would watch television differently than now, when there are a hundred and ten stations, plus VCRs. You're a lot more specific about when, how and what you watch now. … You will begin to fine-tune your own sensibility. … It helps you be more discriminating and specific in your own aesthetic.
There were others who saw the decision explosion in a much less optimistic light, however. Vermeulen and Hadders were afraid that amateur designers would simply ignore the possibilities and “just use defaults. they'll use the programs as they are. Especially students, I notice, are too easily satisfied with the results. They'll stretch some type or squeeze it and they're satisfied.” Having made several default decisions as a student myself, I find this a fair concern. It takes much more thought, effort, and risk to make bold choices in design. I think Greiman made the most accurate assessment of the digital situation, however, based on all that we’ve examined: “as designers we have always had to make choices and, we hope, the right choices. Now, with the advent of the computer, the possibilities are multiplied, but the goal is still to make the right choices.”
What Effect Did This Have on Letterforms?
Enough about who said what and why; how did letterforms themselves change during the digital age? You could say that this era encompassed the most change we have yet seen in letterform design, purely because of the tremendous number of new typefaces that entered the market. According to Frank Romano’s calculation in 2021, “we are now at a million fonts! … I save all the emails that I get every day from font foundries, and they don’t just introduce one font family, they introduce ten at a time, and then each one has different weights in it!” Software like Fontographer, RoboFont, and Glyphs enabled molecular-level control over the shapes and proportions of individual characters. Type designers began to push the envelope of what was possible and what was legible. Many experimented for experimentation’s sake, while others designed to serve every brand’s need to differentiate, leaving us amidst an enormous, nigh-unnavigable sea of fonts. As Romano rightly wondered, “how anyone will deal with that in the future, I have no idea!”
Thanks to PostScript, TrueType, and OpenType, the digital age also marked the return of optical sizing in type. Optical sizing refers to the “subtle optical compensations” that early typefounders made to the proportions and spacing of metal type in order to make the characters “look looser and more readable at small size, tauter and more elegant at large size, and all within the consistent style of the type design.” Jeff Jarvis explains that “these careful gradations were lost to the economies of photocomposition,” which scaled letters linearly, projecting from a single master film image. “But digital type is more manipulable. Inbuilt intelligence can control anamorphic transformations, or vary letterforms according to context.” This was a remarkable development in typography: a mathematically superior degree of accuracy, surpassing even the admittedly high accuracy of the masters of hand lettering, paired with the speed and efficiency of electric signals. This left the doors of typographic discovery and application open even wider than before.
Despite the greater variety of circulating typefaces and granular level of control over the details of those faces, Matthew Carter argued that these developments would not change “the essence of the letterform” all that much. This is even true of the previous technologies we’ve discussed; when it comes to the underlying anatomy of the letters’ shapes, the technologies “have had a negligible effect on Latin type design,” with only a few exceptions (like the long “s” we examined previously). “Each new technique remained in the hands of exponents of the old method and was used to replicate what it replaced.” We should remember that letterforms represent language, and language does not change overnight, but instead slowly evolves over centuries of ebbing and flowing cultural waves. And now with the incredible preservative power of the digital age, I suspect that evolution may be moving at an even slower rate.
The Question of Copyright
At this point, we should touch on a concern that has existed for a long time but that became a much more heated discussion topic as the internet began to stretch its legs: the question of copyright. Copyright questions have pervaded the AI discourse, but they were also very common in the digital age. VanderLans explained it this way:
Digital data is easily modifiable and it is difficult to draw the lines of ownership and copyrights. Problems of piracy are already evident in areas of program development, type design, and illustration. For example, some illustrators using digital media now opt to submit hard copy artwork to clients rather than disk versions fearing that their illustrations could be copied and manipulated into a misrepresentation of their work, without deserved royalties. This brings up numerous previously unaddressed questions over ownership of data and our rights to use or even alter it.
While history has been rife with conflict over ownership, VanderLans made a great point that the digital age produced never-before-asked questions of ownership that were even more difficult to answer. We still don’t have answers to them today; instead, we are creating newer, even less answerable questions. If I were to produce a design using an AI model that was trained on one specific designer’s body of work, who owns the design? Do I own it, as the prompter? Does the developer who created the model own it? Or does it belong to the designer on whose work the model was trained? Courts are still debating the matter. Typography is a special case in the United States because the U.S. Copyright Code excludes typefaces from copyright protection, classifying them as utilitarian objects instead of artistic expression. The policies within the Copyright Act of 1976 effectively protect the digital files and proprietary software that typefaces today are built on, and it is still possible to receive trademarks or patents for typefaces in certain cases. However, there are still no ordinances that prevent individuals from ripping off another type designer’s work. The effects and fallout of this reality are still in development.
The Question of Culture
Despite the many unanswered typographic questions and pending cultural effects of digital developments in the late-twentieth century as a whole, computers undeniably inaugurated a thrillingly chaotic cultural revolution. In the words of Valicenti,
Who would have ever imagined that we would be listening to synthetic or electronic music and be dancing to it? So naturally now we ask the question, "Who would ever be reading a book from a TV screen?" … It will all exist together. We all like the experience of holding a book in our hands, reading it, putting it down etc. That's just a different kind of relaxation or experience. And all this will be available to us. I don't think it will make our culture psychotic. It's going to make us more exciting. We're going to be able to choose from this rich buffet and there will be something for everybody.
Hot on the heels of Postmodernism, the internet age expanded that buffet to include stranger and more wonderful worlds of typography. Our letterforms began to adopt relativistic and pluralistic identities, blending together in a cultural melting pot. This smorgasbord, however, raises relevant questions of its own about the identity and cultural significance of typefaces themselves.
Typography Isn’t Racist, You’re Racist
Do typefaces have inherent meanings? How does the answer to this question change the way we use and think about them? In what ways does our use of typography impact the culture and individuals around us? Before we talk about the future of typography, these questions deserve careful consideration: this is the titular topic that begins to help us understand why all this history of typography even matters in our lives and interactions with others today. By carefully considering typographic associations, we will begin to understand the cultural effects and implications of type design and type choices.
Some argue that typography is independent of associations. This line of thinking is generally paired with the feeling that the democratization of type and the fading of cultural memory are both innocuous inevitabilities. Theorists support this claim by pointing out that “typography is historically older than nations” and national identity. Not only is it older, but “type, as an industry that spreads languages,” actually “helped [to] engender the broad idea of nationalism.” Thus, “particular typographic styles” do not “have, [nor] should have, particular national attributes.” Instead, they were conceived free of associations, and we should use them with that liberty as our guiding principle. Researcher Mila Waldeck explained that “in the [fifteenth] century, typographers and types crossed areas that today represent national frontiers, and reproduced texts in different languages and cultures.” Therefore “typography itself is essentially international.” After all, “the identity of a country is not immutable,” so why should we treat typographic character with a level of rigidity that has perpetuated destructive nationalistic ideals?
Others argue that typography actually does have fixed associations. This matters to them because of the feeling that the democratization of type and the fading of cultural memory are culturally calamitous certainties. “At one point letterforms had a social context because there was a social memory of where letterforms come from and their era,” said print designer and professor Rob Buchert. “Digital technology has destroyed context and meaning.” In my interview with him, Buchert illustrated the potential risks of ignoring typographic associations by having us compare our respective feelings about a specific typeface. “When you see Cooper Black, for example, what do you think of?” he asked me. I responded that I imagine printed invitations to birthday parties or paper alphabets hung in kindergarten classrooms. Rob, on the other hand, thinks of old tire shops and warm 1970s advertising. When I asked him why that difference matters, he responded with one word: “isolation.” Rob continued, “we don’t have a group experience when we create new meaning, that is the theme of the digital world. … We can't develop these lines of cultural unity.” Rob does not personally believe that associations are exclusively fixed, but these ideas essentially represent the opinions of those who do.
In a sort of hybrid of both perspectives, some argue that typographic meaning is not inherent, but instead that we assign meaning to type, an assignment which is perpetually fluid. This is Waldeck’s ultimate opinion as evidenced by her claim that “since typography precedes nations, it does not necessarily convey national identities. Nationalist commitments may attribute false historical origins to typefaces and restrict design to national borders that do not correspond to the cultural ones.” I personally agree with this perspective, that associations are social constructs. Blackletter characters, one of the earliest categories of letterforms, serve as perhaps the best evidence in support of this perspective. These typefaces, originally inspired by the handwriting of Catholic monks, were eventually commandeered by the pervasive printing of Protestants—the most hostile theological opponents of Catholicism. As Waldeck explains, “in the [sixteenth] century, printing with [“roman” letters and “blackletters”] stamped the difference between Catholics and Protestants. Roman letterforms prevailed in the Vatican, whereas Protestant publications [usually had] blackletters.” Then, in the early-twentieth century, “nationalism led blackletters to be considered a cultural heritage of the recently unified Germany;” it began to represent Nazism and the dogmatic propaganda promoted by that regime’s books and ephemera. Ironically, Hitler later tried to reverse the cultural appeal he had made to blackletter, instead referring to Fraktur as “Jew-letters.” Needless to say, his individual effort had no lasting effect on blackletter’s historical associations.
The next most prominent change came only a few decades later as “artists, products, and fans of certain music styles” in the mid-twentieth century began to “choose a particular kind of type for purposes of marketing and self-presentation.” The “typical [choice] for heavy metal” bands was “‘gothic’ typefaces and calligraphic scripts.” So, from our modern vantage point, what messages are we supposed to absorb when we see words set in blackletter? Catholic religious principles? Nazi propaganda? Heavy metal poetry? Yes, to all of them. “It doesn't matter where it came from,” said Valicenti. “We will always have an appreciation and a need for [historical] letterforms. … We'll use them for those [same] messages [with] their inherent meanings” as well as for newer, widely-accepted meanings. In his interview with Valicenti, VanderLans skeptically asked, “can you see two typographic heritages together, and living next to each other?” To which Valicenti readily responded, “Absolutely! Just as you could hear in one song a live string quartet combined with a fuzz-tone guitar. … The marriage is fine, and they're fine independently. … We should all be free to eat from the buffet. And it's the fusion, those hybrids that we create, that makes the statements of the designer much more appealing and richer.” Not only is the “buffet” perspective a more accurate attitude toward typographic associations, but it is essential for us to adopt. If we wish to avoid creating type design that is not only dysfunctional but actually harmful, then typographers bear the substantial responsibility of staying informed about cultural and historical influences.
How can type design be harmful, you might ask? Consider the research conducted by American-born Chinese type designer Lilly Lin. For the thesis of her master’s degree in Integrated Digital Media, Lin studied and reported on the negative effects that Chop Suey typefaces had on not only her culture, but her personal identity. Lin explains that while Chop Suey typefaces “initially appear to be culturally Chinese, … upon closer examination they communicate English words and Western cultural concepts. … to any fluent Chinese reader, the Chop Suey font looks disjointed and doesn’t bear a resemblance to actual Chinese calligraphic characters.” These typefaces were designed within the “closed square theory” of culture, a theory which treats culture as a mold that exclusively shapes the characteristics of individuals and objects within clearly demarcated geographic and linguistic boundaries; said more concisely, culture is fixed. This is the complete opposite of the “buffet” theory of culture we have been exploring, which instead defines culture as fluid and cross-compatible. Speaking from her experience as an ABC (American-born Chinese), Lin explains how the closed square theory of culture “has subversively played into how many common cultural stereotypes have been developed and perpetuated even today.” Typography, she says, is a key element in “physical placemaking,” which in turn “[establishes] cultural communities, which in turn facilitate strong cultural connections.” This progression has been the experience of various “Chinatowns” in urban centers across the United States. Western designers created ill-informed and inauthentic appropriations of Chinese lettering as a way of identifying foreign cultures within a western context, which in turn led to “an inaccurate view of … Asian people” and the reprehensible practice of “othering.” Speaking on a more personal level, Lin recalls childhood experiences with microaggressions at school.
In one instance, a classmate asked Lin where she was from. When Lin responded “Indiana, just like you,” her classmate responded, “No, where are you really from?” Lin warns that “these details may not seem violent at first … but the sentiments and the misconceptions they are tied to fester and grow and can manifest in violence.” Thus, it behooves typographers to be thoughtful about the letters we use and the associations we employ, lest we inadvertently feed the “system of stereotypes that enables a cycle of violence.”
Let’s review by answering the questions I posed at the beginning of this section. Do typefaces have inherent meanings? Their meanings are not “inherent,” per se, but they do hold remarkably strong “polyvalent” associations. How does the answer to this question change the way we use and think about them? In what ways does our use of typography impact the culture and individuals around us? In short, the polyvalence of type hopefully opens our eyes to our responsibility for being informed and respectful of typographic heritages. Even the everyday, ill-informed choices we make can have harmful effects on those closest to us. Multiply those choices at the societal level, and the results can be culturally destructive.
Part II—Our Typographic Future
The documentary film Farewell etaoin shrdlu ends with this line, spoken over the sound and footage of a pair of gnarled typositor’s hands typing away at a keyboard: “But despite automation, computerization, and the continuing advances of electronics, the central factor is still the work of the human brain, the work of human eyes, and the work of human hands, in creating that powerful element of communication: the printed word.” Having already spent hours researching the history and future of typography for this analysis, I could not help but chuckle cynically, thinking to myself, but is it? Is the human brain still the central factor in creating communication? Living at the outset of what people are calling “the intelligence age,” that question feels harder to answer than ever before.
However, now having carefully examined four major developments in typographic history, the varied responses of their contemporaries, and both the cultural and individual effects, perhaps we can begin to answer those same questions of our own day and time. What has changed? How are people responding? And what effect might these changes have on letterforms, culture, and identity? As with our analysis of the typographic past, the question of culture and identity will serve as the climactic portion of our AI discussion—the “so what?” of it all. Sure, AI is here, and lots has changed, and lots of people have lots of feelings about a lot of it, but why does that matter to you and me? Well, great question; we’ll get into that.
AI AI AI AI AI ...
AI AI AI AI AI … AI. It seems to be all anyone is talking about.
What Has Changed?
As far as technological innovations go, the number of discussions about and applications of artificial intelligence technology feels like it is growing by factorials. Machine learning, though, is not a new technology by any means; the ability of computer programs to interpret inputs and make accurate predictions has been around since as early as the 1960s. Essentially, what has changed is the resilience of our hardware, the sophistication of our learning models, and the amount of data we have available to feed those models.
The general public became rapidly aware of the implications of these changes with the release of OpenAI’s DALL-E in 2021 and ChatGPT in 2022. I remember exactly where I was and what I was doing when I first heard about these two products. The first generated image I remember seeing from DALL-E was the Spotify profile picture of one of my good friends from high school. He had generated a grungy, psychedelic image of a green and purple monkey, and he posted the image on his Instagram story, telling his audience about a crazy website that could generate any image you wanted in seconds. I promptly found the website and began generating images of manatees floating through space in the style of Van Gogh’s “Starry Night.” As for ChatGPT, I was sitting in a lecture at school when my brother texted the family group chat a poem he had generated about our nation’s political state. I immediately began submitting my own prompts to the site, asking ChatGPT to come up with retellings of scripture stories in the style of William Shakespeare. I remember feeling very similar to how April Greiman felt about the early days of the Macintosh:
What I really miss now are the great accidents that happened when I first started working on the Macintosh four years ago. … [It] threw me into an area where I wasn't so much in control anymore. I could do things that I wasn't able to do by hand. Accidents, messy things, kept happening, … opening up whole new roads of possibilities that hadn't been heavily trod upon by other designers.
The world has since moved beyond those early days of innocent childlike wonder and play, and the responses to and discussions surrounding generative AI have become much more complex, not at all unlike those of past tech revolutions.
How Are People Responding?
Honestly, there are a shocking number of parallels between how people responded to historical typographic innovations and the AI-powered generative tools of the design world today.
Like the fifteenth-century scribes who feared how the printing press might lead souls into sin, or like the Denison Review author who viewed the Linotype as the most important invention second only to electricity, many today have reacted to AI technology in extremely apocalyptic and salvific terms. Sundar Pichai, CEO of Google, claimed on CBS’s “60 Minutes” that AI is “the most profound technology humanity is working on—more profound than fire, electricity, or anything that we have done in the past”—and as a business, Google has certainly been acting like they believe Pichai’s theory. On the opposite extreme, Elon Musk, founder and former board member of OpenAI, has consistently expressed concerns with Google’s mission to build what he describes as “digital God.” These responses may sound excessively existential, but there is no doubt that this powerful technology, as with any technology, could be used for nefarious and destructive purposes if left in the wrong hands. Admittedly, most of those purposes have more to do with nuclear warheads than they do with printheads.
Some of the more positive (and more rational) responses to AI have been a general excitement for the creative possibilities and for the prospect of eliminating tedious tasks. Many designers, like the fourteenth-century anonymous scribe, have expressed in effect, “thank God, thank God, and again thank God” for the elimination of this or that onerous design task. Even in writing this paper, ChatGPT has served a priceless role in helping me to quickly and accurately find the books and articles I’ve needed. Others, like NYU blog author Andres Fortino, believe that the intersection of design and AI will lead to “limitless bounds of imagination.” In reference to one artist's AI-augmented creative process, Fortino described it as evidence of “the boundless possibilities that arise when artists embrace AI as a tool to enhance their creative expression,” also referring to AI’s potential as being “transformative … in the realm of visual art.” Of Bennett Miller’s AI-generated art exhibition in the Gagosian, author Benjamín Labatut wrote, “while some consider that tools like AI will only take us farther and farther away from ourselves, Miller’s work proves that this is not always the case. There are currents that flow back in time even as we race forward. There are unknown and unsuspected uses for even the most soulless devices and technologies.” This is very similar to VanderLans’s feeling about computers, that the technology brought about a “return to primeval ideas,” enabling designers to experience that childlike wonder once again.
Of course, there are a fair number of reasonable fears associated with AI entering the design industry, and every single one of them is a fear we have seen before. Similar to Filippo de Strata’s opinion of printing and Eric Gill’s reaction to automated typesetting, some have expressed concerns about a decline in the quality of design and the adverse effects that decline may have on the rising generation. Others fear the threat that AI poses to their jobs in the same way that laborers staring down the metaphorical barrels of the Linotype and phototype machines felt threatened. Still others are worried about how AI will fit within the world of copyright. The questions raised by the digital age remain unanswered; all of a sudden, Matthew Carter sounds regrettably prophetic from his 1989 vantage point, claiming that “unless the law provides some redress, the situation can only get worse.” Furthermore, there is an undeniable pressure to adopt AI technologies (as has been true of every technological advancement we have reviewed). Every CEO in Silicon Valley is scrambling, like a person desperate for air, in the rush to build an AI product. As a young inhabitant of the twenty-first century, I am undoubtedly biased, but I have never seen such frantic efforts made by so many in the corporate world to copy what everyone else is doing. This, in fact, became the primary focus of one of my summer internships on a design team in New York: “Everyone is using AI, and the stakeholders want to know, how are we going to use it? How is AI going to save us time and money?” Frankly, I was burnt out by the end of that summer, and I resumed my college classes the next fall vowing never to touch Midjourney again. I have since rebounded to what I think is a healthier perspective: a sense of obligation to stay up to date and experiment with new tools as they come out.
As Scheid said, I believe it is my responsibility and privilege as a designer “to explore, whenever possible, the graphic potential of new technologies.” Of course, it should come as no surprise that AI tech has also received some incredible sums of investment money in the hopes of making fabulous returns. Just two months ago (as of this writing), OpenAI closed one of the largest venture-backed funding rounds ever at $6.6 billion, bringing its valuation to $157 billion. I know next to nothing about money, but those sound like frighteningly large numbers. Either way, hopefully I have adequately illustrated the point here: the fears surrounding artificial intelligence in the creative industry are the same fears that creatives of every generation for the past six centuries have expressed in response to the disruptive innovations of their day.
As we learned from Emigre’s eleventh issue, I believe the correct way to frame our fears (and bridle our excitement) is to recognize that true creativity is the product of decision making. Several thinkers today have asserted as much. I found Ted Chiang’s reasoning in his article “Why A.I. Isn’t Going to Make Art” particularly insightful. Chiang writes the following:
Art is notoriously hard to define, and so are the differences between good art and bad art. But let me offer a generalization: art is something that results from making a lot of choices. … If an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made … which is why A.I.-generated text is often really bland. Another is to instruct the program to engage in style mimicry … which produces a highly derivative story. In neither case is it creating interesting art.
The same criticism was made of the Macintosh, as we have already examined. Designers worried that everything would begin to look the same, built from the same defaults, resulting in what Suokko called “a great deal of uninteresting and uninspired typography.” But as we discussed, the opposite proved more likely true. For those who treated digital type as a tool rather than a prescription, “the possibilities [were] multiplied.” The same can be said of AI today. The endless variations of “will AI replace our jobs?” that I used to hear regularly have now been replaced with a new mantra: “it is simply another tool.” At the end of the day, it is the decisions we make as designers that matter. I appreciate the way type designer James Edmondson put it during our informal interview not long ago: “I liken it to something like wine—the good stuff is always going to have effort and story and emotion.”
What Effect Might This Have?
Before talking about the potential effects of AI on typography and culture, we must acknowledge that this discussion essentially requires us to predict the future—a notoriously difficult thing to do. “This isn’t an uncommon observation,” according to a blog post written by tech analyst Benedict Evans back in 2017. “Plenty of people have pointed out that vintage scifi is full of rocketships but all the pilots are men. 1950s scifi shows 1950s society, but with robots. Meanwhile, the interstellar liners have paper tickets, that you queue up to buy. With fundamental technology change, we don't so much get our predictions wrong as make predictions about the wrong things.” Therein lies the risk of trying to predict our typographic future: we will tend to “ask the wrong questions.” Keedy illustrated this same principle back in 1989 with an example from the automobile industry. “The first cars looked like horseless carriages,” he explained, because that was the form people were familiar with. We run the same risk here. Who’s to say that we will even use writing in the future? What if all of our communication needs can be met exclusively by voice commands? I feel silly even posing that hypothetical, knowing that such theories will inevitably age poorly, especially in light of Jarvis’s opinion that there is no “more hubristic and fraudulent self-anointed job title than ‘futurist.’” Well, call me a fraudulent self-anointed futurist then, because that’s what I’m going to try to do.
As for how AI could change type design, or at least how I hope it might, we may actually be able to make some reasonable guesses. Firstly, I don’t anticipate the democratization of type design slowing down anytime soon. I recently heard a guest lecturer in one of my classes describe the democratization of technology as a function of expertise and cost over time: as time goes on, the expertise and cost required to use a given technology diminish. This tracks with what we have observed from our analysis so far, so it would not surprise me if AI tools remove even more barriers to the type design world. Exactly how they might do this seems more difficult to predict. I think we can safely say that AI will not eliminate the type designer’s responsibility to make decisions, but in my conversation with James Edmondson, he shared a few ideas that I found relatively reasonable. Edmondson said, “if [AI] can take our typefaces and spit out Japanese characters perfectly, that would be tremendous.” He personally doubts this level of automation could realistically happen, which makes sense given the number of decisions required to accomplish such a feat, but the future has consistently subverted the expectations of the past, so I’m not ruling it out quite yet. His second idea seems much more achievable, though: automated kerning. What a miracle that would be! In my mind, the job of kerning typefaces requires just enough pattern recognition and not too much critical thinking for an AI model to complete it successfully. Automated kerning would undoubtedly democratize type design even further.
Secondly, as with the advent of digital design, I think we can expect homogenization in AI design to fade within the next few years. AI-generated images from platforms like Midjourney are probably the greatest offenders when it comes to the plague of “sameness,” but of course the type design tools don’t perform much better. Hopefully, once people get tired of seeing the same soft, contrasty, square images of people with six fingers, we’ll begin to see some truly powerful and customizable tools enter the market.
Up to this point, I have barely touched on the topic of AI-generated typefaces, and that is not an oversight. I wouldn’t blame anyone for sincerely wondering, “when will we get on-demand, generated fonts that work?” Well, when I asked Edmondson this question, he responded with a reality check that quickly revealed the problem with that question. “Who is actually going to be able to produce good AI fonts?” he asked rhetorically. If you think about it, there are really only two serious candidates: Google and Adobe—the companies with large enough type libraries and deep enough pockets to make it happen. But they are quite unlikely ever to make those tools, because there is simply no demand for them. “If people want generative AI, they don’t want fonts—they want it to work like a Canva template. They just want the stuff to be there and be there quickly.” A case could be made for the opposing view, but when you look at the millions of typefaces available online today, it’s hard to imagine anyone needing a generative type tool. What would be much more likely, in my opinion, would be an AI-powered search tool that could reference every typeface available on every type foundry website in its response to a natural language query—that would be a million times easier to build than a tool that generates typefaces on the spot with consistent optical corrections, perfect counters and apertures, and flawless weight consistency. So, as boring as it sounds, I don’t anticipate spectacularly futuristic changes to our time-tested Latin letterforms anytime soon.
So What?
Finally, let’s discuss the question of how typographic AI will affect our culture and identity. This is where our discussion becomes most relevant to our daily lives and interactions with other people. In short, I believe that AI’s disruption of the typographic industry poses three major risks to us today: 1) a declining ability to communicate effectively, 2) a susceptibility to the turbulence of cultural waves, and 3) a weaker sense of self and sense of community. However, I believe that our in-depth analysis of our typographic heritage, both by virtue of that analysis and by its fruits, has granted us the tools required to deal with these risks.
Firstly, I believe that understanding our typographic heritage, as designers and as people, improves our ability to communicate. Jarvis identifies the diminishing importance of words as the primary threat to this ability. As evidence for this claim, he offers the decline in value placed on “authorship and ownership of content, … the legal and political battles over the enforcement of copyright,” and the migration of communication from text to “images, moving images, and modern ideograms: memes and emoji.” Buchert shared a similar example with me, describing how his young daughter simply uses the microphone button on his phone instead of typing or writing. He is already noticing a decline in her ability to communicate ideas effectively and intelligently. The simple solution to these risks is to take ownership of our education. Consider Valicenti’s experience as an example of how this initiative can flip the script:
Because I didn't get the design experience that most students got in school, I would invite myself to printers and learn to strip on the weekends. This experience changed my way of designing. You're a better specifier when you know what's going on beyond your drawing board. Most designers, who have no idea what happens after their mechanicals leave the office, make the worst specifications and unnecessarily expensive design decisions.
Valicenti, by taking charge of his own learning, was able to improve as a type designer; he understood the larger context in which his type would be required to hold up. The same can be true of type designers in the intelligence age today. Moreover, a more diverse and holistic education in context can enable us to be less prejudiced in our communications. With a more complete understanding of typographic associations, we will be able to avoid feeding harmful stereotypes like those that have been abetted by Chop Suey typefaces in the United States. Essentially, type knowledge will make us smarter and kinder.
Secondly, I believe that understanding our typographic heritage will make us less susceptible to cultural waves. We live in an age of moral relativism that has, sometimes innocently, blurred the boundaries that delineate truth. This blurring has only worsened because of photo-realistic image generation and digital deep fakes of people’s likenesses. Social media channels, which in many instances have proven to be valuable news sources, are now under greater suspicion than ever before as AI bots flood our feeds with shrimp Jesus and other nonsense. The parallels between our current situation and the early days of print are remarkable. “In its youth, print was seen as less reliable than what we would now call rumor. With word-of-mouth, one could judge the source ... Print, on the other hand, was new and suspicious because its provenance was opaque.” But today, as Jarvis points out, “the situation is reversed: print conveys authority while content and conversation on social media are regarded as unreliable rumor, and users there are viewed as naive and inexperienced speakers.” This puts truth-seekers in a difficult position. As Buchert said in our conversation, “we are essentially sheep in this digital world. We are so reliant on corporations. Brands run the world.” We have become objects to be acted upon instead of agents to act. Again, the cure for this ignorant condition in which we may discover ourselves is education. Those who have sat at the feet of history will be best qualified to safely navigate the future. Typographically, our knowledge of associations will help us decode the branded messages presented to us and will enable us to make our own informed decisions about what is true and what is marketing slop.
Finally, empowered with the ability to communicate effectively and withstand attacks on truth, our historical typographic education will foster within each of us a stronger sense of identity and community. AI technology, if not employed properly, might produce the opposite effect, stripping us of our individuality and exacerbating the already pandemic-like levels of isolation of the intelligence age. As I shared previously, Buchert forewarned about the risk of isolation: “We don’t have a group experience when we create new meaning, that is the theme of the digital world. … We can't develop these lines of cultural unity.” Like the biblical confounding of languages at the tower of Babel, a lack of regard for typographic associations makes us incapable of communicating and forming connections with one another. We risk contradicting ourselves here, however; did we not establish earlier that diversity makes us better and more informed designers? The truth is we need both. As I explained before, typographic associations run deep, but they are polyvalent. Hence the need for a rich typographic education. If we embrace the diversity of relative interpretation but remain ignorant of cultural history, we risk appropriation and the perpetuation of stereotypes. If we reject relative interpretation in favor of recycling the same dogmatic associations, we lose out on the cultural health and beauty that come out of differences. Only by educating ourselves typographically can we create a visual world that respects both collective culture and individual identity.
I will end with this thought: there is danger in living at the extremes, and there is beauty found in the balance. Personally, I have found incredible healing as I have learned more about and engaged in the activities of my typographic heritage. There is something meditative about stepping away from the computer screen to set metal type by hand and to roll a sheet of paper across its inked face. The image never comes out perfectly; it tends to print with a salty texture or even a little bit of ghosting—but that is an essential part of its character and identity. Allowing and celebrating such imperfections has granted me greater patience, enabled me to avoid burnout, and ultimately helped me to create a better visual world. May the same be true for each of us as we carry our typographic heritage through the doors of possibility to our typographic future.