Understanding the CBGM: The Method (Part 1)

As we continue our efforts to understand the CBGM we now turn to chapter 2 of Wasserman and Gurry’s A New Approach to Textual Criticism: An Introduction to the Coherence-Based Genealogical Method. The beginning of this chapter sets the historical stage for the rise of the CBGM, which is the focus of today’s blog. Lord willing, tomorrow we will explore a 30,000-foot view of the method itself and look at a couple of examples.

Gerd Mink, to whom Wasserman and Gurry dedicate their book, is the pioneer and inventor of the CBGM. Our authors observe of Mink,

“What Mink wanted was to find a way to relate Greek New Testament manuscripts despite their wildly intermixed nature, a feature that makes it difficult to sort out their true relations. This problem, known as contamination, has plagued textual scholars for centuries. Without a solution, New Testament scholars have resorted to relating groups of manuscripts instead of the manuscripts themselves.”

Wasserman and Gurry, A New Approach, 18-19.

In other words, because the text of a manuscript is intermixed with other text-forms, it is exceedingly difficult to determine the immediate ancestor of that individual manuscript, and even more difficult to determine the immediate ancestor of a given portion of its text. Thus, it was the practice pre-CBGM to group contaminated manuscripts into families, but Mink took it a step further and sought to relate the various texts within a manuscript to the various texts of other manuscripts via coherence. This comparison and subsequent relation of texts from contaminated manuscripts gave rise to the concept of coherence, as in the Coherence-Based Genealogical Method. In fact, Wasserman and Gurry observe,

“Across the span of hundreds of years, such contaminations clearly happened again and again with the result that the New Testament ‘affords, beside Homer, the paramount example of a ‘contaminated tradition.'”

Wasserman and Gurry, A New Approach, 22.

By Wasserman and Gurry’s lights, while we have an embarrassment of riches in the NT manuscript tradition when compared to other works of antiquity, we also have, apart from Homer, the most contaminated tradition. Our authors go on to speak of the “traditional method,” that is, the method used before the CBGM,

“Such contamination presents a major problem for traditional methods of relating manuscripts. Known by the name of their most prominent proponent, Karl Lachmann (1793-1851), such methods depend on the principle that shared, distinctive textual errors imply shared origin.”

Wasserman and Gurry, A New Approach, 22.

But Wasserman and Gurry retort,

“Where the tradition is contaminated, one cannot assume that a shared error implies shared ancestry; instead, it may only imply multiple ancestry. At its worst, contamination can flip the historical relationship of manuscripts.”

Wasserman and Gurry, A New Approach, 23.

Again, like yesterday, I was surprised, this time at the content of the last quote. Lachmann has been dead for 170 years, and we are just now realizing that shared errors might not mean shared ancestry? I thought textual criticism was a robust science where doubt is assumed de facto. Where is the doubt concerning their methodology? It is not as though the data has substantially changed over the last 100 years. Whether this is a pristine example of groupthink or not, I leave to the reader, but the considerable lack of self-criticism on the part of text-critics about the text-critical method should not go unobserved.

At this point Wasserman and Gurry offer a series of examples via simple graphs in an attempt to help the reader wrap their head around the idea that a text is often made up of several textual streams. In other words, a manuscript can often represent its several ancestors via portions of the ancestral text appearing in descendant texts, even though the descendant text may not be a direct descendant. Our authors put it this way,

“For instance, a young manuscript may have a very old text that may then be ancestral to the text of an older manuscript.”

Wasserman and Gurry, A New Approach, 24.

By focusing on the text of the manuscript we are able to see that each manuscript is a mutt of sorts. It has many breeds of text (i.e., readings from a couple of uncials and also readings from several minuscules) united in one manuscript. In this sense, the texts of manuscript X potentially have many ancestral texts recorded in yet other manuscripts – some written before manuscript X and some written after it. The way the CBGM determines which ancestor texts gave rise to which descendant texts, and which did not, is through coherence.

“Rather than relating witnesses deductively based on shared errors, it relates them inductively using the relationship of their variants as determined by the editor.”

Wasserman and Gurry, A New Approach, 24.

In other words, the traditional method began with the assumption that shared errors meant shared ancestry. Thus, when such an error was encountered, it was thought that the already known error type meant that the newly found manuscript had the same ancestor as all the other manuscripts that possessed that error. Now, with the CBGM, we are able to look at the texts inductively, as aggregates, and begin to determine ancestry at a more granular level, perhaps even word by word. Still, the problem of contamination is not the only hurdle to be cleared by the CBGM. There is also the problem of “coincidental agreement”: two texts agree in a textual error, but the error in text A did not give rise to the same error in text B, nor did B’s give rise to A’s; or they share the same error, but it came from two different ancestors.

“The CBGM takes a novel approach to such coincidental agreement. Rather than excluding any and all agreements that we think could be coincidental, the CBGM uses the overall agreement between witnesses to decide whether agreements are coincidental.”

Wasserman and Gurry, A New Approach, 25.

In sum, the CBGM includes more data – and therefore more ancestors and more descendants – so what may seem coincidental may in fact not be.
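The basic idea behind using “overall agreement” can be sketched in a few lines of code. The following is a toy illustration only, not the actual INTF software or data: the witness names and readings are hypothetical, and the real method involves far more variant units and further genealogical steps. But it shows the core calculation – the percentage of variant units at which two witnesses share a reading – which is what makes an isolated shared error between otherwise very different witnesses look coincidental.

```python
# Toy sketch of overall agreement between witnesses (hypothetical data).
# Each witness is a list of its readings at the same sequence of variant units.
witnesses = {
    "A": ["a", "b", "a", "c", "a"],
    "B": ["a", "b", "b", "c", "a"],
    "C": ["b", "a", "b", "a", "c"],
}

def agreement(w1, w2):
    """Fraction of variant units where two witnesses share the same reading."""
    r1, r2 = witnesses[w1], witnesses[w2]
    shared = sum(1 for x, y in zip(r1, r2) if x == y)
    return shared / len(r1)

# High overall agreement (A and B) suggests a shared reading is genealogically
# significant; low overall agreement (A and C) suggests coincidence.
print(agreement("A", "B"))  # 0.8
print(agreement("A", "C"))  # 0.0
```

On this sketch, if A and C happened to share one error despite agreeing nowhere else, that single agreement would be the sort the CBGM could discount as coincidental.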

On the problems of contamination and coincidental agreement it is the opinion of Wasserman and Gurry that the CBGM has done and will do a better job at identifying the coherence and relation of texts across a broader range of textual data than had heretofore been the case. Before concluding this post, I want to offer the following quotes to illustrate one point.

It is important to Wasserman and Gurry that you do not think of the CBGM as a computer doing the “thinking” of a textual critic. The human subject is in control of the data input and, in large part, the data output. In other words, the computer tools are not replacing the textual critic’s subjective interpretation of the data, but the CBGM does increase the speed at which he can make those subjective interpretations. See the following:

“It is important to emphasize that this connectivity [of texts] is always determined in concert with our text-critical knowledge of the variant in question.”

Wasserman and Gurry, A New Approach, 25.

“…if the computer keeps track as it does in the CBGM, then it can begin to feed this accumulated data back to the editors so they can start to see trends in their own decisions.”

Wasserman and Gurry, A New Approach, 26.

“The direction of the relationship [i.e., whether A gave rise to B or B to A] is taken directly from the editors’ own decisions made at each place of variation.”

Wasserman and Gurry, A New Approach, 28.

“It is important to stress that the construction of these local stemmata involves the traditional tools of textual criticism.”

Wasserman and Gurry, A New Approach, 31.

“The computer never makes the decision for the user, not even when it offers additional data in the form of coherence.”

Wasserman and Gurry, A New Approach, 31.
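To make concrete how the editors’ decisions, rather than the computer, supply the direction of the relationships, here is a toy sketch. Everything in it is hypothetical and greatly simplified: a “local stemma” is recorded as the editor’s judgment about which reading gave rise to which at each variant unit, and the computer merely tallies those judgments across units.

```python
# Toy sketch (hypothetical data, not the INTF software): the editor's
# local-stemma decisions are recorded per variant unit as a mapping from a
# reading to the reading judged to be its source.
local_stemmata = [
    {"b": "a"},            # unit 1: reading a gave rise to reading b
    {"b": "a", "c": "a"},  # unit 2: a gave rise to both b and c
]

# Readings of two hypothetical witnesses at those same units.
readings = {
    "X": ["a", "a"],
    "Y": ["b", "c"],
}

def prior_count(w1, w2):
    """Count units where w1's reading is judged the source of w2's reading."""
    count = 0
    for unit, stemma in enumerate(local_stemmata):
        r1, r2 = readings[w1][unit], readings[w2][unit]
        if stemma.get(r2) == r1:
            count += 1
    return count

# X's readings are prior to Y's at both units, so the text of X emerges as a
# potential ancestor of the text of Y - but only because the editor judged
# each variant that way in the first place.
print(prior_count("X", "Y"))  # 2
print(prior_count("Y", "X"))  # 0
```

The point of the sketch is simply that the direction of every arrow originates in a human decision; the computer aggregates those decisions and feeds the resulting trends back to the editors.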

See you all tomorrow, Lord willing.
