What is the Most Important Skill Mathematicians Bring to Collaborative Work on a Wicked Problem?

During the 1990s, my main research project was an applied one on workplace communication, where I collaborated with a social scientist (wearing her ethnographer hat) who, in addition to her doctorate in sociolinguistics, had obtained a second PhD in computer science. (Much of our joint work was collected in a 1996 research monograph, Language at Work.)

When we presented our work at research conferences, we would typically each handle about half the presentation. After the first few such events, I asked her why, when I was presenting and responding to audience questions (while she sat in the front row), she always took detailed notes of things I said. After all, this was joint work, and we were intimately aware of each other’s contribution. Why the notes?

“What I am capturing,” she explained, “are the things you say as side-remarks in passing, and your off-the-cuff responses to audience questions.”

“What I am looking for are the things you know that are so obvious to you, you would never think of putting them down onto the page or in a slide deck.”

She was, in other words, going after my implicit knowledge, the expertise I had acquired over many years as a mathematician. The stuff that, if asked about it, my instinct would be to pass it off as “obvious” or “trivial”. And for a mathematician, that’s exactly what it invariably was.

As each of us advances in mathematics, our focus is on what we don’t know, the things that are just out of reach. Turning the unknown into the known is what gives us the adrenalin rush of research discovery. Yesterday’s discovery quickly becomes today’s “obvious”. We take inner pride in being able to describe a result we struggled hard to obtain as “trivial.” (That’s a bit of an exaggeration, and it applies more to our lemmas than to our theorems, but there’s a big grain of truth there.)

This is, as we all know, particularly acute in mathematics. Mathematical truth and knowledge are binary; we go from ignorance to knowing in a heartbeat, albeit usually only after a long, seemingly fruitless struggle.

In all those research collaborations from around 1990 onwards, I had the same experience. Mostly, my attempts to apply my mathematical knowledge seemed (to me!!!) to run up against an impenetrable, insurmountable wall of complexity. As the project neared the end of its funding, I would work with my collaborators on a final report. What we wrote would make sense, but it always seemed to me that my contribution had been at best minimal, and in my perception, irrelevant. I was never able to apply the sometimes awesome power of any of the mathematical tools at my disposal.

Yet, time after time, my collaborators would say my participation had been essential and that I had contributed throughout. As a result, I gradually came to accept that in a collaborative project, there is great power in having at least one team member who approaches everything as a mathematician. The way we frame—or try to frame—each stage, the kinds of questions we ask, and the way we try to answer them, is in itself highly valuable.

As mathematicians, we crave solutions, by which we mean correct, complete answers. Ours is a black-and-white world. But most problems in the world that people want our help with are on a continuous grey scale. The main value we bring to those problems is the years of experience we accumulate doing mathematics. The mathematician on the team most likely does nothing heroic. (Insights like the PageRank algorithm that launched Google are extremely rare.) There is, however, a heroic mathematical character: mathematics itself. Its power can make itself felt simply through the inclusion of at least one team member who is an accomplished mathematical thinker.

This, incidentally, explains why many of the most successful researchers in the social sciences were undergraduate mathematics majors. (Just check out their C.V.s.) It’s not that they apply the mathematics they learned; rather, their undergraduate experience studying mathematics shapes everything they do, to good effect in their later, “non-mathematical” work.

For those of us in mathematics who find satisfaction in that kind of interdisciplinary collaborative research, there’s no great “Aha!” moment to celebrate. You can’t say with any certainty what “bit” you did; there was probably no moment when you found yourself able to apply a specific mathematical technique and have the team applaud you. Wicked problems are not solved; they are explored in search of insights. In that world, you take your satisfaction from being the one who simply drives the mathematics bus that, in ways you likely cannot identify (though others on the team sometimes can—see the vignette below), helps the team arrive at those insights.

That, basically, is the point I wanted to make with this month’s post.

 

FOOTNOTE: Case Study

The above post arose from a project-planning session I took part in during a month’s stay (just ended) as a Visiting Professor in Denmark. Putting those thoughts into the above 700 words (the length of a classic newspaper column back in the print days—another implicit skill I acquired over many years that is now obsolete) led me to reflect on the somewhat tortuous path that led me to work on the kinds of multi-disciplinary project I referred to.

So, since Web pages have no bottom and can be abandoned by the reader at any point, here is my story. This was all new (and surprising) to me at the time, so I assume some readers may find it helpful. My guess is that those whose careers have followed a similar path to mine will see parallels. If you have had a similar experience, please email me and tell me your story, and I’ll see if I can put enough together to write a follow-up piece. Individual stories are just that, but I think there is an important “there” there to be fleshed out regarding mathematics-in-the-wild, with implications for mathematics education. (Implications that both sides in the ongoing US “math wars” can likely cite as support.)

When I retired from my university position at the end of 2018, it was exactly fifty years since my career as a mathematician had begun (1968 being the year I completed my bachelor’s degree in mathematics in the Summer and embarked on a PhD program in the Fall). My doctorate was in axiomatic set theory, at the purest end of what we refer to as Pure Mathematics. For me, that meant mathematics pursued for its own sake, with no concerns about how it arose or how (or even whether) it might apply to the real world we live in—or indeed be used in other disciplines such as physics, biology, chemistry, engineering, economics, finance, and so on. Moreover, the mathematical objects I focused on in my PhD, “large cardinals”, had no counterpart in the real world, and only very rarely did the work of the large cardinals community interact with any other parts of mathematics. (When it did, the specialists in those areas usually showed little interest in our results.)

The “large” referred to infinities, infinities so large that they dwarfed the infinite sets of everyday mathematics, such as the set of all counting numbers (1, 2, 3, etc.) or the (literally) incomparably bigger set of all real numbers. The focus of the sub-discipline was the notion of infinity itself, a significant concept in set theory from the very inception of the field with the work of the German mathematician Georg Cantor (1845-1918). Cantor’s surprising proof that the collection of all real numbers was bigger than the set of counting numbers showed that our naïve conception that infinity is, well, “just infinity”, was wrong. If you want to measure the sizes of infinite sets, you have to accept that different sets can have different infinite sizes. Just as there is no largest natural number (you can always add 1 to get an even bigger one), so too there is no largest infinity.
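For readers who have not seen it, here is a minimal sketch of Cantor’s diagonal argument (glossing over housekeeping details, such as real numbers that have two decimal expansions). Suppose the real numbers strictly between 0 and 1 could be listed as $r_1, r_2, r_3, \ldots$, and write each one out in decimal form:

$$r_n = 0.d_{n,1}\,d_{n,2}\,d_{n,3}\ldots$$

Now define a new number $x = 0.e_1 e_2 e_3 \ldots$ by the rule: $e_n = 5$ if $d_{n,n} \neq 5$, and $e_n = 6$ if $d_{n,n} = 5$. Then $x$ differs from each $r_n$ in the $n$th decimal place, so $x$ is not on the list, and hence no such list can exist: the real numbers cannot be paired off one-to-one with the counting numbers. The same diagonal idea, applied to subsets rather than decimal digits, shows that the collection of all subsets of any set is strictly bigger than the set itself, which is why there can be no largest infinity.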

Throughout the 20th Century, mathematicians developed a rich theory of those “higher” infinities. For the most part, the results we large cardinals folks obtained lived in a walled garden that did not impact the rest of mathematics. But one part of mathematics, calculus, itself involves the notion of infinity, and occasionally a result in large cardinals theory would impact the more advanced parts of calculus (for example, measure theory).

Large cardinals theory did not, however, impact the real world—unlike calculus, which impacts it big-time, and has done so since its invention in the 17th Century. Indeed, (differential) calculus was developed with real-world applications in mind (initially to give precise descriptions of the motion of the planets). Integral calculus was invented far earlier, by Archimedes, who lived some two hundred years before the Current Era, to compute the area and volume of various figures.

Noam Chomsky (b. 1928). American linguist who was the first to approach language from a mathematical perspective. He first presented his ideas in a monograph written in 1955, titled The Logical Structure of Linguistic Theory, part of which he submitted as his University of Pennsylvania PhD thesis, Transformational Analysis. The University of Pennsylvania awarded him a PhD for that work, and his monograph was circulated among linguists on microfilm, but MIT Press declined to publish it. (Plenum Press finally did so in 1975.) Instead, Chomsky presented his work in book form in 1957 with the title Syntactic Structures, published by Mouton & Co. (Image shows the cover of the first edition.)

Towards the end of the 1980s, I started to look into the possibility of taking some (not all) of the tools we used in set theory and applying them to understand the way people communicate with one another. (The initial focus was communication by language, which was a big topic when the Internet was starting up.) In that, I was stimulated by attempts others had made to apply parts of (pure) mathematics to linguistic communication. (Noam Chomsky was the first.)

That work resulted in my receiving an invitation to spend the 1987-8 academic year at Stanford, where a number of researchers were doing similar work in a new, interdisciplinary research center (the Center for the Study of Language and Information – CSLI). The one-year leave I took from my position in the UK was extended to a second year at CSLI, after which I never returned to the UK, and I spent the entire second half of my career applying ideas from set theory to study information and communication, working on projects for, in order, a large UK computer manufacturer, a multinational European construction project, the NSA, the US Navy, and the US Army. (The three DoD projects were all in the aftermath of the September 11, 2001 terrorist attacks on the World Trade Center and the Pentagon. At that time, the US was willing to clutch at any straw to find better ways to do intelligence analysis, and I was one of those straws.)

Those projects taught me a lot about how mathematics fits in with other disciplines when the goal is to solve large “wicked problems” (to use the technical term), about which I’ve written a number of essays in this blog. In the UK computer-company project, the wicked question was “Why did productivity go down when we introduced an automated information system designed to increase efficiency?” With the NSA project, it was, “What changes can we make to intelligence analysis to improve our ability to detect terrorist attacks before they occur?”

Almost none of the work I did on those projects resulted in published papers in mathematics journals. (The few exceptions were when I worked out a highly simplified mathematical example as an exercise, to see what the mathematics would look like and what would be lost in the process.) Most of what I did was written up in project reports, academic papers published by CSLI, and some research monographs and books, all highly interdisciplinary.

It was my work on those projects that led me to realize how—and in what way—mathematicians (sic) can be valuable to society, particularly modern society. Notice, I did not say mathematics is valuable to society. We all know it is, and so does society, which is why it is an obligatory school subject the world over. Rather, I was referring to something else: a role we mathematicians can (and often do) play, often without being aware of it. (I certainly was not aware of this particular dynamic until I was well into my “wicked problems” work, after I’d decamped from the UK to California in 1987.)

In many cases, the real value of being an experienced mathematician, valuable both to the individual and to society, lies in the things the mathematician does automatically, without conscious thought or effort. The things they take for granted—because they have become part of who they are. This was driven home to me most dramatically in the years immediately following 9/11, when I was one of many mathematicians, scientists, and engineers working on national security issues, looking for ways to improve defense intelligence analysis.

My 1991 book, describing several years’ research by scholars at CSLI on the role played by context in information flow, resulted in my being asked to join a post-9/11 research team tasked with looking for ways to improve intelligence analysis.

My brief was to look at ways that reasoning and decision making are influenced by the context in which the data arises. Which information do you regard as more significant? How do you weight, and then combine, information coming from different sources? I’d looked at questions like this in pre-9/11 work—indeed, that was the research on linguistic communication that brought me from the UK to Stanford in 1987, and by the time the Twin Towers came down, I had written two research books and a number of papers on the topic. But that earlier research focused on highly constrained domains, where the complexity was somewhat limited. The challenge faced in defense intelligence work is far greater—the complexity is huge.

I did not have any great expectations of success, but I started anyway, proceeding in the way any professional mathematician would. I could give you a list of some of the things I did, but that would be misleading, since I did not follow a checklist; I just started to think about the problem in a manner that has long since become natural to me. I thought about it for many hours each day, often while superficially occupied with other life activities. I was not aware of making any progress.

Six months into the project, I flew to D.C. to give a progress report to the program directors. As I fired up my PowerPoint projection and copies of my printed interim report were passed around the crowded meeting room, I was sure the group would stop me halfway through and ask me (hopefully politely) to get on the next plane back to San Francisco and not waste any more of their time (or taxpayers’ dollars).

In the event, I never got beyond the first content slide. But not because I was thrown out. Rather, the rest of the session was spent discussing what appeared on that one slide. I never got close to what I thought was my “best” (i.e., least embarrassing) work. As my team manager told me afterwards, beaming, “That one slide justified having you on the project.”

So what had I done? Nothing really—from my perspective. My task was to find a way of analyzing how context influences data analysis and reasoning in highly complex domains involving military, political, and social contexts. The task seemed impossibly daunting (and still does). Nevertheless, I took the oh-so-obvious (to me) first step. “I need to write down as precise a mathematical definition as possible of what a context is,” I said to myself.

It took me a couple of days mulling it over in the back of my mind while doing other things, then maybe an hour or so of drafting a preliminary definition on paper. The result was a simple statement that easily fitted onto a single PowerPoint slide in a 28pt font. I can’t say I was totally satisfied with it, and would have been unable to defend it as “the right definition.” But it was the best I could do, and it did at least give me a firm base on which to start to develop some rudimentary mathematical ideas. (Think Euclid writing down definitions and axioms for what had hitherto been intuition-based geometry.)
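I won’t reproduce that slide here, but purely to give a flavor of the kind of thing I mean (this is an illustrative sketch, not the definition I actually presented), one might model a context as a triple $c = (s, K, \Vdash)$, where $s$ is a situation (a structured part of the world, with its objects, agents, relations, and locations), $K$ is a set of constraints the agents in $s$ take for granted, and $\Vdash$ is a relation specifying which items of information the context supports under those constraints. Whether or not something along those lines is the “right” definition is beside the point; having a precise candidate on the table is what does the work.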

The fairly large group of really smart academics, defense contractors, and senior DoD personnel in that meeting room spent my entire allotted time discussing that one definition. Not because they were trying to decide if it was the “right” definition, or the best one to work with. In fact, what the discussion brought out was that the different experts all had different conceptions of what a context is and how it can best be taken account of—a recipe for disaster in collaborative research if ever there was one.

What I had given them was, first, the question “What is a context?” Since each person in the room besides me had a good working concept of context—different ones, as I just noted—they had never thought to write down a formal, mathematical definition. It was not part of what they did. And second, by presenting them with a mathematical (set-theoretical) definition, I gave them a common reference point from which they could compare and contrast their own notions. There we had the beginnings of disaster avoidance, and hence a step towards possible progress in the collaboration.

As a mathematician, I had done nothing special, nothing unusual. It was an obvious first step when someone versed in mathematical thinking approaches a new problem. Identify the key parameters and formulate formal definitions of them. But it was not at all an obvious thing for anyone else on the project. They each had their own “obvious things.” Some of them seemed really clever to me. Others seemed superficially very similar to mine, but on closer inspection they set about things in importantly different ways.

“Your work is not classified, so you are free to publish your results, if you wish,” the program director told me later, “but we’d prefer it if you did not make specific reference to this particular project.” “Don’t worry,” I replied, “I have not done anything that would be accepted for publication in a mathematics journal.” Which is absolutely the case. I had not done any mathematics in the familiar sense. I had not even taken some proven mathematical procedure and applied it. Rather, all I had done was think about a complex (and hugely important) problem in the way any experienced mathematician would. And that turned out to be enough.

Note: Some of the above is adapted from my September 1, 2012 Devlin’s Angle post.