Creative Commons Attribution-Share Alike 2.0 Generic license, photo by Patrick Mackie
O ye’ll tak’ the high road, and Ah’ll tak’ the low (road)
And Ah’ll be in Scotlan’ afore ye
Fir me an’ my true love will ne-er meet again
On the bonnie, bonnie banks o’ Loch Lomon’.
In the course of any particular day, I am often approached with very general questions on topics related to technology transfer. I am expected to serve faculty, staff, and students as a resource for information. In particular, this is meant to ensure that they fully understand the issues and processes, and can more effectively engage in commercialization activities. Unfortunately, this means that the questions posed may range from the legal/technical “what is meant by ‘prior art’?” to the more esoteric issues associated with policy and decision-making, such as “how can we become more successful in spinning out new ventures?”
Of course, the latter sorts are the ones that keep spinning around in my head, since in many cases even I have difficulty with a clear, concise answer. In large part, this is because I recognize the inherent problems with decision making in these situations. The real question being asked in some of these situations is "what is the RIGHT thing to do?"
At the heart of any decision is an effort being made to choose an action in pursuit of some goal. In order to do so successfully, the decision maker must:
- Clearly articulate the goal
- Identify options available for some active choice
- Understand the consequences of those alternative choices
- Evaluate other factors which might impact the final decision process
In this general sense, many decisions are conceptualized as a “cost-benefit” analysis, with the decision-making process focusing on the relative gain attendant upon a choice relative to the loss or “expense” involved.
Obviously, some decisions can be made using much simpler processes, such as a coin toss. Such decisions are often made by a single person, on matters with little impact on either that individual or anyone else (yes, I would like iced tea to drink), and with no complications.
Australian Rules Football match at Hyde Park, London, on 8 January 1944. Source: Australian War Memorial
This image is of Australian origin and is now in the public domain because its term of copyright has expired.
Other decisions, however, require extraordinary efforts directed toward fact-finding and analysis, along with multiple meetings of large groups of people who must conform to a formal structure for coming to a decision. This sort of decision is likely to impact larger groups, or have potential consequences that justify investment of time and energy into making the best choice possible. Furthermore, it may involve individuals or groups with widely divergent opinions on what constitutes the "best" choice. Frequently, the decision is couched in terms of "right" and "wrong" such that there is an unfortunate attribution of possible fault and blame attendant upon the choice.
For decisions related to technology transfer, the process can be extremely complex. Choices may be constrained in various ways that are uncomfortable for those involved. The culture of academia can also play a larger role than is typically appreciated—this may amount to a set of “almost sacred” values held by some of the people involved. Frequently cited values in academia include “academic freedom” and “dissemination of knowledge,” but there are many variations along these lines, attributing a sort of “purity” of thought and intentions to the academic world. If technology transfer decisions are subjected to this sort of “good vs. evil” analysis, there will be individuals on both sides of the question claiming the moral “high ground” as it were. Suddenly, it becomes difficult to decide which position constitutes the “high road” and which the “low road.”
This is further complicated when you realize that perhaps no one knows which road—the “high” or the “low” one—is “better” in some absolute sense. As the Wikipedia article points out, there may be different interpretations brought to the imagery. The low road is sometimes equated with death, the soul of the departed Scotsman returning home, so the traveler on the “high road” may be making a more expeditious choice, but not necessarily a “better” one. There is a sort of moral judgment implied of course, but there remains room for speculation on the relative values demonstrated.
A recently published study [Philosophical Transactions of the Royal Society B: Biological Sciences, Vol. 367, No. 1589. (5 March 2012), pp. 754-762] confirms that the brain actually processes decisions differently if the choices involve “sacred” values held by a particular person.
Economic, foreign and military policies are typically based on utilitarian considerations. More specifically, it is believed that those who challenge a functioning social contract should concede if an adequate trade-off is provided (e.g. sanctions or other incentives). However, when individuals hold some values to be sacred, they fail to make trade-offs, rendering positive or negative incentives ineffective at best.
Obviously, this is true for university policy as well. As the authors of the study point out, policy decisions are seldom made with any degree of introspection on the possible differences in value judgments on the subject in question. The study concludes that a problem arises when certain choices must be evaluated against deeply held convictions—the brain simply doesn't process this sort of decision well. The entire process takes a different track, so to speak.
Given this understanding, it's easy to see how difficult the choices might be for university technology transfer. In fact, there may be surprisingly little effort made to analyze some of the "de facto" decisions made under guidance of existing policies. In worst-case scenarios, policy is wielded as a weapon (either against a faculty member or a potential licensee!) and the university (technology transfer) agents are derided for being "inflexible." The university representatives feel it is a matter of taking a stand, holding positions consistent with their understanding of their institutional policy. They are held to this standard for making the decision. No matter how "reasonable" the tech transfer office might wish to be, university administrators can be hostile to suggestions that policies provide for flexibility—after all, what would be the point in having a policy then?
Thus, when I am posed a question that has, at its heart, a possible conflict with the deeply held values inherent to the academic world, I tend to hesitate in providing answers. While not "sacred" on a par with belief in a deity, there is sometimes an undercurrent of feeling that technology transfer involves something "not right" in the context of the presumed mission of the university. For some researchers, a decision has already been made that commercialization is "good" for the university, and much of the decision-making process defaults to typical "cost-benefit" categories. But there will be faculty for whom this kind of answer is insufficient. In order to work with them, to give real answers to their questions, it is important to realize that technology transfer is perceived as putting a price tag on something that isn't even on the market. I do try to respect this position and, in the course of doing my job, attempt to bring clarity and consensus to the decision-making process in technology transfer. Even when I'm tempted to just toss a coin!
In the social sciences, unintended consequences (sometimes unanticipated consequences or unforeseen consequences) are outcomes that are not the outcomes intended by a purposeful action. The concept has long existed but was named and popularised in the 20th century by American sociologist Robert K. Merton.
Unintended consequences, From Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/Unintended_consequences
In popular discourse, people often refer to "the law of unintended consequences" when debating the merits or shortcomings of a particular decision or course of action. Like any sufficiently interesting and yet complicated subject, it can be difficult to fully grasp what is really at the heart of such references. It recently struck me that this is, in part, at the center of the many debates on the proper role of the university in commercialization of scientific research. The initial inspiration for this post comes from a blog posting by Gerald Barnett (Research Enterprise, "Oh, to be the happy dog again"; side note: I try to read Gerry's blog as often as possible and recommend it highly). In my experience, the technology transfer office may be trying to accomplish goals that are not clearly defined or, as highlighted by that posting, are actually in conflict with some of the other goals of both the university administration and the faculty researchers.
It is all too easy to get swept up into the rhetoric on how the Bayh-Dole Act allows universities to “benefit” financially by licensing patents arising from federally sponsored research. From that basic premise arises a series of decisions and actions with consequences, both intentional and unintentional. As the Wikipedia article summarizes the concept, unintended consequences can be roughly grouped into three types:
- A positive, unexpected benefit (usually referred to as luck, serendipity or a windfall).
- A negative, unexpected detriment occurring in addition to the desired effect of the policy (e.g., while irrigation schemes provide people with water for agriculture, they can increase waterborne diseases that have devastating health effects, such as schistosomiasis).
- A perverse effect contrary to what was originally intended (when an intended solution makes a problem worse), such as when a policy has a perverse incentive that causes actions opposite to what was intended.
Note that this summary presupposes that not all "unintended consequences" are negative. However, the negative ones tend to be the consequences that are eventually cited as unintended—nearly every positive outcome of a particular decision or action has someone claiming it as his or her own particular intention. Unfortunately, many perceive this as a challenge to make "better" choices, and so to avoid the negative consequences.
Thus, the technology transfer offices confidently point to "success stories" from the canon of technology transfer gospel as a model for their particular university to embrace—whether that is actually a viable alternative or not. Various university officials or administrators then look to the tech transfer "operation" as a source of alternative income, one that is desperately needed, and begin to expect ever-improving "metrics" in terms of licensing performance. If your office realized licensing income of $10M last year, what are the projections for next year, and the years after that? Why was there a drop of $2M this year versus the prior year?
If the technology transfer office produces alternative metrics—numbers of licenses, startups founded, patent applications filed, or issued patents—it is likewise put on a track to reproduce or improve those metrics year after year. Often, these become a baseline level of performance for a university versus the performance of "peer institutions" or "aspirational" peers. If your office can't easily produce the metrics (and some of these are "easy" to produce, such as the number of invention disclosures), what then? This can lead to an implied commitment to invest in the metrics—it's important to remember that these decisions and actions come at some cost. This might include an annual patent budget, aimed at filing a respectable number of patent applications each year. After all, so the argument goes, you can't expect home runs if you don't get a nice number of "at bats" or base hits. If you can produce enough cash, then you can produce patent applications, even issued patents. Funding a technology transfer office with a director, along with some support staff and maybe a couple of technology licensing managers, can represent a sizeable commitment of "overhead" funding.
This is when the tail might begin to wag the dog, and you learn, as Gerry points out, this doesn't mean a happy dog. A lot of investment and effort is being put into producing a pot of gold at the end of the research rainbow, which means dealing with troublesome leprechauns and associated tricky business. Meanwhile, everyone is still expecting those "smiles and fluffiness and public purpose, stardust and unicorns and glitter," as it is nicely summed up in the original blog posting. While I've pointed out the limitations of analogies in an earlier posting (here), this one does get a couple of points across! You've got to remember, not every fairy tale has a happy ending, and there is always at least one character on the losing side. This means someone gets stuck with the role of evil stepmother or nasty fire-breathing dragon.
It's easy to keep with the script, sticking with the stock characters and plots rather than trying to put together a unique story. But this gets us back to that "law" of unintended consequences. For all practical purposes, positive consequences will never be presented as "unintentional," and so there are no orphans in that part of the fairy tale. As for the rest, you might get some grudging acknowledgement of partial responsibility for negative twists in the story, but mostly you get rationalizations from the parties involved. I like to think that we can work out some new plots for technology transfer tales, and maybe even endings with a few happy dogs. You may still have a lot of those unintended consequences, of course, but hopefully the "intended" consequences will make those worthwhile.