I often begin a blog post with a “true confession” type of statement. Usually, I’ve read an article that made me think; sometimes the article is another blog (I only read a couple of blogs regularly, and I do so because they are the ones that make me think most!). I’m normally content to allow others to express insights and thoughtful commentary on a particular topic, and save myself the time and trouble of posting anything at all. However, I’ve found that reading another person’s blog is more often the “inspiration” for a posting of my own, and then I hesitate to post. I don’t really want to be derivative; thus, I post less often. So, I’ve managed to begin a blog, and then pretty much drop it for months.
Yet I find myself coming back, mostly because I feel there are still a lot of topics where there is an ongoing struggle to come to terms with some decision-making, especially related to academic research, intellectual property protection, etc. In the past weeks, I’ve done a bit of cursory reading on several topics that are part of some student team projects I help manage, under the umbrella programmatic theme of “Innovation to Enterprise.” The teams are each assigned a project from a faculty research group, and they are asked to explore the commercial potential of these projects. Most of the students are entirely new to the concept of “technology transfer” and have no clear understanding of new ventures or entrepreneurial activity. The students find it challenging to begin their projects, and they are frustrated when they realize that, more often than not, the results of the research are so tentative or nebulous that they can’t draw proper conclusions.
I once dealt with a company that expressed a similar frustration. They were working with researchers under an agreement that provided them with rights to review certain “Invention Disclosures” and gave them a set time for making a “decision” on whether (or not) they wished to license the “intellectual property” associated with the disclosure. At one point, I dutifully called for their response on a particular disclosure, as I was expected to make a recommendation for next steps based on their decision.
The CEO was a bit shrill when he replied, more or less, that there “isn’t enough information” to decide.
His response wasn’t unreasonable on the face of the matter–the disclosure probably DID NOT include enough information to make a “decision.” On the other hand, I was accustomed to making similar decisions with much less information. Perhaps I was less than sympathetic in this instance, since some company contracts lawyer had inserted these decision deadlines as a way to keep both the tech transfer office and the academic researcher on some kind of “leash” as it were. There were too many points at which the research path and the commercialization strategy diverged. The company wanted a nice comfortable “oversight” role in the research without being directly involved, at least until they could make THEIR decision.
The scientists were too busy making their own decisions–about publications, conference presentations, grant proposals–to bother with the “tech transfer stuff” as part of the mix. To complete the muddle, the structure of the tech transfer office, and the IP policy, greatly restricted the ability of the researchers to make their own decisions on that aspect of their projects. They didn’t spend much time thinking about intellectual property since it was explicitly not considered part of their “job” in any practical sense. So for the scientists, lobbing off a few half-baked invention disclosures might be their entire contribution to fulfilling the contractual obligations involved. Some researchers might unilaterally decide that commercialization isn’t a serious option, or at least not one they wish to pursue. Thus, they will go about their work without making an effort to consider intellectual property except within the context of research and publishing–plagiarism, that sort of thing.
It is, ultimately, the decision of the researchers to start the commercialization process. Even without the terribly drafted contract and commercial “partner,” I wouldn’t have expected the research group to be super diligent about getting a properly drafted invention disclosure into my hands. When a researcher does make a decision to pursue commercialization, the existing academic infrastructure does not support that direction. How could you possibly justify putting time and effort into something that doesn’t earn you the “rewards” of academia? Granted, that’s not always the case, as some researchers do manage to come to terms with the awkward system, and are interested and/or motivated to move forward with commercial development. Fortunately for my student teams, the projects they are working on fall into that latter category. Unfortunately, it doesn’t make the process any less opaque.
It’s like a sporting event where all the researchers are players who start by playing one game and, at some mysterious signal, shift to playing another game with different rules. More confusing, a whole new set of players arrives, in the form of the technology transfer office, the patent attorneys, sometimes even corporate partners and investors. So the challenge is finding a way to make decisions that are consistent within the context of the situation, recognizing that many of the parties are still learning how to play a game with some obscure rules.
The researchers may be quite willing to work with a tech transfer office, or anyone frankly, who is willing to step in to make decisions necessary for the business development effort. That doesn’t mean they have made enough effort to understand the process so that their own work is directed toward that end. In general, academic research isn’t directed toward the same ends as business development, and even if it were, business development isn’t that easy either. If it were, everyone would step up and start their own company, develop wonderful products and reap the rewards. It’s obvious that even the most motivated entrepreneurs can enter the market with a new product to be met with indifference on the part of their desired customers.
My goal, as it has been since I took my first job in “technology transfer” 20 years ago, is to make sure that the decisions made by the people involved are the best ones they can make, with the information available at any point. This may result in decisions that aren’t consistent with certain institutional metrics for commercial enterprise–numbers of disclosures, stories of successful startup ventures–but hopefully those decisions are ones that best serve the goals of the research enterprise, which is, in the end, the goose that is supposedly laying golden eggs. (But I begin to mix my metaphors, or something like that, so will close for now).
Fat Startup: Learn the lessons of my failed Lean Startup — Word Sting, by John Finneran, CFA, principal of Word Sting
I wanted to comment on the post linked above, because 1) I am working a bit with the “lean startup” model with student project groups, and 2) I am always a bit skeptical of “flavor of the day” business jargon. I found it interesting that this was posted online shortly after the Harvard Business Review published “Why the Lean Start-Up Changes Everything” by Steve Blank. This release, in the May 2013 HBR, heralds the alleged “entry” of the lean startup concept into “mainstream” business concepts. While the two posts aren’t directly related, John Finneran’s brief description of his own experience in attempting to “implement” lean startup seems instructive at one level–even if “lean” is the right way to go, it might not be easy to do it “right.” It isn’t clear how much of the story might be due to incorrect implementation of the concepts, or whether the concepts are not broadly applicable to all situations. In my own brief experience, I hesitated to fully embrace the lean startup methodology due to my own concerns along those very lines.
On the other hand, I chose to use the concepts in exactly the same context that Steve Blank implemented them–working with university students who are engaged with “first time” startup efforts. These students are engaged in learning activities, and not primarily in the startup activity (regardless of the rhetoric around the subject). I found the business model canvas and the very slick, exciting Lean LaunchPad MOOC to be useful tools for students with little or no experience in business research. In doing so, I hoped to leverage the fame and popularity of the “startup party” mentality, hoping this would be motivating to students. I also found this to be an efficient direction, since I don’t have a burning desire to set up my own “school” of thought on the subject. However, I am nothing if not practical, and for me, this model is just one way to have a standard methodology for asking the right questions.
In our student team projects (exploring real-world application of ideas to develop new products or new businesses) there are fewer of the downsides expressed in the “Fat Startup” article. The “potential clients” who are contacted understand that the “startup” is really part of a class project. The students, no matter how serious they are about moving forward with the new venture, still expect a ‘grade’ for their final efforts. For many of the projects, the effort is being made too early for real “customer” interactions. These students are doing research, and there is no doubt about it. In fact, my expectation is that the majority of the projects will result in “failed” experiments. I try to prepare the students for this by telling them so–but I also tell them they will learn very valuable and practical lessons that they can use when they later find the right path to a startup (or development of a new product).
These projects offer a form of “real world” experience, but maybe not so “real” as that. The reality is in the tasks, but not in the context. The so-called enterprise may be little more than a group composed of college friends and roommates. They generally fit this activity into their overcrowded calendars in the same way they might do for fraternity or sorority functions (whether service or social). I can suggest that they use the Lean Startup methodology for their projects since I find it has some solid elements of logic, but I’m not concerned they will rely on this effort to support themselves (or any dependents). And they aren’t in a position to make promises to customers, just take suggestions and see if they can find a path that looks promising…but they really are planning a future journey. Their project might involve “getting to the station” as their first “step” and they still have the option to take another train (or even consider an entirely different mode of travel, maybe a boat).
Don’t get me wrong–I wouldn’t suggest the model to my students if I didn’t find it valuable. I just want them to use their own judgement, and learn how to THINK before they make decisions. This includes the decision on whether, or not, to rely on something like the “lean startup” concept for taking that leap into the deep (and muddy) waters of entrepreneurship. University faculty are in much the same position as students, and they are being given direction to use the lean startup model in their own new ventures (for example, NSF Innovation Corps (I-Corps)). I feel this might be useful with some faculty, those who can truly put themselves back into “student” mode for the purposes of learning a subject (business or entrepreneurship) of which they can comfortably admit ignorance. It might not be applicable to all new ventures or to all faculty entrepreneurs.
The moral of the story? Just be careful about jumping on passing bandwagons.
Both of these short “editorial” pieces ask an interesting and somewhat philosophical question about the role of the university in student “entrepreneurship.” The linked articles (by Nicholas Thompson, editor at NewYorker.com and Stanford alum) question some of the more “intense” levels of involvement that many universities are embracing, with schools like Stanford as our role models. Now, I’ve always tried to resist using “what Stanford does” as a yardstick–I even included a nice little slide in one of my presentations, showing little lemmings jumping off a cliff with a cartoon balloon from one shouting as it fell “…well, Stanford does it!” But the urge to follow their lead is almost irresistible.
At its heart, this is nothing more than a bit of “peer pressure” since every university is looking to establish a reputation–presumably a “good one” too–showing how its own activity reflects the “best” new practices. In some cases, it may be near impossible to replicate these practices (we’re no Silicon Valley), so at least that provides some protection from following all the yellow brick roads “we” see. It’s easy, though, to wish your own institution of higher education could collect all the accolades that a school like Stanford receives.
After some reflection, I realized that the piece reminds me a bit of the book I’m reading (audiobook format as usual these days, so “listening to” is more precise), “Antifragile” by Nassim Nicholas Taleb (click here for his website and here for the Amazon link to the book–note I am not an Amazon affiliate, just being helpful). Now, that is a bit of a leap, but I tend to think that way, so it’s not surprising if you know me very well at all!
In the book, which I confess to not having finished as yet, Nassim Taleb refers to a phenomenon he calls “lecturing birds on how to fly” and makes some very disparaging references to Harvard Business School faculty (not Stanford faculty, but I’m betting he would do so if he weren’t more centered on the east coast). In many ways, his argument is also along the lines of a sort of “chicken and egg” situation (even though chickens don’t fly). As I see it, a school like Stanford provides a lot of opportunities that “attract” very entrepreneurial students, very smart ones at that, and then the “university” manages to get those little entrepreneurial birds to fly. Many other universities then follow Stanford’s lead on how to teach their own students the Stanford way of entrepreneurship, since it has such great success.
If you look at it that way, it seems that some of the things critiqued may be simply circumstantial, and Stanford isn’t really “doing” anything except meeting some expectations of the students already there. Is this “harmful” to Stanford in some way, as Mr. Thompson is pointing out? Perhaps, but it may be more true to say that Stanford is being given credit for ‘causal’ influence that isn’t there. In some cases, maybe there is also a negative impact on students (or more precisely, on their education). Certainly, I think it’s a good idea to ask the questions…but is someone going to try to find an answer? Regardless, it serves as one more example for me to use when people try to propose the next “me too” model for my own programs or projects.
This is hard to resist–who among us doesn’t feel the need for validation that takes the form “well, I found out that X is doing things this way”? As happened regularly, I was once contacted by a colleague at another university (no names), who was engaged in a contract negotiation with a company. The company had cited my own university as an example of a peer who had “agreed to these terms,” so she called me to confirm. One of the reasons she did so was that the terms seemed “less desirable,” and she seemed to suspect the company was not telling the entire truth. Now, at this point, I didn’t want to get into exactly what my organization had (or had not) agreed to, since there were too many complicating issues to explain. But I did give her one piece of advice that seemed to help. I told her, “Just go back with: even if University X agreed to those terms, that’s not a good reason for us to agree.” As I explained, there might BE good reasons for her to accept those terms, but the company needed to work through those specific reasons with her–not just insist “by reference” that since another university agreed, it was OK.
Otherwise, I am continuing to “read” the Taleb book, and I hope that there are some more interesting issues raised. As I understand, Nassim Taleb is nothing if not interesting, and I’m looking forward to learning more about some of his insights on innovation.
This article argues that perhaps “we” (as in, the administrations of universities, and the government) are trying too hard to push research commercialization. The argument is primarily against special spending for programs and financial incentives to faculty encouraging them to pursue technology transfer. It is an interesting take on the question of university technology transfer…and I admit, I’m more than a little sympathetic to the views expressed. Which is unfortunate for me, since there are some who feel that my current role within the university is to find ways to “increase” commercialization levels on our campus.
Granted, I don’t consider this to be the essential role I play in this regard, but others would consider it a “failure” on my part if some metrics related to commercialization don’t “increase.” I tend to take a more balanced approach. I don’t want to “increase” commercialization, per se. I am working to reduce or remove roadblocks to commercialization when there is a good opportunity to take some research findings to market. There may be unnecessary difficulties–generally caused by either lack of experience on the part of faculty or administrative or financial issues that limit the perceived upside to pursuit of commercialization. The article does a good job outlining the failure of “additional” incentives, and points out that the natural incentives exist for those cases where there is a big payoff–and as I often point out to faculty, those tend to be commercialized in spite of so-called barriers to the process. Most faculty aren’t overly influenced by the money argument anyway, although it falls under the category of “great if it happens.”
I’m not sure the article does justice to another aspect of the problem, however. There are instances where there is a clear benefit to taking the research outcomes and making them available in the “real world” but there is not a clear “profit motive” to do so. Faculty may be very interested in seeing the results of their work applied and in being part of the effort to make this happen. They are doing research ostensibly to help “solve” problems in many cases. Often, this leads to a lot of confusion as to how the university might proceed. There are quite a few options to consider, such as:
- Publish the results and make them “freely available.”
- Pursue some kind of “open innovation” strategy that includes a level of intellectual property protection (such as open source licenses for software).
- Try to pursue commercialization through entrepreneurship, which may not include licensing of intellectual property.
- Pretty much do nothing but sometimes think about it, etc.
All of the options, while somewhat straightforward, can lead to paralysis by analysis and other common forms of deadlock, especially if the people involved (the research team) are inexperienced and unfamiliar with the process. The researchers may have a goal that is consistent with technology transfer and licensing, but the pieces never quite come together properly.
For example, their response may be to publish, but this winds up being unsatisfactory because there is no clear transfer of an idea into use. If there is a decision to hold off publishing, it may be that there is a naive belief that they will be able to enter into a commercialization relationship, and if they can’t it wouldn’t be fair if “someone else” got to make money off of the “idea” from the university. Conversely, potential partners may be confused as well, if there is not a clear IP position that can be evaluated so they can make a decision to “license or not license.” There may be some strategies that the faculty can pursue in these cases, but as the article in Business Week points out, adding incentives doesn’t solve them.
The article further references an earlier publication, by Richard Jensen of Notre Dame and Marie and Jerry Thursby of Georgia Tech titled “Disclosure and licensing of university inventions: ‘The best we can do with the s**t we get to work with.’” (click here for this paper). The Businessweek article notes the following (emphasis added):
The title, taken from a comment made by one of the licensing officers, sums up what happens when you give universities an incentive to commercialize additional faculty inventions but you can’t do anything to improve the quality of the inventions themselves.
Now, you can perhaps characterize those situations that I refer to above in this way–the “inventions” are not of proper “quality” to license. As a licensing manager, I can help faculty work through the problems and see if there really is a diamond in the rough, but we can’t make a diamond out of a piece of coal. But notice, a piece of coal can still be worth something. And it can still be important to someone, even if there is no clear-cut way to “commercialization” for that.
There are incentives for most licensing offices to work ONLY with faculty who have the “quality inventions” and no incentives for those same licensing officers to find a way to “work with” the so-called “s**t” as it were (note, I don’t approve of a licensing office using that attitude when referring to an invention disclosure–although I can feel a bit of empathy). So perhaps there is some room for universities to take some steps to encourage efforts at commercialization, and some of this may be in the form of incentives (although perhaps changes to incentives for licensing staff, not the faculty).
The moral of the story is that incentives aren’t necessarily the best answer in all situations, and as they say…be careful what you ask for because you just might get it.
Again, I find myself inspired to post here, after reading one of Gerald Barnett’s most recent blog postings, Five Defects in Persistent Readings of Bayh-Dole | Research Enterprise. Early in the post he makes a statement that really struck a nerve with me (emphasis added):
I made this list of serious defects in characterizations of Bayh-Dole in the academic and popular press. Why do these persist? It must be that there are folks who really want to promote defective readings of the law.
Why did this strike me in particular? It took me a few minutes to think that through. First I realized that my experience in technology transfer was a key factor. In the first few years of my career, I certainly mirrored exactly the same “reading” of Bayh-Dole because most of the training and mentoring I received was from people who understood the legislation in this manner. It is part of “our” professional identity to understand and communicate the essence of the legislation that was the foundation upon which our technology transfer offices were built.
After those first years, I became less concerned with any “essential” truths on Bayh-Dole as my job focused primarily on details–making decisions on patent applications, reviewing patent prosecution correspondence, meeting with new inventors, negotiating licenses (or executing the more common agreements: non-disclosures, MTAs, etc.). Professional development centered upon attending conferences and workshops which either 1) repeated the conventional wisdom on Bayh-Dole, or 2) focused narrowly on the technical side (patent law, etc.). It isn’t unusual for anyone to get caught up in the day-to-day activity and feel little or no interest in reflection upon something that has ceased to be a question.
Unfortunately for me, I am often “accused” of over-thinking on pretty much any issue or subject. Also, I am more inclined to question conventional wisdom than most. I am not particularly impressed with arguments that have little in the way of DATA to support them. Of course, I wouldn’t let that get in the way of the daily routine. I managed to continue with the meetings with inventors, whether (or not) I “believed” in all the myriad “assumptions” of the legal foundations and how the legislative underpinnings were supposed to work.
Admittedly, I was sometimes at a loss when asked to really “justify” some of the policies and processes I was expected to operate under. I could honestly agree with, or at least provisionally consider, many of the opposing viewpoints–for example, why did the university policy require faculty to assign all patent rights to the university? How can the policy state that faculty will own copyright in most of their work, but then add provisions where the university claims a specific work?
If I had reservations about whether these issues were being dealt with “rightly” I mostly kept them to myself. After all, if I raised doubts with the wrong person, their conclusion might be that I didn’t really understand my job, especially if they bought into the conventional thought on the subject. In a few instances, when I did at least highlight some of the more problematic points, there was little interest in making changes–and for a variety of reasons. For example, policy was too difficult to change, and no one understood it anyway. Besides, there I was again–thinking too much, and asking too many questions.
This gets to the heart of why I pretty much left well enough alone. It was easier to keep on track with everyone else, and I didn’t really have the energy (or the commitment) to do more. I focused on the technical side, glossed over those pesky “principles” and just tried to approach each project from a common sense point of view. It could be fairly uncomfortable, but I’m generally more tolerant of that sort of ambivalence.
With respect to Bayh-Dole, I’m willing to bet that many others in the same position will either 1) avoid questioning conventional dogma, or 2) insist upon the innate truth of said dogma. Perhaps this does result in their promotion of “defective readings of the law.” I may find it regrettable, but I also find that I can sympathize with them to a point. It’s hard to come to an unconventional conclusion and attempt to stick to it–especially if there is some chance that you will come to an incorrect conclusion, or at least a certain percentage of people will judge you to be “wrong.” Even when I do find myself coming to unconventional conclusions, I often default to discreet silence and not publicly expressing my doubts. However, by my silence, it’s possible to conclude that I am contributing, albeit passively, to promoting the opinions that others are more aggressively voicing.
This was the point that made me pause in reading Gerry’s original blog entry. What, precisely, do I really “believe” and what should I then do about it? I have to think more about the subject, and that I will leave for another posting.
Infographic Created by: ClinicalPsychology.net
This posting isn’t particularly related to technology transfer, and not even strictly speaking to intellectual property. But I found this infographic to be quite interesting, and it struck a nerve with me, especially due to the recent news coverage of Jonah Lehrer’s unfortunate decisions in writing his latest book (for reference and to read more about the story click here). There is a common theme to both that leaves me troubled and sympathetic at the same time.
The troubling aspect is that there are certain individuals who apparently think some “rules” apply to other people, but who are more flexible when applying these to their own situation. This is true even for some of those our society would consider the most well educated and intelligent. Perhaps they feel their own intelligence is a license to make a “judgment call” about whether, or not, a rule is a good one. Maybe they even convince themselves that they are correct in the “spirit of the law” as it were, and that others don’t comprehend all of the nuances of their “particular” situation. They may claim to see no rational justification for a particular rule.
Nonetheless, the rules, or the laws, exist for a reason.
Even IF you can make an argument that a particular rule or law is inherently flawed, that doesn’t justify making a decision to break it unless you also acknowledge this is what you are doing. I have no problem with someone taking a stand against an unjust law or an unreasonable and arbitrary rule. But when the rules or principles of good scholarship are violated by a researcher as described in the infographic, it undermines the foundation on which other researchers are attempting to build new knowledge. Researchers who do this are intentionally making choices to enhance their own professional status, to add achievements or successes to their reputation that are fraudulent. In this situation, researchers are allowing self promotion to trump the social contract that is essential for the research community. The scientific research enterprise has evolved such that researchers are meant to benefit from the work of others, in order to advance the overall body of knowledge in a particular field. There may be situations in which, truly, one is only hurting oneself with bad decisions, but this is not one of them.
The sympathetic response is just a reflex, as I realize that in so many cases these researchers–academics, scientists, scholars–are only human, and it is so easy to slip into these habits. Most of the training or mentorship that people receive is focused on field-specific knowledge (Jonah Lehrer, for example, has an undergraduate degree from Columbia with a major in neuroscience). Students are expected to master large bodies of scientific or technical knowledge, while the ethical and/or philosophical principles of scholarship and research are given little emphasis.
Even when someone with a great deal of interest and motivation to learn and understand attempts to grapple with some of these issues, it can become confusing. What is the difference between copyright infringement and plagiarism? Can you truly “plagiarize” your own work? For me, it’s natural to assume that someone didn’t really “mean” to do something wrong, as I like to think the best of people, especially of those engaged in scientific or scholarly research. But there seems to be an element of defiance in some of this behavior. Given my experience in academia, I repeatedly encounter students who explicitly discount some of the “older” traditions of scholarship. Their notions of how to appropriately “paraphrase and cite” another’s work are often made on the assumption that it doesn’t matter that much. Frequently, they will even defend some of the most egregious examples of plagiarism (perhaps, copying entire sections of a Wikipedia article and “changing the words”). Their position appears to be that they are merely playing an intricate game, and that the end results (the appearance of “expertise”) justify their means.
I’m sure that there are multiple levels of self justification and self deception that allow researchers to make this kind of decision. They may think (or rather “feel” since I can’t say they “think” much at all on this subject) that this is not so different from driving 75 mph in a 70 mph zone–as long as no one gets hurt, and you don’t get caught, what is the harm? If you go 85 mph, is it that much worse? Certainly there are all the earmarks of a slippery slope that will lead the unwary or lazy researcher into a spiral of unprofessional and unethical behavior. Likely, each little step seems more akin to a “white lie” and not anything ugly or immoral, like fraud.
Still, it is important for everyone–both as individuals and as a community–to see that the rules and laws that we put into place are good ones, and that everyone understands that it is important to follow them. The rules must truly support the common good, and thus if you follow them it is more beneficial than breaking them. Even if it seems that breaking the rule will be “better” (for your own benefit at least) this isn’t truly the case. This is one of the most basic lessons that all good parents attempt to teach their children–yes, the homework is hard and skipping it “feels” better now, but in the long run, you are better off doing it. Even if you think that it won’t hurt anyone, even if you probably won’t “get caught” it’s still important.
Jonah Lehrer was, by all accounts, considered a gifted and insightful writer, but now each of his prior achievements is suspect, so he is certainly paying a price for his own poor judgment. Perhaps, as a journalistic writer, his actions didn’t undermine a great deal of “serious” scholarship or research, but there is still a cost to society. How many people bought his book with sincere interest and respect for his reputation, only to now feel a sense of betrayal? They spent some small amount of their own resources on this–perhaps not a huge loss to most of them, but they were defrauded at least to some extent. For the scientific misconduct described in the infographic, the cost may be immeasurable. A scientist who falsifies data may be guilty of steering others away from promising directions that could result in tremendous advances, or of sending them down false trails, wasting valuable resources in vain pursuit of something that was never there. Thus, my tiny impulse to sympathy is short lived.
If someone is truly confused, and not simply feigning this as a rationalization for their choices, then the answer is to be more open about these issues. We should be more explicit in teaching students how to properly conduct themselves in the course of research and publication. This can include more education on how intellectual property rights figure into the equation, but it is important to realize that this is more than an issue of “property rights.” All of us could probably benefit from a bit more self-awareness and reflection on our own professional conduct in this respect. I’m sure that many of the people who have admitted “misconduct” might, at one point or another, have written a very similar posting, and felt they would not fall prey to such temptation. Your own “bad habits” might seem small and insignificant, but it’s important to be able to say you actually made the effort to do the right thing, and didn’t take advantage of the loopholes or rationalizations.
First, I’ve meant to be more consistent with posting, but got caught up in launching a small “idea pitch” competition for students. It was a good experience, and fun was had by all of course, but I didn’t see daylight for the past couple of months. However, as my schedule has returned to normal in the past week, I noted increased discussion in technology transfer circles on the subject of “free agency” and the rise of a more concerted “just say no” campaign. Ordinarily, I would steer clear of the subject—for me, it falls into the category of a subject better left alone if there isn’t a good chance to have a calm and reasoned discussion. On the other hand, I’m never entirely clear whether the subject truly is “free agency” (whatever most people mean by that) or whether it gets back to the “Stanford v Roche” questions on ownership of inventions made by university researchers. The two are, of course, linked. But ownership of intellectual property should be a question that can be answered factually.
I won’t make any attempt to revisit Bayh-Dole legislation, or the Stanford v Roche decision, and interpret those in any way–Gerald Barnett is doing this with much more rigor than I would be willing to devote to the effort. I would just say the answer is along the lines of “in a particular case, figure out who owns the intellectual property.” The owner of the intellectual property—whether by right of inventorship, by contract or voluntary assignment, or by any other means of acquiring ownership which you might imagine—is entitled to make decisions on how to manage the rights. Free agency is actually the default option when the inventor(s) own(s) intellectual property. But so what?
In the end, you simply have to ask: what are the real goals of university technology transfer? Intellectual property protection is really meant to encourage market investment, so that innovations are developed more quickly and efficiently. The party who makes the product/market development investment does so with some assurance that an enforceable patent will allow for at least a moderate period of uncontested market share—at least in theory. Since universities generally cannot take products directly into the market, and since an inventor rarely has the experience or resources to do so, licensing of patents is the default mechanism for inventions originating from university research. In my experience, the primary reason that faculty researchers are interested in technology transfer is to see that their knowledge is transformed into a real world solution to a problem. They choose to work with any licensing agent (whether internal or external) only if it seems there will be more benefit than bother associated with the activity–that is, when they feel strongly about getting a product developed and on the market. This may, or may not, include an expectation of significant financial rewards as well.
I’m very supportive of the licensing staff at most universities; after all, that is the role I’ve served for over sixteen years. I know all about the constraints and obstacles facing technology transfer efforts. I understand that a lot of the so-called “failure” there is rooted in circumstances that can’t easily be addressed (such as lack of funds to staff the office, or for prosecution of patents). Thus, it isn’t obvious that free agency will solve specific issues and problems with commercialization of university inventions, any more than salads at fast food places solve the so-called epidemic of obesity. However, as much as we might hate to admit it, there are too many problems with management of intellectual property in a university setting to ignore. There may be lots of arguments against current proposals for so-called “free agency,” but I find it difficult to oppose considering the model, at least in some form.
Even if free agency is an option, faculty would need to be motivated by other considerations to take advantage of their “freedom.” Since many of the free agency models are only dimly conceived–who, exactly, are these “agents,” and how would they engage with faculty?–it would be necessary to think through the consequences of how a particular model was implemented. Many of the free agency proposals seem to assume that the “successful” university systems would be open to managing the patents for others. On that front, I cannot imagine that a university would allow their technology transfer office to work with another university’s intellectual property on a regular basis. Of course, joint ownership is an exception, but even then I’ve known it to work both ways. There are multiple cases where I was perfectly content to allow another university to “take the lead” but was met with resistance from the other side, where my counterpart was hoping my office would manage the intellectual property. It can be a great temptation to have someone else bear the expense, and be the bearer of bad tidings when a decision is made not to file a patent, etc. Thus, it wouldn’t seem that free agency is always a bad deal for the technology transfer office.
So why not take a serious look at the model and see how it might be implemented to advantage in a particular situation? I doubt that this means a complete shutdown of the technology transfer offices. There are many facets of intellectual property management that the university will still want to deal with directly, and until there are more examples of specific routes for faculty to pursue, the existing licensing office will be the first stop–how else will faculty know where to go? It might not be a very attractive option for faculty when such a system is implemented at first, due to simple inconvenience and lack of other options. Of course, there are a few independent licensing groups, and sometimes attorneys can freelance this sort of work, but these aren’t simple options for the average faculty member to find and evaluate. Even if you work in this arena for many years, it can be difficult to engage with a suitable partner. Further, the terms of such a relationship might be a significant barrier—exactly how much does this option cost faculty?
Honestly though, I think it is worth the experiment. If the benefits of this kind of arrangement do outweigh the “free” option of the local university licensing office, how can you argue with this? I won’t, however, hazard a guess at this stage on how effective the free-agency model will be in practice. After all, if the primary and most obvious “agent” available is still the university’s own technology transfer office, or at least the closest “bigger” university office, it’s not clear this will result in great change. Still, I think there is room for innovation in technology transfer itself, not just in the research results from the university laboratories.
Creative Commons Attribution-Share Alike 2.0 Generic license, photo by Patrick Mackie
O ye’ll tak’ the high road, and Ah’ll tak’ the low (road)
And Ah’ll be in Scotlan’ afore ye
Fir me an’ my true love will ne-er meet again
On the bonnie, bonnie banks o’ Loch Lomon’.
In the course of any particular day, I am often approached with very general questions on topics related to technology transfer. I am expected to serve faculty, staff, and students as a resource for information. In particular, this is meant to ensure that they fully understand the issues and processes, and can more effectively engage in commercialization activities. Unfortunately, this means that the questions posed may range from the legal/technical “what is meant by ‘prior art’?” to the more esoteric issues associated with policy and decision-making, such as “how can we become more successful in spinning out new ventures?”
Of course, the latter sorts are the ones that keep spinning around in my head, since in many cases even I have difficulty with a clear, concise, answer. In large part, this is because I recognize the inherent problems with decision making in these situations. The real question being asked in some of the situations is “what is the RIGHT thing to do?”
At the heart of any decision is an effort being made to choose an action in pursuit of some goal. In order to do so successfully, the decision maker must:
- Clearly articulate the goal
- Identify options available for some active choice
- Understand the consequences of those alternative choices
- Evaluate other factors which might impact the final decision process
In this general sense, many decisions are conceptualized as a “cost-benefit” analysis, with the decision-making process weighing the gain attendant upon a choice against the loss or “expense” involved.
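For the quantitatively inclined, the cost-benefit framing above can be sketched in a few lines of code. This is a purely illustrative sketch–the option names, probabilities, and dollar figures are all invented for the example, and real technology transfer decisions involve far messier inputs than a single expected-value number:

```python
# Illustrative cost-benefit sketch: score each option by its
# expected gain minus its cost, then pick the highest scorer.
# All names and numbers below are hypothetical.

def net_benefit(option):
    """Expected benefit of a choice, weighed against its cost."""
    return option["probability_of_success"] * option["benefit"] - option["cost"]

options = [
    {"name": "license to existing company",
     "probability_of_success": 0.25, "benefit": 600_000, "cost": 20_000},
    {"name": "spin out a new venture",
     "probability_of_success": 0.125, "benefit": 2_000_000, "cost": 150_000},
    {"name": "do nothing",
     "probability_of_success": 0.0, "benefit": 0, "cost": 0},
]

best = max(options, key=net_benefit)
print(best["name"], net_benefit(best))
```

Of course, the hard part of the decisions discussed here is precisely that the “benefit” and “cost” terms resist being reduced to numbers at all.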
Obviously, some decisions can be made using much simpler processes, such as a coin toss. Such decisions are often made by a single person, on matters with little impact on either that individual or anyone else (yes, I would like iced tea to drink), and with no complications.
Australian Rules Football match at Hyde Park, London, on 8 January 1944. Source: Australian War Memorial
This image is of Australian origin and is now in the public domain because its term of copyright has expired.
Other decisions, however, require extraordinary efforts directed toward fact-finding and analysis, along with multiple meetings of large groups of people who must conform to a formal structure for coming to a decision. This sort of decision is likely to impact larger groups, or have potential consequences that justify investment of time and energy into making the best choice possible. Furthermore, this may involve individuals or groups with widely divergent opinions on what constitutes the “best” choice. Frequently, the decision is couched in terms of “right” and “wrong,” such that there is an unfortunate attribution of possible fault and blame attendant upon the choice.
For decisions related to technology transfer, the process can be extremely complex. Choices may be constrained in various ways that are uncomfortable for those involved. The culture of academia can also play a larger role than is typically appreciated—this may amount to a set of “almost sacred” values held by some of the people involved. Frequently cited values in academia include “academic freedom” and “dissemination of knowledge,” but there are many variations along these lines, attributing a sort of “purity” of thought and intentions to the academic world. If technology transfer decisions are subjected to this sort of “good vs. evil” analysis, there will be individuals on both sides of the question claiming the moral “high ground” as it were. Suddenly, it becomes difficult to decide which position constitutes the “high road” and which the “low road.”
This is further complicated when you realize that perhaps no one knows which road—the “high” or the “low” one—is “better” in some absolute sense. As the Wikipedia article points out, there may be different interpretations brought to the imagery. The low road is sometimes equated with death, the soul of the departed Scotsman returning home, so the traveler on the “high road” may be making a more expeditious choice, but not necessarily a “better” one. There is a sort of moral judgment implied of course, but there remains room for speculation on the relative values demonstrated.
A recently published study [Philosophical Transactions of the Royal Society B: Biological Sciences, Vol. 367, No. 1589. (5 March 2012), pp. 754-762] confirms that the brain actually processes decisions differently if the choices involve “sacred” values held by a particular person.
Economic, foreign and military policies are typically based on utilitarian considerations. More specifically, it is believed that those who challenge a functioning social contract should concede if an adequate trade-off is provided (e.g. sanctions or other incentives). However, when individuals hold some values to be sacred, they fail to make trade-offs, rendering positive or negative incentives ineffective at best.
Obviously, this is true for university policy as well. As the authors in the study point out, policy decisions are seldom made with any degree of introspection on the possible difference in value judgments on the subject in question. The study concludes that there is a problem encountered when attempting to evaluate certain choices when viewed against deeply held convictions–the brain simply doesn’t process this sort of decision well. The entire process takes a different track so to speak.
Given this understanding, it’s easy to see how difficult the choices might be for university technology transfer. In fact, there may be surprisingly little effort made to analyze some of the “de facto” decisions made under guidance of existing policies. In worst-case scenarios, policy is wielded as a weapon (either against a faculty member or a potential licensee!) and the university (technology transfer) agents are derided for being “inflexible.” The university representatives feel it is a matter of taking a stand, holding positions consistent with their understanding of their institutional policy. They are held to this standard for making the decision. No matter how “reasonable” the tech transfer office might wish to be, university administrators can be hostile to suggestions that policies provide for flexibility—after all, what would be the point in having a policy then?
Thus, when I am posed a question that has at its heart, a possible conflict with the deeply held values inherent to the academic world, I tend to hesitate in providing answers. While not “sacred” on par with belief in a deity, there is sometimes an undercurrent of feeling that technology transfer involves something “not right” in the context of the presumed mission of the university. For some researchers, a decision has already been made that commercialization is “good” for the university, and much of the decision-making process defaults to typical “cost-benefit” categories. But there will be faculty for whom this kind of answer is insufficient. In order to work with them, to give real answers to their questions, it is important to realize that technology transfer is perceived as putting a price tag on something that isn’t even on the market. I do try to respect this position and in the course of doing my job, attempt to bring clarity and consensus to the decision-making process in technology transfer. Even when I’m tempted to just toss a coin!
In the social sciences, unintended consequences (sometimes unanticipated consequences or unforeseen consequences) are outcomes that are not the outcomes intended by a purposeful action. The concept has long existed but was named and popularised in the 20th century by American sociologist Robert K. Merton.
Unintended consequences, From Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/Unintended_consequences
In popular discourse, people often refer to “the law of unintended consequences” when debating the merits or shortcomings of a particular decision, or course of action. Like any sufficiently interesting and yet complicated subject, it can be difficult to fully grasp what is really at the heart of such references. It recently struck me that this is, in part, at the center of the many debates on the proper role of the university in commercialization of scientific research. The initial inspiration for this post comes from a blog posting by Gerald Barnett (Research Enterprise, Oh, to be the happy dog again–side note, I try to read Gerry’s blog as often as possible and recommend it highly). In my experience the technology transfer office may be trying to accomplish goals that are not clearly defined or, as highlighted by this posting, are actually in conflict with some of the other goals of both the university administration and the faculty researchers.
It is all too easy to get swept up into the rhetoric on how the Bayh-Dole Act allows universities to “benefit” financially by licensing patents arising from federally sponsored research. From that basic premise arises a series of decisions and actions with consequences, both intentional and unintentional. As the Wikipedia article summarizes the concept, unintended consequences can be roughly grouped into three types:
- A positive, unexpected benefit (usually referred to as luck, serendipity or a windfall).
- A negative, unexpected detriment occurring in addition to the desired effect of the policy (e.g., while irrigation schemes provide people with water for agriculture, they can increase waterborne diseases that have devastating health effects, such as schistosomiasis).
- A perverse effect contrary to what was originally intended (when an intended solution makes a problem worse), such as when a policy has a perverse incentive that causes actions opposite to what was intended.
Note, this summary presupposes that not all “unintended consequences” are negative. However, the negative ones tend to be the consequences eventually cited as unintended—nearly every positive outcome of a particular decision or action has someone claiming it as his or her own particular intention. Unfortunately, many perceive this as a challenge to make “better” choices, and so to avoid the negative consequences.
Thus, the technology transfer offices confidently point to “success stories” from the canon of technology transfer gospel as a model for their particular university to embrace—whether that is actually a viable alternative or not. Various university officials or administrators then look to the tech transfer “operation” as a source of alternative income, one that is desperately needed, and begin to expect ever-improving “metrics” in terms of licensing performance. If your office realized licensing income of $10M last year, what are the projections for the following year? What is the projection for next year, and the years after that? Why was there a drop of $2M this year versus the prior year?
If the technology transfer office produces alternative metrics—numbers of licenses, startups founded, patent applications filed, or issued patents—they are likewise put on a track to reproduce or improve those metrics year after year. Often, these become a level of baseline performance for a university versus the performance of “peer institutions” or “aspirational” peers. If your office can’t easily produce the metrics (and some of these are “easy” to produce, such as number of invention disclosures) what then? This can lead to an implied commitment to invest in the metrics—it’s important to remember that these decisions and actions are done at some cost. This might include an annual patent budget, aimed at filing a respectable number of patent applications each year. After all, so the argument goes, you can’t expect home runs if you don’t get a nice number of “at bats” or base hits. If you can produce enough cash, then you can produce patent applications, even issued patents. Funding a technology transfer office with a director, along with some support staff and maybe a couple of technology licensing managers, can represent a sizeable commitment of “overhead” funding.
This is when the tail might begin to wag the dog, and you learn, as Gerry points out, this doesn’t mean a happy dog. A lot of investment and effort is being put into producing a pot of gold at the end of the research rainbow, which means dealing with troublesome leprechauns and associated tricky business. Meanwhile everyone is still expecting those “smiles and fluffiness and public purpose, stardust and unicorns and glitter“ as it is nicely summed up in the original blog posting. While I’ve pointed out the limitations of analogies in an earlier posting (here), this does get a couple of points across! You’ve got to remember, not every fairy tale has a happy ending, and there is always at least one character on the losing side. This means someone gets stuck with the role of evil stepmother or nasty fire-breathing dragon.
It’s easy to keep with the script, stick with the stock characters and plots, rather than trying to put together a unique story. But this gets us back to that “law” of unintended consequences. For all practical purposes, it’s impossible that positive consequences will be presented as “unintentional” and so there are no orphans in that part of the fairy tale. As for the rest, you might get some grudging acknowledgement of partial responsibility for negative twists in the story, but mostly you get rationalizations from the parties involved. I like to think that we can work out some new plots for technology transfer tales, and maybe even endings with a few happy dogs. You may still have a lot of those unintended consequences of course, but hopefully the “intended” consequences will make those worthwhile.