I often begin a blog post with a “true confession” type of statement. Usually, I’ve read an article that made me think; sometimes the article is another blog (I only read a couple of blogs regularly, and I do so because they are the ones that make me think most!). I’m normally content to allow others to express insights and thoughtful commentary on a particular topic, and save myself the time and trouble of posting anything at all. However, I’ve found that reading another person’s blog is more often the “inspiration” for a posting of my own, and then I hesitate to post it. I don’t really want to be derivative; thus, I post less often. So, I’ve managed to begin a blog, and then pretty much drop it for months.
Yet I find myself coming back, mostly because I feel there are still a lot of topics where there is an ongoing struggle to come to terms with some decision-making, especially related to academic research, intellectual property protection, etc. In the past weeks, I’ve done a bit of cursory reading on several topics that are part of some student team projects I help manage, under the umbrella programmatic theme of “Innovation to Enterprise.” The teams are each assigned a project from a faculty research group, and they are asked to explore the commercial potential of these projects. Most of the students are entirely new to the concept of “technology transfer” and have no clear understanding of new ventures or entrepreneurial activity. The students find it challenging to begin their projects, and they are frustrated when they realize that, more often than not, the results of the research are so tentative or nebulous that they can’t draw proper conclusions.
I once dealt with a company that expressed a similar frustration. They were working with researchers under an agreement that provided them with rights to review certain “Invention Disclosures” and gave them a set time for deciding whether (or not) they wished to license the “intellectual property” associated with the disclosure. At one point, I dutifully called for their response on a particular disclosure, as I was expected to make a recommendation for next steps based on their decision.
The CEO was a bit shrill when he replied, more or less, that there “isn’t enough information” to decide.
His response wasn’t unreasonable on the face of the matter–the disclosure probably DID NOT include enough information to make a “decision.” On the other hand, I was accustomed to making similar decisions with much less information. Perhaps I was less than sympathetic in this instance, since some company contracts lawyer had inserted these decision deadlines as a way to keep both the tech transfer office and the academic researcher on some kind of “leash” as it were. There were too many points at which the research path and the commercialization strategy diverged. The company wanted a nice comfortable “oversight” role in the research without being directly involved, at least until they could make THEIR decision.
The scientists were too busy making their own decisions–about publications, conference presentations, grant proposals–to bother with the “tech transfer stuff” as part of the mix. To complete the muddle, the structure of the tech transfer office, and the IP policy, greatly restricted the ability of the researchers to make their own decisions on that aspect of their projects. They didn’t spend much time thinking about intellectual property since it was explicitly not considered part of their “job” in any practical sense. So for the scientists, lobbing in a few half-baked invention disclosures might be their entire contribution to fulfilling the contractual obligations involved. Some researchers might unilaterally decide that commercialization isn’t a serious option, or at least not one they wish to pursue. Thus, they will go about their work without making an effort to consider intellectual property except within the context of research and publishing–plagiarism, that sort of thing.
It is, ultimately, the decision of the researchers to start the commercialization process. Even without the terribly drafted contract and commercial “partner,” I wouldn’t have expected the research group to be super diligent about getting a properly drafted invention disclosure into my hands. When a researcher does make a decision to pursue commercialization, the existing academic infrastructure does not support that direction. How could you possibly justify putting time and effort into work that doesn’t earn you the “rewards” of academia? That’s not always the case, granted, as some researchers do manage to come to terms with the awkward system, and are interested and/or motivated to move forward with commercial development. Fortunately for my student teams, the projects they are working on fall into that latter category. Unfortunately, that doesn’t make the process any less opaque.
It’s like a sporting event where all the researchers are players who start by playing one game and, at some mysterious signal, shift to playing another game with different rules. More confusing still, a whole new set of players arrives, in the form of the technology transfer office, the patent attorneys, sometimes even corporate partners and investors. So the challenge is finding a way to make decisions that are consistent within the context of the situation, recognizing that many of the parties are still learning how to play a game with some obscure rules.
The researchers may be quite willing to work with a tech transfer office, or anyone, frankly, who is willing to step in to make decisions necessary for the business development effort. That doesn’t mean they have made enough effort to understand the process so that their own work is directed toward that end. In general, academic research isn’t directed toward the same ends as business development, and even if it were, business development isn’t that easy either. If it were, everyone would step up and start their own company, develop wonderful products, and reap the rewards. It’s obvious that even the most motivated entrepreneurs can enter the market with a new product only to be met with indifference on the part of their desired customers.
My goal, as it has been since I took my first job in “technology transfer” 20 years ago, is to make sure that the decisions made by the people involved are the best ones they can make, with the information available at any point. This may result in decisions that aren’t consistent with certain institutional metrics for commercial enterprise–numbers of disclosures, stories of successful startup ventures–but hopefully those decisions are ones that best serve the goals of the research enterprise, which is, in the end, the goose that is supposedly laying golden eggs. (But I begin to mix my metaphors, or something like that, so will close for now).
Fat Startup: Learn the lessons of my failed Lean Startup — Word Sting, by John Finneran, CFA, principal of Word Sting
I wanted to comment on the post linked above, because 1) I am working a bit with the “lean startup” model with student project groups, and 2) I am always a bit skeptical of “flavor of the day” business jargon. I found it interesting that this was posted online shortly after the Harvard Business Review published “Why the Lean Start-Up Changes Everything” by Steve Blank. This release, in the May 2013 HBR, heralds the alleged “entry” of the lean startup concept into “mainstream” business thinking. While the two posts aren’t directly related, John Finneran’s brief description of his own experience in attempting to “implement” lean startup seems instructive at one level–even if “lean” is the right way to go, it might not be easy to do it “right.” It isn’t clear how much of the story might be due to incorrect implementation of the concepts, or whether the concepts are not broadly applicable to all situations. In my own brief experience, I hesitated to fully embrace the lean startup methodology due to my own concerns along those very lines.
On the other hand, I chose to use the concepts in exactly the same context that Steve Blank implemented them–working with university students who are engaged in “first time” startup efforts. These students are engaged in learning activities, and not primarily in the startup activity (regardless of the rhetoric around the subject). I found the business model canvas and the very slick, exciting Lean LaunchPad MOOC to be useful tools for students with little or no experience in business research. In doing so, I hoped to leverage the fame and popularity of the “startup party” mentality, expecting this would be motivating to students. I also found this to be an efficient direction, since I don’t have a burning desire to set up my own “school” of thought on the subject. However, I am nothing if not practical, and for me, this model is just one way to have a standard methodology for asking the right questions.
In our student team projects (exploring real-world application of ideas to develop new products or new businesses) there are fewer of the downsides expressed in the “Fat Startup” article. The “potential clients” who are contacted understand that the “startup” is really part of a class project. The students, no matter how serious they are about moving forward with the new venture, still expect a ‘grade’ for their final efforts. For many of the projects, the effort is being made too early for real “customer” interactions. These students are doing research, and there is no doubt about it. In fact, my expectation is that the majority of the projects will result in “failed” experiments. I try to prepare the students for this by telling them so–but I also tell them they will learn very valuable and practical lessons that they can use when they later find the right path to a startup (or development of a new product).
These projects offer a form of “real world” experience, but maybe not so “real” as that. The reality is in the tasks, but not in the context. The so-called enterprise may be little more than a group composed of college friends and roommates. They generally fit this activity into their overcrowded calendars in the same way they might do for fraternity or sorority functions (whether service or social). I can suggest that they use the Lean Startup methodology for their projects since I find it has some solid elements of logic, but I’m not concerned they will rely on this effort to support themselves (or any dependents). And they aren’t in a position to make promises to customers, just take suggestions and see if they can find a path that looks promising…but they really are planning a future journey. Their project might involve “getting to the station” as their first “step” and they still have the option to take another train (or even consider an entirely different mode of travel, maybe a boat).
Don’t get me wrong–I wouldn’t suggest the model to my students if I didn’t find it valuable. I just want them to use their own judgement, and learn how to THINK before they make decisions. This includes the decision on whether, or not, to rely on something like the “lean startup” concept for taking that leap into the deep (and muddy) waters of entrepreneurship. University faculty are in much the same position as students, and they are being given direction to use the lean startup model in their own new ventures (for example, the NSF Innovation Corps (I-Corps)). I feel this might be useful with some faculty–those who can truly put themselves back into “student” mode for the purposes of learning a subject (business or entrepreneurship) of which they can comfortably admit ignorance. It might not be applicable to all new ventures or to all faculty entrepreneurs.
The moral of the story? Just be careful about jumping on passing bandwagons.
Both of these short “editorial” pieces ask an interesting and somewhat philosophical question about the role of the university in student “entrepreneurship.” The linked articles (by Nicholas Thompson, editor at NewYorker.com and Stanford alum) question some of the more “intense” levels of involvement that many universities are embracing, with schools like Stanford as our role models. Now, I’ve always tried to resist using “what Stanford does” as a yardstick–I even included a nice little slide in one of my presentations, showing little lemmings jumping off a cliff with a cartoon balloon from one shouting as it fell “…well, Stanford does it!” But the urge to follow their lead is almost irresistible.
At its heart, this is nothing more than a bit of “peer pressure,” since every university is looking to establish a reputation–presumably a “good one” too–showing how its own activity reflects the “best” new practices. In some cases, it may be near impossible to replicate these practices (we’re no Silicon Valley), so at least that provides some protection from following all the yellow brick roads “we” see. It’s easy, though, to wish your own institution of higher education could collect all the accolades that a school like Stanford receives.
After some reflection, I realized that the piece reminds me a bit of the book I’m reading (audiobook format as usual these days, so “listening to” is more precise), “Antifragile” by Nassim Nicholas Taleb (click here for his website and here for the Amazon link to the book–note I am not an Amazon affiliate, just being helpful). Now, that is a bit of a leap, but I tend to think that way, so it’s not surprising if you know me very well at all!
In the book, which I confess to not having finished as yet, Nassim Taleb refers to a phenomenon he calls “lecturing birds on how to fly” and makes some very disparaging references to Harvard Business School faculty (not Stanford faculty, but I’m betting he would do so if he weren’t more centered on the East Coast). In many ways, his argument is also along the lines of a sort of “chicken and egg” situation (even though chickens don’t fly). As I see it, a school like Stanford provides a lot of opportunities that “attract” very entrepreneurial students, very smart ones at that, and then the “university” manages to get those little entrepreneurial birds to fly. Many other universities then follow Stanford’s lead on how to teach their own students the Stanford way of entrepreneurship, since it has such great success.
If you look at it that way, it seems that some of the things critiqued may be simply circumstantial, and Stanford isn’t really “doing” anything except meeting some expectations of the students already there. Is this “harmful” to Stanford in some way, as Mr. Thompson is pointing out? Perhaps, but it may be more true to say that Stanford is being given credit for ‘causal’ influence that isn’t there. In some cases, maybe there is also a negative impact on students (or more precisely, on their education). Certainly, I think it’s a good idea to ask the questions…but is someone going to try to find an answer? Regardless, it serves as one more example for me to use when people try to propose the next “me too” model for my own programs or projects.
This is hard to resist–who among us doesn’t feel the need for validation that takes the form “well, I found out that X is doing things this way”? This sort of thing happens regularly: I was once contacted by a colleague at another university (no names), who was engaged in a contract negotiation with a company. The company had cited my own university as an example of a peer that had “agreed to these terms,” so she called me to confirm. One of the reasons she did so was that the terms seemed “less desirable,” and she seemed to suspect the company was not telling the entire truth. Now, at this point, I didn’t want to get into exactly what my organization had (or had not) agreed to, since there were too many complicating issues to explain. But I did give her one piece of advice that seemed to help. I told her, “Just go back with: even if ‘University X’ agreed to those terms, that’s not a good reason for us to agree.” As I explained, there might BE good reasons for her to accept those terms, but the company needed to work through those specific reasons with her–not just insist “by reference” that since another university agreed, it was OK.
Otherwise, I am continuing to “read” the Taleb book, and I hope that there are some more interesting issues raised. As I understand, Nassim Taleb is nothing if not interesting, and I’m looking forward to learning more about some of his insights on innovation.
This article argues that perhaps “we” (as in, the administrations of universities, and the government) are trying too hard to push research commercialization. The argument is primarily against special spending for programs and financial incentives to faculty encouraging them to pursue technology transfer. It is an interesting take on the question of university technology transfer…and I admit, I’m more than a little sympathetic to the views expressed. Which is unfortunate for me, since there are some who feel that my current role within the university is to find ways to “increase” commercialization levels on our campus.
Granted, I don’t consider this to be the essential role I play in this regard, but others would consider it a “failure” on my part if some metrics related to commercialization don’t “increase.” I tend to take a more balanced approach. I don’t want to “increase” commercialization, per se. I am working to reduce or remove roadblocks to commercialization when there is a good opportunity to take some research findings to market. There may be unnecessary difficulties–generally caused either by lack of experience on the part of faculty, or by administrative or financial issues that limit the perceived upside to pursuing commercialization. The article does a good job outlining the failure of “additional” incentives, and points out that the natural incentives exist for those cases where there is a big payoff–and as I often point out to faculty, those tend to be commercialized in spite of so-called barriers to the process. Most faculty aren’t overly influenced by the money argument anyway, although it falls under the category of “great if it happens.”
I’m not sure the article does justice to another aspect of the problem, however. There are instances where there is a clear benefit to taking the research outcomes and making them available in the “real world,” but no clear “profit motive” to do so. Faculty may be very interested in seeing the results of their work applied and in being part of the effort to make this happen. They are doing research ostensibly to help “solve” problems in many cases. Often, this leads to a lot of confusion as to how the university might proceed. There are quite a few options to consider, such as:
- Publish the results and make them “freely available.”
- Pursue some kind of “open innovation” strategy that includes a level of intellectual property protection (such as open source licenses for software).
- Try to pursue commercialization through entrepreneurship, which may not include licensing of intellectual property.
- Pretty much do nothing but sometimes think about it, etc.
All of the options, while somewhat straightforward, can lead to paralysis by analysis and other common forms of deadlock, especially if the people involved (the research team) are inexperienced and unfamiliar with the process. The researchers may have a goal that is consistent with technology transfer and licensing, yet find that the pieces never quite come together properly.
For example, their response may be to publish, but this winds up being unsatisfactory because there is no clear transfer of an idea into use. If there is a decision to hold off publishing, it may rest on a naive belief that they will be able to enter into a commercialization relationship–and that if they can’t, it wouldn’t be fair for “someone else” to make money off of the “idea” from the university. Conversely, potential partners may be confused as well, if there is not a clear IP position that can be evaluated so they can make a decision to “license or not license.” There may be some strategies that the faculty can pursue in these cases, but as the Businessweek article points out, adding incentives doesn’t solve them.
The article further references an earlier publication, by Richard Jensen of Notre Dame and Marie and Jerry Thursby of Georgia Tech titled “Disclosure and licensing of university inventions: ‘The best we can do with the s**t we get to work with.’” (click here for this paper). The Businessweek article notes the following (emphasis added):
The title, taken from a comment made by one of the licensing officers, sums up what happens when you give universities an incentive to commercialize additional faculty inventions but you can’t do anything to improve the quality of the inventions themselves.
Now, you can perhaps characterize those situations that I refer to above in this way–the “inventions” are not of proper “quality” to license. As a licensing manager, I can help faculty work through the problems and see if there really is a diamond in the rough, but we can’t make a diamond out of a piece of coal. But notice, a piece of coal can still be worth something. And it can still be important to someone, even if there is no clear-cut way to “commercialization” for that.
There are incentives for most licensing offices to work ONLY with faculty who have the “quality inventions,” and no incentives for those same licensing officers to find a way to “work with” the so-called “s**t,” as it were (note, I don’t approve of a licensing office using that attitude when referring to an invention disclosure–although I can feel a bit of empathy). So perhaps there is some room for universities to take some steps to encourage efforts at commercialization, and some of this may be in the form of incentives (although perhaps changes to incentives for licensing staff, not the faculty).
The moral of the story is that incentives aren’t necessarily the best answer in all situations, and as they say…be careful what you ask for because you just might get it.
Infographic Created by: ClinicalPsychology.net
This posting isn’t particularly related to technology transfer, and not even, strictly speaking, to intellectual property. But I found this infographic to be quite interesting, and it struck a nerve with me, especially due to the recent news coverage of Jonah Lehrer’s unfortunate decisions in writing his latest book (for reference and to read more about the story, click here). There is a common theme to both that leaves me troubled and sympathetic at the same time.
The troubling aspect is that there are certain individuals who apparently think some “rules” apply to other people, but who are more flexible when applying these to their own situation. This is true even for some of those our society would consider the most well-educated and intelligent. Perhaps they feel their own intelligence is a license to make a “judgment call” about whether, or not, a rule is a good one. Maybe they even convince themselves that they are correct in the “spirit of the law,” as it were, and that others don’t comprehend all of the nuances of their “particular” situation. They may claim to see no rational justification for a particular rule.
Nonetheless, the rules, or the laws, exist for a reason.
Even IF you can make an argument that a particular rule or law is inherently flawed, that doesn’t justify making a decision to break it unless you also acknowledge this is what you are doing. I have no problem with someone taking a stand against an unjust law or an unreasonable and arbitrary rule. But when the rules or principles of good scholarship are violated by a researcher as described in the infographic, it undermines the foundation on which other researchers are attempting to build new knowledge. Researchers who do this are intentionally making choices to enhance their own professional status, to add achievements or successes to their reputation that are fraudulent. In this situation, researchers are allowing self-promotion to trump the social contract that is essential for the research community. The scientific research enterprise has evolved such that researchers are meant to benefit from the work of others, in order to advance the overall body of knowledge in a particular field. There may be situations in which, truly, one is only hurting oneself with bad decisions, but this is not one of them.
The sympathetic response is just a reflex, as I realize that in so many cases these researchers–academics, scientists, scholars–are only human, and it is so easy to slip into these habits. Most of the training or mentorship that people receive is focused on field-specific knowledge (Jonah Lehrer, for example, has an undergraduate degree from Columbia with a major in neuroscience). Students are expected to master large bodies of scientific or technical knowledge, while the ethical and/or philosophical principles of scholarship and research are given little emphasis.
Even when someone with a great deal of interest and motivation to learn and understand attempts to grapple with some of these issues, it can become confusing. What is the difference between copyright infringement and plagiarism? Can you truly “plagiarize” your own work? For me, it’s natural to assume that someone didn’t really “mean” to do something wrong, as I like to think the best of people, especially of those engaged in scientific or scholarly research. But there seems to be an element of defiance in some of this behavior. Given my experience in academia, I repeatedly encounter students who explicitly discount some of the “older” traditions of scholarship. Their notions of how to appropriately “paraphrase and cite” another’s work are often made on the assumption that it doesn’t matter that much. Frequently, they will even defend some of the most egregious examples of plagiarism (perhaps, copying entire sections of a Wikipedia article and “changing the words”). Their position appears to be that they are merely playing an intricate game, and that the end results (the appearance of “expertise”) justify their means.
I’m sure that there are multiple levels of self justification and self deception that allow researchers to make this kind of decision. They may think (or rather “feel” since I can’t say they “think” much at all on this subject) that this is not so different from driving 75 mph in a 70 mph zone–as long as no one gets hurt, and you don’t get caught, what is the harm? If you go 85 mph, is it that much worse? Certainly there are all the earmarks of a slippery slope that will lead the unwary or lazy researcher into a spiral of unprofessional and unethical behavior. Likely, each little step seems more akin to a “white lie” and not anything ugly or immoral, like fraud.
Still, it is important for everyone–both as individuals and as a community–to see that the rules and laws that we put into place are good ones, and that everyone understands that it is important to follow them. The rules must truly support the common good, so that following them is more beneficial than breaking them. Even if it seems that breaking the rule will be “better” (for your own benefit at least), this isn’t truly the case. This is one of the most basic lessons that all good parents attempt to teach their children–yes, the homework is hard and skipping it “feels” better now, but in the long run, you are better off doing it. Even if you think that it won’t hurt anyone, even if you probably won’t “get caught,” it’s still important.
Jonah Lehrer was, by all accounts, considered a gifted and insightful writer, but now each of his prior achievements in life is suspect, so he is certainly paying a price for his own poor judgement. Perhaps, as a journalistic writer, his actions didn’t undermine a great deal of “serious” scholarship or research, but there is still a cost to society. How many people bought his book with sincere interest and respect for his reputation, only to now feel a sense of betrayal? They spent some small amount of their own resources on this–perhaps not a huge loss to most of them, but they were defrauded at least to some extent. For the scientific misconduct described in the infographic, the cost may be immeasurable. A scientist who falsifies data may be guilty of steering others away from promising directions that could result in tremendous advances, or of leading them afield through false leads, wasting valuable resources in vain pursuit of something that was never there. Thus, my tiny impulse to sympathy is short-lived.
If someone is truly confused, and not simply feigning this as a rationalization for their choices, then the answer is to be more open about these issues. We should be more explicit in teaching students how to properly conduct themselves in the course of research and publication. This can include more education on how intellectual property rights figure into the equation, but it is important to realize that this is more than an issue of “property rights.” All of us could probably benefit from a bit more self-awareness and reflection in our own professional conduct in this respect. I’m sure that many of these people who have admitted “misconduct” might, at one point or another, have written a very similar posting, and felt they would not fall prey to such temptation. Your own “bad habits” might seem small and insignificant, but it’s important that you can say you actually made the effort to do the right thing, and not take advantage of the loopholes or rationalizations.
In the social sciences, unintended consequences (sometimes unanticipated consequences or unforeseen consequences) are outcomes that are not the outcomes intended by a purposeful action. The concept has long existed but was named and popularised in the 20th century by American sociologist Robert K. Merton.
Unintended consequences, From Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/Unintended_consequences
In popular discourse, people often refer to “the law of unintended consequences” when debating the merits or shortcomings of a particular decision, or course of action. Like any sufficiently interesting and yet complicated subject, it can be difficult to fully grasp what is really at the heart of such references. It recently struck me that this is, in part, at the center of the many debates on the proper role of the university in commercialization of scientific research. The initial inspiration for this post comes from a blog posting by Gerald Barnett (Research Enterprise, Oh, to be the happy dog again–side note, I try to read Gerry’s blog as often as possible and recommend it highly). In my experience the technology transfer office may be trying to accomplish goals that are not clearly defined or, as highlighted by this posting, are actually in conflict with some of the other goals of both the university administration and the faculty researchers.
It is all too easy to get swept up into the rhetoric on how the Bayh-Dole Act allows universities to “benefit” financially by licensing patents arising from federally sponsored research. From that basic premise arises a series of decisions and actions with consequences, both intentional and unintentional. As the Wikipedia article summarizes the concept, unintended consequences can be roughly grouped into three types:
- A positive, unexpected benefit (usually referred to as luck, serendipity or a windfall).
- A negative, unexpected detriment occurring in addition to the desired effect of the policy (e.g., while irrigation schemes provide people with water for agriculture, they can increase waterborne diseases that have devastating health effects, such as schistosomiasis).
- A perverse effect contrary to what was originally intended (when an intended solution makes a problem worse), such as when a policy has a perverse incentive that causes actions opposite to what was intended.
Note, this summary presupposes that not all “unintended consequences” are negative. However, the negative ones tend to be the consequences eventually cited as unintended–nearly every positive outcome of a particular decision or action has someone claiming it as his or her own particular intention. Unfortunately, many perceive this as a challenge to make “better” choices and so avoid the negative consequences.
Thus, the technology transfer offices confidently point to “success stories” from the canon of technology transfer gospel as a model for their particular University to embrace—whether that is actually a viable alternative or not. Various University officials or administrators then look to the tech transfer “operation” as a source of alternative income, one that is desperately needed, and begin to expect ever-improving “metrics” in terms of licensing performance. If your office realized licensing income of $10M last year, what are the projections for the following year? What is the projection for next year, and the years after that? Why was there a drop of $2M this year versus the prior year?
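Those year-over-year questions amount to nothing more than simple arithmetic over an annual series. A minimal sketch of the exercise, with entirely hypothetical figures (real licensing income is far lumpier, often dominated by one or two outlier licenses):

```python
def year_over_year(figures):
    """Given (year, amount) pairs, return (year, change-from-prior-year) pairs."""
    return [(yr, curr - prev)
            for (_, prev), (yr, curr) in zip(figures, figures[1:])]

# Hypothetical licensing income, in millions of dollars
income = [(2010, 9.0), (2011, 10.0), (2012, 8.0)]

for yr, delta in year_over_year(income):
    direction = "up" if delta >= 0 else "down"
    print(f"{yr}: {direction} {abs(delta):.1f}M vs prior year")
```

The arithmetic is trivial, which is rather the point: the hard part isn’t computing the delta, it’s the institutional expectation that the delta must always be positive.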
If the technology transfer office produces alternative metrics—numbers of licenses, startups founded, patent applications filed, or issued patents—they are likewise put on a track to reproduce or improve those metrics year after year. Often, these become a level of baseline performance for a university versus the performance of “peer institutions” or “aspirational” peers. If your office can’t easily produce the metrics (and some of these are “easy” to produce, such as number of invention disclosures) what then? This can lead to an implied commitment to invest in the metrics—it’s important to remember that these decisions and actions are done at some cost. This might include an annual patent budget, aimed at filing a respectable number of patent applications each year. After all, so the argument goes, you can’t expect home runs if you don’t get a nice number of “at bats” or base hits. If you can produce enough cash, then you can produce patent applications, even issued patents. Funding a technology transfer office with a director, along with some support staff and maybe a couple of technology licensing managers, can represent a sizeable commitment of “overhead” funding.
This is when the tail might begin to wag the dog, and you learn, as Gerry points out, that this doesn’t mean a happy dog. A lot of investment and effort is being put into producing a pot of gold at the end of the research rainbow, which means dealing with troublesome leprechauns and associated tricky business. Meanwhile, everyone is still expecting those “smiles and fluffiness and public purpose, stardust and unicorns and glitter,” as it is nicely summed up in the original blog posting. While I’ve pointed out the limitations of analogies in an earlier posting (here), this does get a couple of points across! You’ve got to remember, not every fairy tale has a happy ending, and there is always at least one character on the losing side. This means someone gets stuck with the role of evil stepmother or nasty fire-breathing dragon.
It’s easy to keep with the script, stick with the stock characters and plots, rather than trying to put together a unique story. But this gets us back to that “law” of unintended consequences. For all practical purposes, it’s impossible that positive consequences will be presented as “unintentional” and so there are no orphans in that part of the fairy tale. As for the rest, you might get some grudging acknowledgement of partial responsibility for negative twists in the story, but mostly you get rationalizations from the parties involved. I like to think that we can work out some new plots for technology transfer tales, and maybe even endings with a few happy dogs. You may still have a lot of those unintended consequences of course, but hopefully the “intended” consequences will make those worthwhile.