Reflections on FAIL

On Friday we had the great opportunity of reflecting on the opportunities for failure that our projects might face.  Everyone in the class took time to brainstorm the ways in which we predicted each other's projects might succumb.

Reflections on reflections on failure

There is a rich history of failed projects for social good, and a great deal of gnashing of teeth over how useful it is to pay close attention to failure.  (The #FailFaire conference series, started by MobileActive.org, comes down firmly on the side of amplifying failure.)

The tricky aspect of all reflections on failure (or attempts to learn from success) is figuring out which variables actually distinguish failed projects from successes.  There are so many ways something can fail, everything from design to timing to poor community connections to infinitely many other dimensions, that it's difficult to say with rigor which things did or didn't lead to a particular project's failure.  This difficulty is compounded when the question is prospective: for a project that hasn't failed yet, which are the things that could do it in?  Where are the actuarial tables for design projects?  Can we get more rigorous?

Benjamin Mako Hill has tried to get more rigorous, at least retrospectively.  He gave a fascinating presentation at the Berkman Center last year (well worth an hour of your time) on eight "almost Wikipedias", in which he asks: why did Wikipedia succeed where the other collaborative encyclopedias started at roughly the same time did not?  Through qualitative and quantitative analysis, Hill identifies four principles that he believes led to Wikipedia's relative success:

P1: Contributors struggled with goals that diverged even slightly from tradition.

When an encyclopedia wasn't a plain old Britannica-style effort, people had trouble figuring out exactly what to do with it.  The project needed to fit within existing conceptual frames.

P2: Substantive focus on content, not technology.

Whereas some other projects focused on building the best tool, Wikipedia took pre-existing wiki software, and focused on writing articles.

P3: [The wiki model] lowered transaction costs.

Some early encyclopedias required editing HTML, which was too difficult; others required confirmation as an approved editor, which was too high a barrier.

P4: Wikipedia succeeded because it “hid” authorship from editors.

To the extent that these principles explain Wikipedia's relative success, the next obvious question is: how repeatable is this?  Do the principles say anything about other design efforts?  Could they be predictive?  For an early encyclopedia project this would have been tricky: the delicate balance between a focus on content over technology and a tech-enabled reduction in transaction costs is one of the major difficulties in establishing a design.  And who could have predicted the aggregate effects of hiding authorship?  We could easily imagine an alternate universe in which P4 read "preserve authorship by rewarding participation with attribution, to motivate participation."

Consensus Project: possible failures

With that in mind, here are the results of the 5-minute post-it brainstorm for the consensus project, grouped roughly into the following categories:

  • Digital Divide
  • Scope
  • Framing
  • Prior Art
  • Codesign
  • Tech
  • General

To some extent, these possible failures are highly consonant with the general palette of design anxieties that I already have going into a project like this.  As is to be expected in a 5-minute brainstorm, some of the points raised aren’t too useful (some “General” failures amount to little more than “it doesn’t work”; some of the “Tech” failures amount to “you wrote shitty code”).

Getting beyond the "you didn't do quality work" level of fails, I think the five themes most worth digging into are digital divide, scope, framing, prior art, and codesign.  For each of these categories, we've taken the spread of possible failures it contains and tried to pull out a few actionable design principles which, if followed properly, might guard against those failures.  We can come back to these principles periodically during the design process to check whether we might be headed down a dangerous path.  This by no means assures success, but it at least allows us to check whether our designs end up matching our prior intuitions.

Digital Divide

The digital divide question is one of the most central to our project's design.  At its root, our goal is to improve the way groups can use consensus processes.  But if a group adopts a tool which structurally excludes some of its members, that is a huge fail.  The other side of the coin is that digital technology could enable more participation: limiting consensus processes to in-person meetings can structurally exclude people who don't have the time or money to travel to a meeting.

The more insidious difficulty is uneven adoption: a tech-savvy subset of a community begins to use the tool heavily, and thus structurally excludes one particular set of participants.

Some principles to address these concerns:

  • When the tools are in use, participation should increase or remain constant.
  • Levels of participation should be visible to users of the tools; participants should notice if some members of their community are not using them.  (See the sketch after this list.)
  • Control of the tools should be accessible to all participants, regardless of their degree of technical sophistication.
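To make the visibility principle a little more concrete, here is a minimal, hypothetical sketch of what a participation check might look like.  Nothing here is part of an actual tool; the data shapes (a member roster and a log of actions) are assumptions purely for illustration.

    from collections import Counter
    from datetime import datetime, timedelta

    def participation_report(roster, actions, window_days=30):
        """Count each member's actions within the last `window_days`, and flag
        members with no activity at all, so the group can see who the tool may
        be leaving out."""
        cutoff = datetime.now() - timedelta(days=window_days)
        counts = Counter(member for member, when in actions if when >= cutoff)
        report = {member: counts.get(member, 0) for member in roster}
        absent = [member for member, n in report.items() if n == 0]
        return report, absent

    # Hypothetical example: a group of four where one member never uses the tool.
    roster = ["ana", "ben", "caro", "dev"]
    actions = [("ana", datetime.now()), ("ben", datetime.now()), ("caro", datetime.now())]
    report, absent = participation_report(roster, actions)
    print(report)   # {'ana': 1, 'ben': 1, 'caro': 1, 'dev': 0}
    print(absent)   # ['dev'] -- visible to the whole group, not buried in an admin log

The point isn't this particular metric; it's that whatever measure we end up using should be surfaced to all participants, not just to administrators, so that uneven adoption becomes visible to the group itself.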

Scope

Scope questions are hard.  Is it better to build something that is highly suited to a particular purpose, or something that is more general?  Software developers are frequently warned against “premature generalization”; at the same time, inadequate generalization leads to lower utility and adaptability.

  • Don’t be an island: The tool should integrate with existing systems.
  • Don't build facebook: The tool should not attempt to replace any existing functionality without a very good reason for doing so.
  • Don’t be everything to everyone.  If the needs of different potential users are too different to reconcile, pick one.  Do one thing well rather than many things poorly.

Framing

The framing failures are related to the scope failures, but focus more on the problem statement than on the proposed solution.  The sheer number of existing tools for online democratic decision making, most of which have failed to gain traction and user acceptance, certainly gives me pause: is this actually something there is a need for?  Here, I think the best we can do is pull a principle from Mako's analysis of the eight almost Wikipedias:

  • Ground the design in existing practice.  Prefer what people actually do (and how they think about what they actually do) over what we might imagine is better.

For consensus design, this means looking at how people actually use consensus, including all the emotional and non-verbal communication, group understanding, tense arguments, structured facilitation, and education.  We can't just pick out a single component, like a voting mechanic, and expect it to function as the whole.

Prior art

While this category only captured a couple of notes, I felt it was worth highlighting carefully.  In addition to the existing tools for democratic deliberation, there are already very sophisticated tools for general online communication and collaboration (etherpads, google docs, mailing lists, facebook, twitter, etc.).  Are we building something worthwhile?

The principles we might come up with here are fairly similar to the scope principles, so rather than repeat them, I'll add just this one:

  • Develop a theory for why something failed, and address the reasons, before building something similar.

Codesign

In some sense, the failures listed here are similar to the Tech failures that amount to "don't do crappy work", but there are some more specific points worth considering.  The biggest is that this project lacks a singular, specific community partner for its duration; instead, there is a larger number of more diffuse participants, each with less individual involvement.  I think this principle captures the worry:

  • Don't build anything without a clear target user who is participating in the design.

Conclusion

So in the end, we have nine actionable principles that we might consider as "ways to avoid failure", or at least, the ways we imagine right now that we might avoid the failures we can imagine right now.

  • When the tools are in use, participation should increase or remain constant.
  • Levels of participation should be visible to users of the tools; participants should notice if some members of their community are not using them.
  • Control of the tools should be accessible to all participants, regardless of their degree of technical sophistication.
  • Don’t be an island: The tool should integrate with existing systems.
  • Don't build facebook: The tool should not attempt to replace any existing functionality without a very good reason for doing so.
  • Don’t be everything to everyone.  If the needs of different potential users are too different to reconcile, pick one.  Do one thing well rather than many things poorly.
  • Ground the design in existing practice.  Prefer what people actually do (and how they think about what they actually do) over what we might imagine is better.
  • Develop a theory for why something failed, and address the reasons, before building something similar.
  • Don't build anything without a clear target user who is participating in the design.

I think a next step for us might be vetting these principles with our community partners, and modifying them or crafting new ones accordingly.