How to do Effective Online Public Engagement when you need to Ask

This selection comes from the upcoming book, Online Public Engagement, due out in 2016 from Routledge Press.  This section uses the framework for understanding different types of public engagement that I laid out in an earlier chapter, summed up as Tell, Ask, Discuss, Decide, and talks about how to do effective Asking-style public engagement in an online context, such as gathering feedback through surveys or visual preference activities.

You can learn more about the Wise Economy Workshop’s strategy for doing more effective public engagement — whether online or in real life — in Crowdsourcing Wisdom: a guide to doing public meetings that actually make your community better (and won’t make people wish they hadn’t come).  


As we discussed in Chapter 2, Asking activities shift the direction of participation: we move from the agency as the sole speaker in Telling to the public as largely the sole speaker in Asking.  Asking participation usually takes the form of what we call "feedback" activities, which include a variety of surveying methods ranging from conventional written surveys to feedback on photographs, road cross-sections, and what-if scenarios.  Almost every known online public engagement app or platform includes at least one method of Asking, and typically several.

In-person Asking methods typically involve a wider variety of formats than Telling presentations, and online Asking strategies tend to closely replicate their print or in-person counterparts.  A brief selection of online Asking strategies available at this time includes

  • Opinion surveys
  • Visual Preference surveys
  • Scenario or what-if surveys

and others.

Most commercial sites and platforms enable some variation of the online survey, either independently or through integration with a dedicated site such as SurveyMonkey.  In many cases, an open-ended survey forms the first step in the process called Ideation, which is discussed in the next section.

As most readers have probably learned, effective survey-writing is a science unto itself, and the difference between a reliable survey result and a skewed one can depend on seemingly minor issues of phrasing, question placement, and the like.  Social science and marketing research methodologies have given significant attention to the presence of unintended (and sometimes intended) biases embedded in survey question design, which can lead participants to respond differently than they would have if the question had been worded another way.  Additionally, the length of a survey and the types of feedback options it offers can have a significant impact on response and completion rates.

Effective surveys rely on questions that will produce quantifiable results to the greatest extent possible, so that total results can be reported in a relatively objective fashion (for example, showing the percentage of respondents who agreed with a statement, broken down by factors such as age and location of residence). However, in a public sector context, the option of open-ended written responses should be offered whenever possible, both because people may need to respond in ways that the pre-programmed response options do not permit, and because, having opened the gates to participation by Asking, withholding an open-ended response option would appear insincere and would deprive the agency of some of the information to be gained from Asking.
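As a minimal sketch of how such quantifiable results might be tallied (the response records and field names below are hypothetical, standing in for whatever a survey platform exports), a breakdown of agreement by age group could look like this:

```python
from collections import Counter, defaultdict

# Hypothetical export: one dict per respondent, with a Likert answer
# to a single statement plus two demographic fields.
responses = [
    {"agrees": "agree", "age_group": "18-34", "zip": "45701"},
    {"agrees": "disagree", "age_group": "35-54", "zip": "45701"},
    {"agrees": "agree", "age_group": "35-54", "zip": "45740"},
    {"agrees": "agree", "age_group": "55+", "zip": "45701"},
]

# Overall percentage agreeing with the statement.
total = len(responses)
agree_count = sum(1 for r in responses if r["agrees"] == "agree")
print(f"{agree_count / total:.0%} of {total} respondents agreed")

# Breakdown of the same question by age group.
by_age = defaultdict(Counter)
for r in responses:
    by_age[r["age_group"]][r["agrees"]] += 1

for age_group, counts in sorted(by_age.items()):
    group_total = sum(counts.values())
    pct = counts["agree"] / group_total
    print(f"{age_group}: {pct:.0%} agreed (n={group_total})")
```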

Drawing conclusions from a collection of open-ended responses, however, can drag a community into dangerous terrain if the comments are not understood in an appropriate context and used correctly.  Even for experienced and trained surveyors, it is easy to be disproportionately swayed by one well-written, pithy, angry, or funny response, or to unconsciously give extra weight to a small number of comments that agree with one's preconceived notions or preferences.  The risk in interpreting written comments, then, is that project staff or elected officials may build a skewed internal picture of what "the public says," mistaking a small number of comments that stand out strongly in their minds for a larger community consensus.  This is a difficult challenge to meet, and it is made more so by the ease with which hundreds of open-ended comments can be created and compiled in an online format.

In general, it is often best to present a collection of open-ended comments to decision-makers preceded by an introductory section that frames the common themes and overarching issues noted across the entire collection.  An even better strategy is to conduct sentiment analysis of the body of comments and share a summary of the results as a framing device to moderate interpretation of the individual comments.  (Sentiment analysis is an algorithm-driven method for analyzing the opinions or emotions attached to particular words or concepts across a body of text.)
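Many approaches to sentiment analysis exist; as one minimal sketch, NLTK's VADER analyzer can score a batch of comments, where the comment list below is a hypothetical stand-in for a set of exported open-ended responses:

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

# Hypothetical open-ended survey comments.
comments = [
    "The new bike lanes on Main Street are a huge improvement.",
    "Parking downtown is impossible and nobody at the city cares.",
    "The plan seems reasonable, though I worry about the cost.",
]

analyzer = SentimentIntensityAnalyzer()
scores = [analyzer.polarity_scores(c)["compound"] for c in comments]

# Compound scores run from -1 (most negative) to +1 (most positive);
# the mean gives a rough framing figure for the whole batch.
mean_score = sum(scores) / len(scores)
positive = sum(1 for s in scores if s > 0.05)
negative = sum(1 for s in scores if s < -0.05)
print(f"Mean sentiment: {mean_score:+.2f}")
print(f"{positive} positive, {negative} negative, "
      f"{len(scores) - positive - negative} neutral comments")
```

A summary like this is a framing device, not a verdict; the point is to give decision-makers a sense of the whole distribution before they read (and are swayed by) individual comments.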

Text-dominated survey methods also pose significant challenges for people who have difficulty writing, whether because of a lack of fluency in the language, physical difficulty reading or typing long passages, or perceptual disabilities such as dyslexia.  Additionally, many people who are perfectly capable of completing a text survey may simply prefer not to, and may choose not to participate rather than endure the annoyance and frustration of doing so.  For these reasons, and because many people interact with visual information more readily than with written information, survey methods that elicit responses to images should also be incorporated into Asking public engagement whenever possible.

Two common methods for Asking participation using visual information are the Visual Preference Survey and map or image mark-ups.  In both cases, the usual in-person techniques adapt to the online context with relatively little difficulty.  Both, however, present additional challenges in interpretation when used online: for Visual Preference Surveys, the difficulty results from the inability to conclusively identify the reasons for people's choices, while most map-based Asking activities face challenges in compiling results and avoiding the risk of over-emphasizing a small number of participants whose views may not accurately reflect the overall consensus.

A Visual Preference Survey presents a series of photographs or other images (typically of a physical site) and asks viewers to indicate their preference for the setting portrayed by marking on a number line that extends from a negative number (indicating varying levels of dislike) to a positive number (indicating varying levels of support).
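To make the mechanics concrete, here is a minimal sketch of tallying such a survey, assuming a hypothetical -3 to +3 scale and made-up ratings:

```python
from statistics import mean, stdev

# Hypothetical responses: each participant scores each image on a
# scale from -3 (strong dislike) to +3 (strong support).
ratings = {
    "streetscape_a.jpg": [3, 2, 2, -1, 3],
    "streetscape_b.jpg": [-2, -3, 0, -1, -2],
    "streetscape_c.jpg": [1, -1, 2, 0, 1],
}

# Report the mean score per image, with the spread as a reminder
# that an average can hide sharply divided responses.
for image, scores in ratings.items():
    print(f"{image}: mean {mean(scores):+.1f}, "
          f"spread {stdev(scores):.1f}, n={len(scores)}")
```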

A Visual Preference Survey works almost identically online as it does offline, but that means it is also subject to the same limitations that have led some practitioners to challenge its use since it was invented in the 1970s.  The most significant issue is that one can seldom be sure exactly what the viewer was responding to.  Did they like the design of the house, or did they like the tree in the front yard?  Did a negative response mean that participants disliked the building, or that they thought it was too big to fit well into their own community as it exists today?  Did the cloudy sky in this picture, or the weeds along the crack in the sidewalk in that picture, lead people to give it a lower preference score, even though that was not the element of the photo we wanted them to respond to?  Short of a detailed debriefing or a focus group follow-up, most Visual Preference Survey administrators never get conclusive answers to those questions, which can make the use of their results problematic.  And the potential for a much larger number of participants in an online Visual Preference Survey means that this uncertainty may compound as well.

A map-marking activity can similarly be structured to minimize the need for written comment.  In general, two types of map-marking online public engagement activities have been developed to date.  The older, and potentially more common, is a sort of perceptual mapping activity, in which participants use a set of icons to mark specific locations as unsafe, valuable, in need of repair or redevelopment, and so on.  In general, the responses are limited to the palette of icons made available by the platform's designers and selected by the agency, although some tools may permit a note of explanation to be attached to the "tag."  Feedback maps of this type date (at least in concept) as far back as the Google mash-up technology of the mid-2000s, although almost no one except diehard technologists could use that system.
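As a rough sketch of how an agency might summarize the resulting tag data (the records, icon categories, and coordinates below are hypothetical, not from any particular platform), a simple tally by icon and by coarse map cell can surface patterns:

```python
from collections import Counter

# Hypothetical tag records from a perceptual mapping exercise: each
# tag is an icon category dropped at a coordinate, with an optional
# free-text note where the platform permits one.
tags = [
    {"icon": "unsafe", "lat": 39.329, "lon": -82.101, "note": "no crosswalk"},
    {"icon": "unsafe", "lat": 39.331, "lon": -82.099, "note": None},
    {"icon": "needs_repair", "lat": 39.327, "lon": -82.103, "note": "potholes"},
    {"icon": "valuable", "lat": 39.330, "lon": -82.100, "note": "farmers market"},
]

# Tally tags by category for a first-pass summary.
by_icon = Counter(t["icon"] for t in tags)
for icon, count in by_icon.most_common():
    print(f"{icon}: {count} tag(s)")

# A crude spatial check: round coordinates to a coarse grid so that
# repeated tags in the same general area surface together.
by_cell = Counter((round(t["lat"], 2), round(t["lon"], 2)) for t in tags)
hotspots = [(cell, n) for cell, n in by_cell.items() if n > 1]
print("Grid cells with multiple tags:", hotspots)
```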

A second, more intuitive method derives from the architectural charrette, in which Post-It notes are often used to attach comments, recommendations, and the like to a map or drawing.  In the online version, a virtual note can be dragged from a sidebar and "stuck" onto the map; unlike a physical note, the online version can incorporate text, images, or even short embedded video clips.  Where this method is used with a small number of participants (again, after the model of a traditional charrette), the process of vetting and incorporating the feedback into the project can be relatively straightforward.  But as with other types of open-ended feedback, drawing defensible overall conclusions about the participants' areas of agreement or consensus becomes difficult in the face of a wide range of individual inputs, and the variety of media that participants may use makes it harder still.
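A hypothetical sketch of the kind of record such a virtual note might produce helps show why mixed media complicates compilation; none of these field names come from any particular platform:

```python
from dataclasses import dataclass
from typing import List, Optional

# A hypothetical data model for a virtual "sticky note": anchored to
# a map coordinate, it may carry text, an image, or a short video,
# which is what makes cross-note comparison harder than with icons.
@dataclass
class StickyNote:
    author_id: str
    lat: float
    lon: float
    text: Optional[str] = None
    image_url: Optional[str] = None
    video_url: Optional[str] = None

    def media_types(self) -> List[str]:
        """List which media this note carries, for triage reports."""
        present = []
        if self.text:
            present.append("text")
        if self.image_url:
            present.append("image")
        if self.video_url:
            present.append("video")
        return present

notes = [
    StickyNote("u1", 39.33, -82.10, text="Widen this sidewalk"),
    StickyNote("u2", 39.33, -82.10, image_url="https://example.com/pic.jpg"),
]
for n in notes:
    print(n.author_id, n.media_types())
```

Because each note may carry any combination of these media, there is no single field a reviewer can sort or count; staff must triage note by note, which is manageable for a charrette-sized group but quickly overwhelming at online scale.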
