This selection comes from the upcoming book, Online Public Engagement, due out in 2016 from Routledge Press. You can learn more about the Wise Economy Workshop’s strategy for doing more effective public engagement — whether online or in real life — in Crowdsourcing Wisdom: a guide to doing public meetings that actually make your community better (and won’t make people wish they hadn’t come).
Of course, the most user-friendly public engagement interface will be of minimal value if you cannot readily create the experience and obtain the information you need from the software within the real-world constraints of your time, expertise, staff and capacity. Although it is sometimes easy to become entranced with the aesthetics of a particular interface or the uniqueness of a particular approach, it will be to your benefit to give close scrutiny to the administrative or back-end functions of the app or platform you are considering. Here are a few aspects to examine:
Appearance customization. The extent to which you can control the visual appearance of an app or platform depends on two related issues: the “skins” you can apply and the degree to which you can modify the provided skins; and the number and size of visuals, such as graphics and photographs, that you can insert easily. Skins are preset design configurations for fonts, text and background colors, icons, page layouts, etc. Within the settings or configurations, one can typically select from a number of design packages, and with one click the entire appearance of the app or site can be changed.
Since all of the design work has been pre-set, a client without expertise in site design can easily select a skin with the confidence that the colors will be compatible, the fonts will be legible, and the page design will generally work. Most reasonably mature sites will have enough skin options to enable the administrator to select an option that is compatible with other existing design materials (logos, report graphics, main web sites, etc.). However, a limited skin palette or particularly unusual existing design materials could create a challenge for you, since you may not be able to find a pre-set option that is satisfactory. Given that skins are typically built into apps as a way to avoid costly and time-consuming customization, your options in such a case may be limited. If you have set design standards for your organization’s web presence that are embodied in a CSS file (CSS is a stylesheet language used to define the appearance of a web site), you may be able to convince the app or platform developer to add your design standards to their master CSS, which would make those design characteristics available to you within the app or platform.
At the other end of the spectrum, some low-cost or free apps may provide no or limited skin options. In this case, it may be obvious to the user that they have moved from the agency’s site to something that is owned by someone else. In some communities, this visual disconnect may be of no concern, but in others it may not only prove visually jarring, but it may raise questions among the public participants about the safety of the site and the privacy of their responses.
Visuals and graphics. As we have noted throughout this book, the use of graphics, including photographs, charts, maps, infographics, interactive media and video, is becoming increasingly important to site and app users, especially as they continue to migrate the majority of their participation to mobile devices with smaller screens and touch interfaces. In most contexts, visuals increase conventional web engagement measures, such as page opens and click-through rates.
Apps and sites that run on older platforms may present a more text-heavy interface, allowing only small spaces for photographs and limited or difficult-to-use interfaces for adding videos or interactive graphics. Newer apps and platforms, on the other hand, may emphasize these types of content, which may present a learning curve for you or your staff and generate some additional need for training and support from the provider. Again, the challenge facing you revolves around striking a reasonable balance between the need to create an experience that invites your community to participate and your obligation to create an interface that you can manage, given your organization’s capacity and the assistance that the provider or others can make available to you.
Participant geography. This is an issue that will matter intensely in some communities, and not at all in others. Standard social media-type platforms are largely location agnostic: they may permit you to identify your geographic location, but you can still post and comment if you do not, or if you do not identify yourself as belonging to the area that is the subject of attention. But for some local planning, development, policy-setting, and similar initiatives, whether or not commenters live or work within a target geography can make the difference between political and popular acceptance of the legitimacy of the public feedback and an accusation that the responses do not represent “true” public opinion.
Online public engagement platforms and apps have to date developed a variety of strategies for addressing the question of a participant’s geography. Many of the simpler apps do not identify a participant’s geography at all, while others use self-reported information, such as a ZIP code or street address provided as part of a registration that the member of the public must complete in order to participate. At this time, one public engagement provider, Vancouver-based PlaceSpeak, pursues the most intensive strategy: the app uses a combination of self-reported information and other data, such as a phone number or an internet connection’s IP address, to verify and pinpoint a participant’s location precisely on a digital map.
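The spectrum of strategies above can be illustrated with a minimal sketch. Everything here is hypothetical: the ZIP codes, the idea of a target-area ZIP list, and the notion of a network-derived ZIP code are invented stand-ins for whatever registration fields and IP-geolocation service a real platform would use.

```python
# Illustrative sketch: comparing a participant's self-reported location
# against a (hypothetical) network-derived location.
# TARGET_ZIPS and all ZIP values are invented for illustration only.

TARGET_ZIPS = {"45202", "45203", "45219"}  # ZIP codes inside the study area

def classify_participant(self_reported_zip, ip_derived_zip=None):
    """Return a rough confidence label for a participant's geographic claim."""
    claimed_inside = self_reported_zip in TARGET_ZIPS
    if ip_derived_zip is None:
        # Self-reported only: accept the claim but mark it as unverified.
        return "inside-unverified" if claimed_inside else "outside"
    if claimed_inside and ip_derived_zip in TARGET_ZIPS:
        return "inside-verified"       # claim and network data agree
    if claimed_inside:
        return "inside-conflicting"    # claim not supported by network data
    return "outside"

print(classify_participant("45202"))           # inside-unverified
print(classify_participant("45202", "45203"))  # inside-verified
print(classify_participant("45202", "60601"))  # inside-conflicting
```

The simplest apps effectively stop at the first branch (no verification at all); a PlaceSpeak-style approach corresponds to always supplying the second argument from multiple data sources.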
As one can probably imagine, the degree to which you can verify a participant’s “claim” to a geographic area can be either valuable or unnerving, or both. On one hand, it may be perfectly legitimate from a public perspective to know whether a participant or a commenter is likely to be directly affected by a decision that impacts a specific area. On the other hand, participants who do not live in a specific area, such as former residents or nearby observers, may have valuable insights to offer, even if their official address does not fall within the subject area. Public agencies and organizations holding public meetings very frequently ask participants to identify their location of residence when signing into meetings, but making fundamentally the same request in an online setting can also trigger hesitation or resistance for two additional reasons. Not only are people often unaccustomed to providing this information in an online opinion-sharing platform, but they may be hesitant to do so for fear that their information will be stolen, that they will find themselves on a spam mailing list, or that their anonymity or privacy may be compromised. This also ties into the question of the public identity of participants, which was discussed in the previous chapter, and moderation, which is discussed below.
Moderation. Moderating online public participation can mean two things, one less commonly than the other. One form of moderation is similar to moderation of an in-person discussion: leading and guiding a real-time conversation between multiple participants by asking questions, nudging participants to stay on topic, rephrasing, encouraging quiet participants to be heard, etc. However, since most online public engagement does not happen in real time at this point, this type of moderation is rarely needed.
The more common meaning of moderation in terms of online public engagement is more targeted: monitoring for and addressing inappropriate online behavior. As we discussed in the last chapter, unpleasant, unconstructive and even threatening or intimidating online commenting behavior, while relatively uncommon, can occur in an online public engagement platform in the same way that it can occur in article comments on a news site or a Facebook feed. Effective moderation requires three key elements: a statement of rules of behavior that participants must agree to before participating, a statement of penalties for infractions of those rules, and a method of identifying participation that violates those rules.
Most commercial online platforms use some combination of human review, peer review and software that scans new comments for specific words or phrases to try to identify unacceptable comments quickly (ideally, before many other participants have seen them). The more mature online public engagement apps and platforms typically maintain staff or contractors who are on call to monitor flagged comments around the clock, since online public engagement can occur at any time. When identified, comments and commenters that are determined to violate the rules become subject to penalties, which typically range from removing the comment, to placing the perpetrator’s account on a probationary status, to blocking the participant.
As you can imagine, moderating comments can become tricky in an online public participation context, for the same reason that disruptive or incendiary behavior can be hard to manage in an in-person setting: public sector agencies typically perceive an obligation to protect free speech and will tend to err, to a point, in the direction of permitting problematic behavior rather than risk being perceived as tyrannical. However, even in a public meeting context, certain levels of behavior typically necessitate rejection or removal, and the same general principle applies to online public behavior.
Most online public engagement platforms establish rules similar to those that cover public meeting behavior: prohibitions against hate speech, threats, personal attacks on non-public officials, etc. On the rare occasion when an infraction occurs, the site’s administrator may remove the comment and send a message to the participant identifying why the statement was unacceptable (that is, which provision of the rules for site use was determined to have been violated) and what additional penalties may be incurred if similar comments are posted.
You may note the term “on the rare occasion when…” in the last paragraph. Despite the popular media furor around trolling and uncivil online behavior, the experience of most commercial online public engagement providers indicates that comments requiring any moderating attention at all comprise a tiny fraction of total comments. Although some providers emphasize moderation (especially those whose marketing focuses on confrontational environments, such as zoning appeals), most consider it a necessary element of doing business, similar to maintaining servers, and do not emphasize it.
In general, sites and apps that use structured methods to elicit public participation and do not rely on open-ended comment fields have several significant advantages: not only do trolls and others intent on being hateful find fewer opportunities to spew off-topic vitriol, but users also find the overall process more constructive, more meaningful and more engaging than when they are faced with a torrent of open-ended comments.
Most commercial online public engagement apps and platforms maintain the moderation function in house, treating it as an element of the software application rather than as a responsibility of the client organization. This is generally prudent on a variety of fronts, including maintaining objectivity and avoiding any perception of bias.
Reporting. For organizations that are new to online public engagement, it can be easy to become enthralled by the novelty of a survey tool, the aesthetic appeal of an app’s user interface or the success story that they hear from another organization. But one critical element that can be easily overlooked in the evaluation stage is the question of reporting. As noted previously, most mature online platforms place less emphasis on long-form written comments and rely on feedback methods that are more readily collated and summarized, such as surveys, simulation results, upvoting and others. This is important for two reasons: first, it works against the human tendency to read a string of narrative comments and remember only those comments that either surprised us or confirmed our pre-existing conclusions. Second, it gives us the opportunity to more clearly understand the full scope of the public’s opinions and priorities in a relatively objective manner by quantifying or visually demonstrating areas of consensus or disagreement.
To achieve these benefits, however, the results of the public participation effort must be able to be understood comprehensively, fairly and quickly, particularly if it is to have any significant impact on public policy and decisions (which is what the participants wanted). As a result, the online public engagement app or platform needs to enable a relatively frictionless, and preferably highly visual, download and summary of the participation. Charts and infographics should dominate the output; the full results should be available, but an executive summary will often be helpful.
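The kind of frictionless summary described above can be sketched simply. The survey options and responses below are invented for illustration; the point is that structured responses can be tallied and presented proportionally, rather than re-read as a stream of narrative comments.

```python
# Sketch of summarizing structured engagement responses.
# The survey answers are hypothetical examples.
from collections import Counter

responses = ["more parks", "more parks", "wider sidewalks",
             "more parks", "bike lanes", "wider sidewalks"]

tally = Counter(responses)
total = len(responses)
# Report each option with its count and share of all responses,
# most popular first -- the raw material for a chart or infographic.
for option, count in tally.most_common():
    print(f"{option}: {count} ({count / total:.0%})")
```

Output in this form (counts and percentages, sorted by frequency) translates directly into the charts and infographics that should dominate a report, with the raw results retained as an appendix.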
Technical Support. Finally, serious consideration should be given to the type and extent of technical support that your online public engagement efforts are likely to require, both at the beginning of the initiative and during its operation. Commercially-available online public engagement apps and platforms, particularly those that operate on a SaaS model, often limit support to a Frequently Asked Questions page and a comment submission form. While many can respond quite quickly, especially during their location’s traditional business hours, few have the capacity to staff around-the-clock live help, either in person or online. With smaller providers, you may find yourself exchanging messages with one of the developers, who may investigate and address your issue personally, while larger companies may employ customer service staff who can walk you through common fixes but have to hand off bug reports or service failures to a technical specialist. The largest providers, or those attached to nonprofits with a civic technology mission, may provide occasional user training via webinars or tweetups, and it is possible that large providers in the future may borrow a page from other online service providers and begin hosting user conferences.
As we discussed in a previous chapter, part of the business case for devising an app-style delivery method for an online public engagement tool is its simplicity and efficiency: instead of customizing for every situation, the developer creates a small number of basic options and limits the user’s ability to change the function and presentation to a limited number of predetermined, thoroughly vetted choices. Ideally, an app approach also limits the amount of onboarding or start-up training that new users need: with a few introductory slides or a brief video, users should be able to import content, set use conditions, select preferences, etc. Although consumer applications can often work in this manner, effective use of online public engagement platforms and apps may require a somewhat higher level of orientation, particularly if the product includes multiple engagement tools or is designed to enable complex content, like interactive graphics.
Social media connections. Just as users should be able to share elements that they find interesting or useful on social media, the administrative functions should make it easy to share new engagement opportunities, trends in responses, updates and announcements to your organization’s social media networks. Again, shares should use the social media platform’s reach to draw viewers to the public engagement site, not ask them questions or invite participation that they may then leave on the social media site itself.