Rethinking the Open Data User Experience
“If you build it, they will come” may work in the movies, but it doesn’t necessarily apply to government open data portals. There are significant reasons that governments make their data available to the public: to improve the quality of life in their communities, to spark a thriving business community, and to ensure the government is operating effectively by making data-driven decisions. Although these worthy goals can be accomplished with the best civic portals, simply putting information online without analysis or context will not guarantee citizen engagement, won’t make a community run better, and can’t magically foster a robust environment ripe for innovation and growth. Public open data must be presented in a format that is easily understood by both citizens and employees, in context with related pieces of data, and with dynamic, clear analysis.
Useful and Usable
In early 2014, Socrata analyzed the traffic patterns of government open data portals and discovered that government portals were not getting the levels of traffic and civic engagement that their governments expected and wanted.
Why was it so hard for citizens to find answers to relatively simple questions like “Has crime in my neighborhood increased or decreased over the last 10 years?”
Why didn’t a relevant dataset show up at the top of Google search results and give a straightforward answer when clicked?
The most important reason was that the presentation of that data didn’t answer common questions very well and wasn’t engaging enough to trigger much social sharing or other mentions on the Web. Much of the data was posted and viewed as spreadsheets, which immediately turn away the many citizens who don’t have the time or interest to wade through rows of data or analyze the results. Socrata believes that technology should enhance, rather than hinder, the ways users experience and visualize data.
So we set to work to improve the user experience and make data both useful and usable. Here’s how, over more than two years, we developed Socrata’s new, robust, and more intuitive open data portal and Data Lens user interface.
1. Understand Your Users
In April 2014, we began rethinking the entire Socrata open data user experience to create a more seamless way for governments to publish their data, tell stories with it, and engage and educate citizens. We first refreshed our knowledge of our open data user personas.
Personas are an essential tool that lets developers get to know the characteristics and information needs of their users, think about their behavior in detail, and then use that knowledge to design and develop products that work for them. We developed a set of personas, then asked customers and citizens whether we got them right. This validation stage revealed that the biggest unmet need in civic open data portals (both ours and our competitors’) was letting citizens and government employees seamlessly explore data to find direct answers to their questions or surface interesting insights, rather than reading and scrolling through long spreadsheets.
2. Develop Concepts
With this knowledge, we got to work and looked at the entire spectrum of approaches — ranging from traditional methods of displaying data, to more visual and engaging tabular views, to highly dynamic cross-filtering data exploration experiences. One direction that clearly stood out was the concept of getting immediate insights and answers to questions at a top level with the ability to explore the data in more depth.
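The cross-filtering idea mentioned above can be illustrated with a minimal sketch. This is not Socrata's implementation; the data and function names are hypothetical, and it assumes each row is a simple record whose fields back one chart apiece:

```python
from collections import Counter

def cross_filter(rows, **selections):
    """Keep only rows matching every selected (column, value) pair."""
    return [r for r in rows
            if all(r[col] == val for col, val in selections.items())]

def facet_counts(rows, column):
    """Counts backing one chart's dimension, recomputed after filtering."""
    return Counter(r[column] for r in rows)

# Hypothetical crime-incident records
incidents = [
    {"year": 2013, "type": "theft",   "neighborhood": "Downtown"},
    {"year": 2013, "type": "assault", "neighborhood": "Ballard"},
    {"year": 2014, "type": "theft",   "neighborhood": "Downtown"},
    {"year": 2014, "type": "theft",   "neighborhood": "Ballard"},
]

# Clicking "Downtown" on the neighborhood chart re-filters every other chart
filtered = cross_filter(incidents, neighborhood="Downtown")
by_type = facet_counts(filtered, "type")
```

The point of the pattern is that a selection on any one chart narrows the shared row set, and every other chart's aggregates are recomputed from that narrowed set.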
After building an early prototype, we performed usability tests and discovered that this model was overwhelmingly favored by customers and citizens.
3. Test Multiple Designs
The next step was to create high-fidelity designs and prototypes for larger-scale usability tests that would collect more detailed feedback to guide the Engineering team as they began implementation. Delivering this experience required substantial engineering investment in performance and in the ability to automatically detect the type of data and quickly visualize it in the most appropriate manner, all without extra work by users.
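The type-detection step described above can be sketched with a simple heuristic: sample a column's values, guess whether they are numbers, dates, or categories, and pick a default chart accordingly. This is an illustrative simplification, not Socrata's actual logic; the function names and the single date format are assumptions:

```python
from datetime import datetime

def infer_column_type(values):
    """Guess a column's type from sample values (simplified heuristic)."""
    def is_number(v):
        try:
            float(v)
            return True
        except (TypeError, ValueError):
            return False

    def is_date(v):
        try:
            # Assumes ISO-style dates; a real detector would try many formats
            datetime.strptime(str(v), "%Y-%m-%d")
            return True
        except ValueError:
            return False

    if all(is_number(v) for v in values):
        return "number"
    if all(is_date(v) for v in values):
        return "date"
    return "category"

def suggest_visualization(values):
    """Map an inferred column type to a sensible default chart."""
    return {
        "number": "histogram",
        "date": "timeline",
        "category": "bar chart",
    }[infer_column_type(values)]
```

For example, a column of dates would default to a timeline, while a column of neighborhood names would default to a bar chart of counts.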
This example shows an early lower-fidelity prototype that was used to test how users interacted with this model:
4. Test the Usability and Functionality
From there, we conducted more usability studies to get direct feedback from customers and citizens. This study validated that we were heading in the right direction, ironed out any remaining usability issues, and identified missing functionality. Users in these tests were able to complete tasks 80 percent of the time — a very high rate for a complex Web application that users were seeing for the first time. Not only did users effectively and efficiently find the answers they were looking for, but they were able to explore the data and draw their own conclusions and were engaged in doing so. One participant understood the interface immediately; wondering why he saw a spike in a line chart, he exclaimed “Oh, this must be Hurricane Sandy!”
5. Test the User Interface and Features
Over the next several months, we worked on improving the user interface and task flows, building out the general feature set to a minimum viable product level, and continuously showing the experience to customers to gather and integrate their feedback. After that, our team conducted another usability study around product branding to ensure we got the look and feel right. In this study, four different designs of the overall look and feel of the product were tested, including variations in the data visualizations.
These examples show the four different data visualizations we tested:
The outcome was a clear preference for concept 1 in terms of look and feel. It’s clearly modern and fresh, but not so fashionable that the branding interferes with the actual data visualization or general ease of use.
6. Conduct a Competitive Analysis
Finally, we conducted a large-scale benchmark study to examine the ease of use of our redesign versus a competitor and against our previous product. The results were overwhelmingly positive: users could complete tasks 2.8 times more often in our new user interface than they could in our old interface, and 4.4 times more often than in our primary competitor’s product.
This is a huge game changer that will transform the way citizens can interact with government data to make better decisions. It sets a new standard for how governments themselves can mine insights from their own data to help them make the best data-driven decisions.