With personal and often sensitive user information on hand, how should crowdsourcing intermediaries, communication platforms, and other civic tech organisations be approaching the issue of user privacy?
Luke Jordan is the executive director of Grassroot, a civic tech organisation that describes itself as a platform for community organisers, activists, and social movements to organise their neighbours, working towards the vision of “a nation self-organising from the ground-up”. To support this, they create and deploy mobile communication tools that let people create groups, call meetings, and even take votes and record decisions, all from a basic phone.
They have a total user base of 35 000 users, of whom over 4 000 are core users with deep and continuous engagement — all of them involved in social activism and, in some cases, protest, making privacy a primary concern. We spoke to Luke about the development of their privacy policies, as well as the strategy and thinking behind their approach to managing user privacy.
Q. What did you set out to do? How has that shifted since your start?
Grassroot deploys mobile tools for community organisers in marginalised communities. Our constituency is field-based or community-based organisations. They are our users, and we build tools that make the routine drudge work of organising easier: calling meetings, recording decisions. Community leaders and local activists use our app to create groups, recruit people to those groups, send out meeting notices, record decisions and take votes. That’s what we set out to do.
Obviously, along the way we’ve tweaked a lot. We’ve really tried to do user-focused design and iteration. We deploy a revision of the application about once every two weeks. This includes small tweaks and changes in wording. It is pretty much continuous. We are very sensitive to what users are concerned about.
Q. At what stage did you start thinking about privacy?
Privacy came up right at the beginning. Someone put it to us — and I think this is a good articulation of the issue — that we have the social graph of a lot of community-based activists in the application, but that’s what it requires to make it work. So obviously we have an obligation.
We were very sensitive to this from the beginning. It drove two major strategic decisions before we got to the policy, and I think those are more consequential. Anybody can write a policy, but it is when it impacts big strategies that it actually shows that it matters.
The first is the reason we are a non-profit company rather than a for-profit: we made a decision never to do ads, because ads would require us to violate users’ privacy. The only way we could ensure that we would never do ads was to make sure we had no fiduciary duty as a board to do so. This meant we had to be a non-profit. The second major decision was to blind ourselves to any kind of logs or activity going through the platform.