January 13, 2022

Signal Warning?: Why Moxie’s Departure is Not the End of Signal

Technical discourse almost always generates an odd sort of distortion when it collides with life outside of disciplinary separations. This is very much the case when it comes to radicals and information security. Communications, computer science and cryptography are highly complex, to the point that even people working full time in the security industry sometimes struggle with the details. As we saw with the Snowden leaks, the combination of highly complex technical content and the possibility of danger in the form of surveillance tends to generate a discourse grounded in hyperbole and conspiracizing. This often leads people either to attempt to go completely dark, or to modify their practices based on false, misconstrued or misunderstood information.

Over the past several years this phenomenon has emerged around Signal and Tor specifically. Without getting too far into the technical elements of each, we can say definitively that the conspiracies about weak encryption, “broken” protocols, secret government backdoors and so on are commonly held, but ultimately damaging and false. This tendency is gaining traction and accelerating in the wake of Moxie Marlinspike leaving Signal.

For those of you aware of the history of Signal, it was developed by Moxie and others, with many of the early adopters coming from the anarchist milieu. The adoption of Signal is a direct result of Moxie’s long-term connections to anarchist communities and of the sheer strength of the cryptographic model. These two elements are fused in the minds of many, and trust is indeed essential to cryptographic systems, but these concerns need not be joined together: a shift in organizational posture does not necessarily mean a shift in technical systems.

So, when we think through Signal, and whether it is secure even if someone many identify as “one of us” is no longer making day-to-day decisions (Moxie will remain on the board of directors), we have to separate these organizational and technical considerations. On a purely organizational level, Moxie leaving Signal definitely severs a connection between that project and the anarchist milieu in general. After this shift we can no longer have a clear sense that everyone involved in the project is “on our side.” Understandably, this has generated a significant level of concern. To address it, we need to discuss the technical elements of integrity and security, at a high level, as well as the kinds of changes to the project that would be cause for concern.

On a purely technical level, without getting into the complexities of ratcheting key exchanges and their role in Signal, Moxie’s absence does not change the code base. We have to keep two things in mind here. First, the cryptographic structures used in Signal have been tested, audited and hacked away at, and have been found to be incredibly resistant to attack. There is no such thing as unbreakable encryption (part of why good opsec is still necessary), but on a practical level the cryptographic structures within the Signal protocol are strong enough to make any attempt to decrypt these messages impractical, regardless of the amount of computing power one could mobilize. At the end of the day cryptography is math, and mathematical realities manifest in the same ways regardless of who wrote the code.
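To make the “cryptography is math” point concrete, here is a minimal sketch of the symmetric-key ratchet idea that gives Signal part of its forward secrecy. This is an illustration only, not Signal’s actual Double Ratchet implementation; the constants and the starting secret are placeholders, and only Python’s standard library is used.

```python
import hashlib
import hmac

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive the next chain key and a one-time message key.

    Each step is a one-way function: compromising a message key
    does not reveal earlier keys, which is the property behind
    forward secrecy.
    """
    next_chain_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    message_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key

# Both parties start from a shared secret (established elsewhere,
# e.g. through a Diffie-Hellman key exchange) and ratchet forward
# in lockstep, deriving a fresh key for every message.
chain_key = hashlib.sha256(b"placeholder shared secret").digest()
for i in range(3):
    chain_key, message_key = ratchet_step(chain_key)
    print(f"message {i}: key {message_key.hex()[:16]}...")
```

Nothing in that math depends on who maintains the project; the guarantees come from one-way functions, not from any individual’s goodwill.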

Second, and this is incredibly important, the project itself is open source. Just like Tor, or any other encryption-centric system worth considering, Signal publishes its source code, allowing anyone to read it, modify it for their own use, research its internal workings and so on. This means that any changes to the code base would be noticed by the researchers who already track those changes. The code for Signal, as well as its runtime behavior, is constantly being audited by professionals, many of whom are sympathetic to our objectives as a community. The results of this research are often published openly, and Signal has a strong history, especially in the recent past, of disclosing reported vulnerabilities, explaining why they existed and describing what was done to patch them. As such, there is no real way to “sneak” a backdoor, a weakened encryption algorithm or other malicious content into the code without it being noticed almost immediately.
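As a small illustration of what “anyone can check” means in practice, here is a sketch of verifying that a build you obtained matches a checksum the project published. The file name and checksum below are hypothetical placeholders; the point is that the check requires no trust in the distributor.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: the artifact you downloaded (or built yourself
# from the public source) and the checksum published with the release.
local_build = "signal-release.apk"
published_checksum = "replace-with-the-published-checksum"

if sha256_of(local_build) == published_checksum:
    print("Build matches the published checksum.")
else:
    print("Mismatch: this is not the artifact the project published.")
```

Reproducible builds take this a step further, letting independent parties confirm that published binaries really correspond to the published source.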

But even though the technical elements of Signal are strong today, the central concern seems to be whether it can be trusted tomorrow, and this is an organizational question, a question of trust. Signal benefited from an organic chain of trust that vouched for those involved in the project in its early days. With Moxie departing, that chain of trust is broken. Though many have spoken highly of Brian Acton, the interim CEO and a co-founder of WhatsApp, and of his dedication to privacy, he is not someone who comes from our communities, and as a result we cannot place the same trust in him, or in whoever comes after. To gauge the integrity of the project without diving into thousands of lines of highly technical code, we can monitor a series of indicators for changes in security posture or mentality.

The first is feature adoption. So far Signal has done an amazing job of heavily researching, and at times inventing completely new approaches to, the implementation of features. This research is written up in the blog posts they release around updates. A really good example of this approach can be found here.

If we notice that these write-ups stop being issued, or that a bunch of features are being implemented rapidly, without the time to research their security implications, that is a potential cause for concern and would indicate a shift in internal process.

Second, we need to listen to security researchers as they assess the platform and code base. The information security community is filled with highly intelligent, skilled and dedicated researchers trying to find ways to use technology to further movements for liberation. As a result, apps like Signal are heavily audited, with findings often publicly available. This gives us a glimpse into issues with Signal and into any patterns that may develop indicating underlying problems, poor coding practices or the removal or weakening of security features.
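For readers who want to follow this research directly, public vulnerability databases are one accessible entry point. The sketch below queries the NIST National Vulnerability Database’s public CVE API for entries mentioning Signal; the endpoint and response fields reflect NVD’s documented API 2.0, but treat the specifics as assumptions to verify before relying on them.

```python
import json
import urllib.request

# NVD's public CVE API, version 2.0. The "keywordSearch" parameter
# and the response layout are assumptions based on NVD's published
# documentation at the time of writing.
URL = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
       "?keywordSearch=Signal%20Messenger")

with urllib.request.urlopen(URL, timeout=30) as response:
    data = json.load(response)

for item in data.get("vulnerabilities", []):
    cve = item.get("cve", {})
    descriptions = cve.get("descriptions", [])
    summary = descriptions[0]["value"] if descriptions else "(no description)"
    print(cve.get("id"), "-", summary[:120])
```

A steady stream of reported-and-patched issues is what a healthy project looks like; silence, or findings that go unacknowledged, is the pattern to watch for.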

Finally, Signal has a pretty good history of being transparent and responsive to reports about security issues. There are a lot of conspiracy theories out there about Signal, and plenty of poorly informed “hot takes” issued from unreliable sources. When issues arise that can be validated by legitimate researchers, Signal often reports them immediately, does so very publicly and usually issues a patch soon afterward. We can see this in the amazing post they wrote about defeating the Cellebrite UFED, which can be found here.

If they stop reporting vulnerabilities, the security community will get very up in arms about it, and we will definitely hear about it. If Signal stops being proactive about these questions, and the security community has to hold them to account, that would be a concern.

All this is to say that we need to, as with all of our analysis, proceed from information and the evaluation of information, not from partial information, hyperbole or emotion; this is especially the case with technical systems. There is a lot to say about the subject of intentionality and the programming of machines. For now, however, it is enough to say that computers do not have emotions; they are silicon, metal and fiberglass, and like all inanimate objects, they cannot have intent on their own (the intent and socialization of developers is a different question).

Therefore, on the level of encryption and its use, we have to analyze our operational security along two converging lines. On one level, there are the technical elements of the system: what it can do, what it can’t do and how it does it. This can be complicated to understand, especially with something like Signal, but there are plenty of resources explaining these processes written for non-technical and less technical people (the Electronic Frontier Foundation has some good ones; see the links below). On another level, we have to really develop an understanding of what problem any specific tool is meant to solve, or what risks it is meant to mitigate.

The reality that Signal is no longer something some of us feel politically connected to does not imply that its security is compromised or will be in the future. The removal of Signal from organic chains of trust merely repositions our relationship to the system. Like any third-party app, Signal no longer benefits from being uniquely trusted for political and social reasons, and must now be analyzed for what it is: a really solid system with mathematically verifiable, sufficiently strong encryption and a backend that does not keep meaningful metadata about users and usage.

On some level this is not a bad thing. Now, with the veneer lifted, our ability to analyze Signal, and to evaluate its usage within our contexts, can operate outside of the distortions that trust can sometimes generate. We have to look at the app and its underlying protocol as they are, as code running on a computer, with all the benefits and limitations that entails. This is far from the end, and at this point it is not even moving in that direction. But, like all technical systems, we need to approach it with information and suspicion.

These sorts of scenarios, and the fervor they generate, underscore a core lesson: all tools run by third parties need to be approached with suspicion. Systems have vulnerabilities, service providers are sometimes untruthful about what data they collect, email providers scan our messages; there are a thousand risks, and none of them can ever be said to be definitively dealt with. This applies in the same way to every third-party system, whether it is Signal or Gmail. No tool is perfect; no tool can prevent every risk or provide every capability. Even with powerful tools like Signal, the fundamentals of operational security must always be kept in mind; in other words, even if it’s encrypted, don’t say illegal things on electronic devices. As the world changes, and as the dynamics of revolt and repression evolve, the risks we face and the tools we use must evolve accordingly. But this evolution always needs to be carried out with a clear eye toward the uses and limitations of any tool, whether that be a hammer or Signal.

Resources

https://ssd.eff.org/

https://www.privacytools.io/

https://freedom.press/training/

https://ccrjustice.org/if-agent-knocks-resource

https://cldc.org/security/

photo: Markus Spiske via Unsplash

