Is there any evidence that (mainly US) intelligence services have particular difficulty reading either Telegram’s or Signal’s encrypted messages?

Preface: the question being asked here is summed up in a single, lone boldfaced sentence at the end. Everything else is caveats and background. How is this question not clear and focused?

I imagine evidence such as leaked or otherwise published official memos from signals-intelligence programs, chronicling or lamenting the technical difficulties of intercepting and cracking different types of communications and the resulting “blind spots”. Or, for example, the spectacular national FBI/Apple controversy, in which Apple’s compliance or non-cooperation ultimately proved moot because the FBI paid an Israeli consultancy some enormous sum to license an encryption-cracking package that was able to break into the phone (so we know that such exploits are not only possible but actually exist).

WhatsApp obviously employs Signal’s publicly released encryption code, but its own codebase is not only proprietary and unaudited; it is also the subject of an embarrassingly endless and regular stream of exploits. Suffice it to say that security is not its main concern.

Further, WhatsApp contains a design oversight so appalling as to raise the suspicion that it was intentional: unless cloud backup is manually disabled, all of one’s correspondence with all of one’s correspondents, regardless of those correspondents’ own settings, is uploaded in plain text to one’s cloud storage provider.
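The point above can be sketched in a few lines. This is a conceptual toy, not WhatsApp’s actual code, and the XOR “cipher” is a deliberately insecure stand-in for a real one; it only illustrates why an unencrypted backup moots end-to-end encryption: the relayed traffic is useless without the session key, while the backup requires no key at all.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher for illustration only -- NOT cryptographically secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet at noon"
session_key = secrets.token_bytes(16)  # known only to the two endpoints

wire_payload = xor_cipher(message, session_key)  # what the relay server sees
cloud_backup = message                           # what a plaintext backup stores

# Anyone holding the backup reads the message with no key material at all.
print(cloud_backup.decode())
```

Whoever compels (or compromises) the storage provider thus gets the plaintext without ever touching the end-to-end layer.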

Signal is FOSS, and I suppose fully audited, but it is distributed (and compiled? signed?) by Apple or Google, depending on the delivery platform. They could therefore tamper with it by slipstreaming maliciously modified binaries into the distribution channels if a user were targeted by a court order and they were ordered to do so.
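One known countermeasure to this kind of channel tampering is a reproducible-build check: rebuild the app yourself from the audited source and compare digests with the binary the store shipped you. The sketch below assumes you already have both binaries on disk; the byte strings stand in for those files and are purely hypothetical.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    # Hex digest of the binary's contents; any single-bit change alters it.
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for the two binaries being compared.
shipped = b"\x7fELF...store-distributed build"
rebuilt = b"\x7fELF...locally rebuilt from audited source"

if sha256_digest(shipped) == sha256_digest(rebuilt):
    print("builds match: the distribution channel did not alter the binary")
else:
    print("builds differ: do not trust the shipped app without investigating")
```

This only helps where builds are actually reproducible and the platform lets you extract the installed binary, which is far easier on Android than on iOS.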

But even if the installed binaries are in theory (i.e., by design) meant to be trustworthy, say, signed with the developers’ own keys, iOS is so opaque that if there were backdoors, I doubt anyone would even know, especially if Apple were ordered to target you for malicious or covert surveillance. Presumably the NSA’s army of security researchers also has at its disposal a catalog of genuine, innocently overlooked zero-day vulnerabilities in both iOS and Android (in addition to those submitted by independent individuals for cash bounties). So regardless of how secure Telegram or Signal themselves are in message transmission, the decryption keys can still be compromised through vulnerabilities in the underlying operating systems.

As between Signal and Telegram: last I looked into the matter, both were open source, but only Signal’s codebase had been audited. There are concerns about Signal’s substantial funding by the American foreign-policy establishment, and Telegram’s Russian origin is perceived, by contrast, as a more independent provenance; yet nobody ever points out that the app’s Russian backers happen to be about as pro-American-aligned as Russians these days get.

Telegram took a controversial, gung-ho attitude toward “rolling its own” crypto functions, which many felt was suspicious. Its developers claimed this was due to performance concerns with Moxie Marlinspike’s librarified Axolotl, a claim which, if honest, would preclude any malicious intent to covertly compromise the app’s security. Telegram is more robust, with a more varied feature set, and does seem more innovative as well as more configurable than Signal.

However, people complained that its crypto components were never audited as Signal’s were. The Telegram developers seemed very confident of, and proud in, their code’s integrity, and, as I recall, said that audits validating that confidence would come in time. That was years ago now. A further issue was that while Telegram was associated with end-to-end security, chats were by default not end-to-end encrypted, very arguably coaxing users into a false sense of security. Perhaps the audit has since been done, and perhaps chats are now end-to-end secured by default.

Regardless of speculation about what all of these factors imply for actual security, what evidence do we have of the state’s ability, in actual practice, to intercept and read messages in each of these apps, comparatively and respectively? For example, while we know that it is of course technically possible for even non-smartphones to be remotely activated as listening devices, it is through the account in Murder in Samarkand that we know something of the contexts, conditions, and levels of ease or frustration with which this is actually done.