# Threat models

## Goal

To require users to place as little trust in this system as possible.

To reduce, as much as possible, the risk of anybody (including an admin / maintainer) obtaining anybody else's PI.

Additionally, the server should never, under any circumstances, handle any user private keys (including shared keys); they should all stay on the client.

All data should be scoped to this instance of the system (all hashes should be salted with the instance's key), making it unusable on other instances.
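
To illustrate the instance scoping, here is a minimal sketch, assuming Node.js `crypto` and a hypothetical per-instance secret `INSTANCE_KEY` (nothing here is prescribed by this document), of hashing an identifier so that the resulting value is meaningless on any other instance:

```ts
import { createHmac } from "node:crypto";

// Hypothetical per-instance secret; it never leaves this instance.
const INSTANCE_KEY = process.env.INSTANCE_KEY ?? "";

// Salt every hash with the instance's key: the same identifier produces a
// different digest on any instance with a different key, so leaked values
// cannot be reused or correlated on other instances.
function instanceScopedHash(identifier: string): string {
  return createHmac("sha256", INSTANCE_KEY).update(identifier).digest("hex");
}
```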

## Threats

### Incorrect use of the system

No use of the system API by attackers should expose anybody else's data.

All requests should be signed.
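
As a rough sketch of what "signed" could mean here, assuming Ed25519 via the `tweetnacl` library (the actual key scheme, serialization, and field names are not specified in this document):

```ts
import nacl from "tweetnacl";

// Hypothetical shape of a signed API request.
interface SignedRequest {
  body: string;      // serialized request payload
  publicKey: string; // client's public key, hex-encoded
  signature: string; // Ed25519 signature over the body, hex-encoded
}

// Client side: sign with the private key, which never leaves the client.
function signRequest(body: string, secretKey: Uint8Array, publicKey: Uint8Array): SignedRequest {
  const signature = nacl.sign.detached(new TextEncoder().encode(body), secretKey);
  return {
    body,
    publicKey: Buffer.from(publicKey).toString("hex"),
    signature: Buffer.from(signature).toString("hex"),
  };
}

// Server side: reject any request whose signature does not verify against
// the claimed public key.
function verifyRequest(req: SignedRequest): boolean {
  return nacl.sign.detached.verify(
    new TextEncoder().encode(req.body),
    Buffer.from(req.signature, "hex"),
    Buffer.from(req.publicKey, "hex"),
  );
}
```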

### Brute-forcing sympathies

One attack of this kind would be a user submitting a sympathy towards literally everybody else in order to extract, for every other user, the answer to "do they like me?", which would defeat the purpose of this system.

One way to combat this would be to introduce rate limiting: a user may have no more than a fixed number of non-mutual sympathies at any given moment, and may only remove a non-mutual sympathy after at least a fixed amount of time has passed.

For example, that could be at most ten non-mutual sympathies, and at least a month until a non-mutual sympathy can be removed, freeing one of the ten slots.
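
A rough sketch of how such a limit could be enforced on the server side, assuming the hypothetical constants and record shape below (real storage would only hold hashed, instance-scoped identifiers):

```ts
// Hypothetical limits matching the example above.
const MAX_NON_MUTUAL = 10;
const MIN_HOLD_MS = 30 * 24 * 60 * 60 * 1000; // roughly one month

interface NonMutualSympathy {
  target: string;    // hashed, instance-scoped identifier of the target
  createdAt: number; // Unix timestamp, milliseconds
}

// A new non-mutual sympathy is rejected once all slots are taken.
function canAddSympathy(existing: NonMutualSympathy[]): boolean {
  return existing.length < MAX_NON_MUTUAL;
}

// A non-mutual sympathy can only be removed (freeing a slot) after the
// minimum hold time has passed.
function canRemoveSympathy(s: NonMutualSympathy, now: number = Date.now()): boolean {
  return now - s.createdAt >= MIN_HOLD_MS;
}
```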

### Colluding

Another attack would be two users, X and Y, colluding: X submits meta sympathies towards Y plus everybody else, and Y submits meta sympathies towards X plus everybody else, in order to extract, for every other user, the answer to "do they like X and Y simultaneously, and are they willing to find out about meta sympathies?"

This is probably not a very important issue.

### MITM attacks

Should we really be concerned about these, if both the client front-end and API are served over HTTPS?

### Database leaks

A leaked database should expose no identifying information.

### Database + system keys leaks

A leaked database, even one leaked together with all the private keys used by the system, should expose as little information as possible.

Exposing information of the kind "this user has N non-mutual sympathies that were created on these dates" is probably unavoidable: the system has to keep track of that information in order to prevent brute-forcing.

Exposing information of the kind "this user logged a sympathy towards you" to someone who holds their own private key is unavoidable too: they can simply emulate the entire system in a sandbox and brute-force that information.
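
A sketch of why this is unavoidable, assuming, purely for illustration, that a leaked sympathy record can be recomputed from a candidate sender, the recipient, and the leaked instance key (the real scheme would also involve the recipient's own keys, which is why only the key holder can run this search):

```ts
import { createHmac } from "node:crypto";

// Purely illustrative record format: a salted hash over (sender, recipient).
function sympathyTag(instanceKey: string, sender: string, recipient: string): string {
  return createHmac("sha256", instanceKey).update(`${sender}->${recipient}`).digest("hex");
}

// Given the leaked records and the leaked instance key, enumerate every
// candidate sender and keep the ones whose recomputed tag appears in the leak.
function bruteForceSenders(
  instanceKey: string,
  leakedTags: Set<string>,
  candidateSenders: string[],
  me: string,
): string[] {
  return candidateSenders.filter(
    (sender) => leakedTags.has(sympathyTag(instanceKey, sender, me)),
  );
}
```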

No other information should be exposed.

### Log leaks

The system should not store any logs.

### Evil admin / maintainer

#### Access to the database and system keys

This threat is identical to the database + system keys leak described above.

#### Backdoors in the code

Since all requests to the API are signed, an admin who has inserted some kind of backdoor can always learn which users have been sending which kinds of requests.

No other information should be exposed as long as the admin does not have access to someone's private key.

#### Colluding with a user

If an admin colludes with some user (who provides their private key), they will be able to obtain all the information that concerns this user, if only by brute-forcing it.

We should think about how to make this as painful as possible.
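
One possible direction, offered only as an illustration (nothing in this document settles on it), is to make each brute-force guess individually expensive by deriving stored values through a memory-hard KDF rather than a cheap hash; a sketch using Node's built-in scrypt:

```ts
import { scryptSync, timingSafeEqual } from "node:crypto";

// Hypothetical cost parameters: chosen so that every single guess costs
// noticeable CPU time and memory.
const SCRYPT_PARAMS = { N: 2 ** 15, r: 8, p: 1, maxmem: 128 * 1024 * 1024 };

// Derive a stored tag through scrypt instead of a plain HMAC, so that
// enumerating all candidate senders becomes slow and expensive.
function expensiveTag(instanceKey: string, sender: string, recipient: string): Buffer {
  return scryptSync(`${sender}->${recipient}`, instanceKey, 32, SCRYPT_PARAMS);
}

// Constant-time comparison of a recomputed candidate against a stored tag.
function tagMatches(candidate: Buffer, stored: Buffer): boolean {
  return candidate.length === stored.length && timingSafeEqual(candidate, stored);
}
```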

No other information should be exposed.