
You could measure the timing over the internet.


Users aren't interacting directly with the storage layer, so any timing attack via the network is going to be once or twice removed. Can attackers really glean useful information and mount a successful attack in this type of setup?


This is almost certainly true in practice, but it's a big risk, compared to the risk tolerance that we usually engineer into crypto. For comparison, suppose someone was suggesting: "Why not use 80-bit keys instead of 128-bit keys? No one in the real world can brute force 80 bits, and we'll save on storage." Yes, that's true, but it's taking a relatively large risk for relatively little benefit. Hardware will get faster over time, and an extremely high value target might justify an extremely expensive attack, etc etc. We prefer 128-bit keys because then we don't even have to consider those questions. I think timing attacks are similar: Yes they're very difficult in practice, but they raise questions that we'd rather not have to think about. (And which, realistically, no one will ever revisit in the future, as hardware evolves and new APIs are exposed.)


I always imagined key size to relate to computation cost and not storage — what algorithm are you referring to?


The point is that a longer key takes more bits to store, but the storage saved by the shorter key is tiny compared to how much easier it becomes to crack.


Sure, but a difference of 0.1% of storage to go from an 80-bit key to a 1024-bit key for 1 Megabit of data (that's 118 bytes out of 128 KiB), or 0.0001% for 1 Gbit (128 MiB), seems not worth raising as a concern.

(I've chosen example numbers just to make calculation trivial)

So I can't ever imagine storage size being the driver for choosing the key size, though from the other threads, it seems that there are algorithms that do have a storage overhead that might be related to key sizes.
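The arithmetic above is easy to sanity-check. A quick sketch, using the same illustrative numbers from the comment (binary megabit/gigabit, so the division stays trivial):

```python
# Extra storage for a 1024-bit key instead of an 80-bit key.
extra_bits = 1024 - 80
extra_bytes = extra_bits // 8       # 118 bytes

megabit_bytes = 128 * 1024          # 1 Mbit of data = 128 KiB
gigabit_bytes = 128 * 1024 * 1024   # 1 Gbit of data = 128 MiB

print(f"1 Mbit overhead: {extra_bytes / megabit_bytes:.4%}")   # ~0.09%
print(f"1 Gbit overhead: {extra_bytes / gigabit_bytes:.6%}")   # ~0.0001%
```

Either way, the per-object overhead is a rounding error next to the payload, which is the commenter's point.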


YES.

It requires statistical techniques to remove the noise, making the attack harder, but not necessarily infeasible.
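The statistical idea is just that averaging drives noise down by roughly the square root of the sample count, so a microsecond-scale signal can survive millisecond-scale jitter given enough measurements. A minimal simulation sketch (all numbers here are made up for illustration):

```python
import random
import statistics

random.seed(0)

SIGNAL_US = 5.0        # hypothetical 5 us difference between two code paths
NOISE_SD_US = 2000.0   # ~2 ms of network jitter (standard deviation)
SAMPLES = 1_000_000    # measurements per code path

def measure(base_us: float) -> list[float]:
    """Simulated round-trip times: fixed cost plus Gaussian network noise."""
    return [base_us + random.gauss(0, NOISE_SD_US) for _ in range(SAMPLES)]

fast = measure(100.0)              # baseline path
slow = measure(100.0 + SIGNAL_US)  # path that does a tiny bit more work

# Any single pair of samples is useless (noise is 400x the signal), but
# the difference of means converges on the true gap as samples accumulate.
estimate = statistics.fmean(slow) - statistics.fmean(fast)
print(f"estimated difference: {estimate:.1f} us (true: {SIGNAL_US} us)")
```

With a million samples per path, the standard error of the difference of means is about sqrt(2) * 2000 / 1000 ≈ 2.8 µs, so the 5 µs signal starts to poke out of 2 ms of noise; more samples tighten it further.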


Is that really the case, though, when the differences in computation would be measured in microseconds but the network noise would be on the order of milliseconds?



I don’t know about that... in the paper the client and server are on the same network. It would be very interesting to repeat this study using faster processors (which will make this signal smaller) and over the public internet (making the noise bigger).


This is why constant time functions are used in cryptographic implementations, even over the network.

These are called timing attacks, and they're less common now because professional cryptographers know how to deal with them. But this is very much a textbook example of one.
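The classic instance is secret comparison: a naive byte-by-byte check returns at the first mismatch, so its running time leaks how long a prefix the attacker has guessed correctly. A sketch of the leaky version next to the standard constant-time fix from Python's stdlib (`hmac.compare_digest`):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Leaky: bails out at the first mismatching byte, so running time
    grows with the length of the correctly guessed prefix."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Examines every byte regardless of where the mismatch occurs."""
    return hmac.compare_digest(a, b)

secret = b"correct-mac-value"
print(naive_equal(secret, b"correct-mac-wrong"))          # False
print(constant_time_equal(secret, b"correct-mac-value"))  # True
```

Both functions agree on the answer; only the timing behaviour differs, which is exactly the property the parent comment is describing.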





