I think I have a better scheme. Say you have a 10-bit keyspace or something, and then encrypt a very large number of times with random keys. You don't have to perform as much computation as your adversary. By the law of large numbers, the probability of solving all of the puzzles in a much shorter than expected time is low. And it is much less parallelizable than just one encryption with a random key.
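If I understand the scheme, it could be sketched roughly like this (a toy, not a real construction: I'm assuming a hash-based stream cipher and a known plaintext marker so the solver can recognize each cracked layer; all names here are made up):

```python
import hashlib
import secrets

KEY_BITS = 10
MAGIC = b"MAGIC:"  # known prefix so the solver can tell when a layer's key is right

def keystream(key: int, length: int) -> bytes:
    # Toy stream cipher: expand the key into a keystream with SHA-256.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key.to_bytes(2, "big") + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_layer(plaintext: bytes, key: int) -> bytes:
    data = MAGIC + plaintext
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def crack_layer(ciphertext: bytes) -> bytes:
    # The solver's work: brute-force all 2^10 keys for this one layer.
    for key in range(2 ** KEY_BITS):
        cand = bytes(a ^ b for a, b in zip(ciphertext, keystream(key, len(ciphertext))))
        if cand.startswith(MAGIC):
            return cand[len(MAGIC):]
    raise ValueError("no key found")

def lock(message: bytes, layers: int) -> bytes:
    # The puzzle-maker encrypts many times with random keys,
    # doing only one encryption per layer (cheap).
    for _ in range(layers):
        message = encrypt_layer(message, secrets.randbelow(2 ** KEY_BITS))
    return message

def unlock(ciphertext: bytes, layers: int) -> bytes:
    # The solver must crack the layers one at a time, in order.
    for _ in range(layers):
        ciphertext = crack_layer(ciphertext)
    return ciphertext

locked = lock(b"hello", 50)
print(unlock(locked, 50))  # b'hello'
```

The puzzle-maker does 1 encryption per layer while the solver does ~2^9 decryption attempts per layer on average, which is where the asymmetry comes from.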
Interesting idea, I like the "as much certainty as you want" with the probability.
Why can't this be parallelized? Sure you have to work on one key at a time because they are sequential, but 1000 machines can be cracking that one key. Every key cracked would be distributed and the cluster would start in on the next.
It's a little more complicated, but I'm not seeing how it's really any less parallelizable.
I think, but am not certain, that with such a small keyspace the IO cost would dominate if you tried to parallelize it. (Assuming each decryption attempt is rather quick).
That sounds risky. You're using a cryptographic primitive in a non-standard way, and in danger of having created a weakness.
For example, are you sure that your cipher doesn't have the property that for any text.Encrypt(key1).Encrypt(key2) there's an equivalent text.Encrypt(key3)? Some ciphers have that property (e.g. the one-time pad). That would collapse all N layers into a single one, reducing your security from N sequential brute-forces to... one.
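The one-time pad case is easy to see concretely: XOR composes, so two layers under k1 and k2 are exactly one layer under k1 XOR k2. A minimal demonstration:

```python
import secrets

def xor_encrypt(text: bytes, key: bytes) -> bytes:
    # One-time-pad-style cipher: XOR the text with the key.
    return bytes(t ^ k for t, k in zip(text, key))

msg = b"secret"
k1 = secrets.token_bytes(len(msg))
k2 = secrets.token_bytes(len(msg))

# Two layers of encryption...
double = xor_encrypt(xor_encrypt(msg, k1), k2)

# ...are equivalent to one layer under the combined key k3 = k1 XOR k2.
k3 = bytes(a ^ b for a, b in zip(k1, k2))
assert double == xor_encrypt(msg, k3)
```

Any cipher whose keyed permutations form a group under composition has this problem, which is why the sequentiality of the chain is a property you'd have to verify, not assume.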
The article is about research on time lock systems, not about someone actually building one. Thus I think a speculative, "risky", novel idea is appropriate.
I had the same thought initially, but the problem I see is that breaking several keys in the same keyspace shares work. E.g. inverting a one-way function is a tough problem in general, but if it's a 10-bit function, you can just create a full lookup table by brute force.
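To make the shared-work point concrete, here is a sketch with a toy 10-bit one-way function (the function and domain size are made up for illustration): one pass of brute force builds a table, and every subsequent inversion is a single lookup.

```python
import hashlib

BITS = 10

def f(x: int) -> bytes:
    # Toy "one-way" function over a 10-bit domain.
    return hashlib.sha256(x.to_bytes(2, "big")).digest()

# One-time cost: tabulate all 2^10 outputs.
table = {f(x): x for x in range(2 ** BITS)}

# After that, inverting any number of values in this keyspace is
# one dict lookup each -- the brute-force work is shared.
y = f(777)
assert table[y] == 777
```

So the cost of breaking the second, third, ... thousandth key in the same tiny keyspace is essentially free once the table exists.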