8th of September
The IOTA team has already responded to the paper published by Neha Narula.
It was I who created Curl and the IOTA signature scheme back in the old days, before the IOTA Foundation existed.
I feel obliged to provide my own response, but it will take several days.
To speed up the process I'm publishing my letters sent to Neha's team; they contain a lot of technical details which will be helpful to those who understand IT and cryptography.
I've removed the words written by the others, so I don't need to ask them for permission (which would take a lot of time to get).
Spoiler for those who don't like reading walls of text:
For more than a decade I have been working on techniques of open-source software protection.
Russian speakers can read my old article from the year 2000 (https://www.kv.by/archive/index2000491105.htm); that is the only public sign of my research on the topic. Soon after, my know-how became my livelihood, so I stopped publishing publicly.
In 2013 I created the first full Proof-of-Stake currency and protected it against cloning with my novel techniques (https://www.nxter.org/fatal-flaw-in-nxt-source-code/).
Those who knew me as BCNext were sure that I would do the same trick to protect IOTA, some people even approached me asking about that.
Remembering how quickly the Nxt protection was disarmed, I kept the very existence of such a mechanism in IOTA secret.
I was pretty sure that the protection would last a long time because it was hidden inside the cryptographic part, and programming skills alone would be insufficient to disarm the mechanism.
But nothing lasts forever and finally the copy-protection measure was found by Neha Narula's team.
My letters show how that happened...
Sergey Ivancheglo aka Come-from-Beyond
------------------------------
15th of July
------------------------------
Hello.
Thank you for your interest in Curl and IOTA. I read your analysis; the phrase “[REMOVED]” is really interesting. I’d like to ask how many transform() function invocations you need for that. It’s not clear whether you meant breaking only the collision-resistance part or the one-wayness too; this is pretty important to know because the security of IOTA signatures requires only the latter. I wish you had contacted me in advance regarding the other attacks: what you identified as weaknesses are features added intentionally.
Below I’ll explain every attack/feature in detail, but before that I’d like to remind you about Keccak's padding rule. I didn’t verify it, but I suspect that without the padding rule Attack 2 (collisions resulting from failure to use message padding) would be applicable to Keccak too. Just like Keccak, Curl is based on the sponge construction; unlike Keccak, Curl has the padding rule moved outside, to enable things not possible in Keccak. One of those features is an easy-to-find X = hash(X). This X is used as the NULL transaction hash in IOTA and also as the genesis transaction, because it references itself. IOTA transactions don’t need the padding rule; they all have a fixed size of 8019 trits.
Let’s look at Attack 1, based on prepending zeros. This feature is needed for IOTA to save resources when broadcasting/storing transactions with a short payload. Because of this, the payload part (named “signatureMessageFragment”) goes at the very beginning, before the header part. A single transaction can hold 6561 trits of payload; if a payload is only, for example, 486 trits, then we can fill the first 6075 trits with zeros and save more than 75% of bandwidth/storage without changing anything else. The transaction hash will still be the same.
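The arithmetic behind the figures quoted above can be checked directly (the constants come from the letter itself; nothing else is assumed):

```python
# Quick check of the bandwidth figures quoted above (all sizes in trits).
TRANSACTION_SIZE = 8019   # fixed size of an IOTA transaction
PAYLOAD_SIZE = 6561       # capacity of "signatureMessageFragment"
payload_used = 486        # the example short payload from the letter

zero_fill = PAYLOAD_SIZE - payload_used   # leading trits that are just zeros
saving = zero_fill / TRANSACTION_SIZE     # fraction of the whole transaction
print(zero_fill, f"{saving:.1%}")         # → 6075 75.8%
```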
The Attack 2 scenario, based on appending extra trits, doesn’t affect IOTA negatively, nor does it give any benefits. The transactions have a fixed size, and data stored in the payload will use the padding rule explained below.
While I was using the phrase “padding rule”, technically the Curl-related method used as a countermeasure against padding attacks probably can’t be called a “padding rule”. Before absorbing a message of variable size we are supposed to initialize the internal part of the Curl state with the size of the message. Giving a user write access to the internal state may not be a very good idea in some use-cases, so we are considering replacing this method with absorbing the size trits before absorbing the message. In either case I believe both attacks are counteracted, aren’t they?
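The second variant of the countermeasure (absorbing the size before the message) can be sketched generically. SHA3-256 stands in for Curl here, and the 8-byte length prefix is my placeholder for the “size trits”; the point is only that a length prefix domain-separates messages that differ by prepended zeros or appended data:

```python
import hashlib

def absorb_with_length(message: bytes) -> str:
    """Sketch of the proposed countermeasure: absorb the message length
    before the message itself. SHA3-256 stands in for Curl; the real
    scheme works on trits and a sponge state, not bytes."""
    h = hashlib.sha3_256()
    h.update(len(message).to_bytes(8, "big"))  # the "size trits" of the letter
    h.update(message)
    return h.hexdigest()

# With the length absorbed first, zero-prepended and trit-appended variants
# of the same payload no longer hash to related values.
assert absorb_with_length(b"payload") != absorb_with_length(b"\x00payload")
assert absorb_with_length(b"payload") != absorb_with_length(b"payload\x00")
```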
Now I’d like to move to Attack 3 (prefix-zero non-randomness). You probably already noticed that the code of Curl is very simple. Making an algorithm as simple as possible makes its analysis simple too (in most cases). Curl's transformation function has the following feature: the result of an analysis of a single output trit can be applied to all other output trits with a simple shift. I treat this as an advantage: while most hashing algorithms are “protected” by the obscurity of their internal mechanisms, Curl doesn’t have such a drawback. “99999” in that exact place of the output occurs in 1 case out of 3^(5*3). The chance of seeing at least one occurrence of “99999” is even higher. I think the “[REMOVED]” phrase needs better proof than a few anecdotal occurrences. Personally, running statistical tests, I detected no correlation between input and output trits; of course, there is always a non-zero chance that my tests had a bug. Regarding “[REMOVED]”, could you provide more details? It’s unclear what was meant. The human brain is notorious for seeing patterns where they don’t actually exist (like seeing faces in electrical sockets), so a quantification of the patterns would be very helpful.
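The 1-in-3^(5*3) figure, and the claim that seeing “99999” somewhere is even more likely, can be reproduced with a few lines. This assumes a 243-trit (81-tryte) hash with independently uniform trytes, which is an idealization, not a property anyone has proven of Curl:

```python
# Reproducing the 1-in-3^(5*3) figure: "99999" is 5 trytes = 15 trits.
p_single = 1 / 27**5                 # probability at one fixed position
assert 27**5 == 3**15 == 14_348_907

# An 81-tryte hash has 77 positions where a 5-tryte run can start, so the
# chance of "99999" appearing somewhere in one hash is noticeably higher.
positions = 81 - 5 + 1
p_anywhere = 1 - (1 - p_single) ** positions
print(p_single, p_anywhere)          # ≈ 6.97e-08 and ≈ 5.37e-06
```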
At the end of my letter I’d like to address your recommendations and suggestions.
“[REMOVED]”
Curl is based on the well-studied sponge construction. All the requirements are satisfied, thus making Curl as secure as its transformation function. The function passed all standard tests (avalanche, trit independence). Of course, vetting is a very important part, and you are doing it now, so before rushing to follow the recommendation I’d like to get your answers, particularly about “[REMOVED]”.
“[REMOVED]”
We need to double-check the weaknesses before any publishing to ensure that what is being published is indeed based on sound evidence. We would love to carry on this conversation; would you guys be fine with being invited into the IOTA Foundation Slack to discuss further with the rest of the team?
“[REMOVED]”
The IOTA project doesn’t wish to use a cryptographic hash function merely different from other cryptocurrencies'. It wishes to use a function which is fast, energy-efficient and tiny codebase-wise.
“[REMOVED]”
This suggestion is not compatible with the wishes above, so first we are planning to assess the severity of the weaknesses you found.
Looking forward to hearing from you soon,
Sergey
------------------------------
15th of July
------------------------------
Hi Ethan,
Unfortunately, we don’t have the documentation you asked for; it’s a work in progress. I checked the paper you linked to and think that we can’t rely on it. Unlike conventional signatures, IOTA signatures don’t need to be non-repudiable, and this relaxes the security assumptions. I “see” that one-wayness is enough for our case, but it’s so obvious to me that I have no idea where to start in showing it to others. I’m afraid I can’t provide a convincing proof within a reasonable timeframe. While it’s hard to prove that unicorns don’t exist, it’s pretty easy to prove the opposite if we have one. Maybe you could show an example of a successful attack on IOTA signatures assuming that we use a function which is one-way but not collision-resistant?
I agree that even perfect statistical properties of a transformation function are not enough to accept it as secure. At this point only unsuccessful attempts to break the function can increase our assurance, but we’ll likely never reach 100% anyway; waiting indefinitely wasn’t a good way to launch IOTA, so at one point we decided to do it.
I'm not sure if David told you: the current version of Curl will be changed a little. Instead of X = F(A, B) we’ll be using X = F(A, B, C), where X is an output trit and A, B and C are input trits. The old design doesn’t provide good mixing; for example, after 10 rounds we still don’t get every output trit dependent on every input trit (only 718 of 729 trits are included, if I recall the number correctly), and an 11th round is required for 729 of 729. We also need to add at least 4 (= log2(9) rounded up) extra rounds to cover all input combinations, because some combinations give an easy collision. 11 + 4 gives the lower bound of 15 rounds, after which finding a collision becomes non-trivial. I’m curious whether this 15 is the same 15 from your “[REMOVED]”; is the 15th round included or not?
Sergey
------------------------------
23rd of July
------------------------------
Hi,
Yesterday David promised to provide a thorough response. Unfortunately, I still haven’t got an answer to my question from the very first letter; without it, a leap of faith is required to follow your recommendations. The question was:
How many transform() function invocations do you need to “[REMOVED]”?
(You have broken 27 rounds too; the corresponding number of transformations for 27 rounds, together with that for 24 rounds, would let me assess whether your analysis is more efficient than an analysis utilizing the properties of the function F(A, B).)
PS: By the way, please use the name “Curl-P-27” instead of “curl” to conform to the naming convention, where “P” stands for “prototype” (i.e. the Curl version with a 2-argument truth-table function instead of the 3-argument one) and “27” shows the number of rounds in the transform() function.
Sergey
------------------------------
24th of July
------------------------------
Hi Ethan,
Thanks for the reply. I wasn’t asking how many queries to the transform function had been necessary to prevent diffusion to the 15th round. My question was aimed at assessing the complexity of your attack, but now I see that I misunderstood you. I was thinking that you had found an attack allowing an arbitrary collision to be generated (“selective forgery” in the classical terminology); after reading your extended explanation I see that this is not the case.
> [REMOVED]
If you look at the Keccak round function you will see that it diffuses inputs with higher strength than the Curl round function; a single Keccak round equals several Curl rounds. The fine-grainedness of Curl gives us what Keccak can’t give: fine-grained control over performance and energy consumption. We contacted you ~3 months ago regarding reviewing Curl; I wish you had replied back then, before starting your analysis, as this would have saved you a lot of time… In order to ensure no more time gets wasted on either side, I want to give a basic overview of Curl, which I believe will clear up some of the misunderstandings that seem to be present in our dialog.
Curl is a hash function based on the sponge construction. Its round function is very simple, to allow easy algebraic analysis of Curl's structure, which is very important because we need to get lower bounds on the number of rounds in different use-cases. In other words, we could use 27 rounds for one-wayness, 6 rounds for string hashing, 45 rounds for collision resistance, 21 rounds for key generation, 39 rounds for pseudo-random number generation; you get the idea. This feature is so important that it’s reflected in the name of the function: Curl. Each round is a single curl; the more curls, the more beautiful (the more secure) our haircut (our hash function) is. One size doesn’t fit all: http://keccak.noekeon.org/is_sha3_slow.html shows how Keccak solves the problem of universalism (spoiler: it solves it the same way, by just abandoning universalism).
...Now, after the short introduction, I’m moving to the other issues.
> [REMOVED]
So far you have broken them for the Curl-P-1, Curl-P-2, ..., Curl-P-26, Curl-P-27 functions; I’d like to know your assessment of the minimum number of rounds which makes your attack infeasible. This may be useful if we decide to unlock “repudiation” before Curl-729-R (with the 3-argument truth table) is ready. By the way, we don’t run a competition similar to http://keccak.noekeon.org/crunchy_contest.html, but we will include your findings if we do.
> [REMOVED]
I'm not entirely sure what you mean; maybe you omitted “cryptographic” in front of “hash function”? In that case you are right: second-preimage resistance is an anti-feature. The collision-resistance threat is nullified by the Coordinator, while the weakness allows us to easily attack scam-driven copycats. One such copycat even conducted a sophisticated scam using 99% of the IOTA codebase: they were published on https://cointelegraph.com/news/crypto-world-has-been-turning-into-ponzi-scheme-opinion, simulated activity on https://twitter.com/Aidos_kuneen and were even listed on CoinMarketCap.com, according to https://mobile.twitter.com/Aidos_kuneen/status/876042685563514880/photo/1. As you are well aware, this space is riddled with scammers, and sometimes scammers can hurt the reputation of honest projects, so having such a simple fix to mitigate them is a bonus.
> [REMOVED]
Could you tell me which randomness tests are not passed by Curl-P-27?
> [REMOVED]
Your attack is based on a wrong assumption about the IOTA signing scheme. As you know, some signing schemes counteract your scenario with higher-level protocols (e.g. RSA); IOTA follows the same route. Let’s apply your findings to the actual scheme; do you have time for that? If yes, then the IOTA team will prioritize the documentation.
> [REMOVED]
I claim that IOTA’s signature scheme is EU-CMA secure. To prove that I’ll need some time, but your reply to this email will clarify some issues and make it easier to prove this to you.
> [REMOVED]
Your recommendation makes perfect sense, but before that we need to get third-party confirmation, which will take some time. I also want to highlight here that we are working with multiple security researchers, cryptographers, mathematicians, students and universities with supercomputers (Imperial College London and St. Petersburg Polytechnic University) to verify our assumptions in both IOTA consensus and Curl cryptography. This is not something we take lightly; we are very aware of the security risks involved and therefore approach this in a measured fashion while also pushing for progress.
> [REMOVED]
I see Greek wasn’t your favorite subject in school :). Don’t worry, the word “prototype” here is used as in https://en.wikipedia.org/wiki/Prototype_pattern, not in the sense you had in mind. It is also important to keep in mind that all distributed ledgers are currently in a “prototype phase”.
> [REMOVED]
No, it’s still being tested and these tests by nature will take several months.
> [REMOVED]
We are definitely not going to “[REMOVED]”; we are going to get a third-party audit of your claims and then follow your recommendations (if the auditors greenlight them). I want to reassure you that we very much appreciate your input and that we take it seriously; this is why we reached out to you in the first place, 3 months ago. We are already checking this with two third parties, including our own in-house cryptographers and mathematicians. We appreciate your understanding that this will take some time, but your response to this mail will be very helpful in gauging the exact issue and providing an ETA; then we can coordinate on the next steps (implementation change, publication etc.).
PS: I'm CCing Paul and Alon, they are devs who will be working on the changes.
Thanks,
Sergey
------------------------------
25th of July
------------------------------
Hi, Ethan
> [REMOVED]
Such a procedure is obviously not done overnight. We have been very responsive on this matter and have internally prioritized it over every other project we currently have ongoing. Apart from validating the claims put forth, we are currently assessing how best to integrate a peer-reviewed hash function (4 of our developers are working on this). Here is a proposed timeline:
5th of August – We change the code to replace Curl with Keccak. This means that the Core client, libraries and wallets will be updated by then.
5th of August – 10th of August – Users move their tokens to addresses generated with Keccak
12th of August – the day when we disclose the details
As you know, the worst thing to do at this stage is to release a rushed fix, which is why we are taking extra effort to ensure that these drastic changes to the software are done securely, while also taking time into account (we want this done as soon as possible as well). I hope you understand this and agree with the proposed timeline. We will further communicate any changes, updates and proposals to you for review.
> [REMOVED]
Will CC you for sure. Thank you for your offer; for now we are focusing on having the transition to Keccak done in time. Indeed, please disclose who you have talked to about this.
> [REMOVED]
Extra checks are done at a higher level; we’ll provide documentation after the transition.
> [REMOVED]
There was a site on the Internet with an “IOTA seed cracker”. We tested the code provided on that site, but it didn’t help to crack a seed, and we were unable to reach the author because of the absence of a name and contact info. Apart from yourself, we have not heard about any other attacks on Curl. Maybe you can check with the source of the rumor whether this is true, and we can coordinate with them as well?
> [REMOVED]
We are not going to increase the number of rounds; we are already replacing Curl with Keccak. We will, however, continue the development of Curl and are putting considerable resources into doing so. This will obviously only make it into the protocol after it has been vetted and peer-reviewed.
PS: Does the provided timeline fit your plans?
Sergey
------------------------------
25th of July
------------------------------
Hi, Neha
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/TransactionValidator.java#L72
Here the collision would invalidate a transaction if it adjusted “timestamp” to an invalid value.
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/TransactionValidator.java#L77
Here the collision would invalidate a transaction if it adjusted “value” to an invalid value.
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/BundleValidator.java#L35
Here the collision would invalidate a bundle if it broke “currentIndex” sequence or “lastIndex” didn’t match the real number of the transactions in the bundle.
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/BundleValidator.java#L43
Here the collision would invalidate the bundle if it changed the transferred value.
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/BundleValidator.java#L73
Here the collision would invalidate the bundle if it changed “address” field of a multi-sig address (there are no single-sig addresses in official wallet, all addresses are 2-of-2 ones, but custom software might generate single-sig addresses).
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/BundleValidator.java#L79
Here the collision would invalidate the bundle if it changed the spending address.
I don’t have the source code of the Coordinator on this notebook (the code is not public), but I can get it tomorrow if it’s important. The Coordinator is used as an extra protection measure. In particular, it stores all transactions that reach it; this allows us to recover iotas sent to addresses with typos (we detect them because users usually publish the addresses on the tangle before usage). During the latest snapshot we used the Coordinator to restore iotas sent to addresses generated with the previous (now obsolete) address generation scheme.
The above is the higher-level protocol that I mentioned in previous letters. We are migrating to Keccak anyway; we’ll provide the proof of the infeasibility of your attack after the transition.
Best regards,
Sergey
------------------------------
27th of July
------------------------------
Hi Ethan,
I have already prepared the Coordinator for the transition (the others are still working on the back-end and front-end parts, though). I removed unnecessary security checks, and now IOTA can do an even higher CTPS; the unfortunate trade-off, of course, is that scam copycats can now do frivolous ICOs without us being able to prevent it anymore.
Now I’m working on a paper addressing your claims, and I have stumbled upon something I can’t get. English is my third language; I would appreciate it if you explained the following:
On the 22nd of July you wrote:
> [REMOVED]
On the 19th of July you wrote:
> [REMOVED]
And on the 15th of July you wrote:
> [REMOVED]
The first referenced paper states:
> if OWFs exist then EU-CMA signature schemes exist
And you state:
> [REMOVED]
When I combine all this in my head I get the following conclusion: it’s possible to have an EU-CMA signature scheme using a one-way function which is not pseudo-random. Is that a correct claim?
PS: Should we review each other’s paper before the publication?
Sergey
------------------------------
28th of July
------------------------------
Hi,
> [REMOVED]
We are still preparing everything internally. We will give you a heads-up on the way this is going to be communicated.
> [REMOVED]
Address generation:
1. A user supplies a seed (https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/hash/ISS.java#L18)
2. A subseed is generated as Curl(seed+N) where N is the index of an address to generate (https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/hash/ISS.java#L42)
3. The subseed is absorbed into Curl and 54 243-trit chunks are squeezed out to get the private key (https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/hash/ISS.java#L46)
4. Each chunk is modified with Chunk=Curl(Chunk) formula 26 times to get the public key fragment (https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/hash/ISS.java#L72)
5. Curl(First27Chunks) is computed to get Digest0, Curl(Remaining27Chunks) is computed to get Digest1 (https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/hash/ISS.java#L85)
6. Curl(Digest0 || Digest1) is computed to get the address (https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/hash/ISS.java#L101)
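The six steps above can be sketched as follows. This is only an illustration of the structure, not the iri code: SHA3-256 stands in for Curl, bytes stand in for 243-trit chunks, and the way the index is mixed into the seed (byte concatenation) and the chunks are squeezed (counter hashing) are my simplifications.

```python
import hashlib

def H(data: bytes) -> bytes:
    """Stand-in for Curl; the real scheme works on 243-trit chunks, not bytes."""
    return hashlib.sha3_256(data).digest()

def generate_address(seed: bytes, index: int) -> bytes:
    # 2. subseed = Curl(seed + N); the index is appended here for simplicity
    subseed = H(seed + index.to_bytes(4, "big"))
    # 3. "squeeze" 54 chunks from the subseed to form the private key
    key = [H(subseed + bytes([i])) for i in range(54)]
    # 4. hash each chunk 26 times to get the public key fragments
    fragments = []
    for chunk in key:
        for _ in range(26):
            chunk = H(chunk)
        fragments.append(chunk)
    # 5. Digest0 from the first 27 fragments, Digest1 from the remaining 27
    digest0 = H(b"".join(fragments[:27]))
    digest1 = H(b"".join(fragments[27:]))
    # 6. the address is the hash of the concatenated digests
    return H(digest0 + digest1)

# Deterministic: the same seed and index always yield the same address.
assert generate_address(b"SEED", 0) == generate_address(b"SEED", 0)
assert generate_address(b"SEED", 0) != generate_address(b"SEED", 1)
```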
Bundle hash generation:
A typical bundle consists of 4 transactions going in the following order:
1. Transaction depositing AAA tokens to Alice
2. Transaction spending BBB tokens from Bob
3. Transaction spending 0 tokens from Bob (required for 2-of-2 multisig)
4. Transaction depositing (BBB-AAA) tokens to Bob
Each transaction has the essence which contains:
1. 243 trits of the address
2. 33 trits of the transacted value
3. 48 trits set to 0 (padding, a single non-zero trit would invalidate the transaction)
4. 81 trits of the tag
5. 27 trits of the timestamp
6. 27 trits of the transaction index in the bundle
7. 27 trits of the last transaction (in the bundle) index
The bundle hash is computed by absorbing all the essences in the correct order and squeezing out 243 trits.
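The field widths listed above can be tallied to get the size of one essence and of the material absorbed for a typical 4-transaction bundle (the field names are shorthand for the list above):

```python
# Field widths of a transaction essence, in trits, as listed above.
ESSENCE_FIELDS = {
    "address": 243,
    "value": 33,
    "zero padding": 48,
    "tag": 81,
    "timestamp": 27,
    "currentIndex": 27,
    "lastIndex": 27,
}
essence_size = sum(ESSENCE_FIELDS.values())
print(essence_size)          # → 486 trits per transaction essence
print(4 * essence_size)      # → 1944 trits absorbed for a 4-transaction bundle
```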
Here is an example of such a bundle:
http://iota.tips/search/?kind=transaction&hash=SOKGFAZNVJMEDT9LRODFGSTHDLOIJWSFHYHECIE9YQ9NOEAJGFFDOLKCRVGXLXBU9SMIHKEGDCRZ99999
http://iota.tips/search/?kind=transaction&hash=VETNZBBSTRUNVYVPRRZHCRTLQVG9JOOHQIPYHWWAZXZKILIOQXLLKHOIHZCSMGEPA9DG9ZPJYJLN99999
http://iota.tips/search/?kind=transaction&hash=MFIRBWASZPEZFOKRERDCCVILWIXUHSCZFUJ9NKIXFUQLQWMANHLFNOUOJIIORWMYDHIOOCHVGEUZ99999
http://iota.tips/search/?kind=transaction&hash=DSHH9TNNPJPUYZVRIBLIUTBIWSOFKVCAGYSKYWGA9LVCFULRAO9WKJFKMDFRNETDQBYVZOVCRGFS99999
The raw trytes can be seen by clicking on “Raw trytes (click to show)”
Signing and verification:
Those who spend the tokens authorize the spending by signing the bundle hash. The IOTA signing scheme is based on the Winternitz signing scheme, where the checksum is replaced with the following requirement:
If we assign the following numeric values to the trytes:
9 = 0
A = 1
B = 2
...
L = 12
M = 13
N = -13
O = -12
...
Y = -2
Z = -1
then the sum of the trytes of each 27-tryte bundle hash chunk must be equal to 0. This ensures that a change of a single tryte must be compensated by a change of some other tryte in the opposite direction, thus requiring the one-wayness to be broken.
Only 1 of 2^20 bundle hashes satisfies the requirement; this is why the final design contains a “bundle nonce” field, and some PoW must be done by incrementing the field value and recomputing the bundle hash. The current design doesn’t require that because of a technique added to weaken the signing scheme, disguised as bundle normalization.
The bundle normalization is done by splitting the bundle hash into 27-tryte chunks and incrementing or decrementing the first few trytes until the sum becomes 0 (https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/hash/ISS.java#L106).
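The tryte value table and the normalization described above can be sketched like this. The exact iri routine increments or decrements one tryte at a time; this sketch jumps in larger steps but reaches the same zero-sum invariant, so treat it as an illustration rather than the linked code:

```python
TRYTES = "9ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def tryte_value(t: str) -> int:
    """9=0, A=1 ... M=13, N=-13 ... Z=-1 (the table above)."""
    v = TRYTES.index(t)
    return v if v <= 13 else v - 27

def normalize_chunk(chunk: str) -> list:
    """Adjust the leading trytes of a 27-tryte chunk until the values sum
    to 0, a sketch of the normalization linked above."""
    values = [tryte_value(t) for t in chunk]
    total, i = sum(values), 0
    while total > 0:                         # push early trytes down ...
        step = min(total, values[i] + 13)    # ... but not below -13
        values[i] -= step
        total -= step
        i += 1
    while total < 0:                         # or push early trytes up ...
        step = min(-total, 13 - values[i])   # ... but not above 13
        values[i] += step
        total += step
        i += 1
    return values

assert tryte_value("M") == 13 and tryte_value("N") == -13
assert sum(normalize_chunk("M" * 27)) == 0   # worst case: sum 351 -> 0
```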
Each 27-tryte bundle hash chunk is signed with a separate transaction. 2-of-2 addresses sign only 2 of the 3 chunks, thus reducing the collision resistance from 243/2 trits (~384/2 bits) to 162/2 trits (~256/2 bits).
Signing and verification are done as in the classical Winternitz scheme, where each 243-trit fragment of the private key signs a single tryte of the bundle hash, thus requiring from 0 to 26 Curl invocations. Upon verification the obtained public key is hashed to get the corresponding digests, which are then hashed to be compared to the address.
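A single Winternitz chain from the scheme above can be sketched as follows. SHA3-256 again stands in for Curl, and the split of the 26 hashes into 13 - v for signing and 13 + v for verification is my assumption; the letter only fixes the 0-to-26 range:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()  # stand-in for Curl

def iterate(data: bytes, n: int) -> bytes:
    for _ in range(n):
        data = H(data)
    return data

# One Winternitz chain for a single bundle-hash tryte v in [-13, 13].
key_fragment = b"\x00" * 48                  # toy private key fragment
public_fragment = iterate(key_fragment, 26)  # 26 hashes, as in key generation

def sign_tryte(fragment: bytes, v: int) -> bytes:
    return iterate(fragment, 13 - v)         # 0..26 hashes (direction assumed)

def verify_tryte(signature: bytes, v: int) -> bytes:
    return iterate(signature, 13 + v)        # the remaining hashes to reach 26

# Verification recovers the public fragment for every legal tryte value.
for v in (-13, 0, 7, 13):
    assert verify_tryte(sign_tryte(key_fragment, v), v) == public_fragment
```

Note that anyone can hash a published signature further, turning a signature for v into one for a smaller value; the zero-sum rule above forces any such decrease to be matched by an increase in another tryte, which would require inverting the hash.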
The hash of the public key fragments (aka the digest) is used as part of a higher-level protocol protecting against an adversary who successfully breaks Curl-P-27 one-wayness. Some other techniques are used to reduce the security requirements of the IOTA signing scheme to one-wayness. All these things are planned to be outlined in a specification we are currently writing. We are still working on the final design of IOTA before revealing it to the general public, so I would appreciate it if you would keep this information private and not share it with anyone outside of this group.
Sergey
------------------------------
30th of July
------------------------------
Hi, Ethan
I'm working on the document addressing your claims.
On the 19th of July you wrote:
> [REMOVED]
During the verification of your claim that Curl’s second-preimage resistance was broken, I faced a little problem. Your example of a collision containing the lyrics of “Push It To The Limit” seems not to satisfy the requirement that (from Wikipedia) “given an input M1 it should be difficult to find different input M2 such that hash(M1) = hash(M2).”
While the lyrics might show that M1 was indeed “given” to you, “RETHT9ES9HRCUITBHVCUHOBPUUUHT9PHLUNWRWGKBKF9YUMDWRXTRVGZHFZEHGATZXZAUPGVEKNMQXFVRXHF9QJQHUTILIPIXUYRVSJEIOJDRIUVWMUABSIKIBAKENE9KVFJUEQUHFRVGELFGJIDXQARWH99XTORHXRETHT9ES9HRCUITBHVCUHOBPUUUHT9PHLUNWRWGKBKF9YUMDWRXTRVGZHFZEHGATZXZAUPGVEKNMQXFVR” in front of it ruins the demonstration. A better proof of your claim is required; could you please provide it by finding an M2 for “9ABCDEFGHIJKLMNOPQRSTUVWXYZ9ABCDEFGHIJKLMNOPQRSTUVWXYZ9ABCDEFGHIJKLMNOPQRSTUVWXYZ” used as M1?
This is important because, googling for “second preimage hash”, I found https://security.stackexchange.com/questions/69405/difference-between-second-pre-image-resistance-and-collision-resistance-in-crypt/69409, which, among other things, states:
> A second-preimage is also a collision, but we keep the concept distinct because second-preimages are supposed to be substantially harder. If the hash function has an output of n bits and is "perfect" (no known weakness), then the cost of finding a collision is 2^(n/2), while the cost of finding a second-preimage is 2^n (i.e. a lot more).
and
> If the attacker sees a valid, signed message m, then he may want to find a message m' that hashes to the same value. This is the second-preimage model. For the signature system to be robust, the hash function must provide second-preimage resistance. Collision resistance, on the other hand, is not necessary in that case.
and
> In the case of signatures, you need at least second-preimage resistance; but if the context is such that the attacker can obtain signatures on data that he chooses, then collision resistance is also needed.
In IOTA an attacker doesn’t choose the signed message; the chance of guessing it correctly is negligible.
Of course, someone might question the validity of the quoted words, but the person who answered that Stack Exchange question has a reputation score above 136,000. Judging by your words, namely:
> [REMOVED]
and your references to the opinion of other experts:
> [REMOVED]
I can safely assume that we can rely on the reputation score as a measure of correctness.
All the above makes me wonder whether we can use any excuse for the transition other than the necessity of using a well-studied and vetted cryptographic primitive.
PS: If a successful second-preimage attack on Curl-P-27 takes too much time, I think we might go with Curl-P-3 (with only 3 rounds).
Sergey
------------------------------
30th of July
------------------------------
Hi Ethan,
> [REMOVED]
Thank you for pointing me to the exact page. There are 3 definitions; could you tell me which one you used in your claim that second-preimage resistance was broken? This ought to be done so we have the same level of formalism.
> [REMOVED]
What makes you say that you can find second preimages for a non-negligible subset? From one example it makes just as much sense to assume that you can find collisions only for messages beginning with “RETHT9ES9HRCUITBHVCUHOBPUUU”; this would explain why “RETHT9ES9HRCUITBHVCUHOBPUUU” appears again 162 trytes later.
> [REMOVED]
It’s you who already chose the definition, so please provide that definition to let us verify the claim. It’s great if you have indeed found an easy way to generate collisions; I hope your approach can be extended to 3-argument Curl, thus helping us to unlock the repudiation feature. It’s the only feature which protects end-users against https://en.wikipedia.org/wiki/Key_disclosure_law (where the anonymity feature fails).
> [REMOVED]
I’m pretty sure that holds only for a spherical signature scheme in a vacuum. Quick googling returns https://en.wikipedia.org/wiki/Digital_signature_forgery#Existential_forgery_.28existential_unforgeability.2C_EUF.29, where we can see that “Nevertheless, many state-of-art signature algorithms allow existential forgery.” Unless you claim that those words are wrong, I think we are in agreement that your phrase should be read as “[REMOVED]”.
Sergey
------------------------------
31st of July
------------------------------
Hi Ethan,
> [REMOVED]
It’s near-impossible to find an expert with experience in applying cryptography to the real-world use-cases targeted by IOTA. I questioned the credibility of your statements because I had spotted a few signs of a shallow analysis (which you confirmed in yesterday’s letter with “[REMOVED]”). Your letters, titled “Responsible Disclosure: Cryptographic Weaknesses in the Curl hash function in IOTA”, sounded pretty official, and I thought I had to address everything. Now I see that I was wrong. I’m providing 20 statements which are still not fully processed; please inform me which statements (by number) still stand. All the others will be considered retracted and won’t be mentioned in the document.
#1: [Jul 15] [REMOVED]
#2: [Jul 15] [REMOVED]
#3: [Jul 15] [REMOVED]
#4: [Jul 15] [REMOVED]
#5: [Jul 15] [REMOVED]
#6: [Jul 15] [REMOVED]
#7: [Jul 15] [REMOVED]
#8: [Jul 15] [REMOVED]
#9: [Jul 15] [REMOVED]
#10: [Jul 15] [REMOVED]
#11: [Jul 19] [REMOVED]
#12: [Jul 19] [REMOVED]
#13: [Jul 19] [REMOVED]
#14: [Jul 19] [REMOVED]
#15: [Jul 22] [REMOVED]
#16: [Jul 22] [REMOVED]
#17: [Jul 22] [REMOVED]
#18: [Jul 23] [REMOVED]
#19: [Jul 30] [REMOVED]
#20: [Jul 30] [REMOVED]
> [REMOVED]
I have a feeling that you refuse to accept the existence of cryptographic protocols not mentioned in the textbooks you have read. I can imagine an EU-CMA-insecure scheme which is broken only after second-preimage resistance is broken.
> [REMOVED]
No. It’s unclear how many vulnerabilities have been found; we may need a naming convention that allows grouping several vulnerabilities together. We prefer to read the draft of your formal study before assigning the id.
Sergey
------------------------------
31st of July
------------------------------
Hi, Ethan
> [REMOVED]
I analyzed all three definitions. Two of them require the challenge to be selected randomly; the remaining one requires iterating over all possible challenges. What random selection algorithm do you recommend for verifying your claim that the second-preimage resistance of Curl-P-27 is broken?
Sergey
------------------------------
5th of August
------------------------------
Hi Ethan,
> [REMOVED]
David or Dominik will inform you about this in another letter.
> [REMOVED]
The purpose was to learn which statements were still current. You listed none, so I take it that all of them are retracted.
> [REMOVED]
Both are caused by incorrect usage of Curl (the length should be absorbed first).
> [REMOVED]
This is caused by incorrect usage of Curl (the length should be absorbed first).
> [REMOVED]
This is a design choice (fine-grainedness). 15 rounds of Curl-P have diffusion and S-box non-linearity equivalent to 3 rounds of Keccak. Unlike Keccak, Curl-P uses identical algebraic representations for all trit transitions to make all attacks as simple as possible, which makes finding the minimum number of rounds for a required security level trivial. The Curl-P state size being only 72% of the Keccak state size makes all analyses even simpler. (I’m not sure this information is really necessary; you should have noticed all this already. I provide it in case you simply fed a binary version of Curl into existing cryptanalysis software.)
> [REMOVED]
We would like to see a proof of this claim to make sure that you took the higher level protocol into account.
> [REMOVED]
The provided collisions didn’t break EU-CMA security; we are waiting for an algorithm which actually breaks it.
> [REMOVED]
You haven't supplied us with the details we requested, so they are very confused as well: your finding was a single internal collision, while you mistakenly claimed that it broke the second-preimage resistance of Curl-P-27. Your summary of the findings shows that you finally saw that too. We hesitate to inform them about your claim of breaking the EU-CMA security of IOTA signatures because we haven’t seen your proof yet and are pretty sure that you are wrong again. Could we see the draft of the paper you are planning to publish?
Sergey
------------------------------
5th of August
------------------------------
Hi,
> [REMOVED]
Here is a list I’m currently working on, is anything missing?
S01: [REMOVED]
Only a single internal collision was demonstrated, for a weakened version of Curl-P. It was to be expected, especially after a collision on a Keccak equivalent was found (http://keccak.noekeon.org/crunchy_mails/coll-r6-w1600-20170226.txt).
S02: [REMOVED]
Awaiting a proof.
S03: [REMOVED]
Incorrect usage of Curl.
S04: [REMOVED]
Incorrect usage of Curl.
S05: [REMOVED]
Incorrect usage of Curl.
S06: [REMOVED]
Incorrect usage of Curl.
S07: [REMOVED]
The same can be observed for SHA-256:
SHA256(*999999*2d224a9a45e7c8eb185971e53574024801252322d93550f4f5987addf8) = 567be78f9537b18990e32b6164fe06*999999*19b50c7a23556cc6aeb85dbe4778
SHA256(*999999*16e09f623d0cc86cf5eba24e9e1d14f86f12a6a1ea46a4fc984b46c85d) = 699ec27b2b9e4a800841bccd6a4bdd*999999*2a03cceefa6b014dc3d2edd35f60
SHA256(*999999*74d828d6d9d420acd5f1fcf44bff94b45cb6a3fe539fb20e245cbcdcea) = 6b16590e1583885741dbd77bb4d3dd*999999*33d03cb2c20d13ac7cf134e9da76
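The same point can be demonstrated with a toy search (a hypothetical example of mine; the 2-hex-digit window is deliberately tiny so the brute force finishes instantly, unlike the 6-digit windows above):

```python
import hashlib
from itertools import count

def find_matching_window(target="ff", pos=4):
    """Brute-force two inputs whose SHA-256 hex digests both contain
    `target` at the same fixed offset `pos`: fixed substrings recur in
    hash outputs purely by chance, at a rate set by the window size."""
    hits = []
    for i in count():
        digest = hashlib.sha256(str(i).encode()).hexdigest()
        if digest[pos:pos + len(target)] == target:
            hits.append((i, digest))
            if len(hits) == 2:
                return hits

(m1, d1), (m2, d2) = find_matching_window()
print(d1, d2)  # both digests carry "ff" at offset 4
```

With a 6-hex-digit window, as in the examples above, the expected search cost grows to roughly 16^6 tries per hit, which is still entirely feasible.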
S08: [REMOVED]
An example wasn’t provided.
S09: [REMOVED]
The abstract states: “We show that the Winternitz one-time signature scheme is existentially unforgeable under adaptive chosen message attacks when instantiated with a family of pseudo random functions. Compared to previous results, which require a collision resistant hash function, our result provides significantly smaller signatures at the same security level.” This shows that WOTS can be reduced to different properties. One can reduce WOTS to one-wayness by using the technique from https://pdfs.semanticscholar.org/aaf3/602cdcd40276d2239f22f2eeaf5c2d45446f.pdf.
S10: [REMOVED]
Awaiting an example of such a collision.
S11: [REMOVED]
A single internal collision for a weakened version was demonstrated. No second-preimage attack was demonstrated.
S12: [REMOVED]
According to Wikipedia, “A hash function is any function that can be used to map data of arbitrary size to data of fixed size”. It is unclear whether the difference between “arbitrary size” and “fixed length” is really significant; we are confused about what this statement could mean.
S13: [REMOVED]
Awaiting a proof with correct usages of Curl.
S14: [REMOVED]
Some information was provided, no requests for more information were received.
S15: [REMOVED]
See S09.
S16: [REMOVED]
An attack was provided later, which implies that the signature scheme had been understood. It should probably be grouped together with S14.
S17. [REMOVED]
Awaiting a proof.
S18: [REMOVED]
This is a design choice: making attacks simple allows picking the minimum number of rounds for a required security level without worrying that the attacks will become more complex in the future (the evolution of slide attacks is a good example of how this happens).
S19: [REMOVED]
The described attack can’t be applied to IOTA because it doesn’t take into account higher level protocols.
S20: [REMOVED]
These “millions of other colliding messages” cover less than a 3^(-333) fraction of all possible messages, hence they appear with negligible probability even if we generate transactions at 1 billion TPS for 1 billion years.
S21: [REMOVED]
It also depends on higher level protocols, as I have already described in previous correspondence and will elaborate on in the next points.
S22: [REMOVED]
Satoshi’s invention led to a paradigm shift in a lot of areas. Unfortunately, some cryptographers still haven’t fully embraced that and keep sticking to obsolete security assumptions. In our very case it means that the window for an attack is very small (at most until a transaction is confirmed).
S23: [REMOVED]
You use an unorthodox definition of “non-negligible”, because the odds of selecting a message which can be second-preimage attacked are below 1 in 87189642485960958202911070585860771696964072404731750085525219437990967093723439943475549906831683116791055225665627 for a single attempt. (Here I assume that you are planning to use a method based on an internal collision.)
S24: [REMOVED]
All the provided definitions show that the second-preimage resistance of Curl hasn’t been broken. Awaiting more definitions to check.
S25: [REMOVED]
Awaiting the description of a reduction of higher level protocols to collision resistance of Curl.
S26: [REMOVED]
Awaiting the description of the attack.
> [REMOVED]
Looking forward to seeing these attacks. If you are unable to break second-preimage resistance for even 27 rounds, then the versions with a larger number of rounds are certainly safe; we'll then need to reduce the number to 15+ rounds.
> [REMOVED]
Their time is expensive so we have to do some basic filtering.
> [REMOVED]
What you have provided is not applicable to IOTA’s signature scheme because you ignored higher level protocols.
> [REMOVED]
As you may know, official cooperation requires weeks of paperwork, I think you understand why we can’t reveal their names now.
> [REMOVED]
Well, I would answer honestly, but it might be considered a personal insult, so I’d like to get absolution from you in advance.
> [REMOVED]
It seems to me you didn’t receive my letter to Neha, here is the essential part:
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/TransactionValidator.java#L72
Here the collision would invalidate a transaction if it adjusted “timestamp” to an invalid value.
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/TransactionValidator.java#L77
Here the collision would invalidate a transaction if it adjusted “value” to an invalid value.
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/BundleValidator.java#L35
Here the collision would invalidate a bundle if it broke “currentIndex” sequence or “lastIndex” didn’t match the real number of the transactions in the bundle.
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/BundleValidator.java#L43
Here the collision would invalidate the bundle if it changed the transferred value.
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/BundleValidator.java#L73
Here the collision would invalidate the bundle if it changed “address” field of a multi-sig address (there are no single-sig addresses in official wallet, all addresses are 2-of-2 ones, but custom software might generate single-sig addresses).
https://github.com/iotaledger/iri/blob/dev/src/main/java/com/iota/iri/BundleValidator.java#L79
Here the collision would invalidate the bundle if it changed the spending address.
I don’t have the source code of the Coordinator on this notebook (the code is not public), but I can get it tomorrow if it’s important. The Coordinator is used as an extra protection measure. In particular, it stores all transactions that reach it; this allows us to recover iotas sent to addresses with typos (we detect them because users usually publish the addresses on the tangle before usage). During the latest snapshot we used the Coordinator to restore iotas sent to addresses generated with the previous (now obsolete) address generation scheme.
> [REMOVED]
We’d like to see how you avoid generating collisions which wouldn’t pass the verifications above. We’d also like to know whether the collisions can be generated in real time to win the propagation race against legitimate transactions.
Sergey
------------------------------
5th of August
------------------------------
Hi, Ethan
> [REMOVED]
You mean “…outside of a spherical signature scheme in a vacuum”, don’t you? In our letters we are discussing the concrete signature scheme used in IOTA.
> [REMOVED]
I can explain why these questions were asked; I just need absolution from you first (in case you were offended by my words; my limited English vocabulary makes me sound pretty blunt). Could I have it?
> [REMOVED]
We have taken these issues very seriously from day one; recall that we contacted you, among a lot of other people in the space, to review Curl several months ago. On top of this we are taking concrete action, but when it comes to these claims we want to stick to the Popperian principle of empirical verification rather than rely on belief, which belongs in the realm of religion. Again, this should simply be interpreted as us taking this very seriously and not wanting to let the devil hide in the details.
Sergey
------------------------------
5th of August
------------------------------
Hi Neha,
don't worry. From our side this is only about facts and making sure we understand each other properly. By "absolution" I am simply attempting to ensure that we indeed stick to civil discourse; as I mentioned, English is my third language, and I was getting worried that my blunt wording might come off as offensive. I believe this entire chain of letters makes it evident that we are focused on a civil, professional relationship.
PS: I’m not even sure that putting "Hi, <name>" everywhere is suitable. I wish I had taken normal English classes instead of learning the language by reading Java documentation )
Sergey
------------------------------
6th of August
------------------------------
Hi Ethan,
This morning I read an interesting blog post: https://medium.com/@avivzohar/responsible-disclosure-in-cryptocurrencies-74833fc4f211. Unfortunately, it ended abruptly; the author admitted that he didn’t know the answers to the questions he had raised. But the post triggered something in my brain, and the latter produced one scheme...
“I hope that we find clear and ethical practices that promote more secure and stable systems”: these were the very last words from that blog post. I know what should be added to the box with these practices: FORMALIZATION. Several letters back you and I were arguing about whether your findings could break second-preimage resistance; the issue was resolved after you provided those three formal definitions. I suggest applying the same method to your claim regarding the EU-CMA security of the IOTA signature scheme.
You gave me this link: http://www.cs.tau.ac.il/~canetti/f08-materials/scribe8.pdf. On page 2 we can see a formal EU-CMA definition (I’m not sure which of steps 1-6 you chose for your proof to make sense, because IOTA uses WOTS, but I assume that step 7 is preserved):
7. Adversary wins if Ver(vk, m*, s*) = accept and m* wasn't previously signed by Sign
This is the step where your attack fails because of the higher-level protocols. In particular, the failure happens in some of those lines of code that I sent to Neha and later re-sent to you. I’m pretty sure that if I manage to formalize the higher-level protocols then we’ll come to a consensus on this matter; I think the consensus is even inevitable. I’d like to know the date when you finalize your publication text; I’ll do my best to have the formalization ready a few days earlier so we can get it included into your paper. There are approximately 5 days left, right?
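For concreteness, the structure of that game, including step 7, can be sketched as a harness. This is a toy illustration only: `ToyScheme` is a stand-in (a keyed hash, not IOTA's WOTS and not a public-key scheme), and it exists purely to exercise the win condition from the lecture notes.

```python
import hmac, hashlib, os

class ToyScheme:
    """Placeholder 'signature' scheme (really a MAC) used only to
    exercise the structure of the EU-CMA game."""
    def __init__(self):
        self.key = os.urandom(32)
    def sign(self, m: bytes) -> bytes:
        return hmac.new(self.key, m, hashlib.sha256).digest()
    def verify(self, m: bytes, s: bytes) -> bool:
        return hmac.compare_digest(self.sign(m), s)

def eu_cma_game(scheme, adversary):
    queried = set()
    def signing_oracle(m):
        queried.add(m)              # steps 1-6: adaptive signing queries
        return scheme.sign(m)
    m_star, s_star = adversary(signing_oracle)
    # step 7: adversary wins iff Ver accepts AND m* was never signed
    return scheme.verify(m_star, s_star) and m_star not in queried

def replay_adversary(oracle):
    # trivially loses: merely replays a message it already had signed
    m = b"pay 100 iotas"
    return m, oracle(m)

print(eu_cma_game(ToyScheme(), replay_adversary))  # False: a replay is not a forgery
```

The point of contention in the letters maps onto the last line of `eu_cma_game`: the higher-level protocol checks would amount to extra conditions conjoined with the `verify` call.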
PS: Aviv’s blog post is really important to the cryptoplatform industry. What do you think of the idea of co-authoring a postmortem of your responsible disclosure as a continuation of Aviv’s post? We might include several quotes from our letter exchange to give the readers a real feeling of how all this happens.
Sergey
------------------------------
8th of August
------------------------------
Hi Neha,
We can’t get your bundles to validate successfully. It’s unclear whether it’s a bug in the unfinished Go version or something else; we can’t check right now because of the preparations for the snapshot. The bundles fail in the Java version because they don’t form the correct chain of transactions. The chain starts from the transaction with current index = 0; its trunk transaction should be set to the hash of the following transaction (with current index = 1), and so on. In other words, the chain of trunk transaction references must go from the first to the last transaction. The trunk transaction of the very last transaction must reference any transaction with current index = 0; “999…999” is a safe choice. Branch transactions must reference transactions with current index = 0 too.
Another issue we noticed: only one of the bundles can be confirmed (because they share some transactions), but redoing the PoW would allow confirming both if the spending address balance held enough money. In any case, the original variant doesn’t allow triggering a network consensus divergence because of the Coordinator.
> 1 [REMOVED]
Yes.
> [REMOVED]
No. The chain of references should be fixed, it’s easy.
> [REMOVED]
What are the mean and the variance of this process? A single number is not very helpful for assessing the probability of finding conflicting bundles while the window for an attack is open.
Sergey
------------------------------
9th of August
------------------------------
Maybe it's because it's the middle of the night here, but I see strange things:
1. Both transaction sets seem to be identical
2. The declared bundle hash is PMII9EP9GYVEEOHAYKW9N9WWHABYMK9IDOMEK9LTHNOJAEKACUDEQA9HRXRVNVRSZAWC9HOW9GNMLGXUK while the actual bundle hash is LAFLFDFYMSWKWHBEXYHQ9AFJATGGXR9MKF9ZXHGMZ9TSJJTFHZAAHLNSWBEJFYBADYPHQCCIAUSJQJDME
3. The trunk transaction fields reference transactions generated with minWeightMagnitude 18 (15 would be enough) while the actual transaction hashes are generated with MWM ~13
Do you see the same?
The following nodes are on the testnet (mainnet is already upgraded): udp://52.209.174.231:14800, udp://52.211.185.237:14800, udp://52.211.201.40:14800, udp://52.214.207.109:14800 (no mutual tethering). The testnet works with a dummy Coordinator but it's still useful for debugging.
If most of the bundles are generated within 2 minutes then there is no need to measure the variance: we can assume that with 100x more cores you'll get a delay of the same order of magnitude as the propagation time; you just need to make sure that your algorithm is parallelizable to that degree. If you can't outpace a legitimate transaction on its way to the Coordinator then you have to conduct an eclipse attack first. Ethan has experience with that.
Sergey
------------------------------
9th of August
------------------------------
Hi Neha,
minWeightMagnitude should be set to 15 for the mainnet; it means that the last 15 trits of a transaction hash must be 0s (i.e. the last 5 trytes must be 9s). I checked the bundles; the transaction hashes are
9Z9OZXAUCGRRYLLTNLJDALTQAFGRWACPVEIPOAPVV9WAIYZQIWQDUHXWITFLZMJEEQEWYBMBNAWZX9999
UOFMPUZZOVYPAFYLEHKVI9T9WMMVDTE9TLDK9QXPXHY9ZNFNLLCVPRNF9YJTSYPNUEWPZBBKEDJLW9999
BCAXBUCQ9VQNPEFWSXXNHWNFJZYXZDYKTHTWJYEPHQYCZHNZUQQYNXIYAPKDKCICZNOMEYERJSIBY9999
ZKDXLQTHOMLPCCSRGDYBBF9EYMOCIFEIXINNF9RENSEJUPB9UKVHVXPCKQMQUNWUWWWLBPGNJTWTD9999
and
JKFQJQFHCYQFPQTCWOZEWML9HDWYMZVMEKJWRBNDUSSEZIVUF99SLJNZYJX9KRD9TLOI9ALWEOKKW9999
GCSRXZAQMAMNDRYTAHXAUWYIPOCRXJJEDCUULSTREL9IKSP9YEAWONRDVEMXUQVPVQ9WTGPFJXQZZ9999
NMYQYBDWOFMY9CSPQTASNAPRDLZSHRWZNOENWGPLICFPXFKLUCMCMCJ9PFDEKFIBQGFQQCPZDBERZ9999
HGJDFDGW9LXYHL99FOBOSOBUNIVVQKVDQZDJD9DTFGIIAFEOFRXHQ9DNSGJ9FPLVPYE9ZCDJYIVSC9999.
But according to the chain of trunk transaction references they must be
<ANY>
WALUYLVJRDESKVWYOHVTHTB9XSWCEJGPWIWGR9OMYCDLTVQJVUJQVLD9XHHQKSRPRRWGEORCFGGKZ9999
QSAQTCBJDVBZYAUZHQZVJOZUTQUNREOFBDHHJPPXCKWMBONLXVPDWJLMVPMITLFIIHXBDLPVXGGE99999
EMBSIPCFGAVCZPX9BMVCMBXJXYDWJC9BDWZWUMFSCTMACUVHLJQZNAQXYDBPAUXNZIYIAELVFTNRY9999
and
<ANY>
VTJGPEDIALNEUCFULFSIGQMSFBSDYUVKXVYYNCCGHAHPQTVFRYCDI9TORPCBLZC9ULSWVGICTKGYD9999
CYMX9ELXYX9WCTHF9OAMBGEQDARMDGCRIBBEACV9ZLYVZNTUCEZXPHRKQNHQRHB99BSOXZTJLKQXB9999
GOSPZKDMTSBXYDX9PEZWKIPKZWHNZHNLLMJDNDZRTDE9ENTFJFTKVBQAAVCKNPE9YPXQLWWVXNMOC9999.
In the general case, if we have a bundle with transaction hashes AAA, BBB, CCC and DDD, these transactions should look like this:
Tx AAA references BBB in “trunkTransaction” field
Tx BBB references CCC in “trunkTransaction” field
Tx CCC references DDD in “trunkTransaction” field
Tx DDD references 999…999.
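The two structural rules from this letter (the trunk-reference chain and the trailing-9s form implied by minWeightMagnitude = 15) can be sketched as checks. The tuple layout here is illustrative, not the exact IRI data model:

```python
def trunk_chain_ok(bundle):
    """bundle: list of (tx_hash, trunk_hash) pairs ordered by current
    index. Tx i must reference the hash of tx i+1 in its trunk field;
    the final trunk may reference any index-0 transaction (the all-9s
    hash is used here as the safe choice mentioned above)."""
    for (_, trunk), (next_hash, _) in zip(bundle, bundle[1:]):
        if trunk != next_hash:
            return False
    return bundle[-1][1] == "9" * 81

def mwm_ok(tx_hash, min_weight_magnitude=15):
    """15 zero trits at the end of the hash means the last 5 trytes
    are '9' (each tryte encodes 3 trits)."""
    return tx_hash.endswith("9" * (min_weight_magnitude // 3))

# The AAA -> BBB -> CCC -> DDD example from the letter:
bundle = [("AAA", "BBB"), ("BBB", "CCC"), ("CCC", "DDD"), ("DDD", "9" * 81)]
print(trunk_chain_ok(bundle))  # True
```

A bundle whose trunk references point anywhere else fails `trunk_chain_ok`, which is exactly why the submitted bundles were rejected by the Java validator.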
Sergey
------------------------------
10th of August
------------------------------
These bundles look identical to the previous ones.
------------------------------
11th of August
------------------------------
They look identical to the previous ones.
------------------------------
11th of August
------------------------------
Hi Neha,
You provided 3 distinct pairs of bundles, and they all failed validation. While the latest pair contains elements which differ in a single tryte, it is identical to the pairs from the previous letters. To explain my point better, I prepared a list of the bundle pairs and their corresponding SHA-256 hashes. The hashes were generated by concatenating all the trytes without line breaks and feeding the result into http://www.xorbin.com/tools/sha256-hash-calculator. The pairs go in chronological order.
BUNDLE0 SHA-256 hash = 9a3288b913d1c7d7e3d7135fe21b4852ec9512bf55ee9047ebaf25349c4101b4
BUNDLE1 SHA-256 hash = 008123e1aace3cf54987ffcbeccad6a62a5fce20a3bc322f4898ce5eec0c9da5
bundle0 = 055b6c0a16492f8f95a12ab737d0253833706e6f7505e4997e0497198adbb0b1
bundle1 = 055b6c0a16492f8f95a12ab737d0253833706e6f7505e4997e0497198adbb0b1
BUNDLE0 = dfbb8f968a739df4fc5bf810defe1536f3dd78b2184aefcca37e1de55aa2af54
BUNDLE1 = e48567d8d743b2f3bb17a0c303dc6a510678cc5f311f2a3f3eeef0fafa585d90
BUNDLE0 = dfbb8f968a739df4fc5bf810defe1536f3dd78b2184aefcca37e1de55aa2af54
BUNDLE1 = e48567d8d743b2f3bb17a0c303dc6a510678cc5f311f2a3f3eeef0fafa585d90
BUNDLEn0 = dfbb8f968a739df4fc5bf810defe1536f3dd78b2184aefcca37e1de55aa2af54
BUNDLEn1 = e48567d8d743b2f3bb17a0c303dc6a510678cc5f311f2a3f3eeef0fafa585d90
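The fingerprints above can also be reproduced locally instead of via the web calculator; a minimal sketch of the same procedure (concatenate all tryte lines without line breaks, then SHA-256):

```python
import hashlib

def bundle_fingerprint(trytes_lines):
    """SHA-256 over all tryte lines joined without line breaks, as
    used for the comparison list above."""
    return hashlib.sha256("".join(trytes_lines).encode()).hexdigest()

# Identical bundles necessarily produce identical fingerprints,
# regardless of how the trytes were split into lines:
print(bundle_fingerprint(["ABC", "DEF"]) == bundle_fingerprint(["ABCDEF"]))  # True
```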
Best,
Sergey
------------------------------
11th of August
------------------------------
> [REMOVED]
It was the other way around: we had a chain of trunk transaction references, and from it we derived the correct transaction hashes. Nothing referenced b[0], hence it could have any hash.
> [REMOVED]
The chain of trunk transaction references now allows constructing the complete bundles. I assume that the signatures are correct; they won’t be accepted by the Coordinator because it was upgraded, but the old version would have accepted them if the bundles had gone through the anti-DoS routine.
------------------------------
12th of August
------------------------------
Hi Neha,
Below is a quantitative analysis of the demonstrated collisions. It addresses the claims that second-preimages can be found with non-negligible probability and that full state collisions can be found. Precisely speaking, the odds of second-preimage collisions are shown to be below one in a million. As for the full state collisions, they are actually just by-products of internal state collisions; we should probably agree on formal definitions. In any case, please reply if you disagree with anything in the analysis.
I don’t provide a quantitative analysis of Curl-P randomness; it would take a lot of time to show that output changes caused by input changes have a binomial distribution, and before starting I’d like to know when you are planning to publish Ethan’s findings.
PS: You may find our analysis not sophisticated enough. This is because Curl-P is very simple; solving a system of equations, as Ethan did, is overkill.
----------
A typical bundle transferring value consists of 4 transactions:
Transaction #0: A deposit of AAA iotas to a recipient's address
Transaction #1: A withdrawal of BBB iotas from a sender's address
Transaction #2: A withdrawal of 0 iotas from the sender's address
Transaction #3: A deposit of (BBB-AAA) iotas to a sender's address (the change)
To assess the percentage of the bundles which can have a single trit changed
in the "address" field without the bundle hash changing as well, we should
pay attention to the function F(A, B), which is the main part of the S-box.
The truth table of F() is:
| - | 0 | + | <- A
----------------
- | + | 0 | - |
----------------
0 | + | - | 0 |
----------------
+ | - | + | 0 |
----------------
^ \
| \
B F(A, B)
As we see, if A = -1 then flipping B between -1 and 0 doesn't change F().
The same is true if A = 1 and B = 0 / 1.
Another observation:
By changing a single trit (input) of the Curl-P state at the beginning of a
round we can change no more than 2 trits (output) at the end of the round.
The changed output trits depend on the changed input trit and on 2 other
input trits which are kept unchanged.
Let's denote the first changed output trit as ALPHA and the second changed
output trit as BETA. Let's denote the changed input trit as XXX and the
remaining 2 input trits as C1 and C2 (they are constants).
We can write that:
ALPHA = F(XXX, C1)
and
BETA = F(C2, XXX)
Exhaustive search over all possible combinations of input values (27) and
all possible ways to change XXX (just 2: increment and decrement) gives 54
variants. Only 12 of them lead to a change of a single output trit; the
remaining 42 variants change 2 output trits.
Note that none of the variants keeps all output trits unchanged, which shows
that a full state collision within a single transform() invocation is
impossible for any number of rounds.
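The exhaustive search is small enough to replay directly. The lambda below encodes exactly the truth table printed above; the compact T[a + 3b + 4] indexing is my own encoding of that table, not code taken from the Curl-P sources:

```python
# Truth table of F(A, B) from the table above, flattened row by row:
T = [1, 0, -1, 1, -1, 0, -1, 1, 0]
F = lambda a, b: T[a + 3 * b + 4]   # a, b are trits in {-1, 0, 1}

TRITS = (-1, 0, 1)
single_change = both_change = 0
for x in TRITS:                     # XXX, the changed input trit
    for c1 in TRITS:                # constant: ALPHA = F(XXX, C1)
        for c2 in TRITS:            # constant: BETA  = F(C2, XXX)
            for x2 in TRITS:
                if x2 == x:         # 2 ways to change XXX per combo
                    continue
                changed = (F(x2, c1) != F(x, c1)) + (F(c2, x2) != F(c2, x))
                assert changed > 0  # no variant keeps both outputs intact
                if changed == 1:
                    single_change += 1
                else:
                    both_change += 1

print(single_change, both_change)  # 12 42
```

The assert in the inner loop is the "no full state collision in a single round" observation: ALPHA = F(XXX, C1) changes for every change of XXX, so at least one output trit always changes.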
Out of those 12 variants none includes a combination with the internal state
containing only zeros; this means that the address of Transaction #0 can't
be changed. Transactions #1 and #2 can't be changed without failing the
signature verification in most cases. Transaction #3 is the only option left.
So, we have 243 trits of an address from Transaction #3. Our goal is to
change a single trit in such a way that the avalanche of propagated changes
affects only the first 243 trits of the Curl-P state. This trick allows
generating a collision because the next absorbed 243 trits
(value+timestamp+etc.) will overwrite the changes anyway. To simplify the
analysis we'll make the following trade-off, based on the observation that
a single change in a typical input to Curl-P requires 11-15 rounds for full
diffusion to happen (full diffusion leads
to ~2/3 of the output trits being changed):
We reduce the number of rounds from 27 to 13 (=(11+15)/2) and allow a single
output trit change to happen anywhere in the state.
We model our attack as a game consisting of 13 rounds. In a single round we
win with probability 12/54. To win the game, all 13 rounds must be won. The
probability of winning the game by changing a single trit out of the 243
trits of an address is
(12/54)^13 or ~0.0000000032.
For a single address we can attempt 243 games. The probability of winning in
at least 1 game in this case is
1 - ProbabilityOfLosingInAllGames
or
1 - (1 - (12 / 54) ^ 13) ^ 243
or
~0.00000078
according to www.wolframalpha.com.
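The arithmetic of the model can be checked directly (Python here instead of Wolfram|Alpha):

```python
p_round = 12 / 54                  # probability of winning one round
p_game = p_round ** 13             # all 13 rounds must be won
p_any = 1 - (1 - p_game) ** 243    # at least 1 win across 243 games

print(f"{p_game:.2e}")  # ~3.2e-09, matching the ~0.0000000032 above
print(f"{p_any:.2e}")   # ~7.8e-07, matching the ~0.00000078 above
```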
The physical meaning of these results is the following:
In the long run we need to process 1 million bundles (or more than 4 million
transactions) to conduct the attack successfully.
----------
Sergey
------------------------------
13th of August
------------------------------
Hi Ethan,
It's been a long time since we've heard anything from you. We thought you were busy with other projects, so we analyzed your claims regarding the EU-CMA security of the IOTA signature scheme without distracting you. At this point we believe we have found a mistake (in your chain of thought) which led to the false claim that EU-CMA security was broken.
In one of the letters you linked us to http://www.cs.tau.ac.il/~canetti/f08-materials/scribe8.pdf as your choice of EU-CMA definition. We were confused by the fact that you preferred such an oversimplified definition, which barely has any use outside of teaching students the basics of cryptography, but later we realized that you provided that definition assuming that we didn’t have cryptographers on our team (thank you for your attempts to make everything as clear to us as possible). We don’t know which definition you prefer to use, but it very likely has an equivalent of this fragment:
----------
The verification algorithm is deterministic, that is, for any vk, m, s:
Prob[Ver(vk, m, s) = accept] ∈ {0, 1}
In other words, it is not possible that the verification algorithm accepts a signature in one run but
rejects it in another.
----------
This is where your and our points of view diverge.
As we wrote, Satoshi’s invention led to a paradigm shift in cryptography (to some degree; some cryptographers still resist this change, refusing to think outside of the box), and this shift forces us to critically review old definitions and to create new ones more suitable to our needs. If you remember, our team is not a big fan of “appeal to authority”; still, the following words from “Constructing Cryptographic Definitions” by Phillip Rogaway (http://web.cs.ucdavis.edu/~rogaway/papers/iscisc.pdf) should be repeated:
“I would emphasize that definitions are not written in stone. They emerge, change, and die out far more often than people imagine. They are part of a dialectic within a community.”
This is about our very case.
The signature verification oracle in IOTA is not deterministic. While it doesn’t reject signatures which were accepted, it can reject a signature at one moment and then accept it later. The oracle runs inside Coordinator, its reports are published on the tangle as milestones.
It’s worth noting that the approach of relying on a public ledger for signature verification is not unique to IOTA. The idea was explored by several people, e.g. in “MAVE, New Lightweight Digital Signature Protocols for Massive Verifications” by Sergio Demian Lerner (https://bitslog.files.wordpress.com/2012/04/mave1.pdf) and “Fawkescoin, A cryptocurrency without public-key cryptography” by Joseph Bonneau and Andrew Miller (http://www.jbonneau.com/doc/BM14-SPW-fawkescoin.pdf). We are sure you won’t question the expertise of the mentioned people, so we ask you to provide a definition of EU-CMA security that can be applied to MAVE and Fawkescoin; after that, we’ll use it to verify your claim of IOTA’s EU-CMA security being broken.
PS: If you find time for something outside the main topic of these letters, we would be happy to discuss the implications of blockchains for old cryptographic algorithms like the one described in "Constructing digital signatures from one-way functions" by Leslie Lamport.
Sergey
------------------------------
2nd of September
------------------------------
Hi,
> [REMOVED]
One of the bundles is not valid, but it shouldn’t be hard to fix it, just write the correct bundle hash into the corresponding field and redo the PoW.
We see you used tx4 to “decouple” probabilities of finding inner collisions for tx3 and tx5. Does this mean you will retract the claim of Curl-P not being pseudo-random?
> [REMOVED]
Perhaps it will be easier to get the full overview if you send over the draft you intend to publish. Seeing it in advance would also help to reduce the gap between your publication and ours. Any ETA on your publication, by the way?
Sergey
------------------------------
6th of September
------------------------------
Hi,
Here are some corrections and suggestions for your paper:
> [REMOVED]
“Curl” should be replaced with “Curl-P”. “Cryptographic” should be removed because Curl-P was designed to allow practical collisions.
> [REMOVED]
A better definition is required; under the one you use, EU-CMA security can be broken after just 10 queries to the signing oracle, which strips your "attack" of any value. The definition also doesn’t take into account the existence of distributed/decentralized ledgers.
> [REMOVED]
This claim is wrong because the attacker can’t make a user sign an arbitrary hash.
> [REMOVED]
This reminded me of a question I wanted to ask some time ago: how did it happen that a RESPONSIBLE disclosure led to tens of people knowing details which were supposed to be kept secret?
> [REMOVED]
Should be changed to “The hash function Curl-P designed by the IOTA project has properties used as protection against scam copycats.”
> [REMOVED]
The “different length” part should be removed; that was incorrect usage of Curl-P caused by the absence of complete documentation.
> [REMOVED]
This part should be removed completely; that was incorrect usage of Curl-P caused by the absence of complete documentation.
> [REMOVED]
Are you planning to provide them in the near future? We’d like to reference them to show that illegal copies of IOTA software are vulnerable to attacks.
> [REMOVED]
This statement is wrong; some of our letters were probably not received by your team. Could we do some kind of comparison to find the discrepancies?
> [REMOVED]
“Practical” should be removed, and the probability of success should be mentioned. If I recall correctly, it’s something close to “one in a trillion”.
> [REMOVED]
That example lacks an explanation of the networking part of the attack scenario.
> [REMOVED]
“Trading” should be replaced with “deposits/withdrawals”.
> [REMOVED]
That upgrade was about other things. The migration to Kerl was included only because it had been planned more than a year ago that we would upgrade if a cryptographer found signs of the copy-protection mechanism before the final Curl was ready.
> [REMOVED]
Should be removed completely because, as we already wrote, both examples were based on incorrect usage of Curl-P.
> [REMOVED]
You used an incorrect definition of EU-CMA security; this should probably be emphasized.
> [REMOVED]
The payment was valid only syntactically; without the networking part of the attack scenario we can’t say that the payment was valid.
> [REMOVED]
It should be mentioned that the probability of the “certain circumstances” is negligible.
The above covers only the first 2 sections. Let’s first come to a consensus on them and then move on to the remaining part.
PS: I know that David and Dominik also have some commentary that they will include tomorrow, for example, the timeline regarding Ethan being approached by the IOTA team in May to do precisely this but declining due to working on the Paragon Foundation. For us it is very important that everything is transparent and correct in this.
Sergey
------------------------------
7th of September
------------------------------
> [REMOVED]
Do you mean “Creating a new cryptographic hash function is no trivial undertaking”? It’s about “a function”, not about “the function”. That blogpost is not a formal whitepaper, so replacing “Curl-P” with “Curl” for better readability is justified.
> [REMOVED]
By using that definition, one can break EU-CMA security simply because IOTA signing is based on a one-time signature scheme. The extra construction requiring differential cryptanalysis is unnecessary in this case and ought to be excluded according to Occam’s razor. You also ignored the existence of https://bitslog.files.wordpress.com/2012/04/mave1.pdf and http://www.jbonneau.com/doc/BM14-SPW-fawkescoin.pdf, which shows that your choice of definition is not very valuable from an academic point of view.
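To make the point concrete, here is a toy sketch (not IOTA’s actual Winternitz-style scheme over trits, but a simplified Lamport-style one-time signature over 16-bit messages; all names are hypothetical) showing why a security game that allows repeated signing queries under one key is trivially lost by *any* one-time scheme, regardless of which hash function is used:

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

N = 16  # toy message width in bits

def keygen():
    # One secret preimage per (bit position, bit value) pair.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(N)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def bits(m: int):
    return [(m >> i) & 1 for i in range(N)]

def sign(sk, m: int):
    # Each signature discloses one secret preimage per bit position.
    return [sk[i][b] for i, b in enumerate(bits(m))]

def verify(pk, m: int, sig) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(m)))

sk, pk = keygen()

# Honest one-time use: a single signature verifies.
assert verify(pk, 0x0000, sign(sk, 0x0000))

# Chosen-message queries with a REUSED key: two signatures on
# complementary messages together reveal every secret preimage.
revealed = {}
for m in (0x0000, 0xFFFF):
    sig = sign(sk, m)
    for i, b in enumerate(bits(m)):
        revealed[(i, b)] = sig[i]

# The adversary can now forge a valid signature on any fresh message.
forged_msg = 0x1234
forgery = [revealed[(i, b)] for i, b in enumerate(bits(forged_msg))]
assert verify(pk, forged_msg, forgery)
print("forgery verifies:", verify(pk, forged_msg, forgery))
```

The forgery needs no cryptanalysis of `H` at all; it only exploits key reuse, which is exactly why an EU-CMA game permitting multiple signing queries says nothing interesting about a one-time scheme.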
> I [REMOVED]
A “practical attack” is something that could be conducted in practice. I don’t know your expertise in Cryptography, so right now it’s hard for me to find a proper example explaining my words.
> [REMOVED]
Numerous people, ranging from students we coincidentally work with from a completely different region of the country (with no cryptographic or security accolades) to Paragon Foundation's chats to peripheral researchers in Germany, have all independently disclosed this. The first group came forward merely a day after your "responsible disclosure".
> [REMOVED]
Is Aviv Zohar on the list? He somehow knew about it (despite not being a cryptographer), and I’m pretty sure that happened before the update.
> [REMOVED]
Do you mind if we publish relevant fragments of our letter exchange and just let the readers decide who is right and who is wrong?
> [REMOVED]
Why did you use it that way despite being informed by us that it must be used in another way?
> [REMOVED]
Why did you use it that way despite being informed by us that it must be used in another way?
> [REMOVED]
The IOTA signing scheme doesn’t rely on collision resistance.
> [REMOVED]
Great. Please don’t forget to mention that not all payments can be “attacked” that way, but only “one in a trillion” (perhaps you have a more precise assessment).
> [REMOVED]
I believe I already sent you a letter explaining that the Coordinator is used as an integral part of the verification oracle. Now you see that your oversimplified definition of EU-CMA security is not only useless but also harmful, because it leads to incorrect interpretations, don’t you?
> [REMOVED]
The statement implies the existence of a vulnerability. This is something we disagree with.
> [REMOVED]
They are useful only as examples of what should NOT be done. Would it be hard for you to add that the collisions and non-pseudo-randomness are caused by an incorrect usage of Curl-P? What definition of pseudo-randomness did you use?
> [REMOVED]
Do you mind if we publish relevant fragments of our letter exchange and just let the readers decide who is right and who is wrong?
> [REMOVED]
Could you give the order of magnitude of that probability?
Sergey
------------------------------
7th of September
------------------------------
Hi Neha,
I think that if your team fixes all the issues in Ethan’s findings, there will be nothing left to publish. One day is not enough for you to extend the “attacks” into something possible in practice, so let’s just continue our dispute in public.
I'm still expecting an explanation of how a RESPONSIBLE disclosure led to numerous people outside of our teams knowing the details before the update was even scheduled. This is especially important taking into account your own words:
> [REMOVED]
Personally, I just can't tell whether you knew from the very beginning that the "vulnerability" found by Ethan wasn't critical, or whether your disclosure wasn't actually responsible, given how many independent people, most of whom are not cryptographers or security researchers, have reached out to us about it (and that was only a part of those who had known about Ethan's findings).
Sergey