AdaCore offers their own implementation of MMap() support in GNATColl. However, IMHO their item is an atrocity, in many ways very similar to their GNAT Sockets pile of garbage (the near-unusability of the latter is what prompted me to write AdaUDP in 2018.) AdaCore's MMap library is not only a behemoth replete with e.g. special cases for MSWin support, but its use is entirely incompatible with safety-restricted compilation profiles.
Cryostat, on the other hand, does NOT require enabling the use of pointerism, unchecked conversions, the secondary stack, heap allocators, or other bulky and objectionable GNAT features, in the calling program. It does, however, require finalization to be enabled. This is used to guarantee the safe sync-to-disk and closing of the backing MMap when the data structure it contains goes out of scope.
Let's proceed to building Cryostat and its included demo program.
You will need:
Add the above vpatch and seal to your Vset, and press to cryostat_genesis.kv.vpatch.
Now compile the included CryoDemo:
cd demo
gprbuild
... this will build both the demo and the library.
But do not run it quite yet.
First, let's see what this demo consists of :
with Interfaces; use Interfaces;
with ada.text_io; use ada.text_io;
with Cryostat;

procedure CryoDemo is

   -- Path on disk for the example Cryostat backing file :
   File_Path : constant String := "cryotest.bin";

   -- Now, let's define an example data structure to place in a Cryostat :

   -- Example payload array's element type: byte.
   subtype ADatum is Unsigned_8;

   -- Let's make it 512MB - far bigger than a typical stack, to demonstrate
   -- that it will in fact reside in the Cryostat, rather than on the stack :
   A_MBytes : constant Unsigned_32 := 512;

   -- Example payload: an array.
   subtype ARange is Unsigned_32 range 0 .. (A_MBytes * 1024 * 1024) - 1;

   -- Complete the definition of the payload data structure :
   type TestArray is array(ARange) of ADatum;

   -- Declare a Cryostat which stores a TestArray :
   package Cryo is new Cryostat(Form     => TestArray,
                                Path     => File_Path,
                                Writable => True,   -- Permit writing
                                Create   => True);  -- Create file if not exists

   -- Handy reference to the payload; no pointerisms needed !
   T : TestArray renames Cryo.Item;

   -- T can now be treated as if it lived on the stack :

begin

   Put_Line("T(0)    before : " & ADatum'Image(T(0)));
   Put_Line("T(Last) before : " & ADatum'Image(T(T'Last)));

   -- Increment each of the elements of T :
   for i in T'Range loop
      T(i) := T(i) + 1;
   end loop;

   Put_Line("T(0)    after  : " & ADatum'Image(T(0)));
   Put_Line("T(Last) after  : " & ADatum'Image(T(T'Last)));

   -- Optional, finalizer always syncs in this example
   -- Cryo.Sync;

   -- Test of Zap -- uncomment and get zeroized payload every time :
   -- Cryo.Zap;

   Put_Line("OK.");

end CryoDemo;
In the demo, we define TestArray -- a data structure consisting of a 512-megabyte array -- and invoke Cryostat to create a persistent disk store for it. (When the program is first run, the array -- instantiated as T -- will contain only zeros.) After this, we increment each byte in T, and terminate. When, in the end, T goes out of scope, the finalizer kicks in and properly syncs the payload to disk. Thus, T behaves exactly like a stack-allocated variable, except that its contents are loaded from disk upon its creation (on the second and subsequent runs of the program) and synced to disk upon its destruction (or whenever Sync is invoked.)
Observe that the calling code is not required to perform any file-related manipulations, or to juggle memory; all of the necessary mechanisms (including error handling) are contained in the Cryostat static library.
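The mechanics which Cryostat automates can be sketched in a few lines of Python (an illustrative stand-in, not part of Cryostat; the names PATH, SIZE, and run_once are invented for this sketch). It walks the same open / ftruncate / mmap / increment / msync / munmap sequence as CryoDemo, merely without the type safety or the automatic finalization:

```python
import mmap, os

PATH = "cryotest_demo.bin"  # hypothetical backing file (stand-in for cryotest.bin)
SIZE = 1024                 # small payload, strictly for illustration

def run_once():
    """One 'run of the demo': map the backing file, read the first and
       last bytes, increment every byte, sync, and unmap."""
    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o666)
    try:
        os.ftruncate(fd, SIZE)                   # size the backing file
        m = mmap.mmap(fd, SIZE, access=mmap.ACCESS_WRITE)
        try:
            first, last = m[0], m[SIZE - 1]      # 'T(0)' and 'T(Last)' before
            for i in range(SIZE):                # increment each element
                m[i] = (m[i] + 1) % 256
            m.flush()                            # msync: push changes to disk
            return first, last
        finally:
            m.close()                            # munmap
    finally:
        os.close(fd)
```

On the first run this returns (0, 0) and leaves behind SIZE bytes of 0x01; on the second, (1, 1); and so forth -- the behaviour of the demo's cryotest.bin, writ small.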
When we first execute the demo:
./bin/cryodemo
The following output will appear:
T(0) before : 0
T(Last) before : 0
T(0) after : 1
T(Last) after : 1
OK.
If we run it again, we will see the following:
T(0) before : 1
T(Last) before : 1
T(0) after : 2
T(Last) after : 2
OK.
... and so forth. cryotest.bin, the backing file used by the Cryostat in the demo, will consist of 512 megabytes of byte value N, where N is the number of times the demo has executed. For example, after the first run, a hex dump:
hexdump -C cryotest.bin
... will yield:
00000000  01 01 01 01 01 01 01 01  01 01 01 01 01 01 01 01  |................|
*
20000000
Let's use the traditional strace tool to confirm that the demo behaves as specified:
strace ./bin/cryodemo
The following output will appear:
execve("./bin/cryodemo", ["./bin/cryodemo"], [/* 84 vars */]) = 0
arch_prctl(ARCH_SET_FS, 0x644798) = 0
set_tid_address(0x6447d0) = 3660
rt_sigprocmask(SIG_UNBLOCK, [RT_1 RT_2], NULL, 8) = 0
rt_sigaction(SIGABRT, {0x41c360, [], SA_RESTORER|SA_RESTART|SA_NODEFER|SA_SIGINFO, 0x42498c}, NULL, 8) = 0
rt_sigaction(SIGFPE, {0x41c360, [], SA_RESTORER|SA_RESTART|SA_NODEFER|SA_SIGINFO, 0x42498c}, NULL, 8) = 0
rt_sigaction(SIGILL, {0x41c360, [], SA_RESTORER|SA_RESTART|SA_NODEFER|SA_SIGINFO, 0x42498c}, NULL, 8) = 0
rt_sigaction(SIGBUS, {0x41c360, [], SA_RESTORER|SA_RESTART|SA_NODEFER|SA_SIGINFO, 0x42498c}, NULL, 8) = 0
sigaltstack({ss_sp=0x644a80, ss_flags=0, ss_size=16384}, NULL) = 0
rt_sigaction(SIGSEGV, {0x41c360, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_NODEFER|SA_SIGINFO, 0x42498c}, NULL, 8) = 0
fstat(2, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 5), ...}) = 0
fstat(0, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 5), ...}) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 5), ...}) = 0
open("cryotest.bin", O_RDWR|O_CREAT, 0666) = 3
ftruncate(3, 536870912) = 0
mmap(NULL, 536870912, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0x7f3bcc575000
writev(1, [{"", 0}, {"T(0) before : 0\n", 21}], 2) = 21
writev(1, [{"", 0}, {"T(Last) before : 0\n", 21}], 2) = 21
writev(1, [{"", 0}, {"T(0) after : 1\n", 21}], 2) = 21
writev(1, [{"", 0}, {"T(Last) after : 1\n", 21}], 2) = 21
writev(1, [{"", 0}, {"OK.\n", 4}], 2) = 4
msync(0x7f3bcc575000, 536870912, MS_SYNC) = 0
munmap(0x7f3bcc575000, 536870912) = 0
close(3) = 0
exit_group(0) = ?
+++ exited with 0 +++
There are a few minor knobs that still ought to be added to Cryostat (See README.TXT) but even as it presently stands, it is already sufficient for basic experimentation with clean and compact databases implemented wholly in Ada.
~ To Be Continued. ~
You will need:
Add the above vpatches and seals to your Vset, and press to ffa_ch21a_bis_fix_ch15_gcd.kv.vpatch.
You should end up with the same directory structure as previously.
As of Chapter 21A-Bis, the versions of Peh and FFA are 250 and 200, respectively.
Now compile Peh:
cd ffacalc
gprbuild
But do not run it quite yet.
This Chapter repairs a lethal bug in Chapter 15's "Constant-Time Iterative GCD" algorithm and its implementation.
While working on Ch. 21B, I found that the algorithm used in Ch. 15 is subtly broken on a very small but readily-constructible subset of its input domain.
My original (unpublished -- it was the nominal answer to Ch. 15 Exercise 2...) proof for this algo was fallacious: the calculation as formulated there is superficially correct, but it is not guaranteed to converge in Bitness(A) + Bitness(B) iterations! It is possible to construct pairs of inputs where we end up with an incorrect GCD result, and we will discuss several examples of these in this Chapter.
Cut corners are to be paid for, and I will admit that I would rather pay for this one now, with a Vpatch, a new proof, and an apology to readers, than later -- when I put FFA to use in anger, and offer BTC bounties for reporting defects.
Several supposedly-attentive FFA students at various times reported that they had eaten Ch. 15; and at least one of last year's readers even signed it. However, no one reported the defect, which I ended up uncovering while testing the final version of the Ch. 21 Extended-GCD against Ch. 15's conventional one. The breakage is triggered by a small subset of the possible input space where one or both input FZs to GCD consist almost entirely of ones (i.e. most of the bits are set.) No such case turned up in Ch. 15's randomly-generated test battery, reinforcing Dijkstra's famous observation that testing can reveal the presence of bugs, but never their absence.
Only proof can be relied on, and the proof had better be correct.
Let's proceed with two concrete examples of inputs which break the Ch. 15 GCD:
Ch. 15 GCD Counter-Example No. 1:
This Tape:
.FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBB
.FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFB
G
#
... produces 4, when fed into a 256-bit FZ run of Peh. But the two numbers are in fact coprime, i.e. their actual GCD is 1.
Click here to see what happens when this Tape runs.
And this Tape:
Ch. 15 GCD Counter-Example No. 2:
.FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEB
.FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFB
G
#
... produces 0. Which is not only not the correct answer (again 1; the given numbers are yet again coprime) but is in fact an impossible output of any GCD invocation apart from GCD(0,0)  and is true there only by convention.
Click here to see what happens when this Tape runs.
It is possible to generate further counterexamples, but these are quite enough to demonstrate that the Ch. 15 GCD algorithm does not work as specified.
Now let's review the broken algo:
Chapter 15 Algorithm 3: Broken Constant-Time Iterative GCD.
For FZ integers A and B:
To those who have read Ch. 21A, the defect may be already apparent.
Specifically, when we introduce the "Iterate Bitness(A) + Bitness(B) times..." condition in place of the original non-constant-time algo's "Iterate until B = 0...", steps 8...10 become erroneous, in two respects.
First: in a run which requires exactly Bitness(A) + Bitness(B) - 1 iterations to reach the point where A = B, A will end up equal to zero, while the GCD ends up permanently trapped in B.
The root of the problem is that D := A - B becomes the new value of A even when D = 0 (i.e. A and B were equal.) For virtually the whole input space of the algo, this state is temporary: the final GCD will move into A in the very next iteration, and stay there. But if there is no remaining iteration, we end up with the invalid output 0. This is illustrated by Ch. 15 GCD Counter-Example No. 2.
Second: the logic of steps 8...10 permits a condition where one or more rounds of the iteration execute without reducing the bitness of either A or B. Enough of these, and the algorithm in fact will not converge. This is illustrated by Ch. 15 GCD CounterExample No. 1.
This time, the problem is that the subtractive step is performed without demanding (as we did starting with Algorithm 2 of Ch. 21A) that both A and B be odd. This can result in an iteration where we in fact get no closer to convergence than we were at the preceding one.
To understand exactly how the Ch. 15 algo is subtly broken, let's pretend that we had an 8-bit FZ (strictly for didactic purposes; the minimal FZ bitness in FFA is 256); and then construct and illustrate a simple example, where two 8-bit quantities -- which, theoretically, would be expected to converge to their correct GCD (1) in 16 iterations -- fail to do so.
For clarity, this and all subsequent worked examples will show binary representations of A and B.
Sad_Ch15_GCD(0xfb, 0xdb) :
| i  | A_i      | B_i      | A even: A ← A/2 | B even: B ← B/2 | Twos | A ← A − B | B ←?← A  | A_i+1    | B_i+1    |
| 1  | 11111011 | 11011011 |                 |                 | 0    | 100000    | 11011011 | 100000   | 11011011 |
| 2  | 100000   | 11011011 | 10000           |                 | 0    | 11001011  | 10000    | 11001011 | 10000    |
| 3  | 11001011 | 10000    |                 | 1000            | 0    | 11000011  | 1000     | 11000011 | 1000     |
| 4  | 11000011 | 1000     |                 | 100             | 0    | 10111111  | 100      | 10111111 | 100      |
| 5  | 10111111 | 100      |                 | 10              | 0    | 10111101  | 10       | 10111101 | 10       |
| 6  | 10111101 | 10       |                 | 1               | 0    | 10111100  | 1        | 10111100 | 1        |
| 7  | 10111100 | 1        | 1011110         |                 | 0    | 1011101   | 1        | 1011101  | 1        |
| 8  | 1011101  | 1        |                 |                 | 0    | 1011100   | 1        | 1011100  | 1        |
| 9  | 1011100  | 1        | 101110          |                 | 0    | 101101    | 1        | 101101   | 1        |
| 10 | 101101   | 1        |                 |                 | 0    | 101100    | 1        | 101100   | 1        |
| 11 | 101100   | 1        | 10110           |                 | 0    | 10101     | 1        | 10101    | 1        |
| 12 | 10101    | 1        |                 |                 | 0    | 10100     | 1        | 10100    | 1        |
| 13 | 10100    | 1        | 1010            |                 | 0    | 1001      | 1        | 1001     | 1        |
| 14 | 1001     | 1        |                 |                 | 0    | 1000      | 1        | 1000     | 1        |
| 15 | 1000     | 1        | 100             |                 | 0    | 11        | 1        | 11       | 1        |
| 16 | 11       | 1        |                 |                 | 0    | 10        | 1        | 10       | 1        |
| 17 | 10       | 1        | 1               |                 | 0    | 0         | 1        | 0        | 1        |
| 18 | 0        | 1        | 0               |                 | 0    | 1         | 0        | 1        | 0        |

GCD (Incorrect) : 10
The iterations shown in rows 17 and 18 would have been necessary for successful convergence, but never execute; meanwhile, iterations such as those in rows 8, 10, 12, 14, and 16 do not reduce the bitness of either A or B.
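For readers who want to poke at the breakage without an Ada toolchain, the broken iteration can be modeled in a few lines of Python (sad_ch15_gcd is an invented name; the halve-then-subtract loop is reconstructed from the description and the trace above, with w standing in for FZ_Bitness):

```python
def sad_ch15_gcd(x, y, w):
    """Model of Ch. 15's broken 'constant-time' GCD for w-bit inputs:
       per iteration, halve whichever of A, B is even (counting shared
       factors of two), then UNCONDITIONALLY set A := |A - B| and
       B := Min(A, B) -- without demanding that both be odd."""
    a, b, twos = x, y, 0
    for _ in range(2 * w):                # 'Iterate Bitness(A) + Bitness(B) times'
        a_even, b_even = 1 - (a & 1), 1 - (b & 1)
        twos += a_even & b_even           # common power-of-two factor
        a >>= a_even                      # halve A if even
        b >>= b_even                      # halve B if even
        a, b = abs(a - b), min(a, b)      # the erroneous steps 8...10
    return a << twos                      # result taken from A, as in Ch. 15
```

Running it on the trace's inputs reproduces the table: sad_ch15_gcd(0xFB, 0xDB, 8) gives binary 10, where the true GCD is 1; and fed either of the two 256-bit counter-examples, it likewise fails to produce the correct answer of 1.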
The mine is in the fact that, if, at step 8 of the algo (D := A - B), it so happens that A is even and B is odd, then D (and consequently the value of A in the next iteration) will be odd. And, if A ≥ B, then B will remain odd in the next iteration; and we will get an iteration cycle where the total bitness of A and B will not decrease, unless A and B happen to be close enough in value for the subtraction in step 8 to decrease it.
Thus we have seen how to construct a condition where carrying out an iteration of Ch. 15's GCD does not reduce the bitness of either A or B.
The interesting bit is that for virtually all possible inputs to GCD, this flaw does not lead to an ultimately incorrect output, because -- given sufficient iterations -- the correct answer is obtained. But for a very small number of possible input pairs, convergence will not be reached inside 2 × FZ_Bitness iterations.
It appears to be virtually impossible to arrive at the fatal condition by chance.
This kind of thing could be an ideal cryptological boobytrap, if GCD were in fact a key element in any known cryptosystem.
But AFAIK, there is no such cryptosystem. On top of this, from a cryptological point of view, the broken GCD "fails safe", i.e. it can be coaxed into reporting two coprime inputs as being non-coprime, but not the opposite.
And, fortunately (or unfortunately, from the POV of quickly turning up possible defects) FFA does not currently use conventional GCD inside any other internal algorithm. But let's consider where we have thus far made use of GCD.
Apart from the original Ch.15 tests, the only two places in the series where we have used GCD were the primorial generator demo depicted in Ch. 18C and the prime generator demo which used it.
The primorial generator was unaffected -- it apparently produces correct primorials for all valid FZ Widths from 256 to -- at least -- 16384.
The prime generator was also unaffected. In principle, a defective GCD would result, there, in a... slightly slower prime generator, which would attempt a larger number of doomed Miller-Rabin tests; GCD was used there strictly as a speed-up pre-filter for the intake candidates. But there was no detectable decrease in the number of MR-failed runs after I put the corrected GCD into service.
The Ch. 21A material, interestingly, is also unaffected: the initially non-constant-time algo from the middle of Ch. 15 is given there as a starting point. And that algo (and all of the subsequent extensions offered) was... correct. Only the constant-time version which was used to actually write the GCD routine in Ch. 15, was not...
The bullet, it could be said, went through the hat, but not the head. Nevertheless, it is a serious defect, and will be corrected in this Chapter. And this time, the full proof of convergence will be given.
Let's rewrite the GCD algo, so that a reduction of the bitness of A, B, or both, is in fact guaranteed at every iteration.
First, we'll write a readability-optimized schematic version (expressed with branching logic) of the correct algorithm :
Chapter 21A-Bis Algorithm 1: Schematic Version of Corrected Constant-Time Iterative GCD :
For FZ integers A and B:
Second, and equivalently, a properly constant-time formulation of the above :
Chapter 21A-Bis Algorithm 2: Corrected Constant-Time Iterative GCD :
For FZ integers A and B:
Now, let's show, step by step, that Algorithm 1 and Algorithm 2 are arithmeticallyequivalent.
The correspondence, step by step (each description applies to the operation in Algorithm 1 and to its constant-time equivalent in Algorithm 2) :

- Initialize the common-power-of-two counter to 0.
- Begin a loop with exactly Bitness(A) + Bitness(B) - 1 iterations. In FFA, this is definitionally equivalent to 2 × FZ_Bitness(N) - 1, where N is any of the inputs A, B or the output GCD.
- Start of Iteration :
- Compute the absolute value of the difference A - B. In the constant-time Algorithm 2, we save the carry bit to C, which will trigger subsequent operations predicated on the condition A < B.
- Determine whether both A and B are presently odd.
- Assign A to B if A and B were both odd, and A < B.
- Assign D to A if A and B were both odd.
- If both A and B are presently even, increment the common power-of-two counter.
- If A was found to be even, divide it by two. In Algorithm 2, this is accomplished via a Right Shift by 1.
- If B was found to be even, divide it by two. In Algorithm 2, this is accomplished via a Right Shift by 1.
- End of Iteration.
- Normally, B will contain the intermediate result after convergence. However, when calculating GCD(N, 0), where N > 0, it will be found in A. The other variable will always equal 0. Hence, we obtain the final result by performing a bitwise-OR of A, B. It is assigned to A.
- Reintroduce the common-power-of-two factor into the intermediate GCD result. If there was none (i.e. one or both inputs were odd to begin with) this will be 2^0, i.e. 1, and then has no effect. In Algorithm 2, the multiplication is accomplished via a constant-time Left Shift.
- A now contains the actual GCD of the two input integers. GCD is in A.
If you are entirely satisfied that Algorithm 1 and Algorithm 2 are equivalent in their effects, proceed to the proof below. For clarity, we will base it on Algorithm 1.
First, let's demonstrate that each iteration of the loop preserves the GCD of A, B :
- The subtractive step is a restatement of Euclid's original : GCD(A, B) = GCD(A - B, Min(A, B)). Observe that A - B, the new value of A, is necessarily even, while Min(A, B), the new value of B, remains odd.
- A common factor of 2 will be removed (in steps 11 and 13) from A and B, respectively, and the Twos counter is incremented, so we can multiply 2^Twos back in at step 15. GCD(2 × K, 2 × L) = 2 × GCD(K, L)
- A factor of 2 is removed from A. GCD(2 × K, B) = GCD(K, B)
- A factor of 2 is removed from B. GCD(A, 2 × L) = GCD(A, L)
Now, let's confirm that Algorithm 1 is in fact guaranteed to converge within at most Bitness(A) + Bitness(B) - 1 iterations. Let's exhaustively describe the effects of an iteration across the four possible combinations of parities of A and B. (Steps 8 and 9 do not affect the bitness of A or B and will be omitted.)
A is Odd, B is Odd :

| Operation                       | Effect on Bitness of A | Effect on Bitness of B |
| A ← Abs(A − B), B ← Min(A, B)   | No guaranteed effect (but may decrease by 1 or more bits). However, A is now necessarily even. | No guaranteed effect (but may decrease by 1 or more bits). B remains odd. |
| Halving of A (if A is even)     | As A is guaranteed to have become even in step 7: decreases by 1 bit. | None |
| Halving of B (if B is even)     | None | None |
| Net Effect on Bitnesses:        | Decreased by at least 1 bit. May become zero if A and B had been equal. | None; may decrease by 1 or more bits. |
Note that in the convergence case where A=1 B=1, the above parity combination will yield A=0 B=1.
A is Odd, B is Even :

| Operation                       | Effect on Bitness of A | Effect on Bitness of B |
| A ← Abs(A − B), B ← Min(A, B)   | None | None |
| Halving of A (if A is even)     | None | None |
| Halving of B (if B is even)     | None | Decreases by 1 bit, unless B = 0. |
| Net Effect on Bitnesses:        | None | Decreased by 1 bit, unless B = 0. |
A is Even, B is Odd :

| Operation                       | Effect on Bitness of A | Effect on Bitness of B |
| A ← Abs(A − B), B ← Min(A, B)   | None | None |
| Halving of A (if A is even)     | Decreases by 1 bit, unless A = 0. | None |
| Halving of B (if B is even)     | None | None |
| Net Effect on Bitnesses:        | Decreased by 1 bit, unless A = 0. | None |
A is Even, B is Even :

| Operation                       | Effect on Bitness of A | Effect on Bitness of B |
| A ← Abs(A − B), B ← Min(A, B)   | None | None |
| Halving of A (if A is even)     | Decreases by 1 bit, unless A = 0. | None |
| Halving of B (if B is even)     | None | Decreases by 1 bit, unless B = 0. |
| Net Effect on Bitnesses:        | Decreased by 1 bit, unless A = 0. | Decreased by 1 bit, unless B = 0. |
We have shown that each iteration of Algorithm 1 (and, given their demonstrated equivalence, Algorithm 2) is guaranteed to reduce the bitness of A, B, or of both A and B, by at least 1 bit -- supposing we have not yet reached convergence (i.e. when nothing can be reduced, because one of A or B is equal to zero, while the other is odd.)
Therefore, to compute the GCD of A, B where each is of bitness W, we in fact need at most (2 × W) - 1 iterations (i.e. to arrive at a GCD of 1, which is of bitness 1.)
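In the same spirit as the earlier sketch of the broken algorithm, the corrected iteration can be modeled in Python (corrected_gcd is an invented name; this is an illustrative branching sketch of Algorithm 1, not FFA code, with w standing in for FZ_Bitness):

```python
def corrected_gcd(x, y, w):
    """Model of the corrected iterative GCD for w-bit inputs, converging
       in at most (2 * w) - 1 iterations."""
    a, b, twos = x, y, 0
    for _ in range(2 * w - 1):            # exactly Bitness(A) + Bitness(B) - 1 shots
        d = abs(a - b)                    # absolute difference, noting A < B
        both_odd = (a & 1) and (b & 1)
        if both_odd and a < b:            # subtractive step demands BOTH odd:
            b = a                         #   B := Min(A, B)
        if both_odd:
            a = d                         #   A := |A - B|
        a_even, b_even = 1 - (a & 1), 1 - (b & 1)
        twos += a_even & b_even           # common power-of-two factor
        a >>= a_even                      # halve A if (now) even
        b >>= b_even                      # halve B if (now) even
    return (a | b) << twos                # result may sit in A or in B; OR them
```

It converges on both of the Ch. 15 counter-examples, handles the degenerate (0, 0), (0, N), and (N, 0) cases discussed below, and agrees with a reference GCD on arbitrary inputs.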
Now, let's illustrate two concrete symmetric cases where the maximum permitted number of iterations is in fact required. Each of these pairs of 8-bit inputs demands 15 (i.e. (2 × 8) - 1) shots for convergence.
GCD(0x80, 0xff) :
| i  | A_i      | B_i      | A,B odd: A ← A − B | A,B odd: B ← A | A even: A ← A/2 | B even: B ← B/2 | Twos | A_i+1    | B_i+1    |
| 1  | 10000000 | 11111111 |          |   | 1000000 |   | 0 | 1000000  | 11111111 |
| 2  | 1000000  | 11111111 |          |   | 100000  |   | 0 | 100000   | 11111111 |
| 3  | 100000   | 11111111 |          |   | 10000   |   | 0 | 10000    | 11111111 |
| 4  | 10000    | 11111111 |          |   | 1000    |   | 0 | 1000     | 11111111 |
| 5  | 1000     | 11111111 |          |   | 100     |   | 0 | 100      | 11111111 |
| 6  | 100      | 11111111 |          |   | 10      |   | 0 | 10       | 11111111 |
| 7  | 10       | 11111111 |          |   | 1       |   | 0 | 1        | 11111111 |
| 8  | 1        | 11111111 | 11111110 | 1 | 1111111 |   | 0 | 1111111  | 1        |
| 9  | 1111111  | 1        | 1111110  | 1 | 111111  |   | 0 | 111111   | 1        |
| 10 | 111111   | 1        | 111110   | 1 | 11111   |   | 0 | 11111    | 1        |
| 11 | 11111    | 1        | 11110    | 1 | 1111    |   | 0 | 1111     | 1        |
| 12 | 1111     | 1        | 1110     | 1 | 111     |   | 0 | 111      | 1        |
| 13 | 111      | 1        | 110      | 1 | 11      |   | 0 | 11       | 1        |
| 14 | 11       | 1        | 10       | 1 | 1       |   | 0 | 1        | 1        |
| 15 | 1        | 1        | 0        | 1 | 0       |   | 0 | 0        | 1        |

GCD : 1
GCD(0xff, 0x80) :
| i  | A_i      | B_i      | A,B odd: A ← A − B | A,B odd: B ← A | A even: A ← A/2 | B even: B ← B/2 | Twos | A_i+1    | B_i+1    |
| 1  | 11111111 | 10000000 |          |   |         | 1000000 | 0 | 11111111 | 1000000  |
| 2  | 11111111 | 1000000  |          |   |         | 100000  | 0 | 11111111 | 100000   |
| 3  | 11111111 | 100000   |          |   |         | 10000   | 0 | 11111111 | 10000    |
| 4  | 11111111 | 10000    |          |   |         | 1000    | 0 | 11111111 | 1000     |
| 5  | 11111111 | 1000     |          |   |         | 100     | 0 | 11111111 | 100      |
| 6  | 11111111 | 100      |          |   |         | 10      | 0 | 11111111 | 10       |
| 7  | 11111111 | 10       |          |   |         | 1       | 0 | 11111111 | 1        |
| 8  | 11111111 | 1        | 11111110 | 1 | 1111111 |         | 0 | 1111111  | 1        |
| 9  | 1111111  | 1        | 1111110  | 1 | 111111  |         | 0 | 111111   | 1        |
| 10 | 111111   | 1        | 111110   | 1 | 11111   |         | 0 | 11111    | 1        |
| 11 | 11111    | 1        | 11110    | 1 | 1111    |         | 0 | 1111     | 1        |
| 12 | 1111     | 1        | 1110     | 1 | 111     |         | 0 | 111      | 1        |
| 13 | 111      | 1        | 110      | 1 | 11      |         | 0 | 11       | 1        |
| 14 | 11       | 1        | 10       | 1 | 1       |         | 0 | 1        | 1        |
| 15 | 1        | 1        | 0        | 1 | 0       |         | 0 | 0        | 1        |

GCD : 1
In both of these examples, Bitness(A)+Bitness(B) is reduced by exactly one in each iteration. And we have already demonstrated that it is impossible to construct a case  aside from the convergence case  where an iteration will reduce this quantity by 0. So in fact these are instances of the "worst case" number of required iterations. (Recall that we are writing a constanttime algo, and the "worst case" number of iterations will always execute.)
Now, let's illustrate what happens in the "degenerate" cases. Starting with GCD(0,0) :
GCD(0, 0) :
| i      | A_i | B_i | A,B odd: A ← A − B | A,B odd: B ← A | A even: A ← A/2 | B even: B ← B/2 | Twos | A_i+1 | B_i+1 |
| 1      | 0   | 0   |   |   | 0 | 0 | 1   | 0 | 0 |
| 2      | 0   | 0   |   |   | 0 | 0 | 2   | 0 | 0 |
| 3      | 0   | 0   |   |   | 0 | 0 | 3   | 0 | 0 |
| ...    | 0   | 0   |   |   | 0 | 0 | ... | 0 | 0 |
| i_Last | 0   | 0   |   |   | 0 | 0 | i   | 0 | 0 |

GCD : 0
For illustrative purposes, three iterations are shown. But the end result, without regard to FZ Width, will always be the same: 0. The Twos counter increases with each additional iteration, as both A and B remain even, but this has no effect on the final output.
Now, let's examine the case GCD(0, N) where N > 0 :
GCD(0, 3) :
| i      | A_i | B_i | A,B odd: A ← A − B | A,B odd: B ← A | A even: A ← A/2 | B even: B ← B/2 | Twos | A_i+1 | B_i+1 |
| 1      | 0   | 11  |   |   | 0 |   | 0 | 0 | 11 |
| 2      | 0   | 11  |   |   | 0 |   | 0 | 0 | 11 |
| 3      | 0   | 11  |   |   | 0 |   | 0 | 0 | 11 |
| ...    | 0   | 11  |   |   | 0 |   | 0 | 0 | 11 |
| i_Last | 0   | 11  |   |   | 0 |   | 0 | 0 | 11 |

GCD : 11
It should be clear at this point that once A has become equal to 0, and B is odd, further iterations have no effect.
Lastly, let's illustrate the "interesting", from the POV of the new algo, degenerate case: GCD(N, 0) where N > 0 :
GCD(3, 0) :
| i      | A_i | B_i | A,B odd: A ← A − B | A,B odd: B ← A | A even: A ← A/2 | B even: B ← B/2 | Twos | A_i+1 | B_i+1 |
| 1      | 11  | 0   |   |   |   | 0 | 0 | 11 | 0 |
| 2      | 11  | 0   |   |   |   | 0 | 0 | 11 | 0 |
| 3      | 11  | 0   |   |   |   | 0 | 0 | 11 | 0 |
| ...    | 11  | 0   |   |   |   | 0 | 0 | 11 | 0 |
| i_Last | 11  | 0   |   |   |   | 0 | 0 | 11 | 0 |

GCD : 11
In this and only this case, the intermediate result will wind up in A rather than in B, given that the subtractive step demands an odd A and an odd B, and this never happens in the (N, 0) case.
This is why we need step 14: A := Bitwise-OR(A, B).
Let's conclude with a second arbitrary example of the above :
GCD(0x80, 0x0) :
| i      | A_i      | B_i | A,B odd: A ← A − B | A,B odd: B ← A | A even: A ← A/2 | B even: B ← B/2 | Twos | A_i+1   | B_i+1 |
| 1      | 10000000 | 0   |   |   | 1000000 | 0 | 1 | 1000000 | 0 |
| 2      | 1000000  | 0   |   |   | 100000  | 0 | 2 | 100000  | 0 |
| 3      | 100000   | 0   |   |   | 10000   | 0 | 3 | 10000   | 0 |
| 4      | 10000    | 0   |   |   | 1000    | 0 | 4 | 1000    | 0 |
| 5      | 1000     | 0   |   |   | 100     | 0 | 5 | 100     | 0 |
| 6      | 100      | 0   |   |   | 10      | 0 | 6 | 10      | 0 |
| 7      | 10       | 0   |   |   | 1       | 0 | 7 | 1       | 0 |
| 8      | 1        | 0   |   |   |         | 0 | 7 | 1       | 0 |
| ...    | 1        | 0   |   |   |         | 0 | 7 | 1       | 0 |
| i_Last | 1        | 0   |   |   |         | 0 | 7 | 1       | 0 |

GCD : 10000000
At this point, there is not much else left to say about the algorithm per se.
And now, let's see the implementation of the new GCD:
package body FZ_GCD is

   -- Find Greatest Common Divisor (GCD) of X and Y.
   -- Note that by convention, GCD(0, 0) = 0.
   procedure FZ_Greatest_Common_Divisor(X      : in  FZ;
                                        Y      : in  FZ;
                                        Result : out FZ) is

      -- Widths of X, Y, and Result are equal
      subtype Width is Word_Index range X'Range;

      -- Working buffers for GCD computation, initially equal to the inputs
      A : FZ(Width) := X;
      B : FZ(Width) := Y;

      -- Evenness (negation of lowest bit) of A and B respectively
      Ae, Be : WBool;

      -- Common power-of-2 factor: incremented when Ae and Be are both 1
      Twos : Word := 0;

      -- This flag is set when A and B are BOTH ODD
      OO : WBool;

      -- A - B
      D : FZ(Width);

      -- This flag is set iff A < B
      A_lt_B : WBool;

   begin

      -- To converge, requires number of shots equal to (2 * FZ_Bitness) - 1:
      for i in 1 .. (2 * FZ_Bitness(X)) - 1 loop

         -- Whether A and B are currently BOTH ODD :
         OO := FZ_OddP(A) and FZ_OddP(B);

         -- D := A - B
         FZ_Sub_Abs(X => A, Y => B, Difference => D, Underflow => A_lt_B);

         -- IFF A,B both ODD, and A < B : B' := A ; otherwise no change :
         FZ_Mux(X => B, Y => A, Result => B, Sel => OO and A_lt_B);

         -- IFF A,B both ODD: A' := A - B ; otherwise no change :
         FZ_Mux(X => A, Y => D, Result => A, Sel => OO);

         -- If A is now EVEN: A := A >> 1; otherwise no change
         Ae := 1 - FZ_OddP(A);
         FZ_ShiftRight(A, A, WBit_Index(Ae));

         -- If B is now EVEN: B := B >> 1; otherwise no change
         Be := 1 - FZ_OddP(B);
         FZ_ShiftRight(B, B, WBit_Index(Be));

         -- If both A and B were even, increment the common power-of-two
         Twos := Twos + (Ae and Be);

      end loop;

      -- Normally, B will contain the GCD, but in the (N,0) N > 0 case -- A.
      -- The other variable will always equal 0. Hence, take Bitwise-OR(A,B):
      FZ_Or(X => A, Y => B, Result => A);

      -- Reintroduce the common power-of-2 factor stored in 'Twos'
      FZ_Quiet_ShiftLeft(N => A, ShiftedN => A, Count => Indices(Twos));

      -- Output final result - the GCD.
      Result := A;

   end FZ_Greatest_Common_Divisor;

end FZ_GCD;
It is very slightly more expensive than the Ch.15 routine: by one MUX (in the loop) and one FZ_Or (outside of the loop.) But this is not a tragedy. Overall it comes out to cost just about the same, after taking into account the reduced (by one) number of iterations.
Unsurprisingly, this corrected variant of GCD passes the Ch.15 test battery; properly swallows both of the Ch.15 counterexamples given earlier; and, of course, the...
Ch. 21A-Bis GCD "Worst Case" Tape for 256-bit FZ :
.8000000000000000000000000000000000000000000000000000000000000000
.FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
G
#
... correctly produces the output 1, converging -- as predicted -- in the 511th and final iteration.
To generate this test for any FZ Width, all you need is:
Ch. 21A-Bis GCD "Worst Case" Tape for arbitrary FZ Width :
.1 .0~W.1 LS .0~ G #
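If you would rather generate the pair of worst-case operands directly, a trivial helper will do (worst_case_pair is an invented name for this sketch); for a width of 256 it reproduces the two hex constants of the 256-bit Tape above:

```python
def worst_case_pair(w):
    """Hex strings of the pair (2^(w-1), 2^w - 1): a single high bit
       versus all bits set, the pair which demands the full (2 * w) - 1
       iterations of the corrected GCD to converge (their GCD is 1)."""
    a = 1 << (w - 1)     # 0x8000...00 : a single high bit
    b = (1 << w) - 1     # 0xFFFF...FF : all bits set
    return "%X" % a, "%X" % b
```

E.g. worst_case_pair(8) yields ("80", "FF"), the pair traced in the worked examples earlier in this Chapter.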
~ The next Chapter, 21B, will continue the ExtendedGCD sequence of Chapter 21A. ~
"The Star Diaries of Ijon Tichy: The Advantages of a Dragon."
Stanislaw Lem (1921-2006).
Until now, I've said nothing about my journey to the planet Abrasia, in the Cetus constellation. The civilization there had turned a dragon into the basis of its economy. Not being an economist myself, I, sadly, was not able to make sense of this, even though the Abrasians were more than willing to explain themselves. Perhaps someone well-versed in the particulars of dragons will understand the subject a little better.
The Arecibo radio telescope had been picking up indecipherable signals for quite some time. Jr. Prof. Katzenfenger was the only one able to make headway. He puzzled over the enigma while suffering from a terrible case of the sniffles. His stuffed and dripping nose, ever interfering with his scholarly labours, at a certain point led him to a thought: that the inhabitants of the uncharted planet, unlike us, might be creatures who rely on smell rather than sight.
And indeed their code turned out to consist not of alphabetic letters, but of symbols for various smells. But, truth be told, there were some perplexing passages in Katzenfenger's translation. According to this text, Abrasia is populated not only by intelligent beings, but also by a creature larger than a mountain, uncommonly ravenous and taciturn. The scientists, however, were less surprised by this curio of interstellar zoology, than by the fact that it was specifically the creature's insatiable hunger that brought great returns to the local civilization. It aroused horror, and the more horrible it became, the more they profited from it. I have long had a weakness for all kinds of mysteries, and when I heard about this one, I made up my mind to set out for Abrasia straight away.
Upon arriving, I learned that the Abrasians are entirely humanoid. Except that, where we have ears, they have noses, and vice-versa. Like us, they had descended from apes; but while our apes were either narrow- or wide-nosed, their simian ancestors had either a single nose or two. The one-nosed had gone extinct from famine. A great many moons orbit their planet, causing frequent and lengthy eclipses. At those times, it becomes pitch-dark. Creatures who sought food with the aid of sight could turn up nothing. Relying on smell worked better, but it worked best of all for those who had two widely-spaced noses, and used their sense of smell stereoscopically, just as we make use of our paired eyes and stereophonic hearing.
Later on, the Abrasians invented artificial lighting, and, even though twin-nosedness had ceased to be essential to their survival, the anatomical quirk inherited from their ancestors was here to stay. In the colder times of the year, they wear hats with ear flaps, or, rather, nose flaps, so as not to freeze their noses off. Of course, I may be mistaken. It seemed to me that they were not exactly thrilled with these noses of theirs -- reminders of a troublesome past. Their fairer sex hides their noses beneath various decorations, often as large as a dinner plate. But I did not pay much attention to this. Interstellar travel has taught me that anatomical differences tend to be of little significance. The real problems hide far deeper. On Abrasia, that problem turned out to be the local dragon.
On that planet, there is only one very large continent, and on it -- something like eighty countries. The continent is surrounded by ocean on all sides. The dragon is located in the far north. Three principalities directly border him -- Claustria, Lelipia and Laulalia. After studying satellite photos of the dragon, as well as 1:1000000 scale models of him, I came to the conclusion that he is a quite unpleasant creature. I must say though that he was not the least bit similar to the dragons we know from Earth's stories and legends. Their dragon doesn't have seven heads; he has no head at all, and, it would also appear, no brain. And as for wings, he also hasn't any, and so flight is out of the question. The matter of legs is less clear, but it would appear that the dragon has no limbs of any sort. What he resembles most is an enormous mountain range, copiously slathered with something rather like jelly. The fact that you are beholding a living thing only becomes apparent if you are very patient. He moves uncommonly slowly, as a worm does, and quite often violates the borders of Claustria and Lelipia. This creature devours something like eighteen thousand tonnes of foodstuffs every day. The dragon is fond of grains, porridges made from same, and cereals in general. But he is not a vegetarian. Food is delivered to him by the countries which belong to the Union of Economic Cooperation. The bulk of these provisions are carried by rail to special unloading stations, soups and syrups are pumped into the dragon through pipelines, and in the wintertime, when a lack of vitamins is perceived, they airdrop these from specially-equipped cargo planes. And at no point does anyone need to look for a mouth -- the beast is able to grab a meal with any and all parts of its enormous carcass.
When I arrived in Claustria, my first impulse was to ask why they go to such great lengths to feed this monster, instead of letting it perish from hunger. But straight away I learned that I had landed in the midst of a scandal, an "attempted dragon assassination", and promptly shut my mouth. Some Lelipian, dreaming of winning the laurels of a savior, had founded a secret paramilitary organization, with the aim of slaying the insatiable giant. To do this, he proposed to poison the vitamin supplements with a substance which causes unbearable thirst, so that the beast would take to drinking from the ocean, until it burst. This reminded me of a well-known Earth legend about a brave hero who defeated a dragon (whose diet consisted chiefly of fair maidens) by throwing him a sheepskin stuffed with sulfur. But this is where the resemblance between the Earth legend and the Abrasian reality ended.
The local dragon was under the full protection of international law. Not only that: the treaty concerning cooperation with the dragon, signed by forty-nine governments, guaranteed him a steady supply of tasty foodstuffs. The computerized translator, with which I never part on my voyages, allowed me to make a detailed study of their press. The news of the failed assassination had thoroughly dismayed the public.
It demanded severe and exemplary punishment for the failed assassins. This surprised me, because the dragon per se didn't seem to evoke much in the way of sympathy from anyone. Neither the journalists nor the authors of letters to the editor made any secret of the fact that the subject of the conversation was a creature repulsive in the extreme. And so, in the beginning, I had come to think that he, to them, was something like an evil god, a punishment from the heavens, and, as for the sacrifices, they, following some peculiar local custom, spoke of them as "export." You can speak ill of the devil, but you cannot disregard him entirely. At the same time, the devil can tempt people; when you sell him your soul, you can count on a great many earthly pleasures in exchange. The dragon, however, near as I could tell, had made no promises to anyone, and there was absolutely nothing tempting about him. From time to time, he would strain mightily and flood the bordering regions with the byproducts of his digestion, and in ill weather one could smell the stench from forty-odd kilometers away. At the same time, the Abrasians held that their dragon is to be cared for, and that the stink is evidence of indigestion; it means that they must take care to give him medicines which limber up the metabolism. As for the attempt on his life, they said, if, God forbid, it had succeeded, the result would be an unprecedented catastrophe.
I read everything in the newspapers, but none of it shed any light on the question of exactly what kind of catastrophe they had in mind. Exasperated, I took to visiting the local libraries, leafed through encyclopaedias, histories of Abrasia, and even visited the Society of Friendship with the Dragon; but even there, I learned nothing. Except for a few members of the staff, not a soul was there. They offered me a membership if I'd only pony up a year's worth of dues, but this wasn't what I had come for.
The states which bordered the dragon were liberal democracies; there, you were allowed to speak your mind, and after a lengthy search, I was able to find publications which condemned the dragon. But even their authors still held that when dealing with him, one ought to make reasonable compromises. The use of guile or force could have grave consequences. Meanwhile, the would-be poisoners cooled their heels in the local jail. They did not plead guilty, despite confessing their intention to kill the dragon. The government press called them irresponsible terrorists, the opposition press -- noble fanatics, not quite in their right mind. And one Claustrian illustrated magazine suggested that they might be provocateurs. Behind them, it said, stands the government of a neighbouring country: thinking that the quota on dragon exports established for it by the Union of Economic Cooperation was too stingy, it hoped, via this subterfuge, to get it reconsidered.
I asked the reporter who came to interview me about the dragon. Why, instead of being given a chance to finally put an end to him, were the assassins tossed in the clink? The journalist answered that it would have been a despicable murder. The dragon, by his nature, is kindly, but the severe conditions of life in the polar regions prevent him from expressing his innate kind-heartedness. If you had to go hungry constantly, you too would become ill-tempered, even if you are not a dragon. We must carry on feeding him, and then he will stop creeping southward and become kindlier.
-- Why are you so sure of this? -- I asked. -- I've been collecting clippings from your newspapers. Here's a few headlines: "Regions of northern Lelipia and Claustria are getting depopulated. The torrent of refugees continues." Or this: "The dragon has once more swallowed a group of tourists. For how much longer will irresponsible travel agencies peddle such dangerous tours?" Or here's another: "In the past year, the dragon has expanded his footprint by 900,000 hectares." What do you say to this?
-- That it only confirms what I was saying. We are still underfeeding him! With tourists, yes, there have been some incidents, and quite tragic ones, but one really oughtn't irritate the dragon. He really can't stand tourists, especially the photographing kind. He's allergic to photo flashes. What would you have him do? Remember, he lives in total darkness three-quarters of the year... And I'll say, just the production of high-calorie dragon fodder gives us 14,600 employment positions. Yes, some handful of tourists perished, but how many more people would perish of hunger, if they were to lose their jobs?
-- Just a minute, just a minute, -- I interrupted him. -- You bring the dragon foodstuffs, and this surely costs money. Who pays for it?
-- Our parliaments pass laws which bestow export credits...
-- So, it is your taxpayers who pay for the dragon's upkeep ?
-- In some sense, yes, but these outlays bring returns.
-- Wouldn't it be more profitable to put an end to the dragon?
-- What you are saying is monstrous. In the last thirty years, over forty billion have been invested in industries connected with dragon-feeding...
-- Maybe it would be better to spend these sums on yourselves?
-- You are repeating the arguments of our most reactionary conservatives! -- the reporter exclaimed with irritation. -- They are inciting murder! They want to turn the dragon into tinned meat! Life is sacred. No one ought to be killed.
Seeing that our conversation was leading nowhere, I parted ways with the journalist. After a bit of thinking, I went off to the Archive of Print and Ancient Documents, so that, after digging through dusty newsprint clippings, I could find out just where this dragon had come from. It took a great deal of effort, but I was able to discover something quite intriguing.
Half a century ago, when the dragon took up a mere two million hectares, no one had taken him seriously. I ran across many articles which proposed to uproot the dragon from the ground, or to flood him with water through specially built canals, so that he might freeze over in wintertime; but the economists explained that this operation would be quite expensive. But when the dragon, who in those days was still subsisting solely on lichens and mosses, doubled in size, and the inhabitants of neighbouring regions began to complain of the unbearable stench (especially in the spring and summer, when the warm breezes start to blow), charitable organizations offered to sprinkle the dragon with perfume; and when this didn't help, they took up collections of baked goods for him. At first, their project was laughed at, but with time it really took off. In newspaper clippings from later times, there was no longer any talk of liquidating the dragon, but instead more and more talk of the profits that are to be gained from bringing him aid. And so, I was indeed able to learn some things, but I decided that this was not enough, and set off to the university, to visit the Department of General and Applied Draconistics. Its dean received me quite courteously.
-- Your questions are anachronistic to the utmost degree, -- he answered with a condescending smile after hearing me out. -- The dragon is a part of our objective reality, an inseparable, and, in a certain sense, central part, and therefore it must be studied as an international problem of the greatest importance.
-- Can you be more specific? -- I asked. -- Where did he come from in the first place, this dragon?
-- Oh, who knows, -- the draconologist answered phlegmatically. -- Archaeology, pre-draconistics, and the genetics of dragons are not in my circle of interests. I do not study draconogenesis. While he was still small, he did not present a serious problem. That is a general rule, esteemed foreigner.
-- I was told that he descended from mutant snails.
-- I doubt it. At the same time, it isn't important just where he came from, given that he already exists, and not merely exists! If he were to disappear, it would be a catastrophe. And we would not likely recover from it.
-- Really? Why is that?
-- Automation led us to unemployment. Including among the scientific intelligentsia.
-- And what, the dragon helped?
-- Of course. We had enormous surpluses of foodstuffs, mountains of pasta, lakes of vegetable oil, and the overproduction of baked goods was a genuine calamity. Now we export these surpluses up north, and, remember, they also have to be refined. He won't scarf up just anything.
-- The dragon?
-- Well yes. To develop an optimal programme for his nourishment, we had to create a system of scientific research centres, such as the Chief Institute of Dragon-husbandry and the Higher School of Dragon Hygiene; in each university, there is at least one draconistics department. Special enterprises produce new types of fodder and nutritional supplements. The propaganda ministry created special information networks, so as to explain to society just how profitable trade with the dragon can be.
-- Trade? So he sends you something? I can hardly believe this!
-- He sends, of course. Chief of all, the so-called dracoline. It's a secretion of his.
-- That shiny slime? I saw it in the photos. What's it good for?
-- When it congeals -- for plasticine, for children in kindergartens. But of course there are a few problems. It is hard to get rid of the smell.
-- It stinks?
-- In the usual sense -- very much. To get the smell out, they add special deodorants. For the time being, dragon plasticine costs eight times more than the ordinary kind.
-- Professor, what do you think of the attempt on the dragon's life? The scientist scratched his ear, which hung above his lips.
-- If it had succeeded, then, first of all, we would have an epidemic on our hands. Just try and imagine the vapours that would emanate from such an enormous cadaver! And, second, the banks would go broke. The total destruction of our monetary system. To make a long story short, catastrophe, esteemed foreigner. A terrible catastrophe.
-- But his presence makes itself known, doesn't it? To tell the truth, it's extremely unpleasant, isn't it?
-- What do you mean, unpleasant? -- he said with a profound philosophical calm. -- The post-draconic crisis would be far more unpleasant still! Remember, please, that we not only feed him, but conduct extra-nutritional work with him. We try to soften his temper, keep it within certain boundaries. This is our program of so-called domestication, or appeasement. Lately he is being given large quantities of sweets. He likes sweets.
-- Somehow I doubt that his temperament will get sweeter from this, -- I blurted out.
-- But at the same time, the export of baked goods has quadrupled. And you mustn't forget about the work of the CMDR.
-- What's that?
-- Committee for the Mitigation of Draconic Repercussions. It provides employment for many university and college graduates. The dragon has to be studied, investigated, and, from time to time -- healed; previously we had a surplus of medics, but now every young doctor is assured of finding work.
-- Well then, -- I said without much conviction. -- But all of this is exported philanthropy. Why don't you start doing philanthropy right here, among yourselves?
-- How do you mean?
-- Well... you spend mountains of money on that dragon!
-- So, what -- should we be handing it out to citizens just like that? This runs against the very basics of any school of economics! You, I see, are a total ignoramus in economics. Credits, which back draconic export, warm up the economy. Thanks to them, the exchange of goods and services grows...
-- But the dragon grows too, -- I interrupted him. -- The more you feed him, the bigger he gets, and the bigger he gets, the bigger is his appetite. Where's the sense in this? Don't you know that in the end, he'll drive you into penury and eat you up?
-- Nonsense! -- the professor fumed. -- Banks add the credits to their portfolios!
-- So, they're, what, bonds? And he'll repay them in what? In his plasticine?
-- Don't take things so literally. If it weren't for the dragon, for whom would we then build the pipelines through which we pump flour extract? Don't you see, that's ironworks, pipe factories, welding robots, networks of transport, and so forth. The dragon has real needs. See, now you understand? Production has to work for somebody! Industrialists would not produce anything, if the finished product had to be thrown into the sea. A real consumer, on the other hand, that's something entirely different. The dragon is a gigantic, amazingly capacious foreign market, with a colossal demand pressure...
-- I don't doubt it, -- I noted, seeing that this chat was leading nowhere.
-- And so, have I convinced you?
-- No.
-- This is because you hail from a civilization that is so utterly different from our own. At any rate, the dragon has long ago stopped being a mere importer of our production.
-- So what has he turned into?
-- An idea. A historic necessity. Our state interest. The mightiest factor which justifies our united efforts. Try to look at this business through exactly that lens, and you will see what fundamental problems can be discovered in what is, to be fair, a quite revolting creature, if it grows to a planetary scale.
September 1983
P.S. They say that the dragon has broken up into a multitude of little ones, but their appetite is anything but weaker.
ACHTUNG: The subject originally promised for Chapter 21 at the conclusion of Chapter 20 has been postponed. That topic will be revisited later!
On account of the substantial heft of Chapter 21, I have cut it into three parts: 21A, 21B, and 21C. You are presently reading 21A, which consists strictly of an introduction to FFA's variant of the Extended GCD algorithm and its proof of correctness.
Recall that in Chapter 18C, we demonstrated the use of FFA and Peh to generate cryptographic primes, of the kind which are used in e.g. the RSA public key cryptosystem.
We also performed GPG RSA signature verifications using Peh, and learned how to adapt traditional GPG public keys for use with it.
But why have we not yet seen how to generate a new RSA keypair ?
The still-missing mathematical ingredient is an operation known as the modular multiplicative inverse.
This is a process where, provided an integer N and a nonzero modulus M, we find (or fail to find -- its existence depends on the coprimality of the inputs) the integer X where:
NX ≡ 1 (mod M).
... or, in FFAistic terminology, where FFA_FZ_Modular_Multiply(N, X, M) would equal 1.
In RSA, this operation is used when deriving a public key corresponding to a given private key. (It is also used when ensuring that a selected pair of secret primes has certain properties.) The exact details of this process will be the subject of Chapter 22.
Typically, the modular multiplicative inverse is obtained via the Extended GCD algorithm. It differs from the ordinary GCD of Chapter 15 in that, when given inputs X, Y, it returns, in addition to their greatest common divisor, the Bezout coefficients: a pair of integers P, Q which satisfy the equation:
PX + QY = GCD(X,Y)
The modular multiplicative inverse of N modulo M exists strictly when GCD(N, M) = 1.
Therefore, in that case,
NX + MY = GCD(N,M) = 1
... which can be written as:
NX - 1 = (-Y)M
i.e.:
NX ≡ 1 (mod M).
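As a quick concrete illustration (in Python, with small example values of my own choosing, nothing from FFA itself): Python's built-in pow() will compute the modular multiplicative inverse when handed an exponent of -1:

```python
# Illustration only: N = 7, M = 40 are arbitrary example values.
# pow(N, -1, M) (Python 3.8+) finds the X such that N*X = 1 (mod M);
# it raises ValueError instead when GCD(N, M) != 1, i.e. no inverse exists.
N, M = 7, 40
X = pow(N, -1, M)
print(X)                   # -> 23, since 7*23 = 161 = 4*40 + 1
assert (N * X) % M == 1
```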
The reader has probably already guessed that the typical heathen formulation of Extended GCD, i.e. one which uses a series of divisions with remainder, is unsuitable as a starting point for writing FFA's Extended GCD -- for the same reason as why the analogous Euclidean GCD was unsuitable (and rejected) for writing the ordinary GCD routine of Chapter 15. Division is the most expensive arithmetical operation, and it would have to be repeated a large number of times, while discarding many of the outputs (à la the old-style non-Barrettian modular exponentiation of Chapter 6.)
Fortunately, it is possible to write a much more efficient constant-spacetime algorithm for Extended GCD, rather closely similar to the one we used for ordinary GCD.
Our algorithm of choice for a properly FFAtronic Extended GCD will be very similar to the Binary Extended GCD pictured in the Handbook of Applied Cryptography (Menezes, van Oorschot, and Vanstone).
However, in addition to constant-spacetime operation, our algorithm will differ from the traditional one by avoiding any internal use of signed arithmetic, and will produce outputs having the form :
PX - QY = GCD(X,Y)
Because the Extended GCD algorithm chosen for FFA is not wholly identical to any previously-published one, on account of having to satisfy the familiar requirements, I decided that a full proof of correctness for it ought to be given prior to publishing Chapter 21's vpatch.
Let's proceed with deriving this algorithm and its correctness proof.
As a starting point, let's take the traditional non-constant-time Stein's GCD given as Algorithm 2 at the start of Chapter 15:
Iterative Quasi-Stein GCD from Chapter 15.
For integers A ≥ 0 and B ≥ 0:
Now, suppose that Twos -- the common power-of-two factor of A and B -- were known at the start of the algorithm, and that it were to be divided out of both numbers prior to entering the loop. The rewritten algorithm could look like this:
Algorithm 1: Quasi-Stein GCD with "Twos" Removal.
For integers X ≥ 0 and Y ≥ 0:
Suppose that we have a correct definition of Find_Common_Power_Of_Two_Factor. (For now, let's merely suppose.)
Algorithm 1 will do precisely the same job as Iterative Quasi-Stein GCD. However, it has one new property -- presently useless -- of which we will later take advantage: because 2^{Twos} (which could be 1, i.e. 2^{0}, if either input were odd to begin with) was divided out of X and Y in steps 2 and 3 -- upon entering the loop at step 7, at least one of X, Y can now be relied upon to be odd. (If X and Y were both equal to zero at the start, step 7 is never reached, and the algorithm terminates immediately.)
Now we want to extend Algorithm 1 to carry out Extended GCD.
First, let's introduce two pairs of additional variables. Each pair will represent the coefficients in an equation which states A or B in terms of multiples of X and Y: A_{x}, A_{y} for A; and B_{x}, B_{y} for B :
A = A_{x}X - A_{y}Y
B = B_{y}Y - B_{x}X
Elementarily, if we were to give these coefficients the following values:
A_{x}  :=  1  A_{y}  :=  0  
B_{x}  :=  0  B_{y}  :=  1 
... both equations will hold true, regardless of the particular values of A and B.
Of course, this alone does not give us anything useful.
However, if we assign these basecase values to the coefficients at the start of the algorithm, and correctly adjust them every time we make a change to A or B, the equations will likewise hold true after the completion of the loop, and, in the end, it will necessarily be true that:
A_{x}X - A_{y}Y = GCD(X,Y)
When A or B is divided by two, we will divide each of the coefficients in the corresponding pair (A_{x}, A_{y} or B_{x}, B_{y} respectively) likewise by two. When A and B switch roles, the pairs of coefficients will likewise switch roles. When A is subtracted from B, or vice-versa, the respective pairs will be likewise subtracted from one another, keeping the invariant.
In order to keep the invariant at all times, it will be necessary to apply certain transformations. The mechanics of these transformations will account for most of the moving parts of our Extended GCD algorithm.
To make it clear where we will want to put the new variables and the mechanisms for keeping the invariant, let's rewrite Algorithm 1 in traditional branched-logic form:
Algorithm 2. Quasi-Stein with Branched Logic.
Now, let's work out what must be done to the coefficients A_{x}, A_{y} and B_{x}, B_{y} when we carry out the operations A - B and B - A (steps 9 and 11, respectively) to keep the invariant :
A - B = (A_{x}X - A_{y}Y) - (B_{y}Y - B_{x}X)
      = (A_{x} + B_{x})X - (A_{y} + B_{y})Y
B - A = (B_{y}Y - B_{x}X) - (A_{x}X - A_{y}Y)
      = (A_{y} + B_{y})Y - (A_{x} + B_{x})X
If we write these simplified expressions side-by-side with the original invariant equations :
A     = A_{x}X - A_{y}Y
A - B = (A_{x} + B_{x})X - (A_{y} + B_{y})Y
B     = B_{y}Y - B_{x}X
B - A = (A_{y} + B_{y})Y - (A_{x} + B_{x})X
... it becomes quite obvious. In both the A - B and the B - A cases, we only need to compute the sums A_{x} + B_{x} and A_{y} + B_{y} ; the only difference will consist of whether they must be assigned, respectively, to A_{x}, A_{y} or B_{x}, B_{y}.
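These coefficient updates are pure algebra, and can be spot-checked mechanically. A throwaway Python sketch (the concrete numbers are arbitrary, chosen only for illustration):

```python
# Spot-check: if A = Ax*X - Ay*Y and B = By*Y - Bx*X, then
#   A - B == (Ax + Bx)*X - (Ay + By)*Y
#   B - A == (Ay + By)*Y - (Ax + Bx)*X
# The identities hold for any integers whatsoever; the particular
# values below are arbitrary.
X, Y = 240, 46
Ax, Ay, Bx, By = 5, 3, 2, 7
A = Ax * X - Ay * Y
B = By * Y - Bx * X
assert A - B == (Ax + Bx) * X - (Ay + By) * Y
assert B - A == (Ay + By) * Y - (Ax + Bx) * X
print("coefficient updates preserve both invariants")
```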
Now, suppose we introduce these new parts into Algorithm 2, and end up with :
Algorithm 3. A naive attempt at Extended GCD.
Unfortunately, Algorithm 3 will not work; the equation at step 30 will not hold true. And the attentive reader probably knows why.
For the inattentive: the erroneous logic is in steps 22,23 and 26,27, where the coefficients are halved.
The problem is that we have no guarantee that A_{x}, A_{y} or B_{x}, B_{y} are in fact even at steps 22,23 and 26,27 where they are being divided by two. An entire bit of information "walks away into /dev/null" every time one of these coefficients turns out to have been odd. And then, the invariant no longer holds. Instead of correct coefficients A_{x}, A_{y} at step 30, we will end up with rubbish.
The pill needed here is known (according to D. Knuth) as Penk's Method. (I have not succeeded in discovering who, when, or where, Penk was. Do you know? Tell me! Update: a reader has since found the works of Penk.)
This method, as it happens, is not complicated. And it looks like this:
Algorithm 4. Extended GCD with Penk's Method.
Of course, for our purposes, it does not suffice to merely know what it looks like; we must also understand precisely why it is guaranteed to work !
At this point, the reader should be satisfied with the logic of the Odd-A-Odd-B case; and therefore knows that the invariant in fact holds when we first reach either of the two places where Penk's Method is applied.
Let's start by proving that what Penk does to A_{x}, A_{y} (in steps 22..24) and B_{x}, B_{y} (in steps 29..31) does not itself break the invariant.
Review the invariant again:
A = A_{x}X - A_{y}Y
B = B_{y}Y - B_{x}X
... and see that in the first case,
... the A invariant equation holds :
(A_{x} + Y)X - (A_{y} + X)Y
= (A_{x}X + XY) - (A_{y}Y + XY)
= A_{x}X - A_{y}Y = A
And in the second case,
... the B invariant equation holds :
(B_{y} + X)Y - (B_{x} + Y)X
= (B_{y}Y + XY) - (B_{x}X + XY)
= B_{y}Y - B_{x}X = B
The added X and Y terms simply cancel out in both cases.
However, it remains to be proven that Penk's Method actually resolves our problem, rather than merely avoids creating a new one; i.e. that these operations in fact guarantee the evenness of the coefficients prior to their division by two.
Let's demonstrate this for the Even-A case first; later, it will become apparent that the same approach works likewise in the Even-B case.
At the start of step 21, we know that A is even.
And, at all times, we know that X and Y cannot be even simultaneously (because we have divided out 2^{Twos} from each of them.)
On top of all of this: since we graduated from kindergarten, we also know about the following properties of arithmetic operations upon odd and even numbers:
Parity of Addition and Subtraction:
+/-   | Odd  | Even
Odd   | Even | Odd
Even  | Odd  | Even
Parity of Multiplication:
×     | Odd  | Even
Odd   | Odd  | Even
Even  | Even | Even
So, what then do we know about the parities of the working variables at step 21? Let's illustrate all possible combinations of parities; and observe that we only need the A invariant equation for this. As it happens, there are exactly two possible combinations of parities for its terms :
A = A_{x}X - A_{y}Y   (both terms Odd)
A = A_{x}X - A_{y}Y   (both terms Even)
Elementarily, given that A is even at this step, both terms of its invariant equation must have the same parity.
Let's produce a table where all possible combinations of parities of the variables having an unknown parity (A_{x}, A_{y}) are organized by the known parities of the variables having a known parity (X, Y, A).
In the table below, a "?" marks a coefficient which could be either odd or even in a particular combination; combinations for which neither possible parity of A_{x} or A_{y} would be consistent with the known parities of X, Y, and A are marked as impossible :

                | Odd(X), Odd(Y)           | Even(X), Odd(Y)             | Odd(X), Even(Y)
Both terms Odd  | Odd(A_{x}), Odd(A_{y})   | impossible: A would be Odd! | impossible: A would be Odd!
Both terms Even | Even(A_{x}), Even(A_{y}) | A_{x} = ?, Even(A_{y})      | Even(A_{x}), A_{y} = ?

(In the Even(X) column, the term A_{x}X is even regardless of A_{x}, and so an odd A_{y} would make A odd; likewise, in the Even(Y) column, an odd A_{x} would make A odd.)
And now suppose that we have reached step 23. Therefore we already know that at least one of A_{x} and A_{y} is odd. Let's remove from the table the only entry where this is not true; and, finally, in all of the remaining entries, indicate the deduced parity of the variables whose parity was previously indeterminate :
                | Odd(X), Odd(Y)                           | Even(X), Odd(Y)             | Odd(X), Even(Y)
Both terms Odd  | Odd(A_{x}), Odd(A_{y})                   | impossible: A would be Odd! | impossible: A would be Odd!
Both terms Even | removed: A_{x}, A_{y} would both be Even | Odd(A_{x}), Even(A_{y})     | Even(A_{x}), Odd(A_{y})
All of the parities are now determined.
We are left with only three possible combinations of parities of the terms of A which could exist at step 23. So let's list what happens to each of them when Penk's Method is applied, i.e. Y is added to A_{x} and X is added to A_{y} :
Possible Parity Combos    | Parity of A_{x} + Y ? | Parity of A_{y} + X ?
Odd(A_{x}), Odd(A_{y})    | Odd + Odd = Even      | Odd + Odd = Even
Odd(A_{x}), Even(A_{y})   | Odd + Odd = Even      | Even + Even = Even
Even(A_{x}), Odd(A_{y})   | Even + Even = Even    | Odd + Odd = Even
In the Even-A case, Penk's Method works, QED. It is difficult to think of how this could be made more obvious. The parities on both sides of the + sign always match, and so we have forced both A_{x} and A_{y} to be even, without breaking the invariant equation for A.
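For the skeptical reader, the Even-A case can also be confirmed by brute force over small integers. A throwaway Python sketch (the bound of 8 is arbitrary):

```python
# Exhaustive check over small values: whenever A = Ax*X - Ay*Y is even,
# X and Y are not both even, and at least one of Ax, Ay is odd,
# the Penk adjustment (Ax + Y, Ay + X) yields two even coefficients
# while leaving the value of A unchanged.
for X in range(1, 8):
    for Y in range(8):
        if X % 2 == 0 and Y % 2 == 0:
            continue                      # forbidden: X, Y both even
        for Ax in range(8):
            for Ay in range(8):
                A = Ax * X - Ay * Y
                if A % 2 != 0:            # Penk applies only to even A
                    continue
                if Ax % 2 == 0 and Ay % 2 == 0:
                    continue              # no adjustment needed here
                nAx, nAy = Ax + Y, Ay + X
                assert nAx % 2 == 0 and nAy % 2 == 0
                assert nAx * X - nAy * Y == A
print("Penk's Method: Even-A case verified for all small combinations")
```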
And now, how about the Even-B case ? Suppose that we have reached step 30. Let's do the same thing as we have done for the Even-A case:
B = B_{y}Y - B_{x}X   (both terms Odd)
B = B_{y}Y - B_{x}X   (both terms Even)
... does this remind you of anything you've seen before?
Instead of telling the nearlysame and rather tedious story twice, let's leave step 30 as an...
Chapter 21A, Exercise #1: Prove that Penk's Method is correct in the Even-B case.
Chapter 21A, Exercise #2: Why are simultaneously-even X and Y forbidden in this algorithm?
Now, we are ready to rewrite Algorithm 4, this time grouping any repeating terms together, as follows:
Algorithm 5. Extended GCD with Penk's Method and Unified Terms.
For integers X ≥ 1 and Y ≥ 0:
At this point, we have something which closely resembles commonly-used traditional variants of the Binary Extended GCD algorithm. With the not-unimportant difference that all of the working variables stay positive throughout. This way, we will avoid the need to introduce signed arithmetic into the internals of FFA.
Should we proceed with rewriting Algorithm 5 using constanttime operations? ...or is something important still missing? What is it ?
Chapter 21A, Exercise #3: What, other than the use of FFAstyle constanttime arithmetic, is missing in Algorithm 5 ?
~To be continued in Chapter 21B.~
Good afternoon there,
I became even more concerned about our future after reading your http://www.loperos.org/?p=1299 page.
Our government continues to use surveillance technologies to collect sensitive data and then there’s a real risk that this data can be leaked. Did you know that over 7.9 billion private records have been exposed last year alone?
Your article really stands out, it’s exceptionally well written and interesting to read (yet, disturbing). I’ve also recently had a chance to contribute to an article that explains government intelligence alliances ([spam link snipped]).
I’m trying to spread the word about the increasing invasion of privacy and would be flattered if you would consider linking to my post.
P.S. What do you think about the article?
Regards,
[spammer]
My response, posted in the (perhaps naive) hope that the message was in fact meat-generated, and that I might do my part to discourage the further emission of any similar nonsense from other, similar meat:
Dear [spammer],
In my view, all, without exception, public VPN services are guilty until proven innocent (and such proof is -- elementarily -- impossible!) of being honeypots. On top of this, their practical and immediate effect on a user's privacy is entirely opposite from what the vendors claim it to be.
The traffic of any particular consumer ISP consists primarily of porn and "lolcats", and therefore creates an expensive headache for the snoops: they are stuck buffering the garbage, and sifting for "interesting" material -- a job which, despite $trillions invested in "intelligent" filters, remains essentially muscle-powered. And no amount of human snoop muscle can ever keep up with the input.
To help in separating the lolcat-and-porn packets from the "interesting" material, the snoops set up honeypot services, e.g. the TOR network and various VPNs, so that "interesting" people can helpfully expose themselves as such.
I want no part whatsoever in perpetrating this (or any other) scam. It is an insult to my intelligence, and to that of my readers, to ask for any such thing.
And so, any text presenting VPNism in any light different from the above, I regard as a work of NSA propaganda. And I see no reason to recommend any such piece to my readers, or for that matter to anyone else.
If you are an actual human being, as opposed to a spy agency shill (unfortunately, the item you endorsed does not enable me to make this distinction!) I recommend to think about what I have said, and reconsider the wisdom of *deliberately paying* an organization whose primary purpose is to assist snoops in their work of ferreting out "interesting" victims whose packets may be worth the disk space to log.
But if in fact you are a shill, I expect you will ignore this message.
P.S. I will be making this exchange public. Minus the spam link.
Yours,
S
I have been using Gentoo for nearly all Linux-related work since 2007. It was "the lesser of evils": to my knowledge, no other Linux variant ever offered a comparable level of rodenticidal control, while at the same time providing an adequate means of automatically cutting through "dependency hell".
This uniqueness made Gentoo a target for concerted attack by the Enemy, on a variety of fronts. Through creeping Poetteringism, the cancerous blight of GCC 5+, and the eventual decay -- fostered by corrupt maintainers -- of the portage system into unusability, it became nearly impossible to set up a hygienic Gentoo system from scratch.
The minute you try emerge --sync, you will be force-fed a boiling ocean of liquid shit under the name of "progress". And the machine will have to be scrubbed and re-OSed, if you wish to continue with civilized life on it.
Eventually I resorted to creating "canned" Gentoos using bitwise copies of old installations. (E.g. this system for ARM64, and this one for AMD64, are provided for my ISP service customers.)
One obvious problem with the "canned" approach is the decay of the official Gentoo distfiles mirrors. At one time these operated purely by accretion, i.e. added new packages while preserving the old. At a certain point this changed, and the servants of "progress" began to sabotage the ecosystem by deliberately removing "ungodly" packages. Mirror operators who refused to participate in this "cultural revolution" were delisted, and you will not find them via the Gentoo WWW - even supposing their mirrors are still standing (many have simply given up.)
Until and unless a cultural "reset" takes place, and something like what Gentoo tried to be in 2007 again exists as a living entity, I intend to continue the use of my hand-curated "fossilized" variants. And to make this easier for others, I have put together a public distfiles mirror:
... using the contents of my backup tapes from a multitude of Gentoo systems I have operated.
Gentoo users may add this URL to their GENTOO_MIRRORS list in /etc/portage/make.conf; or download any necessary tarballs into their local /usr/portage/distfiles by hand; this is a matter of taste.
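Concretely, the make.conf entry takes the shape below. The URL shown is a placeholder; substitute the actual mirror URL given above:

```
# /etc/portage/make.conf
# Placeholder URL -- replace with the mirror URL given above.
# Portage tries the listed mirrors in order.
GENTOO_MIRRORS="http://mirror.example.net/distfiles"
```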
WARNING: this repository is offered with no warranty or endorsement of any kind, and certainly contains software with dangerous flaws. It does not necessarily reflect the current configuration of any of the Gentoo machines I presently use (although packages from both the RK and the "Dulap" variant's default distfiles directories are all present.) I have trimmed some - but by no means all! - of the obvious garbage. It goes without saying that I am not responsible for the contents of the tarballs, or even their integrity. Please do not use tarballs for which you do not have an authoritative signature for safety-critical work! From this - or any other - public repository.
Any and all questions regarding the above - I prefer to answer here.
People from my L1 WoT who wish to contribute specimens to this collection, are invited to contact me.
At this time, the Gentoo collection weighs ~16GB.
The following mirror now contains a multitude of GNAT and miscellaneous Ada-related packages/dependencies, obtained on April 10, 2020 via JFW's method:
READMEs, as well as MSWin and Apple binaries, have been omitted. Packages with duplicate names are stored in the "dupes" subdirectories (1, 2, 3, 4, 5, 6, 7).
The same warning as given for the Gentoo repository applies to this collection.
At this time, the Ada collection weighs ~17GB. Aside from the binaries removal, this set was not curated in any way.
Readers are encouraged to mirror the mirrors. Anyone who has done this, is encouraged to leave a comment here, with the URL of the new mirror.
]]>'Watchglass' is a simple program for monitoring the health of a constellation of Bitcoin nodes. The user interface is in the form of an IRC bot, based on my August 2019 'Logotron'. Unlike the latter, this program has no non-standard library dependencies (i.e. it will operate on a stock Python 2.7 installation.)
The program was written to help in chasing down a possible wedge condition in TRB.
You will need:
Add the above vpatches and seals to your Vset, and press to watchglass_genesis.kv.vpatch watchglass_peers_cmd.kv.vpatch.
Configure and operate as described in README.txt and below.
Currently [2], Watchglass knows how to:
Poll a pre-configured list of hosts, asynchronously, and report the result to the IRC channel in which the poll command had been issued. Each host's response to the version command of the Bitcoin protocol is summarized (including response latency); and, if there is no response within the specified node-timeout interval, that host is reported busy.
Probe a specified IP (and optionally, a port other than the default 8333) and report similarly to (1). The pertinent bot command is, unsurprisingly, probe.
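Both commands speak the traditional Bitcoin wire protocol, in which every message carries a 24-byte header: network magic, a zero-padded 12-byte command name, the payload length, and the first four bytes of the double-SHA256 of the payload. A minimal sketch of the header packer (the function names are mine, not those of watchglass.py; modern Python 3 shown, though Watchglass itself targets Python 2.7):

```python
import hashlib
import struct

MAINNET_MAGIC = 0xD9B4BEF9  # mainnet network magic, little-endian on the wire

def checksum(payload):
    """First 4 bytes of double-SHA256 over the payload, per the btc protocol."""
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]

def make_header(command, payload):
    """Build the 24-byte message header: magic, command, length, checksum."""
    return (struct.pack('<I', MAINNET_MAGIC)
            + command.ljust(12, b'\x00')        # zero-padded command name
            + struct.pack('<I', len(payload))   # payload length
            + checksum(payload))                # payload checksum

# e.g. the header of a payload-less message:
hdr = make_header(b'verack', b'')
```

A `version` message would append its payload (protocol version, services, timestamp, addresses, etc.) after such a header.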
Adjust the settings in watchglass.conf to conform to your requirements.
In particular, you will need to provide a set of IRC relays for the bot to connect to; [1] a nick/pw pair; and a set of channels to occupy. The default trigger prefix is !w. The node-timeout setting corresponds to the interval within which a polled node must answer (or else be declared 'busy'.)
nodes.txt is the list of btc nodes that will be polled when a 'poll' command is issued. Presently it contains several TRB nodes which I operate, as well as a few which the former are peering with. For example,
188.121.168.69 8333 unknown
If you want your Watchglass bot to identify a node's operator when issuing output, supply a name in place of the 'unknown' in the corresponding line in nodes.txt.
The provided start_bot.sh can be edited to match your directory layout, and placed into e.g. a @reboot cron job.
The routines of watchglass.py pertaining to the traditional btc protocol are found in the section marked 'BTC Node Interrogator'.
Those pertaining to IRC, are found in the section marked 'IRC Bot'.
An instance of Watchglass is currently operating in my IRC channel on Fleanode. If you would like your TRB node to be included in the polling roster, or would like to have the bot join your channel, please write in. Alternatively, you can use the program by running your own instance.
[1] See 'Logotron Genesis'. The same relay rotation and reconnection logic is used in this bot. The only major change is the addition of locking in the 'speak' routine, to make the bot thread-safe and enable the use of asynchronous threads for interrogating btc nodes.
[2] It is not difficult to add periodic silent polling / event reporting, in place of the manually-triggered polls. But I have not yet done this. Other potential improvements could include e.g. mempool search for a given transaction, network mapping, etc. There is a more or less complete general-purpose implementation of the traditional btc 'wire protocol' in Watchglass.
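The locking scheme mentioned in [1] amounts to: every thread that wants to emit an IRC line must take a mutex around the socket write, so that output from concurrent node-interrogation threads never interleaves mid-line. A sketch of the idea (class and method names are mine, not those of watchglass.py):

```python
import threading

class Speaker(object):
    """Serializes writes from multiple threads onto one IRC connection."""
    def __init__(self, sock):
        self.sock = sock
        self.lock = threading.Lock()

    def speak(self, channel, text):
        # Only one thread at a time may write to the relay socket:
        with self.lock:
            self.sock.sendall(("PRIVMSG %s :%s\r\n" % (channel, text)).encode())

# Demonstration with a stand-in socket (a real bot passes its IRC socket):
class _FakeSock(object):
    def __init__(self):
        self.sent = b''
    def sendall(self, data):
        self.sent += data

bot = Speaker(_FakeSock())
bot.speak('#trb', 'hello')
```

With the lock in place, interrogator threads can call `speak` directly as soon as a poll result arrives, instead of funneling results through a single output thread.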
]]>
You will need:
Add the above vpatches and seals to your Vset, and press to ffa_ch20d_litmus_nested_fix.kv.vpatch.
As of Chapter 20D, the versions of Peh and FFA are 250 and 253, respectively. FFA and Peh themselves have not changed from Chapter 19.
Compile Peh:
cd ffacalc
gprbuild
... and install it to a path visible in your shell (e.g. /usr/bin.)
Litmus as given in the previous chapter would choke (failing safe, i.e. rejecting the signature) when fed "nested" clearsigned texts. This bug has been fixed:
# 'ASCII-Armoured' PGP signatures have mandatory start and end markers:
START_MARKER="^\-\-\-\-\-BEGIN PGP SIGNATURE\-\-\-\-\-"
END_MARKER="^\-\-\-\-\-END PGP SIGNATURE\-\-\-\-\-"
.......
CLEAR_MARKER="^\-\-\-\-\-BEGIN PGP SIGNED MESSAGE\-\-\-\-\-"
.......
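The essential element of the fix is the `^` anchor on the marker patterns. In a GPG clearsigned document, any embedded marker lines are dash-escaped ("- -----BEGIN ..."), so a pattern anchored to the start of the line matches only the outer, unescaped markers, whereas an unanchored substring search would seize on the first (inner) marker and misparse the document. A Python illustration of the same idea (hypothetical helper, not code from Litmus):

```python
import re

# Anchored at start-of-line, exactly like the fixed Litmus grep pattern:
SIG_START = re.compile(r'^-----BEGIN PGP SIGNATURE-----$')

def find_sig_start(lines):
    """Return the index of the OUTER signature start marker, or -1.

    Dash-escaped inner markers ('- -----BEGIN ...') do not match the
    anchored pattern, so nested clearsigned texts are handled correctly."""
    for i, line in enumerate(lines):
        if SIG_START.match(line):
            return i
    return -1

doc = [
    '-----BEGIN PGP SIGNED MESSAGE-----',
    'Hash: SHA512',
    '',
    '- -----BEGIN PGP SIGNATURE-----',   # inner marker: dash-escaped, skipped
    '- -----END PGP SIGNATURE-----',
    '-----BEGIN PGP SIGNATURE-----',     # outer marker: matched
    '-----END PGP SIGNATURE-----',
]
idx = find_sig_start(doc)
```

Note that an unanchored substring search would have found the marker text already at the dash-escaped line, which is precisely the failure mode the fix removes.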
To test, invoke Litmus with two arguments: the public key, followed by the input file:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

This is a test-fire GPG 'clearsigned' document written for FFA Chapter 20C.
- -----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBCgAGBQJeGimSAAoJELmCKKABq//HfgsH/1bg0zpL1PoApHE5BMNXmJ7U
7vWKQJ3ZCsGvkUSIgIZESTp6EpGDclBQqvBsdBZ9KVnBpGiIbmKbNjDNlZU5YcWt
WQ7FO+r4LiUZOJRiPADWMM7tpI3azXelPyuge7hFe6o3UG0aErSceKXcoFGKFDO2
MCZXkR6Lfd3rYKHcySOEvkoCMQ4ytbEQM4IBE57AqQIpmnI7Mo1033TnDgL2KdXd
PHLtedNyaKwzVJw08BtKjzole130GvElSMmxilrHZ0w+2J4uHRuBELyZbJ55vu+x
Bx+Q5OpMwz0ZG4vjz8ncZE/nCvK/sZv/RXsdNow95fETH5DKC9HfIP279kKlBRg=
=3ozu
- -----END PGP SIGNATURE-----
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBCgAGBQJeHgg9AAoJELmCKKABq//HHywH/3cfUfV8SH2yuxHNBPmk3Yha
Nw/40YK52/DJ/H+RUY6Zei2X3tJPYe3vDN2P3iRqKBycpUPtNnCDaF3BAvg7cyRQ
PghpykoSeERxIQZaIQWTUyMQRBwoMF90wiQpcgCEXRH3xZo/p0M3M6zPrqKElXf4
essZmM3jsvfU9T8JGho0RPyK1J42fTBCiRb0Y++ZQGWVEwJugtVnQOL76fYmSUpW
vDBKgNJfvGOMNTCTLemqD2nn6DZzLN9TOrRlmLvfr0lbzu9rSGAMtRKqozhZeCXf
LiTWWVZkCUGr/SEk5olhbHQnfoYMH1V071qJnpv1/5QqIiZ1z2KbP65Ba/i3i0A=
=o/9T
-----END PGP SIGNATURE-----
./litmus.sh asciilifeform.peh asciilifeform-clearsigned-twice.txt
... which will yield the output:
VALID GPG RSA signature from asciilifeform <stas@loper-os.org>
Naturally, no verification is performed on the embedded clearsigned text (just as in traditional GPG). If you want to operate on the embedded text, you must extract it.
~To be continued!~
]]>You will need:
Add the above vpatches and seals to your Vset, and press to ffa_ch20c_litmus_clearsigned.kv.vpatch.
As of Chapter 20C, the versions of Peh and FFA are 250 and 253, respectively. FFA and Peh themselves have not changed from Chapter 19.
Compile Peh:
cd ffacalc
gprbuild
... and install it to a path visible in your shell (e.g. /usr/bin.)
Litmus now supports GPG "clearsigned" texts. Compatibility with the program given in the previous chapter is retained. The "clearsigned" mode of operation is selected simply by invoking the script with two arguments instead of the usual three:
.......
# Whether we are working on a 'clearsigned text'
CLEARSIGN_MODE=false

# Set up in the selected mode:
case $ARGCOUNT in
    2)
        # If given two arguments, verify a 'clearsigned' text file:
        CLEARSIGN_MODE=true

        # The processed payload will end up in a temporary file:
        DATAFILE=$(mktemp) || { echo "Failed to create temp file!" >&2; \
            exit $RET_EGGOG; }

        # On exit, if in 'clearsign' mode, remove temporary file with payload:
        trap remove_temp_file EXIT

        # Expect 'Canonical Text Signature' in GPG sig packet turd
        expect_sig_class=1
        ;;
    3)
        # Verify Detached Signature on given Data File (third argument is path):
        # The given Data file to be verified against the Signature
        DATAFILE=$3 # i.e. path given on command line

        # Expect 'Detached Binary Signature' in GPG sig packet turd
        expect_sig_class=0
        ;;
    *)
        # If invalid arg count - print usage and abort:
        echo "Usage: $0 publickey.peh signature.sig datafile"
        echo " or: $0 publickey.peh clearsigned.txt"
        exit $RET_EGGOG
        ;;
esac
.......
RFC4880 (the document GPG nominally conforms to) specifies certain transformations required to obtain the hashable payload from the document (e.g. imposition of MSDOS-style line endings), which were implemented as follows:
.......
# If we are operating on a 'clearsigned' text file, $DATAFILE will be
# an empty temporary file, and the payload is to be extracted to it,
.......
if [ $CLEARSIGN_MODE == true ]
then
    # Find position of 'clearsign' payload start marker:
    CLEAR_MARKER="\-\-\-\-\-BEGIN PGP SIGNED MESSAGE\-\-\-\-\-"
    start_clr=$(grep -m 1 -n "$CLEAR_MARKER" $SIGFILE | cut -d ':' -f1)

    # If payload start marker was not found:
    if [ "$start_clr" == "" ]
    then
        eggog_broken_clearsigned
    fi

    # Discard the start marker:
    start_clr=$(($start_clr + 1))

    # The payload ends with the line preceding the sig start:
    end_clr=$(($start_ln - 2))

    # Find any 'Hash:' headers:
    start_body=$(tail -n "+$start_clr" $SIGFILE | \
        grep -v -n -m 1 "^Hash:" | cut -d ':' -f1)

    # Skip the above headers and mandatory empty line:
    start_clr=$(($start_clr + $start_body))

    # If there is no payload, or the markers are misplaced, abort:
    if [ $start_clr -ge $end_clr ]
    then
        eggog_broken_clearsigned
    fi

    # Extract the 'clearsign' payload to the temporary file:
    cat $SIGFILE | sed -n "$start_clr,$end_clr p" | \
        sed 's/[ \t]*$//; s/^- //' | \
        awk '{printf("%s\r\n",$0)}' \
        > $DATAFILE

    # Remove the trailing CR,LF ending:
    truncate -s -2 $DATAFILE

    # After this, proceed exactly like with 'detached' sigs, but
    # with the expected 'class' being 1 rather than 0.
fi
.......
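The canonicalization performed by the shell pipeline above (strip trailing spaces and tabs, undo the "- " dash-escaping, impose CRLF line endings, and drop the final line ending) can be sketched compactly in Python; this is an illustration of the RFC4880 cleartext hashing rules under my own function name, not code from Litmus:

```python
def clearsign_payload(lines):
    """Canonicalize a clearsigned payload for hashing, per RFC 4880.

    `lines` is the list of payload lines between the 'Hash:' headers and
    the signature start marker. Trailing spaces/tabs are stripped, the
    '- ' dash-escapes are undone, and lines are joined with CRLF; joining
    (rather than terminating) each line reproduces the shell version's
    final 'truncate -s -2' of the trailing CR,LF."""
    canon = []
    for line in lines:
        line = line.rstrip(' \t')       # strip trailing whitespace
        if line.startswith('- '):
            line = line[2:]             # undo dash-escaping
        canon.append(line)
    return '\r\n'.join(canon)
```

The resulting string is what gets hashed (together with the hashed subpacket region of the signature) when verifying a clearsigned text.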
To verify a "clearsigned" message, invoke Litmus with two arguments: the public key, followed by the input file, e.g.:
./litmus.sh asciilifeform.peh asciilifeform-clearsigned.txt
... which will yield the output:
VALID GPG RSA signature from asciilifeform <stas@loper-os.org>
~To be continued!~
]]>You will need:
Add the above vpatches and seals to your Vset, and press to ffa_ch20b_litmus_legacy_hashes.kv.vpatch.
As of Chapter 20B, the versions of Peh and FFA are 250 and 253, respectively. FFA and Peh themselves have not changed from Chapter 19.
Compile Peh:
cd ffacalc
gprbuild
... and install it to a path visible in your shell (e.g. /usr/bin.)
In the course of experimenting with the subject of the previous Chapter, I found that a number of current and past Vtronicists had misconfigured their GPG, and were emitting suboptimal signatures (i.e. not SHA512, the strongest of the ancient hashes supported by that utility). And so I added support for these. (Litmus will emit a warning if such a signature is encountered.)
The following routine has been added to Litmus:
# If Sig was made with an unsupported hash algo:
eggog_unsupported_hash() {
    algo=$1
    echo "This sig uses an unsupported Digest Algo: $1 !" >&2
    exit $RET_EGGOG
}
.......
# Warnings:
achtung() {
    echo "WARNING: $1" >&2
}
.......
# Digest Algo (only certain hash algos are supported)
get_sig_bytes 1
turd+=$r
hex_to_int
sig_digest_algo=$r

# If hash algo is supported, get ASN turd and MD_LEN; and if not, eggog:
case $sig_digest_algo in
    1) ## MD5 - NOT SUPPORTED ##
        eggog_unsupported_hash "MD5"
        ;;
    2) ## SHA1 ##
        achtung "This sig was made with SHA1, which is cheaply breakable!"
        achtung "Please contact the signer ($pubkey_owner) !"
        HASHER="shasum -a 1 -b"
        ASN="3021300906052b0e03021a05000414"
        MD_LEN=20
        ;;
    3) ## RIPEMD/160 - NOT SUPPORTED ##
        eggog_unsupported_hash "RIPEMD/160"
        ;;
    8) ## SHA256 ##
        achtung "This sig was made with SHA256; GPG supports SHA512."
        achtung "Please contact the signer ($pubkey_owner) !"
        HASHER="shasum -a 256 -b"
        ASN="3031300d060960864801650304020105000420"
        MD_LEN=32
        ;;
    9) ## SHA384 ##
        achtung "This sig was made with SHA384; GPG supports SHA512."
        achtung "Please contact the signer ($pubkey_owner) !"
        HASHER="shasum -a 384 -b"
        ASN="3041300d060960864801650304020205000430"
        MD_LEN=48
        ;;
    10) ## SHA512 ##
        HASHER="shasum -a 512 -b"
        ASN="3051300D060960864801650304020305000440"
        MD_LEN=64 # 512 / 8 == 64 bytes
        ;;
    11) ## SHA224 ##
        achtung "This sig was made with SHA224; GPG supports SHA512."
        achtung "Please contact the signer ($pubkey_owner) !"
        HASHER="shasum -a 224 -b"
        ASN="302D300d06096086480165030402040500041C"
        MD_LEN=28
        ;;
    *) ## Unknown Digest Type ##
        eggog_unsupported_hash "UNKNOWN (type $sig_digest_algo)"
        ;;
esac

# Calculate length (bytes) of the ASN turd for the digest used in the sig:
ASN_LEN=$((${#ASN} / 2))
.......
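The ASN strings in the routine above are the DER-encoded "DigestInfo" prefixes prescribed by PKCS#1 v1.5 padding: the value inside a valid RSA signature is such a prefix followed by the raw message digest. The same table, sketched in Python (the function name `digest_info` is mine; the constants are those from the routine above):

```python
import hashlib

# RFC 4880 digest-algorithm IDs -> (hashlib name, DER DigestInfo prefix, hex):
DIGEST_ALGOS = {
    2:  ('sha1',   '3021300906052b0e03021a05000414'),
    8:  ('sha256', '3031300d060960864801650304020105000420'),
    9:  ('sha384', '3041300d060960864801650304020205000430'),
    10: ('sha512', '3051300d060960864801650304020305000440'),
    11: ('sha224', '302d300d06096086480165030402040500041c'),
}

def digest_info(algo_id, data):
    """DER prefix || hash(data): the value expected inside the RSA sig
    after the PKCS#1 v1.5 padding is removed."""
    name, asn = DIGEST_ALGOS[algo_id]
    return bytes.fromhex(asn) + hashlib.new(name, data).digest()
```

Note that the prefix length varies by algorithm (15 bytes for SHA1, 19 for the SHA-2 family), which is why the shell routine recomputes ASN_LEN from the string rather than hard-coding it.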
To test, for instance, verification of SHA1 signatures (please stop using SHA1, people! see e.g. here), download this Litmus-converted GPG public key of former contributor Diana Coman, and verify her signature of Chapter 1:
./litmus.sh diana_coman.peh ffa_ch1_genesis.kv.vpatch.diana_coman.sig ffa_ch1_genesis.kv.vpatch
... which will yield the output:
WARNING: This sig was made with SHA1, which is cheaply breakable!
WARNING: Please contact the signer (Diana Coman <office@dianacoman.com>) !
VALID GPG RSA signature from Diana Coman <office@dianacoman.com>
~To be continued!~
]]>