If you take a look at the program that implements the Gauss Easter algorithm, you'll see it's much longer but also much more readable (although it still needs improvement). Compare it to the original algorithm:
a = year mod 19
b = year mod 4
c = year mod 7
k = [year / 100]
p = [(13 + 8k) / 25]
q = [k / 4]
M = (15 - p + k - q) mod 30
N = (4 + k - q) mod 7
d = (19a + M) mod 30
e = (2b + 4c + 6d + N) mod 7
Easter is on 22 + d + e March or d + e - 9 April
Exception 1: if d = 29 and e = 6, Easter is on 19 April rather than 26 April
Exception 2: if d = 28 and e = 6 and (11M + 11) mod 30 < 19, Easter is on 18 April rather than 25 April
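For reference, here is a direct transcription of the steps above into Python (the function name and the `(month, day)` return convention are my own choices, not part of the original algorithm; `[x]` in the listing means integer division):

```python
def gauss_easter(year):
    """Return Easter Sunday for `year` as a (month, day) tuple,
    following Gauss's algorithm step by step."""
    a = year % 19
    b = year % 4
    c = year % 7
    k = year // 100          # [year / 100]
    p = (13 + 8 * k) // 25   # [(13 + 8k) / 25]
    q = k // 4               # [k / 4]
    M = (15 - p + k - q) % 30
    N = (4 + k - q) % 7
    d = (19 * a + M) % 30
    e = (2 * b + 4 * c + 6 * d + N) % 7

    # Exception 1: d = 29 and e = 6 gives April 19, not April 26
    if d == 29 and e == 6:
        return (4, 19)
    # Exception 2: d = 28, e = 6 and (11M + 11) mod 30 < 19
    # gives April 18, not April 25
    if d == 28 and e == 6 and (11 * M + 11) % 30 < 19:
        return (4, 18)

    day = 22 + d + e
    if day <= 31:
        return (3, day)          # 22 + d + e March
    return (4, d + e - 9)        # d + e - 9 April
```

For example, `gauss_easter(2024)` returns `(3, 31)` (March 31) and `gauss_easter(2025)` returns `(4, 20)` (April 20).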
Which is better? The conciseness of the math? Or the long variable names and the splitting into functions, which make the program much more self-explanatory? I guess the answer is "it depends". Still, I think math puts too much emphasis on conciseness. I often read a paper, wonder "what did we say M is?", and have to flip back a few pages to remember. Maybe math should take a cue from programming and relax the convention of single-letter variables. But of course it's hard to break with centuries of tradition.