Pascal's Triangle Theorem

I quite like this little paradigm I have come up with here.

I have proved that the combinatoric choice definition and the
recursive Pascal's triangle definition agree.
Jarvis Carroll 2025-11-22 00:31:51 +11:00
commit 4d8d3e5f81
2 changed files with 747 additions and 0 deletions

pascal.v (new file, 418 lines)

@@ -0,0 +1,418 @@
(* Coq is a powerful language that can search for proofs automatically, and
then verify them automatically. This results in proofs like the following. *)
Theorem not_not_law_of_excluded_middle: forall A: Prop,
not (not (A \/ not A)).
Proof.
intros A H.
assert (not A); auto.
Qed.
(* How does this proof work? Who knows! You'll have to run `debug auto`
yourself and see. Lots of effort has been put into this sort of automation;
in fact, the entire proof above could have been replaced with a single
'intuition' tactic, which specializes in intuitionistic logic tautologies,
as shown below.
While it is useful to know that Coq *can* go that far with automation, it is
important that we remain tasteful in how we apply it. If you are trying to
prove something formally, it is usually because you know that it works in
some cases, and what you really want to know is *why* those cases work.
In particular, in writing proofs about cryptography and number theory, we
will happily indulge in proof automation to expand brackets and collect like
terms, but the 'control flow' of our proof should be explicit. We want to
know what cases we are considering, and why, and what the key insights are
that make those mundane algebraic manipulations useful. *)
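(* Here is that one-liner, as an illustrative variant; the theorem name is
ours and nothing later depends on it. *)
Theorem not_not_lem_by_intuition: forall A: Prop,
not (not (A \/ not A)).
Proof.
intuition.
Qed.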
(* As an example of this more deliberate style, let's look at a useful but
surprisingly involved inductive proof: that every natural number is either
even or odd. First let's define what we mean by even and odd.
A number is even if it is equal to some other number, doubled. *)
Definition Even (n: nat): Prop :=
exists k: nat, 2 * k = n.
(* And a number is odd if it is one more than that. *)
Definition Odd (n: nat): Prop :=
exists k: nat, 2 * k + 1 = n.
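(* A quick sanity check of these definitions (illustrative only; the example
names are ours): 4 is even via k = 2, and 7 is odd via k = 3. *)
Example four_is_even: Even 4.
Proof.
exists 2.
reflexivity.
Qed.
Example seven_is_odd: Odd 7.
Proof.
exists 3.
reflexivity.
Qed.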
(* This time we are going to use automation more tastefully, and only for the
mundane manipulations. Let's import the 'Ring' tactics; this is where Coq
defines how to automatically simplify expressions built out of + and *
operations. *)
Require Import Ring.
Require Import ArithRing.
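(* As a small illustration (our own example, not used later), 'ring' proves
equalities like this one, which hold in any commutative (semi)ring: *)
Example ring_demo: forall a b: nat,
(a + b) * (a + b) = a * a + 2 * a * b + b * b.
Proof.
intros a b.
ring.
Qed.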
(* Now let's get into the proof itself. This proof is chosen because it is a
good demonstration of a lot of the features, strengths, and weaknesses of
Coq, all in one place. It will be easier to follow than an 'auto' one-liner,
but it still features a lot of the hidden state that is typical of Coq
proofs. Feel free to read through by yourself, or to step through with the
IDE to see what that hidden state actually looks like on each line; once you
get the point that hidden state is hard to follow, feel free to move on to
where we address exactly that, by making the state more explicit. *)
Theorem even_or_odd: forall n: nat,
Even n \/ Odd n.
Proof.
intro n.
(* This is an inductive proof. We prove 0 is even or odd, then 1 is even or
odd, then 2 is even or odd, etc. *)
induction n.
(* First prove 0 is even or odd... Let's go with even! *)
*left.
(* We have to choose k so that 2*k = 0. *)
exists 0.
(* Now we have to actually prove that 2*0 = 0. In DTT this computes down to
0 = 0, so reflexivity closes the goal. *)
reflexivity.
(* Now the inductive case. If n is even or odd, we want to show that n+1 is
even or odd. Check which case we are in first. *)
*destruct IHn as [IsEven | IsOdd].
(* The first case is that n is even, in which case n+1 will be odd. *)
+right.
(* So, we know 2*k = n, and we want to show 2*k' + 1 = n + 1... Clearly
k' = k. *)
destruct IsEven as [k Eq].
exists k.
(* Now Coq gives us the goal 2*k + 1 = n + 1... Let's substitute n in. *)
rewrite <- Eq.
(* Now the goal is 2*k + 1 = 1 + 2 * k. This is clearly true by the
algebraic properties of + and *. We use the 'ring' tactic, even though
natural numbers are only a semiring. The logic still works! *)
ring.
(* That handles the even case. Now, if n is odd, we want to show that n+1 is
even. *)
+left.
(* n is odd, i.e. n = 2*k+1, and we want to show n+1 is even, i.e.
n+1 = 2*k'. Choose k' = k + 1. *)
destruct IsOdd as [k Eq].
exists (k + 1).
(* The goal is 2*(k+1) = 1+n. Substitute n in. *)
rewrite <- Eq.
(* Now the goal is 2*(k+1) = 1 + (2*k + 1)... Let Coq do the algebra. *)
ring.
Qed.
(* So we see that more explicit control flow sets us up for quite nice proofs,
where we can see that the goal follows by simple algebra, algebra which Coq
is prepared to do for us.
Agda and Idris programmers will recognize that Even and Odd could have been
jerry-rigged so that the `ring` tactics weren't required at all, and while
learning that sort of trick is immensely productive for understanding DTT,
it is not very useful for understanding number theory.
The problem remains, however, that we had to write comments to track what
the proof state was going to be. That is confusing, even with comments, and
who knows if the comments are up to date! They aren't checked, even though
we are working in a language specifically designed to check things like
that. Let's make some tactics that let us declare what algebraic expression
we think we are working with, so that our proofs become just a little more
self-documenting.
A very common pattern when proving relations in real math proofs is to
start with one side of the relation, and to manipulate it bit by bit until
it matches the other side we wanted. As long as each manipulation
proves a transitive relation, all those transitive steps can be composed
together into a bigger proof. Let's make some tactics that let us do that,
when our goal is some relation L = R, or Rel L R more generally. *)
(* This first tactic lets us start such a string of manipulations, by declaring
what the LHS of our goal is, effectively just a comment, but checked. *)
Ltac have expr :=
lazymatch goal with
| |- ?Rel expr ?R =>
idtac
| |- ?Rel ?L ?R =>
fail "Have annotation. Expected" expr "but got" L
| |- ?Other =>
fail "Goal is not a relation. Goal:" Other
end.
(* Similarly, this tactic lets us declare what the RHS is, before we do
strong manipulations like ring. *)
Ltac want expr :=
lazymatch goal with
| |- ?Rel ?L expr =>
idtac
| |- ?Rel ?L ?R =>
fail "Want annotation. Expected" expr "but got" R
| |- ?Other =>
fail "Goal is not a relation. Goal:" Other
end.
(* Next, all algebraic manipulations in type theory must be shown to follow
from the lambda calculus rules of beta/eta reduction, or similar, so let's
make a tactic that lets us manipulate our LHS using whatever beta/eta rules
Coq supports. *)
Ltac thatis expr :=
lazymatch goal with
| |- ?Rel ?L ?R =>
unify L expr;
change_no_check (Rel expr R)
| |- ?Other =>
fail "Goal is not a relation. Goal:" Other
end.
(* If beta/eta isn't enough, then we can often specify an intermediate
expression, and get Coq to step there using ring algebra. *)
Ltac rearrange expr :=
lazymatch goal with
| |- ?Rel ?L ?R =>
transitivity expr; [> ring | ]
| |- ?Other =>
fail "Goal is not a relation. Goal:" Other
end.
(* Finally, if we manipulate L using thatis or rewrite tactics, we will
eventually reach the point where it matches R exactly. Normally one would
finish with the 'reflexivity' tactic, but that is confusing when we had to
do manipulations precisely because our goal was not trivially reflexive;
worse, reflexivity might sneak more beta/eta manipulations in, making our
annotations irrelevant. This tactic instead checks that L and R match
exactly, no beta/eta required, and only then uses reflexivity to finish the
proof. *)
Ltac done :=
lazymatch goal with
| |- ?Rel ?L ?L =>
reflexivity
| |- ?Rel ?L ?R =>
fail "Chain derivation not done. Left:" L "Right:" R
| |- ?Other =>
fail "Goal is not a relation. Goal:" Other
end.
(* If our goal isn't a relation yet, we might still want to declare it. *)
Ltac havegoal expr :=
lazymatch goal with
| |- expr =>
idtac
| |- ?Other =>
fail "Expected goal to be" expr "but it was" Other
end.
(* When manipulating disjunctions, like the Even/Odd case coming up, it can
be useful to restate what our current hypotheses are. *)
Ltac weknow newid P :=
lazymatch goal with
| proof: P |- ?Goal =>
rename proof into newid
| |- ?Goal =>
fail "Knowledge assertion. Cannot find proof of" P
end.
Tactic Notation "weknow" ident(newid) ":" constr(P) :=
weknow newid P.
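(* Before using these tactics in anger, here is a tiny illustrative chain (the
example and its name are ours): each step states what we think we have, and
the proof stops compiling if any annotation drifts out of date. *)
Example chain_demo: forall a b: nat,
2 * (a + b) = b + a + (a + b).
Proof.
intros a b.
have (2 * (a + b)).
thatis (a + b + (a + b + 0)). (* definitional computation of 2 * _ *)
rearrange (b + a + (a + b)). (* ring handles the reshuffling *)
done.
Qed.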
(* Now let's try and prove our theorem again, but using these self-documenting
tactics. *)
Theorem even_or_odd_chained: forall n: nat,
Even n \/ Odd n.
Proof.
induction n.
*havegoal (Even 0 \/ Odd 0).
left; havegoal (Even 0). (* Choose (Even 0) as a new goal. *)
exists 0; havegoal (2 * 0 = 0).
have (2 * 0).
thatis 0.
done.
*weknow IndCase: (Even n \/ Odd n).
destruct IndCase.
+havegoal (Even (S n) \/ Odd (S n)).
weknow IsEven: (Even n).
right; havegoal (Odd (S n)).
destruct IsEven as [k Eq].
weknow Eq: (2 * k = n).
exists k; havegoal (2 * k + 1 = S n).
apply eq_sym. (* Flip the goal, to work right to left. *)
have (S n).
rewrite <- Eq.
have (S (2 * k)).
rearrange (2 * k + 1).
done.
+havegoal (Even (S n) \/ Odd (S n)).
weknow IsOdd: (Odd n).
left; havegoal (Even (S n)).
destruct IsOdd as [k Eq].
exists (S k).
havegoal (2 * S k = S n).
apply eq_sym.
have (S n).
rewrite <- Eq.
have (S (2 * k + 1)).
rearrange (2 * S k).
done.
Qed.
(* That was much longer, but now the proof guides you through its own state,
and if you edit this file so that any of the annotations are wrong, the
proof no longer completes! In general our proofs are going to be pretty long
from now on, but hopefully they are much more pleasant to read.
Now, finally, let's get on with some number theory! First let's look at
Pascal's triangle, since understanding that turns out to be essential to
understanding how to find points on elliptic curves.
Our way of indexing Pascal's triangle is going to be a little weird, though;
in order to avoid having to make special cases for all the zeros to the left
and right of the triangle, we will index the triangle in diagonal strips, so
e.g.
pascal 0 k = 1
pascal 1 k = k + 1
pascal 2 k = (k+1)*(k+2)/2
pascal 3 k = (k+1)*(k+2)*(k+3)/6
etc. In combinatoric terms, pascal r k will turn out to be the binomial
coefficient 'choose r out of r + k'.
To do this without getting dizzy over nested cases, let's get dizzy over
higher order functions instead. Let's define a function that takes a
sequence, and returns a sequence representing the partial sums of that
sequence. *)
Fixpoint partial_sum (seq: nat -> nat) (count: nat): nat :=
match count with
| O => 0
| S count' => partial_sum seq count' + seq count'
end.
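(* A quick illustrative check (our own example): the partial sums of the
identity sequence 0, 1, 2, 3, ... *)
Example partial_sum_id:
(partial_sum (fun i => i) 0, partial_sum (fun i => i) 4) = (0, 6).
Proof.
reflexivity.
Qed.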
(* But triangular numbers aren't the sum of zero or more numbers; they are the
sum of one or more numbers! So let's also define a 'sequence' variation. *)
Definition sequence (seq: nat -> nat) (index: nat): nat :=
partial_sum seq (1 + index).
Fixpoint pascal (r: nat): nat -> nat :=
match r with
| 0 => fun _ => 1
| S r' => sequence (pascal r')
end.
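(* Sanity check (an illustrative example of ours): the r = 2 strip should run
through the triangular numbers 1, 3, 6, 10, ... *)
Example pascal_strip_2:
(pascal 2 0, pascal 2 1, pascal 2 2, pascal 2 3) = (1, 3, 6, 10).
Proof.
reflexivity.
Qed.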
Lemma pascal_0: forall r: nat,
pascal r 0 = 1.
Proof.
intro r.
induction r.
*have (pascal 0 0).
thatis 1.
done.
*have (pascal (S r) 0).
thatis (sequence (pascal r) 0).
thatis (pascal r 0).
want 1.
weknow Ind: (pascal r 0 = 1).
exact Ind.
Qed.
Fixpoint factorial (n : nat): nat :=
match n with
| S n' => n * factorial n'
| O => 1
end.
Fixpoint prod_up (k r: nat): nat :=
match k with
| 0 => 1
| S k' => prod_up k' (1 + r) * r
end.
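(* Illustrative check (our own example): prod_up k r multiplies the k
consecutive numbers starting at r, so prod_up 3 4 should be 4 * 5 * 6. *)
Example prod_up_3_4: prod_up 3 4 = 120.
Proof.
reflexivity.
Qed.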
Lemma prod_up_down: forall k r: nat,
prod_up (1 + k) r = (k + r) * prod_up k r.
Proof.
intro k.
induction k; intro r.
*cbn.
havegoal (r + 0 = r * 1).
ring.
*have (prod_up (1 + S k) r).
thatis (prod_up (1 + k) (1 + r) * r).
weknow Ind: (forall r': nat, prod_up (1 + k) r' = (k + r') * prod_up k r').
rewrite (Ind (1 + r)).
have ((k + (1 + r)) * prod_up k (1 + r) * r).
rearrange ((S k + r) * (prod_up k (1 + r) * r)).
thatis ((S k + r) * prod_up (S k) r).
done.
Qed.
Lemma factorial_up: forall n: nat,
factorial n = prod_up n 1.
Proof.
intro n.
induction n.
*reflexivity.
*have (factorial (S n)).
thatis ((S n) * factorial n).
weknow Ind: (factorial n = prod_up n 1).
rewrite Ind.
have ((S n) * prod_up n 1).
rearrange ((n + 1) * prod_up n 1).
rewrite <- (prod_up_down n 1).
have (prod_up (1 + n) 1).
thatis (prod_up (S n) 1).
done.
Qed.
Lemma prod_up_ratio: forall k r: nat,
prod_up k (1 + r) * factorial r = factorial (k + r).
Proof.
intros k.
induction k; intro r.
*cbn.
havegoal (factorial r + 0 = factorial r).
ring.
*have (prod_up (S k) (1 + r) * factorial r).
thatis (prod_up k (2 + r) * (1 + r) * factorial r).
weknow Ind: (forall r': nat, prod_up k (1 + r') * factorial r' = factorial (k + r')).
rearrange (prod_up k (2 + r) * ((1 + r) * factorial r)).
thatis (prod_up k (1 + (1 + r)) * factorial (1 + r)).
rewrite (Ind (1 + r)); have (factorial (k + (1 + r))).
thatis (factorial (k + S r)).
rewrite <- (plus_n_Sm k r); have (factorial (S (k + r))).
thatis (factorial (S k + r)).
done.
Qed.
Theorem pascal_choose: forall r k: nat,
pascal r k * factorial k = prod_up k (S r).
Proof.
intro r.
induction r; intro k.
*cbn.
ring_simplify.
exact (factorial_up k).
*induction k.
**cbn.
ring_simplify.
exact (pascal_0 r).
**have (pascal (S r) (S k) * factorial (S k)).
thatis ((pascal (S r) k + pascal r (S k)) * factorial (S k)).
weknow IndR: (forall k': nat, pascal r k' * factorial k' = prod_up k' (S r)).
(* We have a pascal r (S k) and a factorial (S k), so let's get them together. *)
rearrange (pascal (S r) k * factorial (S k) + pascal r (S k) * factorial (S k)).
rewrite (IndR (S k)).
have (pascal (S r) k * factorial (S k) + prod_up (S k) (S r)).
(* Good. Now look for something similar to apply to pascal (S r) k. *)
weknow IndK: (pascal (S r) k * factorial k = prod_up k (S (S r))).
(* So we need factorial k. That can be arranged. *)
thatis (pascal (S r) k * (S k * factorial k) + prod_up (S k) (S r)).
rearrange (S k * (pascal (S r) k * factorial k) + prod_up (S k) (S r)).
rewrite (IndK).
have (S k * prod_up k (S (S r)) + prod_up (S k) (S r)).
(* This is good. All the factorial pascal stuff is gone. Now we just need
to factorise these big products. They are pretty close already. *)
thatis (S k * prod_up k (S (S r)) + prod_up k (S (S r)) * S r).
(* Let's rewrite these S's. *)
thatis ((1 + k) * prod_up k (2 + r) + prod_up k (2 + r) * (1 + r)).
(* And collect! *)
rearrange ((k + (2 + r)) * prod_up k (2 + r)).
(* And look at that, the coefficients add up to the next factor. *)
rewrite <- (prod_up_down k (2 + r)).
have (prod_up (1 + k) (2 + r)).
thatis (prod_up (S k) (S (S r))).
done.
Qed.
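(* One concrete instance of the theorem (an illustrative example of ours):
pascal 2 3 is 10, and indeed 10 * 3! = 3 * 4 * 5, which is to say this
entry is 'choose 2 out of 5'. *)
Example pascal_choose_2_3: pascal 2 3 * factorial 3 = prod_up 3 3.
Proof.
reflexivity.
Qed.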

scraps.v (new file, 329 lines)

@@ -0,0 +1,329 @@
Theorem pascal_choose: forall k r: nat,
pascal r k * factorial k = prod_up k (1 + r).
Proof.
intros.
induction k.
*have (pascal r 0 * factorial 0).
thatis (pascal r 0 * 1).
rearrange (pascal r 0).
rewrite (pascal_0 r); have 1.
thatis (prod_up 0 (1 + r)).
done.
*have (pascal r (S k) * factorial (S k)).
cbn.
Qed.
Fixpoint pascal (row col : nat): nat :=
match col with
| S prevcol =>
match row with
| S prevrow => pascal prevrow prevcol + pascal prevrow col
| O => 0 (* First row, not first column... Zero! *)
end
| O => 1 (* First column, One! *)
end.
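(* Quick check (an illustrative example of ours) that this (row, col) indexing
gives the usual rows of the triangle, e.g. row 4 is 1 4 6 4 1. *)
Example pascal_row_4:
(pascal 4 0, pascal 4 1, pascal 4 2, pascal 4 3, pascal 4 4) = (1, 4, 6, 4, 1).
Proof.
reflexivity.
Qed.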
(* Now we would like to show that this matches the combinatoric 'choose'
function, so let's define the numerator and denominator of the 'choose'
function as two whole-valued functions.
First, the denominator of choose is the familiar factorial function. *)
Fixpoint factorial (n : nat): nat :=
match n with
| S n' => n * factorial n'
| O => 1
end.
(* Next, the numerator is the ratio of two factorials, but really it is just
a small product counting down from some number. Let's implement that
directly. *)
Fixpoint prod_down (count i0 : nat): nat :=
match count with
| O => 1 (* Empty product is 1 *)
| S count' =>
match i0 with
| O => 0 (* Stop counting down at zero. *)
| S iprev => i0 * prod_down count' iprev
end
end.
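(* Illustrative check (our own example): prod_down counts downwards, so
prod_down 3 5 should be 5 * 4 * 3. *)
Example prod_down_3_5: prod_down 3 5 = 60.
Proof.
reflexivity.
Qed.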
(* Now before we move on to more complex proofs, let's familiarise ourselves
with these functions by confirming the relationships we expect them to have.
First, prod_down from n to 1 should be the same as the factorial of n. *)
Lemma prod_fact : forall n : nat,
prod_down n n = factorial n.
Proof.
intro n.
induction n.
*want (factorial 0).
have (prod_down 0 0).
thatis (factorial 0).
done.
*want (factorial (S n)).
have (prod_down (S n) (S n)).
thatis (S n * prod_down n n).
rewrite IHn; have (S n * factorial n).
thatis (factorial (S n)).
done.
Qed.
(* That was a nice, simple inductive proof. Let's compare it to some of the
other styles that Coq can offer. *)
Lemma prod_fact_opaque : forall n : nat,
prod_down n n = factorial n.
Proof.
intro n.
induction n.
*trivial. (* Base case is obvious. *)
*cbn. (* For the inductive case, get Coq to expand everything out. *)
rewrite IHn. (* Do something with the inductive hypothesis. *)
trivial. (* You work out the rest. *)
Qed.
(* That proof is far too verbose, though. Let's go back to auto. *)
Lemma prod_fact_auto : forall n : nat,
prod_down n n = factorial n.
Proof.
intro n.
induction n; cbn; auto.
Qed.
(* Perfect. *)
(* Jokes aside, there is another property which is nice to prove, which is that
prod_down is the ratio of two factorials. We will show this without dipping
into rationals, by instead multiplying prod_down by factorial to get a
bigger factorial. *)
Lemma fact_ratio : forall r k : nat,
prod_down k (k + r) * factorial r = factorial (k + r).
Proof.
intros r k.
induction k.
*want (factorial (0 + r)).
have (prod_down 0 (0 + r) * factorial r).
thatis (1 * factorial r).
thatis (factorial r + 0).
rearrange (factorial r).
thatis (factorial (0 + r)).
done.
*want (factorial (S k + r)).
have (prod_down (S k) (S k + r) * factorial r).
thatis ((S k + r) * prod_down k (k + r) * factorial r).
weknow Ind: (prod_down k (k + r) * factorial r = factorial (k + r)).
rearrange ((S k + r) * (prod_down k (k + r) * factorial r)).
rewrite Ind; have ((S k + r) * (factorial (k + r))).
thatis (factorial (S k + r)).
done.
Qed.
(* Another manipulation we can do to these prod_down sequences is to strip one
term off the bottom. *)
Lemma prod_down_r : forall r k : nat,
prod_down (1 + k) (1 + k + r) = prod_down k (1 + k + r) * (S r).
Proof.
intros.
induction k.
*have (prod_down (1 + 0) (1 + 0 + r)).
thatis ((1 + r) * 1).
rearrange (1 * (1 + r)).
thatis (prod_down 0 (1 + 0 + r) * S r).
done.
*have (prod_down (1 + S k) (1 + S k + r)).
thatis (prod_down (2 + k) (2 + k + r)).
thatis ((2 + k + r) * prod_down (1 + k) (1 + k + r)).
weknow Ind: (prod_down (1 + k) (1 + k + r) = prod_down k (1 + k + r) * S r).
rewrite Ind.
have ((2 + k + r) * (prod_down k (1 + k + r) * S r)).
rearrange (((2 + k + r) * prod_down k (1 + k + r)) * S r).
thatis (prod_down (S k) (1 + S k + r) * S r).
done.
Qed.
Lemma prod_down_1: forall r: nat,
prod_down 1 r = r.
Proof.
intros.
destruct r.
Lemma prod_down_r0 : forall r k : nat,
prod_down (1 + k) (k + r) = prod_down k (k + r) * r.
Proof.
intros.
induction k.
*have (prod_down (1 + 0) (0 + r)).
cbn.
thatis ((r + 0) * 1).
rearrange (1 * r).
thatis (prod_down 0 (0 + r) * r).
done.
*have (prod_down (1 + S k) (1 + S k + r)).
thatis (prod_down (2 + k) (2 + k + r)).
thatis ((2 + k + r) * prod_down (1 + k) (1 + k + r)).
weknow Ind: (prod_down (1 + k) (1 + k + r) = prod_down k (1 + k + r) * S r).
rewrite Ind.
have ((2 + k + r) * (prod_down k (1 + k + r) * S r)).
rearrange (((2 + k + r) * prod_down k (1 + k + r)) * S r).
thatis (prod_down (S k) (1 + S k + r) * S r).
done.
(* So we have defined factorials, and ratios between factorials... Now we could
try and divide one by the other to get a choose function, but we are sort of
in a bind, in that we need to know the numbers are divisible by one another,
but to show that constructively, we need to actually compute the whole
number result anyway. Fortunately our goal was to show that choose matches
the whole number given by Pascal's triangle, so let's approach it that way
instead, and show that pascal * factorial = prod_down. *)
Definition weird_fun (col row: nat) :=
S col * prod_down col row + prod_down (S col) row.
Theorem pascal_choose_weird_fun : forall row col: nat,
pascal row col * factorial col = prod_down col row ->
pascal row (S col) * factorial (S col) = prod_down (S col) row ->
pascal (S row) (S col) * factorial (S col) = weird_fun col row.
Proof.
intros.
have (pascal (S row) (S col) * factorial (S col)).
thatis ((pascal row col + pascal row (S col)) * factorial (S col)).
rearrange (pascal row col * factorial (S col) + pascal row (S col) * factorial (S col)).
weknow AboveCase: (pascal row col * factorial col = prod_down col row).
weknow DiagCase: (pascal row (S col) * factorial (S col) = prod_down (S col) row).
rewrite DiagCase; have (pascal row col * factorial (S col) + prod_down (S col) row).
thatis (pascal row col * ((S col) * factorial col) + prod_down (S col) row).
rearrange (S col * (pascal row col * factorial col) + prod_down (S col) row).
rewrite AboveCase; have (S col * prod_down col row + prod_down (S col) row).
thatis (weird_fun col row).
done.
Qed.
Theorem pascal_choose_weird_step2 : forall r k: nat,
weird_fun k (1 + k + r) = prod_down (1 + k) (2 + k + r).
Proof.
intros.
have (weird_fun k (1 + k + r)).
thatis (S k * prod_down k (1 + k + r) + prod_down (1 + k) (1 + k + r)).
rewrite (prod_down_r r k).
have (S k * prod_down k (1 + k + r) + prod_down k (1 + k + r) * S r).
rearrange ((2 + k + r) * prod_down k (1 + k + r)).
thatis (prod_down (1 + k) (2 + k + r)).
done.
Qed.
Theorem pascal_choose_weird_step3 : forall r k: nat,
weird_fun k (k + r) = prod_down (1 + k) (1 + k + r).
Proof.
intros.
destruct r.
+have (weird_fun k (k + 0)).
thatis (S k * prod_down k (k + 0) + prod_down (1 + k) (k + 0)).
rewrite <- (plus_n_O k).
intros.
have (weird_fun k (1 + k + r)).
thatis (S k * prod_down k (1 + k + r) + prod_down (1 + k) (1 + k + r)).
rewrite (prod_down_r r k).
have (S k * prod_down k (1 + k + r) + prod_down k (1 + k + r) * S r).
rearrange ((2 + k + r) * prod_down k (1 + k + r)).
thatis (prod_down (1 + k) (2 + k + r)).
done.
Qed.
apply eq_sym.
shelve.
thatis (S (k + r) * prod_down k (k + r)).
thatis (prod_down (1 + k) (1 + k + r)).
done.
Theorem pascal_choose_weird_step : forall row col: nat,
weird_fun col row = prod_down (S col) (S row).
Proof.
intros.
have (weird_fun col row).
thatis (S col * prod_down col row + prod_down (S col) row).
etransitivity.
shelve.
thatis (S row * prod_down col row).
thatis (prod_down (S col) (S row)).
done.
Definition normal_fun (col row: nat) :=
prod_down (S col) (S row).
Definition suspicious_fun (col row: nat) :=
(S row) * prod_down col row.
Definition f (col row: nat) :=
(weird_fun col row, normal_fun col row, suspicious_fun col row).
Theorem pascal_choose_case_ss : forall row col: nat,
pascal row col * factorial col = prod_down col row ->
pascal row (S col) * factorial (S col) = prod_down (S col) row ->
pascal (S row) (S col) * factorial (S col) = prod_down (S col) (S row).
Theorem pascal_choose : forall row col : nat,
pascal row col * factorial col = prod_down col row.
Proof.
intro row.
induction row.
*intro col.
destruct col.
+reflexivity.
+reflexivity.
*destruct col.
+reflexivity.
+have (pascal (S row) (S col) * factorial (S col)).
thatis ((pascal row col + pascal row (S col)) * factorial (S col)).
rearrange (pascal row col * factorial (S col) + pascal row (S col) * factorial (S col)).
weknow Ind: (forall col': nat, pascal row col' * factorial col' = prod_down col' row).
rewrite (Ind (S col)); have (pascal row col * factorial (S col) + prod_down (S col) row).
thatis (pascal row col * ((S col) * factorial col) + prod_down (S col) row).
rearrange (S col * (pascal row col * factorial col) + prod_down (S col) row).
rewrite (Ind col); have (S col * prod_down col row + prod_down (S col) row).
destruct row.
++destruct col.
+++reflexivity.
+++cbn.
ring.
++destruct col.
+++reflexivity.
+++ring.
ring.
intros row col.
induction row.
*destruct col.
+have (pascal 0 0 * factorial 0).
thatis 1.
thatis (prod_down 0 0).
done.
+have (pascal 0 (S col) * factorial (S col)).
thatis 0.
thatis (prod_down (S col) 0).
done.
*destruct col.
+have (pascal (S row) 0 * factorial 0).
thatis 1.
thatis (prod_down 0 (S row)).
done.
+cbn.
want (prod_down col (S row)).
have (pascal (S row) col * factorial col).
want (prod_down col 0).
have (pascal 0 col * factorial col)