I first argue, for X1, X2 whose joint distribution is exchangeable (in particular, for i.i.d. X1, X2), that the conditional mean of Y1 = X1 − X2 given Y2 = X1 + X2 is constant 0. Based on this, I argue that the covariance of Y1, Y2 is 0. Then, under normality, zero covariance implies independence.
The conditional mean
Intuition: knowing X1+X2=y tells us nothing about which component contributed more to the sum (e.g., X1=x, X2=y−x is as likely as X1=y−x, X2=x). Thus, the expected difference must be 0.
Proof: X1 and X2 are exchangeable (being independent and identically distributed, their joint distribution is invariant under swapping the indices), and X1+X2 is symmetric with respect to the indexing. Thus, by symmetry, the conditional distribution of X1 given Y2=y must equal the conditional distribution of X2 given Y2=y. Hence, the conditional distributions also have the same mean, and
E(Y1∣Y2=y)=E(X1−X2∣X1+X2=y)=E(X1∣X1+X2=y)−E(X2∣X1+X2=y)=0.
(Caveat: I did not consider the possibility that the conditional mean might not exist.)
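A quick Monte Carlo sanity check of the constant conditional mean (a sketch using numpy; the exponential distribution is an arbitrary non-normal choice, to illustrate that this step does not rely on normality):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# i.i.d. draws from a deliberately non-normal (skewed) distribution
x1 = rng.exponential(size=n)
x2 = rng.exponential(size=n)
y1, y2 = x1 - x2, x1 + x2

# estimate E(Y1 | Y2 ≈ y) by averaging Y1 within a narrow band around y
for y in (0.5, 1.0, 2.0, 4.0):
    band = np.abs(y2 - y) < 0.05
    print(y, y1[band].mean())  # all estimates close to 0
```

The band-averaging is a crude nonparametric estimate of the conditional mean, but it suffices to see that the estimate stays near 0 across very different values of y.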
Constant conditional mean implies zero correlation/covariance
Intuition: correlation measures how much Y1 tends to increase when Y2 increases. If observing Y2 never changes the conditional mean of Y1, then Y1 and Y2 are uncorrelated.
Proof: By definition, covariance is
Cov(Y1,Y2)=E[(Y1−E(Y1))(Y2−E(Y2))]
To this expectation we apply the law of iterated expectations, conditioning on Y2:
=E[E[(Y1−E(Y1))(Y2−E(Y2))∣Y2]]=E[(Y2−E(Y2))E[Y1−E(Y1)∣Y2]].
Recall that the conditional mean was shown to be constant in Y2, so the inner conditional expectation equals the corresponding unconditional one and the expression simplifies to
=E[(Y2−E(Y2))E[Y1−E(Y1)]]
but the inner expectation is 0, and we get
=E[(Y2−E(Y2))×0]=0.
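As a cross-check, the covariance can also be computed directly by bilinearity: Cov(X1−X2, X1+X2) = Var(X1) − Var(X2), which vanishes whenever X1 and X2 have the same variance. A minimal numerical confirmation (numpy; again with an arbitrary non-normal choice of distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# identically distributed (here i.i.d.) but skewed, non-normal X's
x1 = rng.exponential(size=n)
x2 = rng.exponential(size=n)

# sample covariance of Y1 = X1 - X2 and Y2 = X1 + X2
cov = np.cov(x1 - x2, x1 + x2)[0, 1]
print(cov)  # ≈ 0 up to Monte Carlo noise
```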
Independence
Assuming only exchangeability of X1, X2 (e.g., i.i.d. X1, X2 with an arbitrary common distribution), it was shown that Y1 and Y2 are uncorrelated. When X1, X2 are jointly normal (for example, i.i.d. normal as in the question), their linear combinations Y1, Y2 are also jointly normal, and for jointly normal variables uncorrelatedness implies independence.
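To see that normality really is needed for the last step, a small simulation (a sketch using numpy) can contrast i.i.d. normal with i.i.d. uniform X's: in both cases Y1 and Y2 are uncorrelated, but for uniforms the squares Y1², Y2² remain correlated, so Y1 and Y2 are not independent:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

for dist in ("normal", "uniform"):
    x1 = getattr(rng, dist)(size=n)
    x2 = getattr(rng, dist)(size=n)
    y1, y2 = x1 - x2, x1 + x2
    corr_lin = np.corrcoef(y1, y2)[0, 1]       # ≈ 0 in both cases
    corr_sq = np.corrcoef(y1**2, y2**2)[0, 1]  # ≈ 0 only in the normal case
    print(dist, corr_lin, corr_sq)
```

Intuitively, for uniform X's on (0, 1) a sum Y2 near 0 or near 2 forces both X's to the same endpoint, so |Y1| must then be small; this dependence shows up in the squared variables even though the linear correlation is exactly 0.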