diff --git a/public/404.html b/public/404.html
index a6febbf..0b81e42 100644
--- a/public/404.html
+++ b/public/404.html
@@ -1,4 +1,173 @@
This summer, I had the pleasure of working under Jonah Adelman in Prof. Stephen Leone’s group at UC Berkeley. Our group specializes in attosecond transient absorption pump-probe spectroscopy via high-harmonic generation, and my lab specifically focuses on applying this technique to solid-state materials. Over the course of two months, my main objectives were to familiarize myself with the first principles governing this spectroscopic method and to conduct an experiment of my own, namely cross-polarizing the VIS-NIR pump beam and the XUV probe beam in our setup to quantify potential differences in absorption in elemental tellurium. You can see my end-of-summer presentation here. Look in the speaker notes for details! One thing to note: following my presentation, I conducted the alternating wave plate scan mentioned in the “Future Steps” slide. Unfortunately, we found no difference in absorption or phonon generation between the co-polarized and cross-polarized pump-probe measurements. This likely indicates that the anisotropic characteristics of tellurium are not manifested in the effects of core-level excitation.
It was around sophomore year of high school when I first learned the odd fact that cow farts play a considerable role in releasing methane, a potent greenhouse gas, into our atmosphere. This surprising fact led me down a rabbit hole to understand the causes and effects of climate change. Around the same time, I learned about, and grew skeptical of, a relatively new technology, electromagnetic field therapy, which claimed to boost cell proliferation and nutrient circulation in living systems. After thorough research into the technology, I grew inspired to leverage it to address the age-old problem of climate change. Several months and countless cold emails later, I was granted bench space and mentorship at the C1-biocatalysis lab at San Diego State University under Dr. Kalyuzhnaya and Richard Hamilton. Read my publication here!
I do not ride on planes often, but when I do, I always think of the same two questions.
Why do I always have to turn the volume of my earbuds higher when I am on an airplane? Is it because of the lower pressure of the cabin compared to the near sea-level altitude that I am used to? Or is it simply because the loud engines drown out the noise of my music?
How do airplane toilets work? Why do they flush with such vigor compared to the ones at home?
It turns out the answers to each of these questions are rather interesting!
Commercial airplanes fly best at high altitudes. The cruising altitude of an average 737 is around 36,000 feet (or 11 km). Flying at this height means greater fuel efficiency and less turbulence. While this altitude may be good for the plane, it certainly isn’t for humans. To make sure we can breathe, airplanes must pressurize the cabin, usually to around the air pressure we would experience atop an 8,000-foot (2.5 km) mountain. You might ask: why not sea level? Because a higher pressure inside the cabin would mean a greater differential between the inside and outside of the cabin, placing more stress on the fuselage to maintain its structure. So now the question becomes whether the lower air pressure of an 8,000-foot mountain (meaning fewer air molecules present than at sea level) makes a meaningful difference to the propagation of sound waves. It turns out this effect is negligible. If it weren’t, listening to anything would be more difficult somewhere like my favorite ski resort, Mammoth Mountain, and it isn’t! Although the cabin pressure is not the cause of the dampened sound, the pressure difference between the cavities in your ear (formed during the plane’s ascent or descent) could distort what you hear. Otherwise, it is most definitely the loud engine or the crying baby that is making you blast your music!
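To put rough numbers on that pressure differential, here is a quick back-of-the-envelope sketch of my own (not part of the original argument) using the isothermal barometric formula. The isothermal assumption overestimates the pressure at cruise altitude, but it is good enough to show the trend:

```python
import numpy as np

def pressure_kpa(altitude_m, p0=101.325, temp_k=288.0):
    """Crude isothermal barometric estimate of air pressure (kPa) at a given altitude (m)."""
    molar_mass = 0.0289644  # dry air, kg/mol
    g = 9.81                # gravitational acceleration, m/s^2
    R = 8.314               # universal gas constant, J/(mol*K)
    return p0 * np.exp(-molar_mass * g * altitude_m / (R * temp_k))

cabin = pressure_kpa(2438)     # ~8,000 ft cabin altitude  -> about 76 kPa
outside = pressure_kpa(11000)  # ~36,000 ft cruise altitude -> about 27 kPa here (~23 kPa in reality)

print(f"cabin ~{cabin:.0f} kPa, outside ~{outside:.0f} kPa")
print(f"hull load with an 8,000 ft cabin: ~{cabin - outside:.0f} kPa")
print(f"hull load with a sea-level cabin: ~{101.325 - outside:.0f} kPa")
```

With these numbers, holding the cabin at sea level would put roughly 50% more pressure load on the fuselage than the usual 8,000-foot cabin altitude does.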
Most toilets found in homes are gravity-based, meaning a tank of water sits above the toilet drain and is released when a lever is pressed. The force of gravity then drives the water into the toilet bowl, carrying the waste with it. On an airplane, however, where minimizing weight is paramount, vacuum toilets are used instead. Vacuum toilets leverage the inherent pressure difference between the outside and inside of the plane. The toilet’s plumbing rests at a low air pressure, essentially the air pressure outside the cabin. When the flush button is pressed, the higher-pressure cabin air rushes in, forcing the waste down with it! If the plane is on the runway, the vacuum is created with a pump instead. Such an elegant solution!
Had the chance to make a quick day trip to San Jose and visit CGA! Definitely a chill park overall, nothing too crazy, though all the rides were fairly bumpy for some reason lol.
Was in Austin and decided to make the hour drive to San Antonio to visit one of Six Flags’ flagship parks on a day with only ~9% of full capacity!
After going to Six Flags Magic Mountain for the second time today, I wanted to make a quick ranking of all the rides I have ridden there (coasters and flats)!
Singular Value Decomposition (SVD) is probably one of the coolest concepts in mathematics I have learned so far. Seen by many as the grand finale of an introductory linear algebra course, SVD combines many pervasive topics seen throughout physics and applied math, including eigenvalues/eigenvectors and unitary matrices. Although SVD was not covered in my introductory mathematical physics course, I tried my best to develop a basic understanding of this factorization tool so that I could comfortably use it when decomposing transient absorption data in my work at the Leone Group.
Note: As a prerequisite to this tidbit, I recommend watching 3Blue1Brown’s Essence of Linear Algebra series to gain intuition into how matrices act as linear transformations on some vector space. His visualizations are truly unmatched!
I think it’s best to begin with eigenvalue decomposition. Suppose we have some arbitrary matrix \(A \in \mathbb{C}^{n \times n}\) with \(n\) linearly independent eigenvectors. Eigenvalue decomposition allows you to represent this linear transformation as a composition of three other matrices, namely
$$A = Q \Lambda Q^{-1}$$
As is convention in matrix multiplication, we interpret this composition from right to left. \(Q^{-1}\) is an \(n \times n\) matrix whose columns contain the original basis vectors written in terms of the eigenvectors of \(A\), \(\Lambda\) is the diagonal matrix containing the eigenvalues of \(A\), and \(Q\) is an \(n \times n\) matrix whose columns contain the eigenvectors written in terms of the original basis. Put plainly, when applied to some arbitrary vector, this composition first re-expresses the vector in the basis of eigenvectors. Then, because eigenvectors are the vectors that only scale under the transformation, our original matrix \(A\) acts as nothing more than a scaling of this eigenvector space. Finally, applying \(Q\) returns us to the vector space described by our original basis.
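As a quick numerical sanity check, here is a minimal sketch of my own (the matrix is chosen arbitrarily) showing that NumPy hands us exactly these pieces:

```python
import numpy as np

# An arbitrary diagonalizable square matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, Q = np.linalg.eig(A)  # columns of Q are the eigenvectors of A (in the original basis)
Lam = np.diag(eigvals)         # diagonal matrix of eigenvalues

# Reassemble A = Q Lambda Q^{-1} and confirm we recover A.
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))  # True
```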
Remember that eigenvectors and their corresponding eigenvalues only exist for square matrices because non-square ones encode some dimensionality reduction or extension. How then are we able to generalize this idea to non-square matrices? Enter singular value decomposition.
A quick search online often returns a definition similar to this: Any matrix \(M \in \mathbb{C}^{m \times n}\) can be unconditionally decomposed into three matrices,
$$M = U \Sigma V^{\dagger}$$
where \(U\) is an \(m \times m\) unitary matrix, \(\Sigma\) is an \(m \times n\) diagonal matrix, and \(V^{\dagger}\) is the conjugate transpose of an \(n \times n\) unitary matrix \(V\). But what do these matrices do, what do they contain, and why?
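NumPy exposes this factorization directly. Here is a minimal sketch of my own (with a random matrix) showing the shapes of the three factors, where \(\Sigma\) is rebuilt as an \(m \times n\) diagonal matrix from the vector of singular values that np.linalg.svd returns:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))            # an arbitrary 5x3 (m x n) matrix

U, s, Vh = np.linalg.svd(M, full_matrices=True)  # s holds the singular values as a 1-D array
Sigma = np.zeros(M.shape)
np.fill_diagonal(Sigma, s)                 # embed them in an m x n "diagonal" matrix

print(U.shape, Sigma.shape, Vh.shape)      # (5, 5) (5, 3) (3, 3)
print(np.allclose(M, U @ Sigma @ Vh))      # True
```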
Well, it turns out there is a very intuitive interpretation if we just step back and think about how we would find similarities within a dataset in the first place. Naturally, it makes sense to take the inner product between each pair of columns of our matrix \(M\) in order to quantify how close to orthogonal each of our data points is with respect to the others. \(M^{\dagger}M\) does exactly this, forming a column-wise correlation matrix for \(M\). Equally motivated, we do the same for the rows of \(M\), namely \(MM^{\dagger}\).
Great, we have these column-wise and row-wise correlation matrices for \(M\), but how do we extract the most meaningful information from them? It makes sense, then, to find the eigenvectors and eigenvalues of these matrices.
The diagonal elements of a correlation matrix represent the variance of each variable, while the off-diagonal elements represent the covariance between variables. When applied to some vector space, a correlation matrix acts as a linear transformation, stretching or shrinking hyper-dimensional space in each direction depending on whether each pair of dimensions has high or low correlation. Because eigenvectors are the vectors that do not reorient and only scale under a transformation, the eigenvectors of a correlation matrix are the principal axes of the data. In other words, they are the directions in which the data has the most variance. The eigenvectors of the row-wise correlation matrix \(MM^{\dagger}\) (called the left singular vectors) form the columns of our matrix \(U\), while the eigenvectors of the column-wise correlation matrix \(M^{\dagger}M\) (called the right singular vectors) form the columns of \(V\), whose conjugate transpose gives \(V^{\dagger}\). Lastly, our diagonal matrix \(\Sigma\) is formed by the square roots of the eigenvalues shared by the two correlation matrices (they share the same nonzero eigenvalues, and both are positive semi-definite, so these square roots are real and nonnegative).
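Here is that construction carried out numerically, as a sketch of my own: I diagonalize \(M^{\dagger}M\) to get \(V\) and the singular values, then recover the matching columns of \(U\) as \(MV\Sigma^{-1}\), which sidesteps the sign ambiguity you would hit by eigendecomposing \(MM^{\dagger}\) separately (eigenvectors are only defined up to a sign):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 4))

# Diagonalize the column-wise correlation matrix M^dagger M.
# eigh returns eigenvalues in ascending order, so sort them descending.
evals, V = np.linalg.eigh(M.conj().T @ M)
order = np.argsort(evals)[::-1]
evals, V = evals[order], V[:, order]

s = np.sqrt(np.clip(evals, 0.0, None))  # singular values = square roots of the shared eigenvalues
U = (M @ V) / s                         # matching left singular vectors (economy-sized U)

# Reassemble the (economy) SVD and compare against M and against NumPy's own singular values.
print(np.allclose(M, U @ np.diag(s) @ V.conj().T))         # True
print(np.allclose(s, np.linalg.svd(M, compute_uv=False)))  # True
```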
Assuming such a decomposition exists, a short derivation ties all of this together.
$$ M^{\dagger}M = V \Sigma U^{\dagger} U \Sigma V^{\dagger} = V \Sigma^2 V^{\dagger} \implies M^{\dagger}M V = V \Sigma^2 $$
$$ MM^{\dagger} = U \Sigma V^{\dagger} V \Sigma U^{\dagger} = U \Sigma^2 U^{\dagger} \implies MM^{\dagger} U = U \Sigma^2 $$
Note: Both \(U\) and \(V\) are unitary matrices because they are formed from the orthonormal eigenvectors of their respective correlation matrices, and those eigenvectors can be chosen orthonormal because the correlation matrices are Hermitian.
We see clearly, then, that our final expressions take the form of the characteristic equation, where the columns of \(U\) and \(V\) are the eigenvectors and the diagonal entries of \(\Sigma^2\) are the eigenvalues.
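A quick numerical check of those two characteristic-equation identities, again as a sketch of my own using the economy-sized factors NumPy returns:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(M, full_matrices=False)  # economy SVD: U is 5x3, Vh is 3x3
V = Vh.conj().T
Sigma2 = np.diag(s**2)

print(np.allclose(M.conj().T @ M @ V, V @ Sigma2))  # M^dagger M V = V Sigma^2 -> True
print(np.allclose(M @ M.conj().T @ U, U @ Sigma2))  # M M^dagger U = U Sigma^2 -> True
```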
Throughout my math courses, I have always preferred geometric intuition over any other. However, in the case of SVD, I find it most satisfying and complete to see it as a decomposition that captures the dominant correlations between the variables in a dataset. Sure, SVD can be visualized as first a rotation/reflection governed by \(V^{\dagger}\), followed by a scaling (and dimensionality reduction or extension) governed by \(\Sigma\), followed by a final rotation/reflection governed by \(U\). But I feel this interpretation does not emphasize enough how these unitary and diagonal matrices are constructed.
In closing, many of the data analysis techniques that we normally abstract away by calling some NumPy function have far deeper meanings than we might expect. I have only scratched the surface of this topic and would love to learn more eventually. One of the most important lessons I have learned over the past year is to balance and assess the opportunity costs of the concepts one dedicates time to understanding. If you would like any clarification on these topics, I highly recommend watching Steve Brunton’s series along with Visual Kernel’s video. For those seeking more, I found this, this, and this helpful.