In the series, Viktor Blåsjö, a professor and emerging historian of mathematics, tells a mind-blowing story about Galileo that is nothing like what we are usually told. Teachers, the media, and mainstream philosophers of science have told us that Galileo was the “father of modern science.” On the contrary, Galileo was not only poor at mathematics but also a dilettante physicist. Let me name a few claims from the podcast series so you can see how interesting it is:
Of course, all of these sound like wild claims if you are hearing them for the first time. So I would encourage any curious mind to go listen to the podcast or read the corresponding monograph, and then judge for themselves.
This new interpretation of Galileo resolved a longstanding puzzle of mine. In all the serious science textbooks, the most important discoveries are often associated with great mathematicians (such as the principles and laws associated with Archimedes, Kepler, Newton, Bernoulli, Euler, Gauss). Why then has Galileo, “the father of modern science,” never been linked to any of those physical laws, other than the folklore of the “Pisa experiment”? Because he lacked the mathematical ability to do serious physics.
Studying the history of mathematics allows us to understand the truer history of science. But there are more reasons to study this history.
We know that Leibniz was one of the inventors of calculus. But do you know that Leibniz did not actually prove the fundamental theorem of calculus? In fact, he did not worry much about the foundations of calculus, certainly not the “rigorous definition” of derivatives.
What Leibniz cared most about was the transcendental curves, that is, the graphs of transcendental functions. According to the inspiring book Transcendental Curves in the Leibnizian Calculus, again by Blåsjö, “the problem of transcendental curves was to him (Leibniz) the guiding star for the better part of his mathematical works throughout his life.”
To Leibniz, functions like $\log(x)$ and $\cosh(x)$ are nothing but notations; the claim that $y=\cosh(x)$ solves the equation $y''=y$ makes no sense. The important thing to Leibniz was that one can actually graph a function like $\log(x)$, or calculate its value given any input $x$. After all, what’s the point of writing the symbol “$\log(x)$,” or naming a function “hyperbolic,” if you can’t even find their values? Therefore, to Leibniz, an expression like $\int_1^x \frac{1}{t}\,dt$ makes much more sense than $\log(x)$; being able to draw a catenary by hanging a chain is more important than simply writing $\cosh(x)=(e^x+e^{-x})/2$.
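Leibniz’s operational viewpoint is easy to act on even today. As a small illustration of my own (not from the book), here is $\log(2)$ computed exactly the way Leibniz would endorse: as the area under $1/t$ from $1$ to $2$, via a simple midpoint rule.

```python
from math import log

def integrate(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# "log(2)" defined operationally: the area under 1/t between 1 and 2.
log2 = integrate(lambda t: 1.0 / t, 1.0, 2.0)
print(log2, log(2))  # the two values agree to many digits
```

The point is not efficiency; it is that the integral itself is a concrete recipe for producing the number, which is all Leibniz asked of a “function.”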
The same can be said about the other common transcendental functions such as $\sin(x)$ and $\cos(x)$. Indeed, students are being told that sinusoidal functions are “elementary functions,” but how “elementary” is it if you can’t even tell what $\sin(1)$ is right away? People still love to see animations, such as this one and this one, that can help them understand what trigonometric functions are.
The discussions above give us another important reason to study the history of mathematics. History tells us which things are “natural” to pay attention to when we first teach or learn a mathematical concept. Knowing why we need transcendental functions in the first place is understanding; telling people that these functions are elementary is indoctrination.
True education is about understanding. Unfortunately, a lot of schooling nowadays is about indoctrination (and probably daycare as well).
In Blåsjö’s History of Mathematics course, he contrasted two ways of studying history of thought:
Historians who impose their own beliefs on people from the past cannot uncover the true history of Galileo, let alone the true history of science. Teachers who impose their own thoughts on students cannot inspire anyone to understand.
Next time you are told that something is conventional or has always been the way it is, just remember this powerful quote from an amused chimp:
If the news are fake, imagine history.
The general solution of the simplest linear differential equation, $f'=f$, is $f(x)=Ce^x$ for an arbitrary constant $C$. We were told that this solution can be obtained either by observation and guessing (it is then easy to verify the solution), or by separation of variables (i.e., integrating $dy/y=dx$). But these approaches don’t tell us what to do for higher-order equations, such as $f''-3f'+2f=0$.
The operational calculus provides a convenient way to derive the solutions of such linear ODEs algebraically. The derivation goes as follows: writing $D:=\frac{d}{dx}$ for the differentiation operator, the equation $f'=f$ becomes $(D-1)f(x)=0$, with solution $f(x)=Ce^x$.
Similarly, we can show that $(D-a)f(x)=0$ has the solution $f(x)=Ce^{ax}$. Then a higher-order equation such as $f''-3f'+2f=0$ can be rewritten as $(D^2-3D+2)f=0$, which factors into $(D-1)(D-2)f(x)=0$, implying either $(D-1)f=0$ or $(D-2)f=0$.
Operational calculus reduces a differential equation into an algebraic equation, which is often much easier to solve.
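As a quick sanity check (my own, using SymPy), we can substitute the predicted general solution $f(x)=C_1e^x+C_2e^{2x}$ back into $f''-3f'+2f=0$ and watch the residual vanish:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')

# Operational calculus predicts that (D-1)(D-2)f = 0 is solved by
# f(x) = C1*exp(x) + C2*exp(2x); verify by direct substitution.
f = C1 * sp.exp(x) + C2 * sp.exp(2 * x)
residual = sp.simplify(f.diff(x, 2) - 3 * f.diff(x) + 2 * f)
print(residual)  # 0
```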
The calculus of finite differences is a brilliant extension of the operational calculus to discrete quantities. The most important observation is that, given the shift operator
there is this remarkable exponential map, $E=\exp(hD)$, that connects the continuous and discrete worlds. This exponential map is simply a different way of writing the Taylor series expansion:
From here, a lot of great things can happen. For example, if we want to derive a forward difference formula such as
we can define the forward difference operator
and write down the relationship
Then it follows naturally that
Truncating this series at the first term gives back the first-order difference formula $f'(x)\approx\frac{1}{h}\Delta f(x)$ as before; truncating at the second term gives the second-order formula
and so on.
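A small numerical experiment (my own sketch) confirms that the two truncations behave as advertised. Expanding, the second truncation $\frac{1}{h}(\Delta-\frac{1}{2}\Delta^2)f(x)$ works out to $\frac{-3f(x)+4f(x+h)-f(x+2h)}{2h}$; for $f=\sin$, halving $h$ roughly halves the first formula’s error and quarters the second’s.

```python
import math

def d1(f, x, h):
    # first-order truncation: (1/h) * Delta f(x)
    return (f(x + h) - f(x)) / h

def d2(f, x, h):
    # second-order truncation: (1/h) * (Delta - Delta^2/2) f(x)
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

exact = math.cos(1.0)  # derivative of sin at x = 1
for h in (0.1, 0.05, 0.025):
    print(h, abs(d1(math.sin, 1.0, h) - exact),
             abs(d2(math.sin, 1.0, h) - exact))
```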
Another striking application of the finite difference calculus is deriving the more advanced Euler–Maclaurin formula, which is in some sense a discrete version of the fundamental theorem of calculus.
As we have mentioned in the first part of this article, the fundamental theorem of calculus can be written in terms of operators as
So we would like to know: if we replace the differential operator $D$ with the difference operator $\Delta$, how does this relationship change? What is $\Delta^{-1}f(x)$?
We have shown previously that $1+\Delta=\exp(hD)$, so it would be natural to write
However, the right-hand side above corresponds to the function $1/(e^x-1)$, which is singular at the origin, so instead we write
Here we have used the generating function of the Bernoulli numbers $B_n$, namely $\frac{x}{e^x-1}=\sum_{n=0}^{\infty}\frac{B_n}{n!}x^n$. Notice that with our notation, we have
and
therefore
Now, applying $\Delta$ to both sides, we have
This is one version of the Euler–Maclaurin formula. Finally, if we replace $x$ by $a,\;a+h,\;a+2h,\;\dots,\;a+(m-1)h$, one by one, and sum all the resulting formulae together, we get
where $b:=a+mh$. This gives us the usual form of the Euler–Maclaurin formula.
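As a concrete check (my own example, using the standard sum-versus-integral form of the formula with unit spacing), take $f(x)=x^2$ on $[0,n]$: since $f'''\equiv0$, the first few Euler–Maclaurin terms already reproduce the sum exactly.

```python
# Euler-Maclaurin with unit spacing for f(x) = x^2 on [0, n]:
# sum_{k=0}^{n} f(k) ~ integral + (f(0)+f(n))/2 + (f'(n)-f'(0))/12 + ...
n = 10
f = lambda x: x ** 2
fp = lambda x: 2 * x  # derivative of f

exact_sum = sum(f(k) for k in range(n + 1))                # 385 for n = 10
em = n ** 3 / 3 + (f(0) + f(n)) / 2 + (fp(n) - fp(0)) / 12
print(exact_sum, em)  # the two agree up to rounding
```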
But in modern times, we no longer worry about ferocious animals. We now have much safer spaces that are tolerant of our mistakes, so we can afford to make more of them. In fact, making mistakes is how we learn.
Yet people avoid making mistakes because they make you “look bad.” But there is no learning without mistakes. So how can we gain the courage to learn without worrying about the mistakes we will make?
I think math has the answer.
In numerical analysis, we design algorithms that solve problems quickly and accurately. One major task when developing such an algorithm is to combat numerical errors. If you open any classical numerical analysis textbook, you will likely see discussions of error analysis: how to eliminate or reduce round-off errors, and how to prevent errors from propagating and contaminating the final results.
Talking about errors can sound bad and boring. It may drive beginners away from numerical analysis before they can learn the true beauty of the subject.
In fact, mathematicians know about the bad connotations associated with “errors.” When communicating with fellow mathematicians, we don’t talk about errors as often as one would imagine. For example, instead of saying “the algorithm makes smaller errors” we like to say “the algorithm converges faster.” And instead of “the algorithm prevents error propagation” we like to say “the algorithm is stable and robust.”
“Fast convergence” and “smaller errors” mean the same thing, but the framings are completely different. “Convergence” focuses on the positive effects and makes people excited. If you ask any numerical analyst what makes them love their field, I’ll bet you no one would say they love the error analysis; rather, they would tell you about the lightning computation speeds, the simple ideas behind powerful algorithms, the unexpected connections between ideas from different fields, and more.
Likewise, “making mistakes” gets its own bad rap. I propose replacing it with “speeding up,” because making more mistakes allows you to learn more quickly. Whenever you are stuck and can’t make any progress because you are afraid of making mistakes, try telling yourself, “I am just going to speed up my learning a little.”
Making mistakes can be embarrassing, but embarrassment is simply the cost of entry. The embarrassment you feel only exists in your mind; learning is what actually happens. Once you allow yourself to look stupid and take on the beginner’s mindset, learning becomes unstoppable.
Make more mistakes. It is okay to speed up a little, and it feels great.
An $m\times n$ matrix $\mathbf{A}$ is low-rank if its rank, $k\equiv\mathrm{rank}\,\mathbf{A}$, is much smaller than both $m$ and $n$. In that case, $\mathbf{A}$ has a factorization $\mathbf{A}=\mathbf{EF}$, where $\mathbf{E}$ is a tall-skinny matrix with $k$ columns and $\mathbf{F}$ a short-fat matrix with $k$ rows.
For example, the following $3\times3$ matrix is of rank $1$ only.
Given a matrix $\mathbf{A}$, there are many ways to find $\mathrm{rank}\,\mathbf{A}$. One way is to find the SVD
where $\mathbf{\Sigma}=\mathrm{diag}(\sigma_1,\sigma_2,\dots)$ is an $m\times n$ diagonal matrix, whose diagonal elements are called the singular values of $\mathbf{A}$. Then $\mathrm{rank}\,\mathbf{A}$ is the number of nonzero singular values.
The SVD tells you the most important information about a matrix: the Eckart–Young theorem says that the best rank-$k$ approximation of $\mathbf{A}=\mathbf{U}\mathbf{\Sigma} \mathbf{V}^*$ can be obtained by keeping only the first $k$ singular values and zeroing out the rest in $\mathbf{\Sigma}$. When the singular values decay quickly, such a low-rank approximation can be very accurate. This is particularly important in practice when we want to solve problems efficiently by ignoring the unimportant information.
An interesting example is the $n\times n$ Hilbert matrix $\mathbf{H}_n$, whose $(i,j)$ entry is defined to be $\frac{1}{i+j-1}$. $\mathbf{H}_n$ is full-rank for any size $n$, but it is numerically low-rank, meaning that its singular values decay so rapidly that, given any small threshold $\epsilon$, only a few singular values are above $\epsilon$. For example, with $\epsilon=10^{-15}$, the $1000\times1000$ Hilbert matrix has numerical rank $28$.
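This is easy to check numerically. Here is a NumPy sketch (the size $n=100$ and the relative threshold $10^{-15}$ are my own choices), which also illustrates the Eckart–Young theorem: the 2-norm error of the best rank-$k$ approximation equals the $(k+1)$-st singular value.

```python
import numpy as np

# Hilbert matrix H[i, j] = 1 / (i + j - 1), with 1-based indices.
n = 100
i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
H = 1.0 / (i + j - 1)

U, s, Vt = np.linalg.svd(H)                     # singular values, descending
numerical_rank = int(np.sum(s > 1e-15 * s[0]))
print(numerical_rank)                           # far smaller than n

# Eckart-Young: truncating the SVD after k terms gives the best rank-k
# approximation, whose 2-norm error is exactly sigma_{k+1}.
k = 5
Hk = U[:, :k] * s[:k] @ Vt[:k, :]
print(np.linalg.norm(H - Hk, 2), s[k])          # the two numbers agree
```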
Other examples of (numerically) lowrank matrices include the Vandermonde, Cauchy, Hankel, and Toeplitz matrices, as well as matrices constructed from smooth data or smooth functions.
As it turns out, a lot of the matrices we encounter in practice are numerically low-rank. So finding low-rank approximations (e.g., in the form $\mathbf{A}=\mathbf{EF}$ mentioned at the beginning) is one of the most important and fundamental subjects in applied math today.
Matrix sizes have been growing with technological advancements. Many common matrix algorithms scale cubically with the matrix size, meaning that even if your computing power grows 1000 times, you can only afford to solve problems that are 10 times bigger than before. These common algorithms include matrix multiplication, matrix inversion, and matrix factorizations (e.g., LU, QR, SVD). Therefore, it is important to speed up these matrix computations in order to fully exploit the ever-growing computing power.
One major strategy for accelerating the computations is to exploit the data sparsity of a matrix. Data sparsity is a deliberately vague concept which broadly refers to the kind of internal structure in a matrix that can help make computations faster. The following are some common examples of data-sparse matrices.
With the ideas above, plus a little coding experience with some simple rank-structured matrices (a good place to start is with the first two of these tutorial codes), you are equipped with the “MAKE” that gets you ready to go on an adventure into fast matrix computations. All the details and other more advanced topics can be learned later once you dig far enough.
A pandemic makes people anxious because it freezes life. A lot of activities have to be suspended. You can’t do what you would normally do. You don’t know how long the pandemic would last — probably a few months, probably a few years. All you can do is wait, indefinitely.
Some groups of people seem to be doing quite well in a lockdown situation. Mathematicians happen to be one such group.
Sophus Lie, who worked on what would become Lie theory during his imprisonment, said that “a mathematician is comparatively well suited to be in prison.” And Lie was not alone: 70 years later, another great mathematician, André Weil, who also had a productive time in prison, wondered, “if it’s only in prison that I work so well, will I have to arrange to spend two or three months locked up every year?”
According to the fun article Lockdown Mathematics, both Lie and Weil were mistaken for spies in wartime “due to their strange habits as eccentric mathematicians who incessantly scribbled some sort of incomprehensible notes and wandered in nature without any credible purpose discernible to outsiders.”
There are many other interesting examples in that article. The bottom line is that many mathematicians have experienced highly productive periods in confinement, free of distractions and able to focus deeply on their thoughts; although in some cases the effect lasted only a month or two before the productivity boost waned. After all, mathematicians are still humans who need breaks.
I have to say that the above description perfectly summarized my experience during the COVID lockdown. I was super productive and wrote two papers during the first few months of the lockdown, then I started to get distracted and wanted to socialize again.
Now it is March again, an anniversary of the COVID lockdown in the US. Maybe I could also benefit from setting up a few months of faux lockdown for myself every year.
According to some recent research, getting into a state of flow might be one of the best ways to cope with lockdown anxiety. When you are experiencing flow, you are deeply focused on something; time seems to slip by quickly, and a few hours feel like just a moment.
It is not just mathematicians; other people find their flow in all sorts of ways: painting, making handicrafts, reading books, writing essays, coding. When you get into flow, not only does time pass much more quickly, but you are also accomplishing something meaningful to you. It completely flips the lockdown situation around and gives you a positive experience.
In fact, this experience may have deeper implications if we redefine “pandemic” as a state of mind:
A “pandemic” is an extended period during which you are constantly anxious about one thing that you can’t control.
By this definition, people are constantly undergoing all kinds of pandemics: losing jobs, struggling to graduate, accumulating debts, being homesick, feeling lonely. In each of these cases, people are stuck in a “personal pandemic” that they can’t easily escape and have to live with for an uncertain amount of time.
Inspired by the lockdown experience, maybe one solution to a “personal pandemic” is to accept it like we accepted that we would be living under COVID for a few years, and turn to focus on something we truly care about. Oftentimes, the consequence of a “personal pandemic” is not as dire as one might think it would be; a threat might also be an opportunity for growth. So being able to find your flow could carry you through the most difficult time of your “personal pandemic,” giving you the chance to come out stronger and better off.
I’d like to see different desires as being linearly independent.
Linearity. When you have many desires, they “superpose” to increase your level of anxiety. For example, if someone wants to be rich, to be famous, to have a superb partner, and to play all day without having to work, all at the same time, then this must be the most anxious person in the world.
Independence. Whether or not one desire is satisfied doesn’t affect your ability to satisfy another desire. For example, you don’t have to be rich to start doing things you like, you don’t have to be gorgeous to start building a good relationship, you don’t have to be the most intelligent person to be confident.
As a corollary, if you just give up most of your desires, you will stop being overly anxious and have much more energy to pursue your most important desire.
My last year of PhD was an incredible year for me. But I had actually failed to graduate the year before. I spent some hard time with myself that summer and decided that I would try to achieve one big goal in my final year: to graduate as a competent scholar. I picked this goal because I wouldn’t be satisfied with just graduating and then spending my time on some random job just to get by. I truly hoped to still be able to enjoy doing creative work after graduation.
To achieve this goal, my logic was plain and simple: I need to be productive to be competent for my work; I need to be healthy to be productive. So I started eating healthier and exercising more regularly.
Interestingly, as I became a healthier person, I also began to enjoy life. Over the year, besides focusing on my work, I also had a very supportive relationship and became a more grateful person, which in turn made me a more confident person and helped me build new connections. In the end, I had not only graduated, but also delivered a speech at the commencement and found a job I liked which allowed me to continue to do creative work.
I am sure a lot of these results were just luck, but it was fascinating how the world seemed to be coordinating to send me through an upward spiral.
I have learned two things from this experience:
(1) I don’t really need a lot of grit or perseverance to achieve something. All I need to do is let go of all the other desires; then I have plenty of mental power to work on the one big thing.
(2) Letting go of other desires doesn’t mean I won’t be doing all those things. It just means that I don’t see them as “goals.” For example, I was eating healthy food and exercising regularly only because I enjoyed them; I didn’t have to do them if I thought they made me feel worse.
There was one quote that I thought was very beautiful but had never really believed:
When you want something, all the universe conspires in helping you to achieve it.
This was from the novel The Alchemist by Paulo Coelho. Behind this quote is one of the most romantic stories I have ever read. So romantic that I never thought reality would work that way.
However, since then there have been a few times when I had this “something” that I truly wanted (although each time it lasted only a year or two). Each time I made up my mind, I had the peculiar feeling that the world did seem to be helping me out in unexpected ways. These experiences pushed me to read between the lines. I am realizing that maybe this “something” in the quote has to be singular. It must be the one thing that you absolutely want during that period of your life, something you won’t be distracted from by anything else. In a world full of uncertainties, this one desire can be the very certainty that guides you through chaos and doubts.
What is the one desire that is truly important to you? It is okay to be anxious about that one, and only that one.
Austin (and Texas) just went through a disaster. A winter storm devoured the city, turning it into a frozen world for a week. On Monday, February 15, 2021, the temperature dropped below -10°C, the coldest in the past century. The snow was thick enough to sink your boots into. While such cold weather is nothing special in northern states like Michigan, it was nothing Texans had ever seen. The electricity infrastructure in Texas was utterly unprepared for such extreme conditions, leaving millions of people without power throughout the week.
Unfortunately, I was one of the witnesses and sufferers of this historic event. To make things worse, both the kitchen stove and the hot water in my apartment rely on electricity, so we were left without any source of energy.
Long story short, we survived. And here is how.
First of all, I am thankful to have been staying with my girlfriend this year. The support and encouragement we provided each other is the most important reason we stayed positive and endured this hardship. It is true that a long-term relationship compounds its value over time and can be your lifesaver in unexpected events.
Second of all, getting help from our neighbors was the silver lining of the week. After a day without power and heat, I started asking around for help on my phone. Eventually, I was connected to a friend’s friend who lived nearby and was fortunate enough to still have power in their building. It was also important that we could simply walk there, because you don’t want to drive on icy roads in such weather. From Wednesday on, we could have something hot to eat for lunch and bring some hot water home. I sent food pictures to my family far away so that they knew I was doing fine.
Preparation also made a difference. We stocked up on food beforehand because we had paid attention to the weather forecast. This turned out to be one of our best decisions, because later in the week we didn’t have to stand in the cold in long lines outside of HEB. Ironically, the freezing temperature helped preserve our food during the week-long power outage.
Daily activities became simple. We went to bed before 9 PM because there is not much you can do on a cold, dark night. We slept more than 12 hours every day to save energy and stay warm. During the day, when we were not busy surviving, we spent most of our time reading, which distracted us from the frigid reality. I also exercised and lifted weights at home, which I think was an acceptable compromise for my 365-day challenge, since it was unsafe to run outside.
The weather warmed up towards the weekend, finally rising above freezing on Friday. Our building got its electricity back on Friday afternoon, more than 80 hours after we had lost power.
Looking back, I think it is equally important to survive both the physical and the mental challenges. I can’t imagine what it would have been like if I had endured all this by myself, with no one to talk to. Staying calm and positive during a hard time can make a huge difference; once you have hope, the physical challenges look easier to defeat. I wish everybody who suffered the power outage during this brutal week had someone they could reach out to.
As driving conditions improved, we went to stay at our friends’ house over the weekend. Because of COVID, it had been a long time since we could spend a day or two with people we care about. I feel grateful that a harsh situation like this became the best opportunity to bond again.
Thanks to last year’s COVID lockdown, I had the chance to concentrate on research without distractions. I was fortunate enough to make some discoveries, which I thought were a huge breakthrough. A few months after submitting some papers, I found out that I had in fact overlooked some closely related work in the literature. One of my results looked less impressive in light of the existing work, although it was still nice progress.
While this has been a humbling experience, it was also inspiring: I wouldn’t have been so optimistic had I known how long a journey the pioneers had traveled; I might have lost my faith and given up early had I known all the failed attempts by other people. It was exactly my ignorance that gave me the courage to attack the open problems and the hope to keep pushing. Luckily, I eventually bumped into paths and territories that others had overlooked, which led me to my destination.
As Alain Connes once wrote, the initial phase of making new mathematical discoveries “requires a kind of protection of one’s ignorance.” Sometimes, ignorance “frees people from reverence for authority and allows them to rely on their intuition.” In the same spirit, Steve Jobs also told people to “stay hungry, stay foolish.” Perhaps all intellectuals, including academics and industrial innovators, can use some protection of ignorance.
We like to be well-prepared before going on a challenging adventure. But knowing all the failed journeys of other people, knowing that even some brave and strong peers have failed their missions, can actually paralyze you into inaction. In fact, exploring the math world doesn’t require “full knowledge” of any field. Nobody ever knew “enough.” Once you have the minimum amount of knowledge needed to survive, you can start your journey.
“Whatever the origin of one’s journey, one day, if one walks far enough, one is bound to stumble on a well-known town: for instance, elliptic functions, modular forms, or zeta functions.” This is another quote from Alain Connes, and it resonates with me deeply. There is no single path to knowledge; we are free to explore our own paths and need not be ashamed of knowing too little about all the other possible paths. Be brave and keep pushing, and once in a while we will meet other adventurers in one of those famous mathematical towns.
Consider the classical Laplace equation in a domain $\Omega\subset\mathbb{R}^2$, with Dirichlet boundary condition on the boundary $\Gamma:=\partial\Omega$, written as
The theory of integral equations solves a given boundary value problem like this by reformulating it into an integral equation on the boundary of the domain, such that the 2D spatial differential problem in $\Omega$ is reduced to a 1D boundary integral problem on $\Gamma$.
There are two main approaches to integral equations: the direct approach and the indirect approach.
1. Green’s representation (direct approach). Green’s representation theorem says that the unknown function $u(x)$ can be expressed as
where
is the fundamental solution of the Laplace equation. Then by letting $x\to\Gamma$, this representation becomes an integral equation
where the function $\psi(y) = \frac{\partial u(y)}{\partial n_y}$ is the unknown Neumann data on $\Gamma$, and the $f(x)/2$ term is due to the so-called jump relation^{1}.
2. Potential theory (indirect approach). In potential theory, one starts by assuming that the solution has the form
where $\varphi$ is a “density function” on $\Gamma$. Then again by letting $x\to\Gamma$ and using the jump relation, we arrive at the integral equation
Comparing these two approaches, we see that the unknown function $\psi=\frac{\partial u}{\partial n}$ in the Green’s representation approach has a clear physical meaning (e.g., if $u$ is the temperature, then $\psi$ is the heat flux at the boundary), hence the name “direct approach.” On the other hand, the unknown function $\varphi$ in the potential theory approach doesn’t have a direct physical meaning, hence the name “indirect approach.”
In fact, there are two ways to make sense of this density function $\varphi$.
1. The charge density analogy. The name “potential theory” comes from the fact that the Laplace equation describes the gravitational potential or electrostatic potential in space. If $G(x,y)$ represents the electric potential generated by a point charge at $y$, then $\frac{\partial G(x,y)}{\partial n_y}$ is the potential generated by a dipole charge at $y$, hence $\varphi$ is the dipole charge density on $\Gamma$. Potential theory then generalizes the concept of charge density to other elliptic PDEs as well, terming $\varphi$ the density function for a variety of potentials. (E.g., velocity potential, traction potential, electromagnetic potential, etc.)
2. The jump of physical quantities. Another way to give meaning to $\varphi$ is to go through the process of how we arrived at an assumption such as $u(x) = \int_\Gamma \frac{\partial G(x,y)}{\partial n_y}\varphi(y)\,ds_y$. The key fact is that, with such an assumption, one is actually extending the solution $u(x)$ from $\Omega$ to the entire space $\mathbb{R}^2$ based on an underlying continuity assumption on $u$. Specifically, let’s assume a solution $U(x)$ for all $x\in\mathbb{R}^2\setminus\Gamma$, such that
i.e., $U$ is an extension of $u$ into the whole space by stitching together the field $u$ inside $\Omega$ and some unknown harmonic field $u_\mathrm{out}$ outside $\Omega$. According to the interior and exterior versions of Green’s representation theorem, we have
and
where $u_\infty$ is a constant associated to $u_\mathrm{out}$ at $\infty$. Without loss of generality, let’s assume $u_\infty=0$; adding the above two expressions together yields
If we assume that the normal derivative of $U$ is continuous across the boundary $\Gamma$, i.e. the normal derivative of $u$ matches that of $u_\mathrm{out}$ on $\Gamma$, then the first integral in the above representation vanishes. Therefore, defining the density $\varphi$ as
we recover the potential theoretic representation $u(x) = \int_\Gamma \frac{\partial G(x,y)}{\partial n_y}\varphi(y)\,ds_y$ for $x\in\Omega$.
In summary, the density function $\varphi$ physically represents the jump of the extended field $U(x)$ across the boundary $\Gamma$, assuming its normal derivative is continuous across $\Gamma$.
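Here is a small numerical illustration of this jump behaviour (my own sketch, with my own sign conventions: $G(x,y)=-\frac{1}{2\pi}\log|x-y|$ and outward normals on the unit circle). The double-layer potential of the constant density $\varphi\equiv1$ evaluates to $-1$ inside $\Gamma$ and $0$ outside, i.e., it jumps across $\Gamma$ by an amount equal to the density.

```python
import numpy as np

# Double-layer potential on the unit circle with constant density phi = 1,
# discretized with the (spectrally accurate) periodic trapezoid rule.
N = 400
t = 2 * np.pi * np.arange(N) / N
y = np.stack([np.cos(t), np.sin(t)], axis=1)  # quadrature nodes on Gamma
nrm = y                                       # outward unit normal on the circle
w = 2 * np.pi / N                             # arclength weight (ds = dt here)

def double_layer(x):
    """Evaluate int_Gamma dG/dn_y(x, y) * 1 ds_y at a point x off Gamma."""
    d = y - x
    kernel = -np.sum(nrm * d, axis=1) / (2 * np.pi * np.sum(d * d, axis=1))
    return float(np.sum(kernel) * w)

print(double_layer(np.array([0.3, 0.1])))  # about -1 (x inside)
print(double_layer(np.array([1.5, 0.5])))  # about  0 (x outside)
```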
Based on the above idea of field extension, we obtain an intuitive picture of the solvability relations between the interior and exterior boundary value problems:
1) When using potential theory to solve the exterior Neumann problem, we assume that the extended field $U(x)$ has matching Dirichlet data across the boundary, and then solve for the unknown jump of the Neumann data, $\psi(y):=\frac{\partial u(y)}{\partial n_y}-\frac{\partial u_\mathrm{out}(y)}{\partial n_y}$. Since we know that the interior Dirichlet problem is uniquely solvable, the exterior Neumann problem is also uniquely solvable (with appropriate behavior at infinity).
2) Likewise, the exterior Dirichlet problem is solved by matching the Neumann data of the extended field $U(x)$ and then solving for the jump of the Dirichlet data across $\Gamma$. Because the solution of an interior Neumann problem is only unique up to an arbitrary constant, the naive potential assumption for the exterior field, $u_\mathrm{out}(x) = \int_\Gamma \frac{\partial G(x,y)}{\partial n_y}\varphi(y)\,ds_y$, will result in an integral equation with a one-dimensional nullspace. An additional condition is needed, besides the potential-theoretic equation, to recover the unique solvability of the exterior Dirichlet problem using integral equations.
One key ingredient of public communication is to know your audience well. I would like to propose a visual model that can be helpful for adjusting your presentation based on the type of your audience.
I call this model the Audience Lightcone:
The Audience Lightcone is a simple idea that goes as follows. Your audience is covered by a lightcone. To cover a broader audience with a bigger lightcone, you will need to move the light source to a higher level.
In this model, the lightcone coverage is the broadness/narrowness of your audience with respect to the idea you want to convey. The level of the light source is the level of your viewpoint when presenting the idea.
For example, if I were to present a piece of math I did to an audience, then for me the audience can be classified as follows, from broader to narrower:
(1) a general audience that only knows very basic math
(2) people who are/were in math or science majors
(3) math researchers
(4) computational math researchers
(5) researchers who work on problems related to mine
If my audience sits closer to the top of this list, then I need to present my ideas from a higher-level viewpoint. This means providing more context and background stories, building more bridges between what my audience might know and what I want to convey, using more appropriate analogies, and refraining from technical jargon.
Conversely, if I have a narrower audience, such as my colleagues and project collaborators, then I can jump quickly into the lower-level details without spending much time explaining the context, because it is our common knowledge.
The big challenge, therefore, is how effectively you can still communicate ideas as your audience broadens. This is a hard task that requires tons of practice to do well. But I think the Audience Lightcone is a useful mental model that reminds you of one of the most important aspects of effective communication.
People who have learned numerical analysis all know the Gauss quadrature rule: for any integer $n>0$, there exist $n$ nodes $x_1,x_2,\dots,x_n\in[-1,1]$ and $n$ corresponding weights $w_1,w_2,\dots,w_n$ such that the approximation
$$\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i f(x_i)$$
is highly accurate for any function $f$ smooth on $[-1,1]$. The error of this approximation typically decays exponentially as $n$ increases. Together with scaling and shifting of variables, the Gauss quadrature efficiently handles all regular integrals (i.e., integrals involving smooth integrands) on any interval $[a,b]\subset\mathbb{R}$.
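As a minimal sketch (in Python with NumPy, which is my choice here, not the post's; the test integrand is illustrative), the rule and its scaling to a general interval $[a,b]$ look like this:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Approximate int_a^b f(x) dx with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    y = 0.5 * (b - a) * x + 0.5 * (b + a)      # shift/scale nodes to [a, b]
    return 0.5 * (b - a) * np.dot(w, f(y))

# For a smooth integrand the error decays exponentially in n:
approx = gauss_legendre(np.exp, -1.0, 1.0, 10)
exact = np.e - 1.0 / np.e                      # int_{-1}^{1} e^x dx
```

Already at $n=10$ the approximation agrees with the exact value to near machine precision.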
What if the integrand is singular? How would you approximate an integral when the integrand is not smooth, or even blows up somewhere on the interval? It turns out that most of the singular integrals in practice can be handled by a few strategies (80/20 rule again!). Here they are:
1. Integration by parts. People in calculus classes may have heard the joke: “Integrate by parts whenever you’re not sure what to do.” This is sometimes true for singular integrals. For example,
$$\int_0^1 e^x\ln x\,dx = \Big[(e^x-1)\ln x\Big]_0^1 - \int_0^1 \frac{e^x-1}{x}\,dx,$$
where on the right-hand side the boundary terms evaluate to $0$ while the new integral is in fact regular, so applying the Gauss quadrature to it works excellently.
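A quick numerical check of this trick, using the post's example $\int_0^1 e^x\ln x\,dx$ (a sketch, not the post's code; the reference value comes from integrating the Taylor series of $(e^x-1)/x$ termwise):

```python
import math
import numpy as np

# After integration by parts, int_0^1 exp(x)*ln(x) dx = -int_0^1 (exp(x)-1)/x dx,
# and the new integrand is smooth (it extends analytically across x = 0).
x, w = np.polynomial.legendre.leggauss(10)
y = 0.5 * (x + 1.0)                         # Gauss nodes mapped from [-1,1] to [0,1]
approx = -0.5 * np.dot(w, (np.exp(y) - 1.0) / y)

# Reference: int_0^1 (e^x - 1)/x dx = sum_{k>=1} 1/(k * k!) (termwise integration)
ref = -sum(1.0 / (k * math.factorial(k)) for k in range(1, 25))
```

A mere 10-point rule on the transformed integral is accurate to around machine precision, even though the original integrand blows up at $x=0$.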
2. Integration by substitution. Many singular integrals can be made regular after a change of variables. For example, substituting $x=\cos\theta$ in the integral below yields
$$\int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}\,dx = \int_0^{\pi} f(\cos\theta)\,d\theta;$$
then, assuming $f$ is smooth, the Gauss quadrature will just work for the second integral.
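A sketch verifying this substitution numerically, with the illustrative choice $f(x)=x^2$ (for which the exact value is $\pi/2$):

```python
import numpy as np

# Check: int_{-1}^{1} x^2 / sqrt(1 - x^2) dx = int_0^pi cos(t)^2 dt = pi/2.
# The right-hand side has a smooth integrand, so plain Gauss quadrature applies.
x, w = np.polynomial.legendre.leggauss(20)
t = 0.5 * np.pi * (x + 1.0)                 # Gauss nodes mapped to [0, pi]
approx = 0.5 * np.pi * np.dot(w, np.cos(t) ** 2)
```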
3. Product quadratures. When the integrand is a product of a regular function $f(x)$ and a singular function $w(x)$ (usually referred to as the “weight function”), one can often design efficient quadratures of the form
$$\int_{-1}^{1} f(x)\,w(x)\,dx \approx \sum_{i=1}^{n} \lambda_i f(x_i).$$
For example, the Chebyshev quadrature works perfectly for the integral in the previous example, where $w(x)=(1-x^2)^{-1/2}$. In fact, both the Gauss quadrature and the Chebyshev quadrature are particular instances of the Jacobi quadrature family. These quadrature rules are derived from the theory of orthogonal polynomials: for each given weight $w(x)$, there is a family of polynomials $\{g_0(x),g_1(x),\dots\}$ such that any two different polynomials are “orthogonal” to each other in the sense that, for any $i\neq j$, $\int_{-1}^{1} g_i(x)\,g_j(x)\,w(x)\,dx = 0$.
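For the Chebyshev weight, the product quadrature has a well-known closed form: all the weights equal $\pi/n$ and the nodes are the Chebyshev points. A small sketch (the test integrand $x^4$ is mine):

```python
import numpy as np

def cheb_gauss(f, n):
    """n-point Gauss-Chebyshev rule for int_{-1}^{1} f(x)/sqrt(1-x^2) dx.
    The rule is exact for polynomials f of degree up to 2n - 1."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))  # Chebyshev nodes
    return (np.pi / n) * np.sum(f(x))

approx = cheb_gauss(lambda x: x ** 4, 5)       # degree 4 <= 2*5 - 1, so exact
exact = 3 * np.pi / 8                          # int_{-1}^{1} x^4/sqrt(1-x^2) dx
```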
4. Extrapolation. The Romberg integration is an extrapolation method that can enhance the accuracy of quadrature rules defined on equally spaced nodes. As a simple example, consider the following trapezoidal rule approximation with spacing $h=\frac{\pi}{n}$,
where the integrand is nonsmooth at $x=0$ and $E(h)$ is the error, which depends on $h$. It turns out that this error can be written as a power series in $h$:
The leading error term is canceled if one appropriately combines an $h$-approximation with a $2h$-approximation:
The new error is of size $h^4$, which is much smaller than the original $h^2$. This procedure can be repeated to cancel the leading errors in the expansion one at a time, so that the accuracy is substantially improved.
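A minimal sketch of one such extrapolation step for the trapezoidal rule (using a smooth integrand for simplicity, rather than the nonsmooth example in the text; the $h^2\to h^4$ improvement is the same mechanism):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f, exact = np.exp, np.e - 1.0               # int_0^1 e^x dx
t_2h = trapezoid(f, 0.0, 1.0, 8)            # coarse approximation, spacing 2h
t_h = trapezoid(f, 0.0, 1.0, 16)            # fine approximation, spacing h
richardson = (4.0 * t_h - t_2h) / 3.0       # cancels the leading h^2 error term
```

The combination $\frac{1}{3}(4A(h)-A(2h))$ kills the $h^2$ term of the expansion, leaving an $O(h^4)$ error.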
5. Singularity cancellation. Some singularities are stronger than others. For example, $x\ln x$ is less singular than $\ln x$ because the former is bounded near $x=0$. Let’s consider again the first example above, which integrates $e^x\ln x$. The integrand near $0$ behaves like $\ln x$, so we can subtract $\int_0^1\ln x\,dx$ from the original integral and then add it back, that is,
$$\int_0^1 e^x\ln x\,dx = \int_0^1 (e^x-1)\ln x\,dx + \int_0^1 \ln x\,dx.$$
The integral $\int_0^1\ln x\,dx=-1$ by a simple integration by parts. If we then apply the $n$-point Gauss quadrature to the remaining integral $\int_0^1(e^x-1)\ln x\,dx$, the error decays roughly like $1/n^4$, much faster than the $1/n^2$ rate you would get by applying the same quadrature to the original integral.
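A sketch comparing the direct and singularity-cancelled approaches on $\int_0^1 e^x\ln x\,dx$ (the reference value comes from termwise integration of the Taylor series of $(e^x-1)/x$):

```python
import math
import numpy as np

x, w = np.polynomial.legendre.leggauss(20)
y, wy = 0.5 * (x + 1.0), 0.5 * w            # 20-point Gauss rule mapped to [0, 1]

# Reference: int_0^1 e^x ln x dx = -sum_{k>=1} 1/(k * k!)
exact = -sum(1.0 / (k * math.factorial(k)) for k in range(1, 25))

direct = np.dot(wy, np.exp(y) * np.log(y))                   # ln-singular integrand
corrected = np.dot(wy, (np.exp(y) - 1.0) * np.log(y)) - 1.0  # add back int_0^1 ln x dx = -1
```

The corrected value is substantially more accurate than the direct one at the same number of nodes.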
These five strategies, or combinations of them, cover almost all common ways to integrate singular functions. One strategy can be more efficient than another depending on the particular application of interest.
BOCTAOE. There are situations where none of the above strategies is efficient enough. Such marginal cases attract most of the research effort. (80/20 rule again.) For example, here is one strategy that I recently took in my research:
6. Error correction. Sometimes it may be necessary to find an explicit expression for the quadrature error $E(h)$, so that you have the power to develop more sophisticated methods tailored to your application of interest. As an example, consider the integral of $f(x)=e^{-x^2}\ln x$; its right-hand Riemann sum approximation on $[0,6]$ with $n$ subdivisions is
where $h=\frac{6}{n}$ and $\epsilon=\int_6^\infty f(x)\,dx<10^{-16}$ is a small constant that is immaterial in practice. The error $E(h)$ has the following expansion:
where $\zeta(z)$ is the famous Riemann zeta function. Such formulae can take a lot of effort to derive in general. But once available, they allow you to develop highly efficient numerical methods.
The reactionary approach seems to be a default setting of human behavior: people react only when something undesirable happens. Consequently, people have to react every single time something undesirable happens. This is apparently so in politics (especially prominent in 2020). But you can observe reactionary behaviors in all other domains.
For example, parents and teachers often react against technology. It’s quite typical to hear things like “Turn off the video game!” or “Put down your phone!” or “No electronics in class!”
Another common example is doing presentations. Now that most presentations are done with slides, you can find a lot of reactionary advice such as “Don’t put everything on one slide” or “Don’t include irrelevant pictures or animations” or “Don’t use illegible fancy fonts.”
Taking a reactionary approach is totally fine when the problem you are dealing with is simple (like replenishing your toilet paper) or a one-off thing (like repairing a flat tire). For more complex and constantly evolving problems, such as education and technology, taking a reactionary approach is like playing a never-ending whack-a-mole game: you ban gaming, they turn to their phones; you ban phones, they turn to weed and drugs. It never solves the real problem. It is exhausting and can be harmful to your relationships. Maybe we can take a more systematic approach instead.
What makes me happy is that more and more creative people are taking on the task of improving education with technology.
An early example is Khan Academy. Khan started by making videos to teach some school classes online. This seemingly simple approach enables what’s called self-paced learning: instead of being forced to sit and listen in class, students can pick their own time to watch the videos; instead of feeling embarrassed to ask questions, students can just replay the part where they got lost. The traditional way of learning is to attend lectures in class and do the homework afterwards. But with video lectures, students can now watch them at home and do the homework in class (i.e., with tutoring). Learning becomes more engaging and more human.
This is what I’d like to call a creationary approach. I see “reactionary” and “creationary” as antonyms (and they are anagrams!):
Over the past decade, I have seen more and more creationary people teaching math creatively with the help of technology. Just to name a few of them:
It is people like these who have given me a lot of hope and excitement about the future of education.
In our daily lives, we often fall into a whack-a-mole lifestyle, too. People go on diets to lose weight multiple times a year. People try to quit smoking many times in their lives. People make the same New Year’s resolutions every year.
To take a creationary approach to life means to make choices based on what you are for rather than what you are against.
One trick I often use is to create connections between the things I love doing and the things I think I should do. For example, I enjoy exercising because I love listening to podcasts at the same time. I enjoy eating healthy because healthy foods are delicious and they allow me to focus on my research problems for longer without getting fatigued easily. Once these connections become habits and instincts, doing the “hard” thing feels effortless.
Of course, taking a creationary approach is hard. It is not always easy to know what you are for; it can be very hard to find your purpose; being creative may not always give you the desired result. But I also believe that creativity is something that you can train for: be aware that you can always choose creation over reaction, allow yourself more chances to experiment and to fail, learn from creative people and enjoy their company. Sooner or later, you will encounter your own serendipity.
I have noticed that there are a lot of similarities between building a startup company and building an academic career:
Disclaimer: I am writing these as an exercise for myself to think things through. If you are a junior academic searching for career advice, please take them with a grain of salt.
In the business world, people know about economies of scale. But Graham pointed out that at the early stage of a startup, it is actually important to focus on things that don’t scale. Here are a few pieces of his advice that I think are also valuable for academics. I will state Graham’s original advice, then map it into the academic context.
The roadmap of a startup or an academic career is multidimensional: you strive to do novel things, but you also need to do unscalably laborious things, especially at the early stage. Graham suggested thinking of this multidimensional task as a vector rather than a scalar. I made the following diagram to illustrate the idea:
Some people (including myself) tend to focus solely on doing research and hope that everything else will take care of itself eventually (path A in the diagram), but in reality we often also need to do the unscalable things in the other dimensions, such as improving our skills to present, to communicate, to collaborate, and to help others (path B in the diagram). After all, scientists are humans; career goals are social, not just personal. Building a career is not just about personal gains, but also about helping others and making the world around us better.
At the beginning of December, after reading a blog post, I finally decided to do a 365-Day Challenge for the year of 2021: running every day, for at least 15 minutes a day. Then I decided to start right away, even though it was still 2020, because why not. This may sound like a joke to some people because, seriously, how is this even a “challenge”?
As someone who has done a few short-term challenges, I know that there is always something unexpected in the way that can ruin your plan. If you do a 30-Day Challenge, the most important point is not how easy or hard it is on a given day, but that if you skip even one day, you fail. When a challenge lasts as long as 365 days, plenty of bad things could just happen, say, on the 113th day, and then everything you did during the first 112 days is wasted.
For my own challenge, I have imagined days when I will be too busy and stressed meeting a deadline, or days when the weather is just too bad, or days when I have been driving all day and feel too tired to do anything, or days when I have traveled to a strange place where it could be unsafe to run outside. I am sure there are many more situations that I just can’t imagine yet.
Since 2018, I have done a few short-term challenges – sometimes with a purpose and sometimes just to bring some new ingredients into life. I started with a 30-Day Vegan challenge; then I went on to do a 30-Day Pescatarian challenge, which wound up lasting almost 5 months. Now I am back to being omnivorous, but with more knowledge about different dietary options and how it feels to be a practitioner. In general, I have become more mindful of how different foods affect my body and mind, and I am eating much healthier than before the dietary challenges.
One thing that was unexpected while eating vegan was the experience of being a minority. Growing up in China, I had never been a member of a minority group. After I came to the US, I did meet racist strangers on a few occasions over the years, but in general I am in academic environments that have always been welcoming and never made me feel like a minority. But during my 30 days of eating vegan, the restaurants I went to often made me feel, through their reactions, that I was a trouble to them, including some major restaurants that do provide vegetarian options. To feel more comfortable, every time I decided to eat out I had to do extra investigation and select an appropriate restaurant. Most of the time I just cooked my own food. (On the good side, I did discover some vegan-friendly cuisines, like Indian and Mediterranean, which are very enjoyable.)
I also did a 30-Day Swimming challenge, for which I swam an hour a day, every day. I did it because there was a year or two during my time at Michigan when I stopped exercising completely. I remember one sunny winter day in early 2018 when I finally went for a swim after a long time. I can never forget the moment when I pushed open the door and came out of the gym, and a breeze of cool air flowed into my chest. “How refreshing!” I said to myself. That was a feeling I had missed for so long. That day, I started the swimming challenge, which I went on to complete. Then I was motivated enough to keep exercising, either swimming or running for at least an hour, 5 to 6 times every week, until the pandemic hit in March of 2020.
There is one important lesson I learned from making these small changes in life: you must reinforce the changes by associating them with immediate rewards; never rely on grit and perseverance, since those are guaranteed to wane over time and you are bound to fail that way. (Losing weight is a reward that comes so late that most people with that goal fail.) Here are some examples of immediate rewards that worked for me:
I had stopped being active for 9 months since the COVID lockdown in March. So finally, at the beginning of December, I decided to do the 365-Day Challenge for the year of 2021 and also started running again right away.
All went well for almost a month. Then came December 31st of 2020, a day on which I had so many perfect excuses to skip running.
It was cold and rainy the whole day (it even hailed in the afternoon); I had invited a few people over for a gathering that night and had to do grocery shopping during the day; I had planned to work a little on my research in the morning; most “importantly,” it was still 2020 and my 365-Day Challenge would not “officially” start until the next day.
Given all the circumstances, this was exactly the kind of day I had imagined from the beginning. I decided that if I skipped this day, I would not be able to convince myself that I could still complete every single one of the following 365 days. So I kept an eye on the weather forecast and went out as soon as the rain temporarily stopped. It was freezing and windy outside, and the street was empty and quiet. Shortly after I began running, it started to rain again. When I got home, my face and hands were numb, but I was very happy. Interestingly, the podcast I was listening to that day talked about building habits.
Now, my 365 days begin. Thanks to that rainy day, I am quite confident about completing the challenge this year while working on many different things.
Today, it snowed in Austin! Going out for a run is actually not hard for me at this point – I think I have successfully built it into a habit over the past 30 days.
I have taken some photos of the snow. Although such snow is no big deal for someone who has lived in Michigan, it is definitely a rare event in Texas!
This has been a special year, because everybody said it was bad. The COVID pandemic suddenly exploded and completely changed our way of living. People may still remember that this is the same year Kobe Bryant died in an accident, but it feels like that happened an eternity ago — as with everything else that happened before lockdown.
The pandemic seemed “inevitable” in hindsight. For decades, people like Nassim Taleb and Bill Gates had been warning about such an event as a necessary outcome of our world getting ever more connected. Yet when it finally happened, it hit us so quickly and so hard that nobody was prepared. Early on, many people wishfully thought it would somehow just disappear quickly and magically; then people thought it might only last a few months; eventually, people were disappointed to find that we ended up practicing the lockdown lifestyle for the rest of 2020 and may well be doing it for a good part of 2021. Currently, the first wave of vaccines has arrived and is being produced and distributed; however, a second wave of outbreaks is also hitting the US, Europe, and many other places in the world hard. The virus has mutated, and we still don’t see a clear way out of this extended battle.
This is the mainstream and social media’s version of “This has been a special year.” Everybody has been living an extraordinarily bad year according to this narrative. We are under the constant bombardment of bad news.
I would like to take a break for a moment and remind myself that nobody’s life is the same. After all, we are individuals living our own lives and having our own experiences. It is true that everybody was hit by the pandemic one way or another, but it cannot be true that we all have the same experiences and feelings.
We can’t control what comes at us, but we can decide how we perceive and respond to those events. (And it is probably wise to cut oneself off from the media most of the time, too.)
So I would like to write a second version of “This has been a special year” that is particular to myself, because surely everybody has their own special year.
This has been a special year for me, because it is meaningful to myself in many ways.
At the beginning of the COVID pandemic, my parents from China were very worried about my situation in the US. I assured them that I was fine and had things to be busy with; I remember I explicitly told my parents: “This pandemic will one day come to an end and become a past memory. I don’t want to wake up that day just to realize that I have wasted my time in the entire year of 2020, doing nothing but doomscrolling through bad news all day long.” I would like to thank my past self for delivering this powerful message and staying focused on the more important and meaningful things.
To be fair, though, I did take COVID seriously from early on (maybe even a little paranoid). Back in January, when everything about the disease was unclear and people in China were still planning to celebrate the Lunar New Year, I had already bought my parents some masks and told them to stay away from crowds and wash their hands. I recall that just a few days after I bought those masks, masks were sold out on Taobao and every other Chinese e-commerce platform. No joke, it was frightening. After the pandemic hit the US, “don’t let my parents worry about me” has been one of my biggest motivations to stay healthy. I very much hope the situation ends soon in 2021 so that I can meet with my family and friends and travel again.
I have not updated my blog for a year now, and I would like to get back to writing a little more often. I originally planned to start writing after the New Year. But the lockdown experience made it clear to me that many concepts (such as the “new year”) are merely human constructs — they can sometimes be a mental obstruction when you want to start something new. So, to break the “wait until after the new year” mantra, I decided to start writing before the new year, making this a part of 2020 as well. (I am sure my future self will thank me for this decision.) I hope that in 2021 I can become a better writer, which also means a better thinker.
My officemate’s desk had a few power outlets that stopped working, and he reported them for repair two days ago. When I came back early this morning, I found the office door wide open, and at my officemate’s desk sat two old men, chatting. After exchanging a few words with them, I learned that they were the electricians who had come to do the repair, and that fixing these few outlets had taken quite some effort.
The younger-looking one said that the outlets were dead because of a problem with the wiring in the ceiling; the ceiling wiring had a problem because of a problem with the wiring inside the wall at the other end of the office; and these problems probably happened because the last person responsible for the wiring, for whatever reason, left the wires badly connected and never dealt with them.
I listened to his long explanation, impressed though not really following. I asked: this must have been really hard to diagnose? He pointed at the other, much older man and said: “It took more than an hour to find the cause, and the credit is all his. His name is George.” I was filled with respect, as if I had run into the sweeping monk of Shaolin. I said: “Thank you so much. I will definitely tell my officemate!”
After they left, I kept thinking it over: wasn’t this just an electrician’s version of debugging? A bug left behind by a predecessor’s bad habits took an excellent later programmer quite a few twists and turns to fix. It shows how precious a programmer who follows best practices is.
One thing I wanted to make sure of is that I can still update this blog, which is hosted using Octopress. But to be honest, until recently I was not really good at using git and GitHub, so it took me some time to figure out what I needed to do to resume blogging with Octopress.
Logically, the key concept of Octopress is that it uses a ghost branch. All the source files for my blog are tracked NOT by the master branch, but by a ghost branch (named “source” under the Octopress framework) that is completely unrelated to the master branch in terms of git history – it’s a hanging branch that lives in a parallel world, albeit in the same repo, hence the name “ghost branch.”
Another important thing about Octopress is how the local blog is deployed onto GitHub Pages. This also took me a little while to figure out. It turns out that my blog is first generated under a `_deploy` folder, and then that folder is synced to the master branch of the repo.
Summarizing the two paragraphs above, Octopress organizes my blog in the following way:
- the ghost branch (`source`) tracks the blog’s source files;
- the generated site is placed in the `_deploy` subfolder;
- the `master` branch tracks the contents of `_deploy`, which is what GitHub Pages serves.
Apparently, it is not best practice to use a ghost branch, nor to use the master branch to track a subfolder of the ghost branch. These are twisted logic puzzles for the mind. But that’s how Octopress works!
Once the logic is clear, I only need to do the following three things to resume blogging:
1. Clone the ghost branch (mine is named `source`).
2. Clone the GitHub Pages repo (into the subdirectory `_deploy`) from the master branch.
3. Reinstall the plugins for my blog.
Of course, you need to have Bundler installed to use the `bundle` command, and to do that on a Mac you will probably need to first install rbenv so you can smoothly run the Ruby commands for your blog. In other words, you will need to set up your environment for Octopress again, which I covered in my first blog post (in Chinese).
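In concrete commands, the three steps might look like the following sketch (the repo URL and user name are hypothetical placeholders; `source` is the Octopress default ghost-branch name):

```shell
# Step 1: clone the ghost branch "source", which tracks the blog's source files
# (USERNAME and the repo URL are placeholders -- substitute your own)
git clone -b source https://github.com/USERNAME/USERNAME.github.io.git octopress-blog
cd octopress-blog

# Step 2: clone the master branch (the generated site) into the _deploy subfolder
git clone https://github.com/USERNAME/USERNAME.github.io.git _deploy

# Step 3: reinstall the blog's plugins/gems
bundle install
```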
rbenv on MacOS Catalina

After I upgraded my MacOS to 10.15 Catalina, I ran into trouble again. This time, when I ran `rake` commands, I got an error message telling me to update `bundler` to version 2 or higher.
However, when I ran `gem install bundler`, I got a permission error again – this means that `gem` was trying to install bundler into the Mac’s default system ruby; in other words, `rbenv` was broken. So I tried to reinstall `rbenv` and then install a custom `ruby` (version 2.6.5). In this last step, however, I constantly ran into trouble. It was a long debugging process, so I am just going to list the hurdles I went through, which may be helpful to others.
Hurdle #1: `openssl` overhead

`rbenv` kept trying to install `openssl` first before actually installing ruby 2.6.5, which took a lot of time and greatly reduced my debugging efficiency. I checked `which openssl` and confirmed that my system already had `openssl` installed, so I told `rbenv` to use the system’s `openssl` instead.
Hurdle #2: incorrect C compiler

After many failures, I finally found in the error log that `rbenv` was using `CC=x86_64-apple-darwin13.4.0-clang` as the C compiler and therefore had not been able to compile and install ruby. This can be resolved by explicitly telling `rbenv` to use the system compiler `/usr/bin/gcc`.
Hurdle #3: anaconda gets in the way

After the previous two fixes, I was still unable to install ruby. This time the error log showed that `rbenv` was using anaconda’s `ar` instead of the system’s own `ar`. So I disabled anaconda on my system: in the `~/.bash_profile` file, remove or comment out any lines that have to do with anaconda/conda, then restart the shell and run the install command again.
Finally, I was able to install ruby with `rbenv`! Notice that in the last command I removed `CC=/usr/bin/gcc`, because disabling anaconda also resolved the problem in Hurdle #2.
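For reference, the final working command was presumably something along these lines (a sketch; the `openssl` location is an assumption and depends on your setup):

```shell
# Point ruby-build at an existing openssl instead of letting it build its own.
# The path is an assumption -- check `which openssl` or `brew --prefix openssl`.
RUBY_CONFIGURE_OPTS="--with-openssl-dir=$(brew --prefix openssl)" rbenv install 2.6.5
```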
After installing the custom ruby, I first need to tell the system to use that version of ruby. Then I can go back to my Octopress blog folder and install the newest bundler.
However, there was still one final small hurdle:

Hurdle #4: rake version

When I tried to create a new blog post via `rake new_post['title']`, I got an error message complaining about the rake version. So I just opened the `Gemfile` in my Octopress folder, found the line `gem 'rake', '~> 10.5'`, changed it to `gem 'rake', '~> 12.3'`, and ran `bundle install` again. Now it’s all set.
Scenario: you finish working in an SSH session and close your laptop to go for lunch or tea. When you come back and open your laptop to resume your job, the connection is broken, or the VPN breaks and forces you to disconnect. You have no choice but to set up the connection once again and reopen the documents/apps before you can resume from where you left off.
Solution: with GNU Screen, you can resume directly from where you left off, without having to reopen all the documents/apps you’ve been working on.
The concept of GNU Screen is like turning a computer screen on and off. You turn the screen off when you are done and let the computer keep running; you come back later, turn the screen back on, and continue your tasks.
- Run `ssh -t <server.domain.name> screen -R` to connect to the server and attach a screen session.
- Press `ctrl-a d` to detach from the session. This also disconnects from the SSH server.
- Run `ssh -t <server.domain.name> screen -R` again to reconnect, and you are right where you left off.
- When you are done, end the session as usual (`ctrl-d`).

Limitation: if you are using apps with a graphical interface, not just the command-line environment, then `screen` is not suitable for resuming such jobs; VNC would be the ideal choice instead.
Note: this note is about University of Michigan computing resources.
People working on numerical analysis often need to develop efficient code via multi-language programming, e.g. MATLAB/C++ or MATLAB/Fortran. A Linux operating system is needed for the best programming experience. (Mac OS has all sorts of compatibility issues.) For UM people who are using a Mac, I have explored the following four options for remotely accessing Linux-based computing resources.
The ITS SCS provides the easiest access to certain computational resources, with no extra permissions or purchases needed. But the software may not be up to date; e.g., the MATLAB version on this server is relatively old (MATLAB R2012b).
Steps:
- Run `ssh -Y uniqname@scs.dsc.umich.edu` in a terminal.
- Type `matlab` to run the program. XQuartz graphics will be invoked.

For math people, we can contact the East Hall Technical Service (EHTS) for help. They offer newer software. (The EHTS is one of the four regional support desks of the LSA IT.)
Steps:
- Connect to the `vulpix.math.lsa.umich.edu` server.

The UM Computer-Aided Engineering Network (CAEN) provides the smoothest experience for general users as well as power users. The CAEN computers run the newest Linux and Windows systems, with the newest and most complete software libraries for all sorts of computational work. But these are only conveniently available to engineering students.
Availability:
Steps:
This option builds connections using VNC (instead of XQuartz), which is faster and more stable; you can disconnect and resume right where you left off at any time.
Info about Flux:
Steps:
The reason I got thinking about dollars is that, while playing with some programming today, I looked into the 64-bit integer type Int64 to see how many digits the largest representable number has. Int64 is a data type dedicated to representing integers, using 64 binary bits of 0s and 1s. Since one bit represents the sign, the remaining 63 bits can represent numbers up to $2^{63}-1 \approx 9.22\times10^{18}$, a 19-digit number.
Interestingly, Wikipedia has a page for this number, and one section mentions a news story: in 2013, PayPal’s system once malfunctioned and credited a customer’s account with “\$92 quadrillion.” That is a 17-digit number; with the 2 digits after the decimal point, it is exactly 19 digits, the largest Int64 integer. So PayPal does its bookkeeping in Int64, and this was an integer overflow.
So with Int64 bookkeeping, keeping two digits after the decimal point, the integer part can go up to about \$92 quadrillion, five orders of magnitude above the one-trillion-plus US dollars currently in circulation worldwide. Plenty of headroom.
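These numbers are easy to check in Python (a quick sketch using NumPy's integer-type info):

```python
import numpy as np

int64_max = np.iinfo(np.int64).max   # largest signed 64-bit integer, 2**63 - 1
digits = len(str(int64_max))         # it has 19 decimal digits
max_dollars = int64_max // 100       # bookkeeping in cents: about $92 quadrillion
```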
Furthermore, this story tells us that the smallest unit of money on PayPal is 1 cent. So is the cent the smallest unit of the US dollar? I went to the Wikipedia page for the US dollar and found that it is not: the smallest current unit of the dollar is the mill (one-thousandth of a dollar), three places after the decimal point. That did not surprise me. What really surprised me was the following passage:
... “mill” is largely unknown to the general public, though mills are sometimes used in matters of tax levies, and gasoline prices are usually in the form of \$X.XX9 per gallon, e.g., \$3.599, more commonly written as \$3.59 ⁹⁄₁₀.
Reading this, I marveled: I see this thing every day, yet I have been taking it for granted and overlooking it.
What is this passage saying? It is about the price boards we see at gas stations: the prices on them are all given to three decimal places, in the \$X.XX⁹⁄₁₀ form quoted above.
There are things we see every day and assume we understand, until the question is raised. Thousands of years ago, Socratic questioning led people to examine their own knowledge and discover the limits of what they knew. Today I experienced that feeling myself.
–
1) If we kept three decimal places instead of two, Int64 could still represent amounts at the quadrillion-dollar level, so it would not make much difference. And given that PayPal keeps only two decimal places, I have reason to believe that actual financial practice uses the same data format throughout. This keeps the industry’s financial software consistent and saves cost; the few industries that need the mill unit only have to record one extra digit.
2) Looking without seeing (can one say “to look without seeing it” in English?) reminds me of that classic song from more than 50 years ago:
People talking without speaking
People hearing without listening
People writing songs that voices never share
And no one dared
Disturb the sound of silence.
In the US, this song has practically become a symbol of parody culture: whenever someone in a scene recalls an unbearable past and sinks into contemplation and memories, the background music starts playing “Hello darkness my old friend …”. There are tons of parody videos online, and they are quite funny.
3) While writing this post, I needed to translate English number words into Chinese, e.g. billion into 十亿 and trillion into 兆. In the process, I discovered that 不可思议 (“inconceivable”) is actually also a numeral unit: one 不可思议 equals 1 followed by 64 zeros. The unit originated as a Buddhist term describing vast supernatural powers. How inconceivable indeed!