
Random Talk on Random Thoughts

Can't Go to Facebook


Tonight, I went to Facebook as usual, but received a “Server not found” error in Firefox. The screenshot below illustrates the problem.

[Screenshot: can’t go to Facebook]

Found 2 Incomparable Topologies


  1. Lower limit topology ($\R_l$)

    • Basis: $\{[a,b) \mid a,b \in \R\text{ s.t. } a < b\}$
  2. K-topology ($\R_K$)

    • Basis: $\{(a,b), (a,b) - K \mid a,b \in \R\text{ s.t. } a < b \}$, where $K = \{ 1/n \mid n \in \Z_+ \}$.

$\R_l \nsubseteq \R_K$

Consider a basis element $[a,b)$ of $\R_l$. Any basis element of $\R_K$ containing the point $a$, whether of the form $(c,d)$ or $(c,d) - K$, contains points less than $a$, so none of them is a subset of $[a,b)$.

$\R_K \nsubseteq \R_l$

Let $B_2 = (-1, 1) - K$. Then $B_2$ is open in $\R_K$ but not in $\R_l$: any basis element of $\R_l$ containing $0$ contains some $[0,b)$ with $b > 0$, which must contain $1/n$ for some $n \in \Z_+$ by the Archimedean property. Thus, $\forall b > 0, [0,b) \nsubseteq B_2$.

Play YouTube Videos on an Old Version of Firefox


Problem

[Screenshot: old Firefox can’t browse YouTube?]

I went to YouTube with an old version of Firefox.

Solution

Request YouTube’s HTML5 player and enjoy watching videos.

[Screenshot: require YouTube’s HTML5 player]

[Screenshot: tree gun]

Lessons learnt

Just like on *nix, <Alt-PrtSc> can be used to capture only the current window instead of the whole screen.

Understood Euler Product Formula


Before I understood this formula

When I was a high school student, it was hard for me to imagine a product whose index loops through all prime numbers, because primes don’t appear in a regular way: between 1 and 100 there are 25 primes, but between 900 and 1000 there are only 14.

Even though it’s easier to imagine the infinite sum whose $n$-th term is $n^{-s}$, before learning the $p$-Test and the Comparison Test for the convergence of infinite series, I couldn’t understand why the infinite sum on the left-hand side of the following equality exists.

\[ \sum_{k = 1}^{\infty} \frac{1}{k^s} = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}} \]

A heuristic way to understand it

  1. Note that a geometric series converges (absolutely) when its common ratio is less than $1$ in absolute value.
  2. Borrow the proofs about the Comparison Test and the $p$-Test to convince yourself that the infinite sum on the LHS of the formula is well-defined.
  3. Know something about convergent infinite products.1
  4. Convince yourself that the infinite product on the RHS exists.2
  5. Recall the Fundamental Theorem of Arithmetic.

I think the last item is the trickiest step. Writing out the following lines made me understand the equation.

\begin{align}
& 1^{-s} + 2^{-s} + 3^{-s} + \cdots \\
=& (1^{-s} + 3^{-s} + 5^{-s} + \cdots) (1^{-s} + 2^{-s} + 2^{-2s} + \cdots) \label{step1} \\
=& (1^{-s} + 3^{-s} + 5^{-s} + \cdots) \cdot \frac{1}{1 - 2^{-s}} \label{step2} \\
=& (1^{-s} + 5^{-s} + 7^{-s} + \cdots) (1^{-s} + 3^{-s} + 3^{-2s} + \cdots) \cdot \frac{1}{1 - 2^{-s}} \label{step3} \\
=& (1^{-s} + 5^{-s} + 7^{-s} + \cdots) \cdot \frac{1}{1 - 3^{-s}} \cdot \frac{1}{1 - 2^{-s}} \label{step4}
\end{align}

Step \eqref{step1} (resp. \eqref{step3}) holds because from each $k^{-s}$, the powers of $2$ (resp. $3$) can be factored out of $k$, leaving in the leftmost bracket the sum over integers not divisible by $2$ (resp. by $2$ or $3$). In steps \eqref{step2} and \eqref{step4}, the formula for the sum of a geometric series is applied.

Refining the above thoughts

I’ll end this post by wrapping up the above ideas with summation and product signs.

\[ \begin{aligned}
\sum_{k = 1}^{\infty} \frac{1}{k^s}
&= \prod_{p \text{ prime}} \sum_{k = 0}^{\infty} \frac{1}{p^{s k}} \\
&= \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}}
\end{aligned} \]

The first equality is the Fundamental Theorem of Arithmetic at work; note that the inner sum has to start at $k = 0$, so that each $k$ picks up the factor $1$ from every prime that doesn’t divide it.
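Though not part of the original post, a quick numerical sanity check of the formula can be run in C++ for $s = 2$ (the truncation point `N` is my own choice); both truncations should be close to $\pi^2/6 \approx 1.644934$.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Compare the truncated sum with the truncated Euler product for
// s = 2; both should be close to pi^2/6 = 1.644934...
int main() {
    const double s = 2.0;
    const int N = 100000;                 // truncation point

    // Left-hand side: sum of k^{-s} for k = 1, ..., N.
    double lhs = 0.0;
    for (int k = 1; k <= N; ++k)
        lhs += std::pow(k, -s);

    // Right-hand side: product of 1 / (1 - p^{-s}) over primes
    // p <= N, found by a sieve of Eratosthenes.
    std::vector<bool> is_prime(N + 1, true);
    double rhs = 1.0;
    for (int p = 2; p <= N; ++p) {
        if (!is_prime[p]) continue;
        for (int q = 2 * p; q <= N; q += p)
            is_prime[q] = false;
        rhs *= 1.0 / (1.0 - std::pow(p, -s));
    }

    std::cout << "truncated sum     = " << lhs << '\n'
              << "truncated product = " << rhs << '\n';
    return 0;
}
```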

  1. One may refer to the convergence criteria of infinite products on Wikipedia.

  2. There’s more than one way to do it. When I tried to do this for the first time, I applied the Mean Value Theorem to $\log$ on $[1, 1 + x]$ (so that $\log(1 + x) = x/c$ for some $c \in (1, 1 + x)$) to establish a standard result.

    \[ \frac{x}{1 + x} < \log(1 + x) < x \quad \forall\, x > 0 \]

An Image With Changeable Size


Background

I sometimes upload pictures to illustrate ideas; see, for example, Two Diagrams Illustrating the Isomorphism Extension Theorem in this blog.

Motivation

Enable users to adjust the size of SVG graphics.

Problem

In the linked post, if the zoom level is too large, then part of the image will be hidden.

[embedded SVG image at a fixed size]

This is quite inconvenient for users who want a whole picture.

Goal

To set up a slide bar which controls the size of an SVG image.

[Slider: Image size: 200]

[embedded SVG image, resizable via the slider]

Fbi vs Fim


This is just a small table listing some differences between fbi and fim.

|     | Advantages | Disadvantages |
|-----|------------|---------------|
| fbi | supports SVG files | doesn’t support tmux<br />doesn’t have full control over the zooming size |
| fim | supports tmux<br />supports custom zooming1 | doesn’t support SVG files |

To view SVG images in tmux buffers, one can use ImageMagick’s convert command.2

Lessons learnt

While writing the above table, I ran into the problem of a Markdown table cell spanning more than one line. Luckily, by searching “kramdown table lines” on Google, I quickly found a Stack Overflow question which solved my problem.3 Note that a bare <br> is not the best way: add a slash (i.e. <br />) to suppress the following messages.

Warning: The HTML tag 'br' on line 15 cannot have any content -
auto-closing it
Warning: The HTML tag 'br' on line 17 cannot have any content -
auto-closing it
Warning: The HTML tag 'br' on line 1 cannot have any content -
auto-closing it
Warning: The HTML tag 'br' on line 1 cannot have any content -
auto-closing it
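For instance, a two-line cell can be written with the self-closing form (a minimal kramdown sketch; the cell text is only illustrative):

```
|     | Advantages                                 |
|-----|--------------------------------------------|
| fim | supports tmux<br />supports custom zooming |
```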

  1. By :nn% 

  2. By convert in.svg out.jpg 

  3. Newline in markdown table? on Stack Overflow. 

Looping Through Lines


Problem

I copied lines from an enumerated list in this blog and pasted them into Vim. The content of each list item appeared in the Vim buffer, but the numbers did not.

An illustration of the problem

A sample ordered list with 2015 items

  1. Item one
  2. Item two
  3. Item three
  ...
  2015. Item 2015

What is seen in Vim after copy and paste

Item one
Item two
Item three
...
Item 2015

If one writes in Markdown and copies a numbered list from elsewhere into a text editor, it is very inconvenient to add the numbers back manually. To exaggerate this inconvenience, I used 2015 items above.

Goal

Insert the item number at the beginning of each line.

1. Item one
2. Item two
3. Item three
...
2015. Item 2015
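The post’s actual solution isn’t preserved above, so here is a hedged sketch of one way to do it (my own approach, not necessarily the original one): a tiny C++ filter that prefixes each line of standard input with its ordinal.

```cpp
#include <iostream>
#include <string>

// Read lines from stdin and prefix each with its ordinal,
// e.g. "Item one" becomes "1. Item one".
int main() {
    std::string line;
    for (long n = 1; std::getline(std::cin, line); ++n)
        std::cout << n << ". " << line << '\n';
    return 0;
}
```

After compiling it to, say, numlines, one can filter the whole buffer through it from within Vim with :%!./numlines.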

Compared Two Poisson Variables


Background

Last Friday, I had to submit a homework which required me to evaluate $\Pr(A > B)$ and $\Pr(A = B)$, where $A$ and $B$ were two independent Poisson random variables with parameters $\alpha$ and $\beta$ respectively.

Problem

I then started evaluating the sum.

\[ \Pr(A > B) = \sum_{i = 1}^\infty \sum_{j = 0}^{i - 1} \frac{e^{-\alpha} \alpha^i}{i!} \cdot \frac{e^{-\beta} \beta^j}{j!} \]

Then I was stuck. I couldn’t compute this sum either.

\[ \Pr(A = B) = \sum_{i = 0}^\infty \frac{e^{-(\alpha + \beta)} \alpha^i \beta^i}{(i!)^2} \]

Fact

I googled for a solution for hours, and after I saw equation (3.1) in a paper, I gave up finding exact solutions.1 As a supporter of free software, I avoided using M$ Ex*, and wrote a program in C++ to approximate the above probabilities by directly adding them term by term.

Source code
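The original source code is not reproduced here; below is a minimal C++ sketch consistent with the sample output (the function and variable names are mine, and the original program may well have differed).

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Table of P(X = k), k = 0..n, for X ~ Poisson(lambda), built
// iteratively so that no large factorial is ever computed.
std::vector<double> poisson_pmf_table(double lambda, int n) {
    std::vector<double> pmf(n + 1);
    pmf[0] = std::exp(-lambda);
    for (int k = 1; k <= n; ++k)
        pmf[k] = pmf[k - 1] * lambda / k;
    return pmf;
}

int main() {
    double alpha, beta;
    int n;
    std::cout << "Assume that Poisson r.v. A and B are independent\n"
              << "Parameter for A: ";
    std::cin >> alpha;
    std::cout << "Parameter for B: ";
    std::cin >> beta;
    std::cout << "Number of terms to be added (100 <= N <= 1000): ";
    std::cin >> n;

    const std::vector<double> pa = poisson_pmf_table(alpha, n);
    const std::vector<double> pb = poisson_pmf_table(beta, n);

    // Accumulate P(A > B), P(A < B) and P(A = B) term by term.
    double gt = 0.0, lt = 0.0, eq = 0.0;
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= n; ++j) {
            const double term = pa[i] * pb[j];
            if (i > j)      gt += term;
            else if (i < j) lt += term;
            else            eq += term;
        }

    std::cout << "P(A > B) = " << gt << ", P(A < B) = " << lt
              << ", P(A = B) = " << eq << '\n';
    return 0;
}
```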

Sample output

Assume that Poisson r.v. A and B are indepedent
Parameter for A: 1.6
Parameter for B: 1.4
Number of terms to be added (100 <= N <= 1000): 8
P(A > B) = 0.423023, P(A < B) = 0.335224, P(A = B) = 0.241691

Lessons learnt

  1. A one-line method for writing the content of a function which returns the factorial of a number.

    URL: http://progopedia.com/example/factorial/

  2. Evaluation of a function inside GDB

    URL: http://stackoverflow.com/q/1354731/


  1. Keller, J. B. (1994). A characterization of the Poisson distribution and the probability of winning a game. The American Statistician, 48(4), 294–298. 

Calculating the Volume of a Triangular Pyramid in a Hard Way


Background

There is an easy way of calculating the volume of $\{(x,y,z) \in \R^3 \mid 0 \le x,y,z \le t,\ x + y + z \le t\}$: just consider the permutations of $x,y,z$.1 This can be easily generalized to $n$ dimensions.

Another way using multiple integrals

\[ \begin{split}
& \text{Let } A := \left\{ (x_1,\ldots,x_n) \in \R^n \,\middle|\, 0 \le x_i \le t \,\forall\, 1 \le i \le n,\ \sum\limits_{i = 1}^n x_i \le t \right\}. \\
& \text{Volume of } A \\
=& \idotsint\limits_A \ud x_n \cdots \ud x_3 \ud x_2 \ud x_1 \\
=& \int_{0}^{t} \int_{0}^{t - x_1} \int_{0}^{t - x_1 - x_2} \cdots \int_{0}^{t - \sum\limits_{i = 1}^{n - 1} x_i} \ud x_n \cdots \ud x_3 \ud x_2 \ud x_1 \\
=& \int_{0}^{t} \int_{0}^{t - x_1} \int_{0}^{t - x_1 - x_2} \cdots \int_{0}^{t - \sum\limits_{i = 1}^{n - 2} x_i} \left(t - \sum\limits_{i = 1}^{n - 1} x_i\right) \ud x_{n - 1} \cdots \ud x_3 \ud x_2 \ud x_1 \\
=& \int_{0}^{t} \int_{0}^{t - x_1} \int_{0}^{t - x_1 - x_2} \cdots \int_{0}^{t - \sum\limits_{i = 1}^{n - 2} x_i} x_{n - 1} \ud x_{n - 1} \cdots \ud x_3 \ud x_2 \ud x_1 \\
=& \int_{0}^{t} \int_{0}^{t - x_1} \int_{0}^{t - x_1 - x_2} \cdots \int_{0}^{t - \sum\limits_{i = 1}^{n - 3} x_i} \frac{1}{2!} \left(t - \sum\limits_{i = 1}^{n - 2} x_i\right)^2 \ud x_{n - 2} \cdots \ud x_3 \ud x_2 \ud x_1 \\
=& \int_{0}^{t} \int_{0}^{t - x_1} \int_{0}^{t - x_1 - x_2} \cdots \int_{0}^{t - \sum\limits_{i = 1}^{n - 3} x_i} \frac{x_{n - 2}^2}{2!} \ud x_{n - 2} \cdots \ud x_3 \ud x_2 \ud x_1 \\
=& \int_{0}^{t} \int_{0}^{t - x_1} \int_{0}^{t - x_1 - x_2} \cdots \int_{0}^{t - \sum\limits_{i = 1}^{n - 4} x_i} \frac{1}{3!} \left(t - \sum\limits_{i = 1}^{n - 3} x_i\right)^3 \ud x_{n - 3} \cdots \ud x_3 \ud x_2 \ud x_1 \\
& \vdots \\
=& \int_{0}^{t} \frac{(t - x_1)^{n - 1}}{(n - 1)!} \ud x_1 \\
=& \frac{t^n}{n!}
\end{split} \]

(Each step of the form $t - \sum_{i = 1}^{k} x_i \leadsto x_k$ is the reflection $x_k \mapsto \bigl(t - \sum_{i = 1}^{k - 1} x_i\bigr) - x_k$, which leaves the range of integration unchanged.)

Why use this method?

To calculate its centre of mass.

\[ \begin{split}
& \text{First component of its centre of mass} \\
=& \frac{n!}{t^n} \idotsint\limits_A x_1 \ud x_n \cdots \ud x_3 \ud x_2 \ud x_1 \\
=& \frac{n!}{t^n} \int_{0}^{t} x_1 \int_{0}^{t - x_1} \int_{0}^{t - x_1 - x_2} \cdots \int_{0}^{t - \sum\limits_{i = 1}^{n - 1} x_i} \ud x_n \cdots \ud x_3 \ud x_2 \ud x_1 \\
=& \frac{n!}{t^n} \int_{0}^{t} x_1 \,\frac{(t - x_1)^{n - 1}}{(n - 1)!} \ud x_1 \\
=& \frac{n!}{t^n} \left( \left. -x_1 \,\frac{(t - x_1)^{n}}{n!} \right|_{0}^t + \int_{0}^{t} \frac{(t - x_1)^n}{n!} \ud x_1 \right) \\
=& \frac{n!}{t^n} \left( 0 + \frac{t^{n + 1}}{(n + 1)!} \right) \\
=& \frac{t}{n + 1}
\end{split} \]

By symmetry, we conclude that the centre of mass is $\left(\frac{t}{n + 1}, \frac{t}{n + 1}, \ldots, \frac{t}{n + 1}\right) \in \R^n$.
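As a sanity check (not in the original post), both results can be estimated by a Monte Carlo simulation in C++ for $t = 1$; the dimension, sample size and seed below are my own choices.

```cpp
#include <iostream>
#include <random>

// Estimate the volume of A = {x in [0,1]^n : sum x_i <= 1} and the
// first coordinate of its centre of mass; expect 1/n! and 1/(n+1).
int main() {
    const int n = 4;
    const long trials = 5000000;

    std::mt19937 gen(2015);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    long hits = 0;
    double sum_x1 = 0.0;
    for (long i = 0; i < trials; ++i) {
        const double x1 = u(gen);
        double s = x1;
        for (int j = 1; j < n; ++j) s += u(gen);
        if (s <= 1.0) {          // the sample point lies in A
            ++hits;
            sum_x1 += x1;
        }
    }

    std::cout << "volume estimate: " << double(hits) / trials
              << " (expect 1/24 = 0.041667)\n"
              << "first c.o.m. coordinate: " << sum_x1 / hits
              << " (expect 1/5 = 0.2)\n";
    return 0;
}
```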


  1. Simplex. (2015, March 29). In Wikipedia, The Free Encyclopedia. Retrieved 15:34, April 6, 2015, from http://en.wikipedia.org/w/index.php?title=Simplex&oldid=654074423 

Calculate the Expected Waiting Time in a Hard Way


Background

If one assumes that servers $X_1$ and $X_2$ have exponential service times with rates $\lambda_1$ and $\lambda_2$ respectively (i.e. $X_i \sim \Exp(\lambda_i),\, i = 1,2$), then one can follow the standard argument and say that the waiting time $\min\left\{ X_1, X_2 \right\} \sim \Exp(\lambda_1 + \lambda_2)$, so the expected waiting time is $1/(\lambda_1 + \lambda_2)$.

Problem

I tried finding the expected waiting time by conditioning on $X_1 - X_2$.

\begin{equation} \begin{split}
& \Pr(X_1 > X_2) \\
=& \int_{0}^{\infty} p_{X_1}(x_1) \Pr(X_2 < x_1) \ud x_1 \\
=& \int_{0}^{\infty} \lambda_1 e^{-\lambda_1 x_1} (1 - e^{-\lambda_2 x_1}) \ud x_1 \\
=& \int_{0}^{\infty} \lambda_1 e^{-\lambda_1 x_1} \ud x_1 - \int_{0}^{\infty} \lambda_1 e^{-(\lambda_1 + \lambda_2) x_1} \ud x_1 \\
=& 1 + \left. \frac{\lambda_1}{\lambda_1 + \lambda_2} e^{-(\lambda_1 + \lambda_2) x_1} \right|_{0}^{\infty} \\
=& 1 - \frac{\lambda_1}{\lambda_1 + \lambda_2} \\
=& \frac{\lambda_2}{\lambda_1 + \lambda_2}
\end{split} \label{eq:pr_min} \end{equation}

Similarly, one has $\Pr(X_1 \le X_2) = \lambda_1/(\lambda_1 + \lambda_2)$.

\begin{equation} \begin{split}
& \E[\min\left\{ X_1, X_2 \right\}] \\
=& \E[\min\left\{ X_1, X_2 \right\} \mid X_1 > X_2] \Pr(X_1 > X_2) \\
+& \E[\min\left\{ X_1, X_2 \right\} \mid X_1 \le X_2] \Pr(X_1 \le X_2) \\
=& \E[X_2] \Pr(X_1 > X_2) + \E[X_1] \Pr(X_1 \le X_2) \\
=& \frac{1}{\lambda_2} \frac{\lambda_2}{\lambda_1 + \lambda_2} + \frac{1}{\lambda_1} \frac{\lambda_1}{\lambda_2 + \lambda_1} \\
=& \frac{2}{\lambda_1 + \lambda_2}
\end{split} \label{eq:wrong} \end{equation}

This is different from what we expect. What’s wrong with the above calculation?

Solution

I thought hard about the meaning of $\E[\min\left\{ X_1, X_2 \right\} \mid X_1 > X_2]$, and found out that this conditional expectation won’t be helpful, because

\[ \E[\min\left\{ X_1, X_2 \right\} \mid X_1 > X_2] = \frac{\int_{0}^{\infty} \int_{0}^{x_1} x_2 \,\lambda_1 \lambda_2 e^{-\lambda_1 x_1 - \lambda_2 x_2} \ud x_2 \ud x_1}{\Pr(X_1 > X_2)}, \]

which already requires the double integral below; in particular, it is not simply $\E[X_2]$.

Actually, one can divide it into two halves.

\begin{equation} \begin{split}
& \E[\min\left\{ X_1, X_2 \right\}] \\
=& \int_{0}^{\infty} \int_{0}^{x_1} x_2 \lambda_1 \lambda_2 e^{-\lambda_1 x_1 - \lambda_2 x_2} \ud x_2 \ud x_1 \\
+& \int_{0}^{\infty} \int_{0}^{x_2} x_1 \lambda_2 \lambda_1 e^{-\lambda_2 x_2 - \lambda_1 x_1} \ud x_1 \ud x_2
\end{split} \label{eq:head} \end{equation}

By observing the symmetry between the subscripts ‘1’ and ‘2’ in the above equation, we only need to evaluate one of them.

\begin{equation} \begin{split}
& \int_{0}^{\infty} \int_{0}^{x_1} x_2 \lambda_1 \lambda_2 e^{-\lambda_1 x_1 - \lambda_2 x_2} \ud x_2 \ud x_1 \\
=& \int_{0}^{\infty} \lambda_1 \lambda_2 e^{-\lambda_1 x_1} \left( \int_{0}^{x_1} x_2 e^{-\lambda_2 x_2} \ud x_2 \right) \ud x_1 \\
=& \int_{0}^{\infty} \lambda_1 \lambda_2 e^{-\lambda_1 x_1} \left( \left. -x_2 \cdot \frac{e^{-\lambda_2 x_2}}{\lambda_2} \right|_{x_2 = 0}^{x_2 = x_1} + \int_{0}^{x_1} \frac{e^{-\lambda_2 x_2}}{\lambda_2} \ud x_2 \right) \ud x_1 \\
=& \int_{0}^{\infty} \lambda_1 \lambda_2 e^{-\lambda_1 x_1} \left( -x_1 \cdot \frac{e^{-\lambda_2 x_1}}{\lambda_2} - \left. \frac{e^{-\lambda_2 x_2}}{\lambda_2^2} \right|_{x_2 = 0}^{x_2 = x_1} \right) \ud x_1 \\
=& \int_{0}^{\infty} \lambda_1 \lambda_2 e^{-\lambda_1 x_1} \left( -x_1 \cdot \frac{e^{-\lambda_2 x_1}}{\lambda_2} + \frac{1 - e^{-\lambda_2 x_1}}{\lambda_2^2} \right) \ud x_1 \\
=& \int_{0}^{\infty} -\lambda_1 x_1 e^{-(\lambda_1 + \lambda_2) x_1} \ud x_1 + \int_{0}^{\infty} \frac{\lambda_1}{\lambda_2} \left(e^{-\lambda_1 x_1} - e^{-(\lambda_1 + \lambda_2) x_1}\right) \ud x_1 \\
=& \lambda_1 \left( \left. \frac{x_1 e^{-(\lambda_1 + \lambda_2) x_1}}{\lambda_1 + \lambda_2} \right|_{0}^{\infty} - \int_{0}^{\infty} \frac{e^{-(\lambda_1 + \lambda_2) x_1}}{\lambda_1 + \lambda_2} \ud x_1 \right) \\
+& \frac{\lambda_1}{\lambda_2} \left( \left. -\frac{e^{-\lambda_1 x_1}}{\lambda_1} \right|_{0}^{\infty} + \left. \frac{e^{-(\lambda_1 + \lambda_2) x_1}}{\lambda_1 + \lambda_2} \right|_{0}^{\infty} \right) \\
=& \lambda_1 \left( 0 + \left. \frac{e^{-(\lambda_1 + \lambda_2) x_1}}{(\lambda_1 + \lambda_2)^2} \right|_{0}^{\infty} \right) + \frac{\lambda_1}{\lambda_2} \left( \frac{1}{\lambda_1} - \frac{1}{\lambda_1 + \lambda_2} \right) \\
=& -\frac{\lambda_1}{(\lambda_1 + \lambda_2)^2} + \frac{1}{\lambda_2} - \frac{\lambda_1}{\lambda_2 (\lambda_1 + \lambda_2)}
\end{split} \label{eq:half_int} \end{equation}

Similarly, one has

\begin{equation} \begin{split}
& \int_{0}^{\infty} \int_{0}^{x_2} x_1 \lambda_2 \lambda_1 e^{-\lambda_2 x_2 - \lambda_1 x_1} \ud x_1 \ud x_2 \\
=& -\frac{\lambda_2}{(\lambda_2 + \lambda_1)^2} + \frac{1}{\lambda_1} - \frac{\lambda_2}{\lambda_1 (\lambda_2 + \lambda_1)}.
\end{split} \label{eq:half_int2} \end{equation}

Substitute \eqref{eq:half_int} and \eqref{eq:half_int2} into \eqref{eq:head}.

\begin{equation} \begin{split}
& \E[\min\left\{ X_1, X_2 \right\}] \\
=& \left( -\frac{\lambda_1}{(\lambda_1 + \lambda_2)^2} + \frac{1}{\lambda_2} - \frac{\lambda_1}{\lambda_2 (\lambda_1 + \lambda_2)} \right) \\
+& \left( -\frac{\lambda_2}{(\lambda_2 + \lambda_1)^2} + \frac{1}{\lambda_1} - \frac{\lambda_2}{\lambda_1 (\lambda_2 + \lambda_1)} \right) \\
=& -\left( \frac{\lambda_1}{(\lambda_2 + \lambda_1)^2} + \frac{\lambda_2}{(\lambda_2 + \lambda_1)^2} \right) + \left( \frac{1}{\lambda_1} + \frac{1}{\lambda_2} \right) \\
-& \left( \frac{\lambda_1}{\lambda_2 (\lambda_1 + \lambda_2)} + \frac{\lambda_2}{\lambda_1 (\lambda_2 + \lambda_1)} \right) \\
=& -\frac{1}{\lambda_1 + \lambda_2} + \frac{\lambda_1 + \lambda_2}{\lambda_1 \lambda_2} - \frac{\lambda_1^2 + \lambda_2^2}{\lambda_1 \lambda_2 (\lambda_1 + \lambda_2)} \\
=& -\frac{1}{\lambda_1 + \lambda_2} + \frac{(\lambda_1 + \lambda_2)^2 - (\lambda_1^2 + \lambda_2^2)}{\lambda_1 \lambda_2 (\lambda_1 + \lambda_2)} \\
=& -\frac{1}{\lambda_1 + \lambda_2} + \frac{2\lambda_1 \lambda_2}{\lambda_1 \lambda_2 (\lambda_1 + \lambda_2)} \\
=& -\frac{1}{\lambda_1 + \lambda_2} + \frac{2}{\lambda_1 + \lambda_2} \\
=& \frac{1}{\lambda_1 + \lambda_2}
\end{split} \label{eq:fin} \end{equation}

This is consistent with what we expect. I finally understood what went wrong in \eqref{eq:wrong}: $X_1$ isn’t independent of $X_1 - X_2$ (and neither is $X_2$), so $\E[\min\left\{ X_1, X_2 \right\} \mid X_1 \le X_2]$ cannot be replaced by $\E[X_1]$.

Generalization

By induction, we can generalize \eqref{eq:fin} to the expected waiting time for $n$ servers in parallel: if $X_i \sim \Exp(\lambda_i) \,\forall 1 \le i \le n$, then

\[ \E[\min\left\{ X_1, \ldots, X_n \right\}] = \frac{1}{\sum\limits_{k = 1}^n \lambda_k}. \]
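As a closing sanity check (again, not from the original post), a short C++ Monte Carlo simulation agrees with $1/(\lambda_1 + \lambda_2)$; the rates and the seed below are my own choices.

```cpp
#include <algorithm>
#include <iostream>
#include <random>

// Compare the simulated mean of min{X1, X2} with 1/(lambda1 + lambda2)
// for independent exponential X1 and X2.
int main() {
    const double lambda1 = 1.6, lambda2 = 1.4;
    const long trials = 1000000;

    std::mt19937 gen(42);
    std::exponential_distribution<double> x1(lambda1), x2(lambda2);

    double sum = 0.0;
    for (long i = 0; i < trials; ++i)
        sum += std::min(x1(gen), x2(gen));

    std::cout << "simulated   E[min] = " << sum / trials << '\n'
              << "theoretical E[min] = " << 1.0 / (lambda1 + lambda2)
              << '\n';
    return 0;
}
```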