
Random Talk on Random Thoughts

The $p$-th Root of a Convex Homogeneous Function of Degree $p$


Background

I was stuck at a question from a convex analysis book for more than a week. I couldn’t directly find its solution.

Let $K$ be a convex cone in $\R^n$ and let $p \ge 1$. The function $f: K \to [0,\infty)$ is convex and homogeneous of degree $p$, i.e.

\[ f(\lambda x) = \lambda^p f(x),\quad \forall x \in K, \forall \lambda \ge 0. \]

Suppose also that $f(x) > 0 \,\forall x \in K \setminus \{0\}$. Show that $g(x) := [f(x)]^{\frac1p}$ is convex on $K$.

Resolution

I saw a proof for the Minkowski inequality using the convexity of the function $x \mapsto x^p$ on the webpage for a real analysis course. In fact, by replacing $\norm{\cdot}$ with $g(\cdot)$ in the proof of Theorem 7, we’re done.

First, observe that

\begin{equation} g(\lambda x) = [f(\lambda x)]^{\frac1p} = \lambda g(x),\quad \forall x \in K, \forall \lambda \ge 0 \label{homo} \end{equation}

Therefore, it suffices to show that

\begin{equation} g(x + y) \le g(x) + g(y). \label{goal} \end{equation}

The assumption $\norm{\vect{u}}_p = \norm{\vect{v}}_p = 1$ in that proof can be changed to $g(x) = g(y) = 1$. Then we have $f(x) = f(y) = 1$. By the convexity of $f$ on $K$, for all such $x, y \in K$ and $\forall \lambda \in [0,1]$,

\begin{equation} f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y) \le 1. \end{equation}

Since $f(x) \ge 0 \,\forall x \in K$ and $t \mapsto t^{\frac1p}$ is increasing on $[0,\infty)$, we can say that

\begin{equation} g(\lambda x + (1 - \lambda) y) \le 1. \label{trick} \end{equation}

Finally, for any $u,v \in K \setminus \{0\}$ (if $u = 0$ or $v = 0$, then $g(u) = 0$ or $g(v) = 0$ by \eqref{homo}, and \eqref{goal} holds trivially), we let

\begin{equation} \begin{alignedat}{2} \lambda &{}= \frac{g(u)}{g(u) + g(v)} &{} \qquad x &{}= \frac u{g(u)} \\ 1 - \lambda &{}= \frac{g(v)}{g(u) + g(v)}&{} \qquad y &{}= \frac v{g(v)}. \end{alignedat} \label{coeff} \end{equation}

Making use of \eqref{homo}, we substitute \eqref{coeff} into \eqref{trick}.

\begin{align*} g \left( \frac{g(u)}{g(u) + g(v)} \frac u{g(u)} + \frac{g(v)}{g(u) + g(v)} \frac v{g(v)} \right) &\le 1 \\ g \left(\frac u{g(u)+g(v)} + \frac v{g(u)+g(v)} \right) &\le 1 \\ g \left(\frac{u + v}{g(u) + g(v)} \right) &\le 1 \\ g(u + v) &\le g(u) + g(v) \end{align*}

We get \eqref{goal} at last.
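As a quick numerical sanity check (my own choice of example, not part of the exercise), take $K$ to be the positive orthant of $\R^2$ and $f(x) = x_1^p + x_2^p$, which is convex, homogeneous of degree $p$, and positive on $K \setminus \{0\}$; then $g$ is the $p$-norm and \eqref{goal} is Minkowski's inequality:

```python
import random

# Sanity check only (my own sample f and p, not part of the exercise):
# on the positive orthant K of R^2, f(x) = x1^p + x2^p is convex,
# homogeneous of degree p, and positive on K \ {0}; its p-th root g is
# the p-norm, so g(u + v) <= g(u) + g(v) is Minkowski's inequality.
p = 3.0

def f(x):
    return sum(t ** p for t in x)

def g(x):
    return f(x) ** (1.0 / p)

random.seed(0)
for _ in range(1000):
    u = [random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)]
    v = [random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)]
    s = [a + b for a, b in zip(u, v)]
    assert g(s) <= g(u) + g(v) + 1e-9
print("g is subadditive on all sampled pairs")
```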

A Queue in $T_1$ Spaces


To find a finite subcover of a countable open cover $\left\{U_n\right\}_{n \in \N}$ of a $T_1$ space $X$ which has the Bolzano–Weierstrass property, one has to pick a sequence in $X$ in order to apply that property. To build a link between the sequence and the countable open cover, I picked a point $x_n$ from each $U_n$, and then built an infinite set $A$ consisting of the points $x_n \,\forall n \in \N$. I knew that $A$ is compact, but what does that have to do with the countable open cover?

Reading my notes again, I realised the trick of choosing $x_q$: it shouldn’t be chosen from any $U_k$ whose index $k$ is smaller than $n_q$. In other words, $x_q \in U_{n_q}$ and $x_q \notin U_k \,\forall k < n_q$. If we fail to make such a choice at some stage, then finitely many $U_k$'s already cover $X$, so $X$ is the finite union of $U_k$'s and it's countably compact. Otherwise, we continue this process and set up an infinite set $A$ in the same way as before: $A := \left\{x_q \in X : q \in \N \right\}$. One will see that $A$ has no cluster point, contradicting the Bolzano–Weierstrass property of $X$.

Assume that $x \in X$. Then $\exists m \in \N$ so that $x \in U_m$ and $x \notin U_j \,\forall j < m$. I was puzzled by the line $n_N < m \le n_{N + 1}$. Using the analogy of "queuing", I understood what's going on.

a queue of open sets in the countable open cover

Since $X$ is $T_1$, which is equivalent to singletons being closed, deleting finitely many singletons from an open set won't change its openness. We have a neighbourhood $V := U_m \setminus \left\{x_j : j \le N \text{ and } x_j \ne x \right\}$ of $x$ which intersects $A$ in at most one point, namely $x$ itself. Thus, the deleted neighbourhood $V \setminus \{x\}$ of $x$ satisfies $(V \setminus \{x\}) \cap A = \varnothing$. Hence, $\forall x \in X$, $x \notin A'$.

Mountain Climbing in Lindelöf $T_3$ Spaces


Motivation

To deduce that compact $T_2$ spaces are $T_4$, I tried using the arguments of the proof in the previous post, but I couldn’t do it. Therefore, I read another proof about the normality of regular Lindelöf spaces instead.

The proof at first glance

Suppose that $X$ is the given space, and $H$ and $K$ are the given disjoint closed sets. $\left\{G_n^* : n \in \omega\right\}$ and $\left\{W_n^* : n \in \omega\right\}$ are countable open covers of $H$ and $K$ respectively. On p. 91 of Davis's Topology book, I saw the following equations.

\begin{alignat*}{2} U_0 &= G_0^* &\quad V_0 &= W_0^* \setminus \overline{U_0} \\ U_1 &= G_1^* \setminus \overline{V_0} &\quad V_1 &= W_1^* \setminus (\overline{U_0} \cup \overline{U_1}) \\ U_2 &= G_2^* \setminus (\overline{V_0} \cup \overline{V_1}) &\quad V_2 &= W_2^* \setminus (\overline{U_0} \cup \overline{U_1} \cup \overline{U_2}) \\ \vdots \\ U_n &= G_n^* \setminus \bigcup_{k < n} \overline{V_k} &\quad V_n &= W_n^* \setminus \bigcup_{k \le n} \overline{U_k} \\ U &= \bigcup_{n = 0}^\infty U_n &\quad V &= \bigcup_{n = 0}^\infty V_n \end{alignat*}

I didn’t know what to do when I first saw these equalities. It’s hard to understand why $U \cap V = \varnothing$ by just reading the symbols. Luckily, an analogy of climbing mountains is given. Thanks to it, I finally understood how the open sets $U_n$ and $V_n$ are built step by step while preserving disjointness and openness.
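Writing the disjointness out once made it click for me: for any indices $m$ and $n$,

\[ m \le n \implies U_m \cap V_n \subseteq U_m \cap \left( W_n^* \setminus \overline{U_m} \right) = \varnothing, \qquad m > n \implies U_m \cap V_n \subseteq \left( G_m^* \setminus \overline{V_n} \right) \cap V_n = \varnothing, \]

so every $U_m$ misses every $V_n$, and hence $U \cap V = \varnothing$.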

Normal Compact $T_2$ Spaces


To show that any compact Hausdorff space is $T_4$, one may first show that it’s $T_3$. Seeing this through set-theoretic notation alone may be quite difficult. For mediocre students, the contents of the following section may sound unnatural.

A sketch of the proof

Suppose that we want to separate a point $x \notin A$ and a closed set $A$ in a compact $T_2$ space $X$ by two disjoint open sets $U$ and $V$ so that $x \in U$ and $A \subseteq V$. A standard proof is to apply the $T_2$ property of $X$ to $x$ and each $y \in A$ so as to yield two disjoint open sets $U_y,V_y \in \mathcal{T}$ such that $x \in U_y$ and $y \in V_y$. Since $A$ should be contained in an open set $V$, in other words, an open cover of $A$ is needed, one might be tempted to construct the following union of open sets.

\begin{equation} V = \bigcup_{y \in A} V_y \label{Vinf} \end{equation}

However, one can’t ensure that the following infinite intersection of open sets is open.

\begin{equation} U = \bigcap_{y \in A} U_y \label{Uinf} \end{equation}

For instance, by the Nested Interval Theorem,

\[ \{0\} = \bigcap_{n \in \N} \left( -\frac{1}{n},\frac{1}{n} \right). \]

Thus, one applies the compactness of $X$ to get finite versions of \eqref{Vinf} and \eqref{Uinf}.
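Spelled out, since $A$ is a closed subset of the compact space $X$, it is compact, so finitely many $V_y$'s already cover $A$: there exist $y_1, \ldots, y_k \in A$ such that

\[ V = \bigcup_{i = 1}^{k} V_{y_i} \supseteq A \quad \text{and} \quad U = \bigcap_{i = 1}^{k} U_{y_i} \ni x. \]

Now $U$ is open as a finite intersection of open sets, and $U \cap V = \varnothing$ because each $V_{y_i}$ is disjoint from $U_{y_i} \supseteq U$.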

A snapshot of this fact

Since it’s so difficult to remember every detail of the proof, I love illustrating it using a picture.

snapshot capturing central idea of proof

How can a compact regular space be normal? See my next post.

MathJax Local Configuration File (2)


Background

In my opinion, the default displays of the $\rm \LaTeX$ commands for $\Re z$ and $\Im z$ don’t look good.

default Im(z) display

Problem

I want to redefine the \Im command in my local MathJax configuration file source/javascripts/MathJaxLocal.js. I used a script inside the <body> tag to load my local MathJax configurations from GitHub.1 If I want to test the configurations, I’ll have to push my changes to GitHub before I can see the effect and figure out the errors, which is incredibly slow and inefficient.

How can I locally debug my MathJax configurations?

Read Lucas's Theorem


Suppose that $P(z)$ is a polynomial in the complex plane. The theorem says that every half plane containing all the zeros of $P(z)$ also contains all the zeros of $P'(z)$.

Ahlfors says that a directed line $z = a + bt$ determines a right half plane consisting of all points with $\Im[(z - a)/b] < 0$… (see Chap. 1, Sec. 2.3)1 After drawing the figure for $z = -(1 + i)t$, I realized that I should pay attention to the word “directed” and interpret “right” as “to the right of the direction vector $b$ drawn on the line $z = a + bt$”.

The following equation puzzled me for a while. (see Chap. 2, Sec. 1.3)2

\begin{equation} \Im\left( \frac{z - \alpha_k}{b} \right) = \Im\left( \frac{z - a}{b} \right) - \Im\left( \frac{\alpha_k - a}{b} \right) > 0 \label{stuck} \end{equation}

I tried sketching a diagram to understand what’s going on, but it wasn’t so helpful. In fact, the above equation simply starts from $z - \alpha_k = (z - a) - (\alpha_k - a)$.

The whole proof makes use of a half plane $H := \{z \in \C \mid \Im[(z - a) / b] < 0\}$ in which all zeros $\alpha_1,\ldots,\alpha_n$ of $P(z)$ lie, and it follows the logic of proof by contradiction: each $\alpha_k$ lies in $H$ while $z$ is assumed not to. It ends with the conclusion that $\Im(b P'(z) / P(z)) < 0$, so that $P'(z) \ne 0$, by \eqref{stuck} and the equation

\[ \frac{P'(z)}{P(z)} = \sum_{k = 1}^n \frac{1}{z - \alpha_k}. \]

Finally, I saw a corollary of this theorem: all zeros of $P'(z)$ are inside the smallest convex polygon in which all zeros of $P(z)$ lie.
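A small numerical illustration of the key inequality (the polynomial's zeros and the test points below are my own sample data, not from Ahlfors): with $a = 0$ and $b = 1$, the half plane is $H = \{z : \Im z < 0\}$; placing all $\alpha_k$ in $H$ and evaluating the logarithmic derivative at points outside $H$:

```python
# Numerical illustration with my own sample data: take the directed line
# z = a + bt with a = 0 and b = 1, so H = {z : Im(z) < 0}.  All zeros
# alpha_k of P are placed in H, and P'(z)/P(z) = sum 1/(z - alpha_k) is
# evaluated at points z outside H; Im(b P'(z)/P(z)) should be negative,
# which forces P'(z) != 0 there.
b = 1.0 + 0.0j
zeros = [-1.0 - 0.5j, 2.0 - 1.0j, -0.3 - 2.0j]  # all satisfy Im < 0

def log_derivative(z, zeros):
    """P'(z)/P(z) written as the sum of 1/(z - alpha_k)."""
    return sum(1.0 / (z - alpha) for alpha in zeros)

for z in [0.0 + 0.0j, 1.0 + 1.0j, -2.0 + 0.5j]:  # Im(z) >= 0, outside H
    assert (b * log_derivative(z, zeros)).imag < 0
print("Im(b P'(z)/P(z)) < 0 at every sampled z outside H")
```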


  1. Ahlfors, L. (1979). Complex analysis. Auckland: McGraw-Hill. 

  2. Ibid. 

Understood How Zorn's Lemma Implies the Axiom of Choice


Many math books that I’ve read referred me to other books for the proof of the equivalence of the Axiom of Choice and Zorn’s Lemma. This afternoon, I spent more than two hours understanding the proof that the latter implies the former in Topology by Davis.

  1. Set up a non-empty partially ordered set $(\mathcal{P},\le)$
  2. Let $\mathcal{T}$ be any non-empty chain in $\mathcal{P}$.
  3. Prove that $\cup \mathcal{T} \in \mathcal{P}$.
  4. Apply Zorn’s Lemma to get a maximal element $g \in \mathcal{P}$.
  5. Use the maximality of $g$ to claim that the domain of $g$ equals the family of non-empty subsets $(S_i)_{i \in I}$ from which elements $(x_i)_{i \in I}$ are chosen.

In the book, $\le$ means function extension, and

\[ \mathcal{P} = \{f \mid f \text{ is a function, } dom\,f \subseteq (S_i)_{i \in I}, f(x) \in x \,\forall x \in dom\,f\}. \]

Step (3) is proved step-by-step according to the definition of $\mathcal{P}$. To see that $\cup\mathcal{T}$ is a function, one usually shows that if $(x_1,y_1),(x_2,y_2) \in \cup\mathcal{T}$ and $x_1 = x_2$, then $y_1 = y_2$; the book uses the contrapositive form of this statement. I was stuck at the sentence “Now, for a set to be an element of $dom\cup\mathcal{T}$, it must be an element of some member of $\mathcal{T}$.” At first, I omitted the phrase “some member of”, and stopped for half an hour. Reading the next sentence, “Hence $dom\cup\mathcal{T} \subseteq (S_i)_{i \in I}$”, I knew how to interpret the sentence where I was stuck: if $S \in dom\cup\mathcal{T}$, then $\exists f \in \mathcal{T}$ such that $S \in dom\,f$. Since $f \in \mathcal{T} \subseteq \mathcal{P}$, $dom\,f \subseteq (S_i)_{i \in I}$. Then $S \in (S_i)_{i \in I}$, and thus $dom\cup\mathcal{T} \subseteq (S_i)_{i \in I}$.
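The flavour of Step (3) can be mimicked in a small sketch (entirely my own illustration, with a finite family standing in for $(S_i)_{i \in I}$ and dicts standing in for partial choice functions):

```python
# Illustrative sketch only: finite stand-ins for the family (S_i).
# A "partial choice function" maps some of the sets (as frozensets) to one
# of their own elements; a chain is a sequence of such functions, each
# extending the one before; the union of a chain is again one.
family = [frozenset({1, 2}), frozenset({3, 4, 5}), frozenset({6})]

def is_choice_function(f):
    return all(S in family and x in S for S, x in f.items())

chain = [
    {family[0]: 1},
    {family[0]: 1, family[1]: 4},
    {family[0]: 1, family[1]: 4, family[2]: 6},
]
union = {}
for f in chain:
    assert is_choice_function(f)
    union.update(f)          # consistent, since each f extends the previous

assert is_choice_function(union)
assert set(union.keys()) == set(family)  # a maximal element chooses from every S_i
print(union)
```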

Revised Absolute Convergence


I am not so satisfied with the following definition.1

\[ e^{i\theta} := \cos \theta + i \sin \theta \]

I remembered the proof for convergence of

\[ e^{x} := \sum_{i = 0}^{\infty} \frac{x^i}{i!} \]

for real numbers. I didn’t know if this could be extended to complex numbers. Therefore, I thought about the absolute convergence of complex-valued series. It’s expected that many proofs are similar to their real counterparts, such as the result that absolute convergence implies convergence. For real numbers, this result makes use of the Triangle Inequality and the Cauchy Convergence Criterion, and the key step is

\[ \abslr{\sum_{k = m + 1}^{n} a_k} \le \sum_{k = m + 1}^{n} \abs{a_k}. \]

Since the proof of the Cauchy Convergence Criterion for real numbers requires the Bolzano–Weierstrass Theorem, which says that every bounded sequence of real numbers has a convergent subsequence, I was stuck at this point.

Finally, I read another book, which said that if $(z_n)$ is a Cauchy sequence, and $\forall n \in \N$, $u_n := \Re(z_n)$ and $v_n := \Im(z_n)$, then $\forall \varepsilon > 0, \exists N \in \N$ such that $\forall m,n \ge N$,

\begin{align*} \abs{u_n - u_m} &\le \abs{z_n - z_m} < \varepsilon, \\ \abs{v_n - v_m} &\le \abs{z_n - z_m} < \varepsilon. \end{align*}

Then $(u_n)$ and $(v_n)$ are real-valued Cauchy sequences, which are convergent.2 This guarantees the convergence of $(z_n)$ in the complex plane.

To establish the absolute convergence of $\exp z$, we need the root test. The proofs can be borrowed from their counterparts in the set of real numbers. Ahlfors leaves the proof for $\sqrt[n]{n!} \to \infty$ to readers. I find Dan’s proof pretty easy.
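Out of curiosity I checked both facts numerically (a quick Python illustration with my own sample values, not Ahlfors's or Dan's argument):

```python
import cmath
import math

# Two quick checks with my own sample values (not Ahlfors's or Dan's
# argument): the n-th root of n! grows without bound, which is what the
# root test needs for sum z^n / n!, and a partial sum of the series
# approaches exp(i * theta) = cos(theta) + i sin(theta).
def nth_root_of_factorial(n):
    # exp((1/n) * sum log k) avoids computing the huge integer n!
    return math.exp(sum(math.log(k) for k in range(1, n + 1)) / n)

assert nth_root_of_factorial(100) > nth_root_of_factorial(10) > 2.0

theta = 1.0
partial = sum((1j * theta) ** k / math.factorial(k) for k in range(50))
assert abs(partial - cmath.exp(1j * theta)) < 1e-12
print(nth_root_of_factorial(100))
```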


  1. Brown, J. W., Churchill, R. V., & Lapidus, M. (1996). Complex variables and applications (Vol. 7). (pp. 17). New York: McGraw-Hill. 

  2. Ahlfors, L. (1979). Complex analysis. (pp. 34). Auckland: McGraw-Hill. 

Found an Upper Bound for the Modulus of Legendre Polynomials


Lemma

Suppose that $w: [a,b] \to \C$ is piecewise continuous. Then we have

\begin{equation} \abslr{\int_{a}^{b} w(t) \ud t} \le \int_{a}^{b} \abs{w(t)} \ud t. \label{lemma} \end{equation}

The trick is to treat the integral on the L.H.S. of \eqref{lemma} as a number.

\begin{align} \int_{a}^{b} w(t) \ud t &= r_0 e^{i\theta_0} \text{ for some } r_0 \ge 0 \text{ and } \theta_0 \in [-\pi,\pi) \label{trick1}\\ r_0 &= \int_{a}^{b} e^{-i\theta_0} w(t) \ud t \label{trick2} \end{align}

Then from \eqref{trick1}, to establish \eqref{lemma} it suffices to show that

\begin{equation} r_0 = \abslr{r_0 e^{i\theta_0}} \le \int_{a}^{b} \abs{w(t)} \ud t. \label{new_goal1} \end{equation}

Since \eqref{trick2} contains the factor $e^{-i\theta_0}$, which is absent from the R.H.S. of \eqref{lemma} and \eqref{new_goal1}, we add it back; this changes nothing because $\abslr{e^{-i\theta_0} w(t)} = \abs{w(t)}$. Thus, we need to show

\begin{equation} r_0 \le \int_{a}^{b} \abslr{e^{-i\theta_0} w(t)} \ud t. \label{new_goal2} \end{equation}

Looking at \eqref{trick2} again, we’re almost there! Using elementary properties of complex numbers and definite integrals for real-valued functions and the fact/definition that

\begin{equation} \Re\left(\int_{a}^{b} w(t) \ud t\right) = \int_{a}^{b} \Re[w(t)] \ud t, \label{Re-int} \end{equation}

we can write

\begin{equation} \begin{aligned} r_0 &= \Re\left(\int_{a}^{b} e^{-i\theta_0} w(t) \ud t\right) \\ &= \int_{a}^{b} \Re[e^{-i\theta_0} w(t)] \ud t\\ &\le \int_{a}^{b} \abslr{e^{-i\theta_0} w(t)} \ud t. \end{aligned} \label{fin} \end{equation}

Therefore, \eqref{new_goal2} is satisfied, and so is \eqref{lemma}.
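Here is a quick numerical sanity check of \eqref{lemma} (the sample integrand $w(t) = e^{it}$ and the midpoint rule are my own choices):

```python
import cmath
import math

# Numerical sanity check of the lemma with a sample integrand of my own
# choosing, w(t) = e^{it} on [0, pi]; here |int w dt| = 2 while
# int |w(t)| dt = pi, so the inequality is strict.
def midpoint_sum(f, a, b, n=10000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a, b = 0.0, math.pi
lhs = abs(midpoint_sum(lambda t: cmath.exp(1j * t), a, b))
rhs = midpoint_sum(lambda t: abs(cmath.exp(1j * t)), a, b)
assert lhs <= rhs
print(lhs, rhs)
```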

An upper bound for Legendre polynomials

I have never evaluated

\begin{equation} P_n(x) = \frac{1}{\pi} \int_0^\pi (x + i \sqrt{1 - x^2} \cos{\theta})^n \ud\theta, \label{pnx} \end{equation}

where $-1 \le x \le 1$ and $n = 0,1,2,\dots$

Applying \eqref{lemma}, one gets

\begin{equation} \begin{aligned} \abs{P_n(x)} &= \abslr{\frac{1}{\pi} \int_0^\pi (x + i \sqrt{1 - x^2} \cos{\theta})^n \ud\theta}\\ &\le \frac{1}{\pi} \int_0^\pi \abs{x + i \sqrt{1 - x^2} \cos{\theta}}^n \ud\theta\\ &\le \frac{1}{\pi} \int_0^\pi \ud\theta\\ &= 1 \end{aligned} \label{crux} \end{equation}

because

\begin{equation} \begin{aligned} &\quad\; \abs{x + i \sqrt{1 - x^2} \cos{\theta}}\\ &= \sqrt{x^2 + (1 - x^2) \cos^2{\theta}}\\ &\le \sqrt{x^2 + (1 - x^2)}\\ &= 1. \end{aligned} \label{norm-ub} \end{equation}
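As a sanity check of \eqref{crux}, one can evaluate $P_n$ via Bonnet's recurrence (a standard formula, though not used in this post) and confirm the bound numerically:

```python
# Numerical check via Bonnet's recurrence (standard, but not part of
# the argument above):
#   P_0(x) = 1, P_1(x) = x,
#   (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).
# Verify |P_n(x)| <= 1 on a grid in [-1, 1] for several n.
def legendre(n, x):
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

for n in range(11):
    for i in range(201):
        x = -1.0 + i * 0.01
        assert abs(legendre(n, x)) <= 1.0 + 1e-12, (n, x)
print("|P_n(x)| <= 1 verified for n = 0..10 on [-1, 1]")
```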