A little math quiz

rpenner, thank you for your exemplary comments and corrections. By morning I could see the 'wrong quadrant' but left it, hoping someone else at my level might have a go. As for the cube root... http://mathworld.wolfram.com/CubeRoot.html gives +(1/2) + i√3/2 ... by an arcane method I got your answer, but didn't want to argue with MathWorld.
If you'll check the MathWorld page again, that's the cube root of -1, not +1.
 
It is the tensor, or outer, product of vector spaces. It works (in general) like this.....

Suppose $$V,\,W$$ are vector spaces and $$V^*,\,W^*$$ are their dual spaces.

Then define a mapping $$V \otimes W: V^* \times W^* \to \mathbb{R}$$ and call every $$v \otimes w \in V \otimes W$$ a tensor, so that $$(v\otimes w)(\varphi,\psi)=v(\varphi)\,w(\psi) = \alpha \in \mathbb{R}$$. This is called a bilinear form.

Sticking my neck out slightly, and noting that for $$z = x +yi \in \mathbb{C}$$ and its complex conjugate $$\overline{z} = x - yi$$ the product $$z\overline{z}$$ is a real number, I am going to suggest that the complex conjugate plays the role of the dual space here, where $$\mathbb{C}$$ is considered as a vector space.

So I suggest the mapping $$\mathbb{C} \otimes \mathbb{C}:\mathbb{\overline{C}} \times \mathbb{\overline{C}} \to \mathbb{R}$$ to mimic the above.

Just guessing, I know little about complex-valued tensors.
 
Thanks, QuarkHead for that post. What I thought I could do (I could have it wrong) is take the matrix product of two complex numbers written as 2 x 2 real matrices.

So that means doing this (I think): Let $$ w = \begin{pmatrix} a & b \\ -b & a \end{pmatrix},\, z = \begin{pmatrix} x & y \\ -y & x \end{pmatrix} $$.

Then $$ w \otimes z = \begin{pmatrix} a & b \\ -b & a \end{pmatrix} \otimes \begin{pmatrix} x & y \\ -y & x \end{pmatrix} = \begin{pmatrix} ax & ay & bx & by \\ -ay & ax & -by & bx \\ -bx & -by & ax & ay \\ by & -bx & -ay & ax \end{pmatrix} $$
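A quick numerical check of the 4 × 4 array above (a sketch using numpy, which is my choice of tool, not something the thread uses): `np.kron` computes exactly this Kronecker (tensor) product of the two 2 × 2 real representations.

```python
import numpy as np

def real_rep(u: complex) -> np.ndarray:
    """2x2 real matrix representing the complex number u = a + bi."""
    a, b = u.real, u.imag
    return np.array([[a, b], [-b, a]])

w = real_rep(1 + 2j)   # a = 1, b = 2
z = real_rep(3 + 4j)   # x = 3, y = 4

# Kronecker (tensor) product of the two 2x2 representations: a 4x4 matrix
wz = np.kron(w, z)

# Matches the displayed block pattern [[a*z, b*z], [-b*z, a*z]]
assert np.allclose(wz[:2, :2],  1 * z)
assert np.allclose(wz[:2, 2:],  2 * z)
assert np.allclose(wz[2:, :2], -2 * z)
```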
 
I thought about how can I think of a + bi as a one-dimensional object? Well, suppose we have a + bx and say it's a degree 1 polynomial?
When x = i, do we then have the same object, but over i (sorry, think I mean in i, since a,b are real)?

I understand $$ \mathbb C $$ is naturally a vector space, and that you can map the electromagnetic fields in an electronic circuit to voltages and currents as rotating vectors in $$ \mathbb C $$, which I suppose is the complex plane together with a time-dependent periodic function. Rotating vectors are also called phasors; generally you have some input waveform (with a Fourier decomposition) and a frequency-dependent system response at the outputs.
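As a small illustration of the phasor idea (a sketch; the amplitude and frequency below are made-up numbers, not from the thread): the rotating vector $$A e^{i\omega t}$$ projects onto the real axis as the measurable cosine waveform.

```python
import numpy as np

A, omega = 2.0, 100 * np.pi          # made-up amplitude and angular frequency
t = np.linspace(0.0, 0.04, 1000)     # time samples over two periods

phasor = A * np.exp(1j * omega * t)  # rotating vector in the complex plane
signal = phasor.real                 # the observable waveform

# The real part of A e^{i w t} is exactly A cos(w t)
assert np.allclose(signal, A * np.cos(omega * t))
```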
 
I thought about how can I think of a + bi as a one-dimensional object?
You need to treat complex numbers as first class citizens to think of $$\mathbb{C}^n$$ as an n-dimensional complex vector space.

In such a system, $$\mathbb{C}^1$$ is the unit vector 1 times arbitrary complex scalars.

In such a system, $$\mathbb{C}^2$$ admits many different complex bases:

(1,0), (0,1) ; (1,0), (0,i) ; (1, i)/√2, (1, -i)/√2 ; etc.

((a + d)/√2+i (b - c)/√2) (1, i)/√2 +((a - d)/√2+i (b + c)/√2) (1, -i)/√2 = (a+i b) (1,0) + (c + i d) (0, 1)
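That change-of-basis identity can be checked numerically; here is a sketch (numpy, with random real a, b, c, d):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal(4)

s = np.sqrt(2.0)
e_plus  = np.array([1.0,  1j]) / s   # basis vector (1,  i)/sqrt(2)
e_minus = np.array([1.0, -1j]) / s   # basis vector (1, -i)/sqrt(2)

# Left side: coordinates in the (1, i)/sqrt(2), (1, -i)/sqrt(2) basis
lhs = ((a + d)/s + 1j*(b - c)/s) * e_plus + ((a - d)/s + 1j*(b + c)/s) * e_minus

# Right side: coordinates in the standard basis (1,0), (0,1)
rhs = np.array([a + 1j*b, c + 1j*d])

assert np.allclose(lhs, rhs)
```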
 
Let $$ w = \begin{pmatrix} a & b \\ -b & a \end{pmatrix},\, z = \begin{pmatrix} x & y \\ -y & x \end{pmatrix} $$.

Then $$ w \otimes z = \begin{pmatrix} a & b \\ -b & a \end{pmatrix} \otimes \begin{pmatrix} x & y \\ -y & x \end{pmatrix} = \begin{pmatrix} ax & ay & bx & by \\ -ay & ax & -by & bx \\ -bx & -by & ax & ay \\ by & -bx & -ay & ax \end{pmatrix} $$
Um, I think you need to transpose your $$z$$ matrix. Why?

Take the following with a pinch of salt - a large one - as I have had a few beers.....

Again suppose $$V$$ is a vector space and $$V^*$$ its dual space. Then, by the definition of dual spaces, for all $$v \in V$$ and any $$\varphi \in V^*$$ we have $$\varphi(v) \in \mathbb{R}$$.

In the case that $$V$$ is finite-dimensional only, we may also have that $$v(\varphi) \in \mathbb{R}$$ - the proof of this is not easy, so take my word for it.

Now define an inner product on $$V$$ by $$<v,w> \in \mathbb{R}$$ (other notations are in use, and this is not my preferred one).

Then, in this case, i.e. in the case of an inner product (aka metric) space, we may have a privileged element in $$V^*$$ such that, say $$\varphi_v(w) = <v,w>$$ (some writers invert the order - it's not important).

Likewise $$v(\psi_w)= <v,w> \in\mathbb{R}$$.

But if I write $$v$$ as a column vector, then $$\psi_w$$ is a row vector, which is just the transpose of the column vector that is $$w$$.

This is certainly true for metric spaces; whether it generalizes I am not sure.

I'll have another beer and think about it.

Hic
 
I'm way out of my depth here (obviously) but... doesn't |a+ib|=1 have a degree of freedom that |a+b|=1 clearly lacks?
 
I know that in $$ \mathbb C^2 $$, vectors are written $$ \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \alpha, \beta \in \mathbb C $$.

So, $$ \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \otimes \begin{pmatrix} \gamma \\ \delta \end{pmatrix} = \begin{pmatrix} \alpha\gamma \\ \alpha\delta \\ \beta\gamma \\ \beta\delta \end{pmatrix}$$.

But, $$ \begin{pmatrix} \alpha \\ \beta \end{pmatrix}^{\dagger} \begin{pmatrix} \gamma \\ \delta \end{pmatrix} = \begin{pmatrix} \overline{\alpha} & \overline{\beta} \end{pmatrix} \begin{pmatrix} \gamma \\ \delta \end{pmatrix} $$ (the conjugate transpose, not the plain transpose, since the entries get conjugated).
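In numpy terms this is a sketch of both operations: `np.kron` for the tensor product of the column vectors, and `np.vdot` (which conjugates its first argument) for the Hermitian inner product.

```python
import numpy as np

v = np.array([1 + 2j, 3 - 1j])   # (alpha, beta)
u = np.array([0 + 1j, 2 + 2j])   # (gamma, delta)

# Tensor product: a vector in C^4 with entries (ag, ad, bg, bd)
tensor = np.kron(v, u)
assert np.allclose(tensor, [v[0]*u[0], v[0]*u[1], v[1]*u[0], v[1]*u[1]])

# Hermitian inner product: conjugate the first vector, then sum the products
inner = np.vdot(v, u)
assert np.isclose(inner, np.conj(v[0])*u[0] + np.conj(v[1])*u[1])
```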
 
So let
$$ \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \in \mathbb C^2 $$

$$ \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \alpha\begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta \begin{pmatrix} 0 \\ 1 \end{pmatrix} $$.
But $$ \Biggl\{\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \Biggr\}$$ is a basis of $$ \mathbb R^2 $$. If the second equation above is scalar multiplication, can we write $$ \mathbb C \times \mathbb R^2 $$ and call it $$ \mathbb C^2 $$?

1) No, because the second equation is 'scalar' multiplication by a pair of complex numbers.
2) No, because to get an element of $$ \mathbb C^4 $$ you tensor two elements of $$ \mathbb C^2 $$, and this generalises.

QuarkHead said:
Um, I think you need to transpose your z matrix. Why?
Been thinking on that, and I haven't been able to think why, except it might be connected to odd n, in $$ \mathbb C^n $$, so option 2) above has a problem . . .
 
Confused2 said:
I'm way out of my depth here (obviously) but... doesn't |a+ib|=1 have a degree of freedom that |a+b|=1 clearly lacks?
|a+ib|=1 only when b = 0, so a = {-1,1}; |a+b|=1 has infinite solutions.
 
|a+ib|=1 only when b = 0, so a = {-1,1}; |a+b|=1 has infinite solutions.
Incorrect:

$$ | a+ ib | = 1 \Leftrightarrow ( a + i b ) ( \bar{a} - i \bar{b} ) = a \bar{a} + b\bar{b} = 1^2$$ so there are an infinite number of solutions for a, b real. They are conveniently parameterized as $$a = \cos \theta, \; b = \sin \theta$$ or $$( a + i b ) = e^{i \theta}, \; \theta \in \left( - \pi, \, \pi \right] $$.

$$ | a + b | = 1 \Leftrightarrow ( a + b )^2 = a^2 + 2 ab + b^2 = 1^2$$ for a, b real. And there are also an infinite number of solutions: $$b = -a \pm 1$$. They can be parameterized as $$a = \ln \, \left| x \right|, \; b = \frac{ x}{ \left| x \right| } - \ln \, \left| x \right| , \; x \neq 0$$
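Both parameterizations can be verified numerically; a sketch in numpy:

```python
import numpy as np

# |a + ib| = 1 on the unit circle: a = cos(theta), b = sin(theta)
theta = np.linspace(-np.pi, np.pi, 7)
a, b = np.cos(theta), np.sin(theta)
assert np.allclose(np.abs(a + 1j*b), 1.0)

# |a + b| = 1 on the lines b = -a +/- 1, here via a = ln|x|, b = x/|x| - ln|x|
x = np.array([-3.0, -0.5, 0.5, 3.0])      # any nonzero x
a2 = np.log(np.abs(x))
b2 = x/np.abs(x) - np.log(np.abs(x))
assert np.allclose(np.abs(a2 + b2), 1.0)  # a2 + b2 = sign(x) = +/- 1
```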
 
Incorrect:

$$ | a+ ib | = 1 \Leftrightarrow ( a + i b ) ( \bar{a} - i \bar{b} ) = a \bar{a} + b\bar{b} = 1^2$$ so there are an infinite number of solutions for a, b real. They are conveniently parameterized as $$a = \cos \theta, \; b = \sin \theta$$ or $$( a + i b ) = e^{i \theta}, \; \theta \in \left( - \pi, \, \pi \right] $$.

$$ | a + b | = 1 \Leftrightarrow ( a + b )^2 = a^2 + 2 ab + b^2 = 1^2$$ for a, b real. And there are also an infinite number of solutions: $$b = -a \pm 1$$. They can be parameterized as $$a = \ln \, \left| x \right|, \; b = \frac{ x}{ \left| x \right| } - \ln \, \left| x \right| , \; x \neq 0$$

Sorry, I thought | a + ib | meant the absolute value; I'm used to seeing || a + b || (ahem) when a, b are real. Now I recall that for complex numbers you can get away with | a + ib |.
 
So, in an earlier post, I mentioned a metric and an inner product in the same breath, so to speak, and we were talking at the time about tensor products.

Let me offer this totally off-topic aside, which some of you might find amusing......

Recall I defined the bilinear form on $$V$$ by $$V \times V \to \mathbb{R}$$, whereby $$<v,w> \in \mathbb{R}$$ where $$v,\,\,w \in V$$. I mentioned this was not my preferred notation. Let's choose another, say

$$B:V \times V \to \mathbb{R}$$ so that $$B(v,w) \in \mathbb{R}$$

Consider first the form $$B(v,\,\cdot\,)$$ where I have fixed some vector $$v \in V$$, and the "dot" refers to any vector in $$V$$ of our choice. Then I will have that $$B(v,\,\cdot\,):V \to \mathbb{R}$$.

Let me define $$B(v,\,\cdot\,) \equiv b_1$$ so that for each and any $$w \in V$$ that $$b_1(w) \in \mathbb{R}$$ which can only mean that $$b_1 \in V^*$$, the dual space to V.

Likewise I define $$B(\,\cdot\,,w) \equiv b_2$$ which, by the same argument, is an element of $$V^*$$.

So that $$b_1(w)\,b_2(v) = (b_1\otimes b_2)(w,v) \Rightarrow B= b_1 \otimes b_2$$, which is nothing more (and nothing less) than the definition of a tensor, in this case of type (0,2). Noting that this bilinear form induces the quadratic form $$B(v,v) = Q(v)$$, which in turn gives us the notion of the length of a vector, I will call this a metric tensor.

Let us now suppose that, relative to a basis $$\{\epsilon^j\}$$ of $$V^*$$, and with the $$\alpha_j,\,\beta_k$$ elements of the real field, I may write

$$b_1 = \alpha_j \epsilon^j$$ (summation implied) and

$$b_2 = \beta_k \epsilon^k$$, then I will have that

$$B = (\alpha_j \epsilon^j)\otimes (\beta_k \epsilon^k)$$ (implied summation again)

$$ = \alpha_j\beta_k(\epsilon^j \otimes \epsilon^k)$$

Note that the product of the real numbers $$\alpha_j\beta_k$$ is taken element by element, that is....

$$\alpha_1\beta_1,\,\,\alpha_1\beta_2,.....,\alpha_1\beta_n$$

$$\alpha_2\beta_1,\,\,\alpha_2\beta_2,.....,\alpha_2\beta_n$$

..........................................

$$\alpha_n\beta_1,\,\,\alpha_n\beta_2,.....,\alpha_n\beta_n$$

Which is of course a real $$n \times n$$ matrix, which we will call $$g_{jk}$$, and say this is the component form of our metric tensor. It happens to be symmetric, that is $$g_{jk} = g_{kj}$$, since $$B(v,w) = B(w,v)$$.

Of course this is just kid's stuff - the metric tensor is much more difficult to derive properly. I can try, but it is totally off-topic.
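The component array $$\alpha_j\beta_k$$ displayed above is just an outer product; a sketch in numpy (the components are made-up numbers):

```python
import numpy as np

alpha = np.array([1.0, 2.0, 3.0])   # made-up components of b_1
beta  = np.array([4.0, 5.0, 6.0])   # made-up components of b_2

# g[j, k] = alpha_j * beta_k, the n x n array written out row by row above
g = np.outer(alpha, beta)
assert g.shape == (3, 3)
assert g[0, 2] == alpha[0] * beta[2]
```

Worth noting: a single outer product like this has rank one; a general symmetric $$g_{jk}$$ is a sum of such terms, not just one.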
 
Sorry, I thought | a+ ib | meant the absolute value;
It's not a vector or matrix norm because complex numbers are numbers.

It still does mean absolute value. $$ | z | = \sqrt{ z \bar{z} }, \quad | a + i b | = \sqrt{ (a + i b) (a - i b ) } = \sqrt{ a^2 + b^2 } , \quad \left| A e^{i B} \right| = \sqrt{ A e^{i B} A e^{-i B} } = \left| A \right| , z \in \mathbb{C}, a, b, A, B \in \mathbb{R} $$
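In Python, `abs` on a complex number computes exactly this modulus; a small sketch:

```python
import cmath
import math

z = 3 + 4j
# |z| = sqrt(z * conj(z)) = sqrt(a^2 + b^2)
assert abs(z) == math.sqrt(z.real**2 + z.imag**2) == 5.0

# |A e^{iB}| = |A| for real A, B
A, B = -2.0, 0.7
assert math.isclose(abs(A * cmath.exp(1j * B)), abs(A))
```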
 
One more thing: the two basis vectors that $$ \alpha, \beta $$ multiply form an orthogonal basis for $$ \mathbb C^2 $$. So that means their inner product is zero.
 
arfa brane said:
Why the sign for addition?


The God said:
....a complex number (a,b) is a number, but (a, b) on a real plane is a point......so how do you write a number with two disjoint parts ? like (component a+i component b), but a point (a,b) has no meaning as a+b...

Because a and ib are numbers (both members of $$\mathbb{C}$$) and so is a + ib.

a = a + i0
ib = 0 + ib
(a) + (ib) = (a + 0) + i(0 + b) = a + ib
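The point is easy to see in any language with a complex type; a Python sketch:

```python
a, b = 1.5, -2.0

# a + ib is a single number: the sum of the numbers (a + i0) and (0 + ib)
z = complex(a, 0) + complex(0, b)
assert z == complex(a, b) == a + 1j*b
```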


Rpenner,

By bringing in maths, you certainly elevated the thread, but there was no need for you to sigh keeping my quote along with Origin's....Origin was simply trolling.
 