Let $t \in V_1 \otimes \cdots \otimes V_n$. Let $I_1, \ldots, I_m$ be a partition of $\{1, \ldots, n\}$, that is, the $I_j$ are pairwise disjoint and their union is $\{1, \ldots, n\}$. Let $U_j = \bigotimes_{i \in I_j} V_i$ for $1 \le j \le m$. We can view $t$ as an element of $U_1 \otimes \cdots \otimes U_m$. Note that the rank of $t$ as an element of $U_1 \otimes \cdots \otimes U_m$ is a lower bound for the rank of $t$ as an element of $V_1 \otimes \cdots \otimes V_n$. Why? Any rank-one tensor $v_1 \otimes \cdots \otimes v_n \in V_1 \otimes \cdots \otimes V_n$ induces a rank-one tensor $u_1 \otimes \cdots \otimes u_m \in U_1 \otimes \cdots \otimes U_m$ by setting $u_j = \bigotimes_{k \in I_j} v_k$. When it is not clear from the context whether we think of $t$ as a tensor in $U_1 \otimes \cdots \otimes U_m$ or $V_1 \otimes \cdots \otimes V_n$, we add it as a subscript.
Lemma 6.2  $R_{U_1 \otimes \cdots \otimes U_m}(t) \le R_{V_1 \otimes \cdots \otimes V_n}(t)$.
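The lemma can be checked numerically. Here is a minimal sketch (not from the text), using numpy: we build an order-3 tensor of rank at most 2, group the first two factors into one, and observe that the matrix rank of the resulting flattening never exceeds the tensor rank.

```python
import numpy as np

# Sketch: flatten an order-3 tensor t in V1 (x) V2 (x) V3 by grouping
# the factors {1,2} and {3}.  By Lemma 6.2, the matrix rank of this
# flattening is a lower bound on the tensor rank of t.

rng = np.random.default_rng(0)

# Build a tensor of tensor rank at most 2: a sum of two rank-one tensors.
a1, b1, c1 = rng.standard_normal((3, 4))
a2, b2, c2 = rng.standard_normal((3, 4))
t = np.einsum('i,j,k->ijk', a1, b1, c1) + np.einsum('i,j,k->ijk', a2, b2, c2)

# View t as an element of (V1 (x) V2) (x) V3, i.e. a 16 x 4 matrix.
flat = t.reshape(4 * 4, 4)

r = np.linalg.matrix_rank(flat)
print(r)  # at most 2, since t has tensor rank <= 2
```

The same grouping idea applies to any partition of the factors; each choice of partition yields a (possibly different) matrix-rank lower bound.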
The rank can indeed become smaller. Consider $\langle n, n, n \rangle \in k^{n \times n} \otimes k^{n \times n} \otimes k^{n \times n}$, the tensor of matrix multiplication. If we consider it as a tensor in $(k^{n \times n} \otimes k^{n \times n}) \otimes k^{n \times n}$, then it is a matrix of size $n^4 \times n^2$. Its rank is at most $n^2$. However, we know a lower bound of $3n^2 - o(n^2)$ for the rank of $\langle n, n, n \rangle$ as a tensor in $k^{n \times n} \otimes k^{n \times n} \otimes k^{n \times n}$ [Lan12]. In fact, it is an old open problem whether the so-called exponent of matrix multiplication is two.
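This drop in rank can be observed concretely. As a sketch (not from the text), the following builds $\langle n, n, n \rangle$ from its defining monomials $x_{ij} y_{jl} z_{li}$ and verifies that the flattening grouping the first two matrix factors has matrix rank exactly $n^2$:

```python
import numpy as np

# Sketch: the matrix multiplication tensor <n,n,n> is the sum of the
# rank-one tensors e_{ij} (x) e_{jl} (x) e_{li} over all i, j, l.
# Grouping the first two factors flattens it to an n^4 x n^2 matrix.

n = 3
t = np.zeros((n, n, n, n, n, n))
for i in range(n):
    for j in range(n):
        for l in range(n):
            # nonzero entry for the monomial x_{ij} y_{jl} z_{li}
            t[i, j, j, l, l, i] = 1

flat = t.reshape(n**2 * n**2, n**2)
print(np.linalg.matrix_rank(flat))  # n^2 = 9
```

The $n^2$ columns of the flattening have pairwise disjoint supports, so the matrix rank is exactly $n^2$, far below the known $3n^2 - o(n^2)$ lower bound on the tensor rank.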
6.3.3 Explicit Tensors of Higher Order
Let $d$ be even and let $N = n^{d/2}$. Take any full rank matrix $M \in k^{N \times N}$, for instance the identity matrix. It has rank $n^{d/2}$. By Lemma 6.2,

$R_{\bigotimes_{i=1}^{d} k^n}(M) \ge n^{d/2}$.   (6.5)

The tensor $M$ is obviously explicit, an entry $m_{i_1, \ldots, i_d}$ equals $1$ if $(i_1, \ldots, i_{d/2}) = (i_{d/2+1}, \ldots, i_d)$ and $0$ otherwise. Note that if we could achieve $n^{(1 - o(1))d}$, then this would lead to formula lower bounds.
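For a small sanity check (my own sketch, not from the text), the order-$d$ tensor above flattens, under the grouping of the first $d/2$ against the last $d/2$ factors, to the $N \times N$ identity matrix:

```python
import numpy as np

# Sketch: the order-d tensor with entries m_{i_1..i_d} = 1 iff
# (i_1,...,i_{d/2}) == (i_{d/2+1},...,i_d).  Grouping the first d/2 and
# the last d/2 factors flattens it to the N x N identity, N = n^(d/2),
# so its flattening rank -- and hence its tensor rank -- is >= n^(d/2).

n, d = 2, 4
N = n ** (d // 2)

t = np.zeros((n,) * d)
for idx in np.ndindex(*(n,) * (d // 2)):
    t[idx + idx] = 1  # first half of the index tuple equals the second

flat = t.reshape(N, N)
print(np.array_equal(flat, np.eye(N)))  # True
print(np.linalg.matrix_rank(flat))      # 4 = n^(d/2)
```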
It is a sad state of affairs that (6.5) is the asymptotically best lower bound for an explicit tensor that we currently know; further improvements just concern the constant factor.
Here is one such improvement, which uses a lower bound by Hartmann [Har85]: Let $k$ be a field and $K$ be an extension field of dimension $n$. Consider the multiplication of the $K$-left module $K^{1 \times m}$ as a bilinear map over $k$. (We take $x \in K$ and $(y_1, \ldots, y_m) \in K^{1 \times m}$ and map them to $(x y_1, \ldots, x y_m) \in K^{1 \times m}$.) However, we view this as a $k$-bilinear map and not as a $K$-bilinear map. Let $s$ be the corresponding tensor. Hartmann showed that

$R(s) \ge (2n - 1)m = 2nm - m$.   (6.6)
If we now set $m = n^{e-1}$ and let $s \in (k^n)^{\otimes (2e+1)}$ be the corresponding tensor, we get

$R(s) \ge 2 n^{e} - n^{e-1}$.
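The substitution can be written out explicitly (my own arithmetic, consistent with (6.6)):

```latex
% With m = n^{e-1}, Hartmann's bound (6.6) becomes
R(s) \;\ge\; (2n - 1)\,m \;=\; 2n \cdot n^{e-1} - n^{e-1} \;=\; 2n^{e} - n^{e-1}.
% Over k, the tensor s lives in K \otimes K^{1 \times m} \otimes K^{1 \times m},
% spaces of k-dimension n, nm = n^{e}, and n^{e} respectively, so s can be
% regarded as a tensor of order 1 + e + e = 2e + 1 over k^{n}.
```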