Generalization of the Jack polynomial
In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.
The Jack function $J_{\kappa}^{(\alpha)}(x_1, x_2, \ldots, x_m)$ of an integer partition $\kappa$, parameter $\alpha$, and arguments $x_1, x_2, \ldots, x_m$ can be defined recursively as follows:
- For $m=1$:

  $J_k^{(\alpha)}(x_1) = x_1^k (1+\alpha)\cdots(1+(k-1)\alpha)$

- For $m>1$:

  $J_{\kappa}^{(\alpha)}(x_1, x_2, \ldots, x_m) = \sum_{\mu} J_{\mu}^{(\alpha)}(x_1, x_2, \ldots, x_{m-1})\, x_m^{|\kappa/\mu|}\, \beta_{\kappa\mu},$

where the summation is over all partitions $\mu$ such that the skew partition $\kappa/\mu$ is a horizontal strip, namely

$\kappa_1 \geq \mu_1 \geq \kappa_2 \geq \mu_2 \geq \cdots \geq \kappa_{n-1} \geq \mu_{n-1} \geq \kappa_n$

($\mu_n$ must be zero, since otherwise $J_{\mu}(x_1, \ldots, x_{n-1}) = 0$) and
$\beta_{\kappa\mu} = \frac{\prod_{(i,j)\in\kappa} B_{\kappa\mu}^{\kappa}(i,j)}{\prod_{(i,j)\in\mu} B_{\kappa\mu}^{\mu}(i,j)},$

where $B_{\kappa\mu}^{\nu}(i,j)$ equals $\nu_j' - i + \alpha(\nu_i - j + 1)$ if $\kappa_j' = \mu_j'$ and $\nu_j' - i + 1 + \alpha(\nu_i - j)$ otherwise. The expressions $\kappa'$ and $\mu'$ refer to the conjugate partitions of $\kappa$ and $\mu$, respectively. The notation $(i,j) \in \kappa$ means that the product is taken over all coordinates $(i,j)$ of boxes in the Young diagram of the partition $\kappa$.
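The recursion above can be evaluated directly in exact rational arithmetic. The sketch below is a minimal implementation of that recursion, with $B_{\kappa\mu}^{\nu}(i,j)$ built from the partition $\nu$ in both branches as in Demmel and Koev's formulation; the function names (`jack_J`, `beta_coeff`, `horizontal_strips`) are my own, not from the literature.

```python
from fractions import Fraction

def conjugate(kappa):
    """Conjugate partition: kappa'_j = number of parts of kappa that are >= j."""
    return [sum(1 for p in kappa if p >= j)
            for j in range(1, (kappa[0] if kappa else 0) + 1)]

def B(nu, kappa, mu, alpha, i, j):
    """B_{kappa,mu}^{nu}(i,j) for a 1-based box (i,j) of nu."""
    nu_c, kc, mc = conjugate(nu), conjugate(kappa), conjugate(mu)
    k_j = kc[j - 1] if j <= len(kc) else 0
    m_j = mc[j - 1] if j <= len(mc) else 0
    if k_j == m_j:
        return nu_c[j - 1] - i + alpha * (nu[i - 1] - j + 1)
    return nu_c[j - 1] - i + 1 + alpha * (nu[i - 1] - j)

def beta_coeff(kappa, mu, alpha):
    """beta_{kappa,mu}: product of B^kappa over kappa-boxes over product of B^mu over mu-boxes."""
    num = Fraction(1)
    for i, row in enumerate(kappa, 1):
        for j in range(1, row + 1):
            num *= B(kappa, kappa, mu, alpha, i, j)
    den = Fraction(1)
    for i, row in enumerate(mu, 1):
        for j in range(1, row + 1):
            den *= B(mu, kappa, mu, alpha, i, j)
    return num / den

def horizontal_strips(kappa):
    """All mu with kappa_1 >= mu_1 >= kappa_2 >= mu_2 >= ... (kappa/mu a horizontal strip)."""
    def rec(idx):
        if idx == len(kappa):
            yield []
            return
        lo = kappa[idx + 1] if idx + 1 < len(kappa) else 0
        for part in range(lo, kappa[idx] + 1):
            for rest in rec(idx + 1):
                yield [part] + rest
    for mu in rec(0):
        yield [p for p in mu if p > 0]

def jack_J(kappa, xs, alpha):
    """J_kappa^{(alpha)}(x_1,...,x_m) via the recursion on the number of variables."""
    kappa = [p for p in kappa if p > 0]
    if not kappa:
        return Fraction(1)
    m = len(xs)
    if len(kappa) > m:     # more parts than variables: the Jack function vanishes
        return Fraction(0)
    if m == 1:
        coeff = Fraction(1)
        for t in range(1, kappa[0]):
            coeff *= 1 + t * alpha
        return coeff * xs[0] ** kappa[0]
    total = Fraction(0)
    for mu in horizontal_strips(kappa):
        total += (jack_J(mu, xs[:-1], alpha)
                  * xs[-1] ** (sum(kappa) - sum(mu))
                  * beta_coeff(kappa, mu, alpha))
    return total
```

For example, `jack_J([2], [x1, x2], alpha)` expands $J_{(2)}^{(\alpha)} = (1+\alpha)(x_1^2 + x_2^2) + 2x_1x_2$ at the given points, which at $\alpha = 1$ matches $H_{(2)} s_{(2)} = 2(x_1^2 + x_1x_2 + x_2^2)$.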
In 1997, F. Knop and S. Sahi gave a purely combinatorial formula for the Jack polynomials $J_{\lambda}^{(\alpha)}$ in $n$ variables:
$J_{\lambda}^{(\alpha)} = \sum_T d_T(\alpha) \prod_{s\in T} x_{T(s)}.$

The sum is taken over all admissible tableaux of shape $\lambda$, and

$d_T(\alpha) = \prod_{s\in T \text{ critical}} d_{\lambda}(\alpha)(s)$

with

$d_{\lambda}(\alpha)(s) = \alpha(a_{\lambda}(s) + 1) + (l_{\lambda}(s) + 1),$

where $a_{\lambda}(s)$ and $l_{\lambda}(s)$ denote the arm and leg length of the box $s$ in $\lambda$.

An admissible tableau of shape $\lambda$ is a filling of the Young diagram of $\lambda$ with numbers $1, 2, \ldots, n$ such that for any box $(i,j)$ in the tableau,
- $T(i,j) \neq T(i',j)$ whenever $i' > i$;

- $T(i,j) \neq T(i',j-1)$ whenever $j > 1$ and $i' < i$.

A box $s = (i,j) \in \lambda$ is critical for the tableau $T$ if $j > 1$ and $T(i,j) = T(i,j-1)$.
This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.
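For small shapes, the Knop–Sahi sum can be evaluated by brute force: enumerate all fillings of the diagram with entries $1, \ldots, n$, keep the admissible ones, and collect the weight $d_T(\alpha)$ over critical boxes. A sketch in exact rational arithmetic (the function name `knop_sahi_J` is my own; the enumeration is exponential in the number of boxes, so it is only for experimentation):

```python
from fractions import Fraction
from itertools import product

def knop_sahi_J(lam, xs, alpha):
    """Sum d_T(alpha) * prod x_{T(s)} over admissible fillings of lam with entries 1..n."""
    n = len(xs)
    boxes = [(i, j) for i, row in enumerate(lam, 1) for j in range(1, row + 1)]
    conj = [sum(1 for p in lam if p >= j)
            for j in range(1, (lam[0] if lam else 0) + 1)]
    total = Fraction(0)
    for filling in product(range(1, n + 1), repeat=len(boxes)):
        T = dict(zip(boxes, filling))
        # admissibility 1: entries in each column are pairwise distinct
        if any(T[i, j] == T[i2, j] for (i, j) in boxes
               for i2 in range(i + 1, conj[j - 1] + 1)):
            continue
        # admissibility 2: T(i,j) != T(i',j-1) whenever j > 1 and i' < i
        if any(j > 1 and T[i, j] == T[i2, j - 1]
               for (i, j) in boxes for i2 in range(1, i)):
            continue
        # weight d_T(alpha): product of alpha*(arm+1) + (leg+1) over critical boxes
        weight = Fraction(1)
        for (i, j) in boxes:
            if j > 1 and T[i, j] == T[i, j - 1]:   # critical box
                arm = lam[i - 1] - j
                leg = conj[j - 1] - i
                weight *= alpha * (arm + 1) + (leg + 1)
        monomial = Fraction(1)
        for s in boxes:
            monomial *= xs[T[s] - 1]
        total += weight * monomial
    return total
```

On small cases this agrees with the recursive definition, e.g. $J_{(2)}^{(\alpha)}(x_1, x_2) = (1+\alpha)(x_1^2 + x_2^2) + 2x_1x_2$.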
The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product

$\langle f,g\rangle = \int_{[0,2\pi]^n} f\left(e^{i\theta_1}, \ldots, e^{i\theta_n}\right) \overline{g\left(e^{i\theta_1}, \ldots, e^{i\theta_n}\right)} \prod_{1\leq j<k\leq n} \left|e^{i\theta_j} - e^{i\theta_k}\right|^{2/\alpha} d\theta_1 \cdots d\theta_n.$
This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as
$C_{\kappa}^{(\alpha)}(x_1, \ldots, x_n) = \frac{\alpha^{|\kappa|}\, |\kappa|!}{j_{\kappa}} J_{\kappa}^{(\alpha)}(x_1, \ldots, x_n),$

where

$j_{\kappa} = \prod_{(i,j)\in\kappa} \left(\kappa_j' - i + \alpha(\kappa_i - j + 1)\right)\left(\kappa_j' - i + 1 + \alpha(\kappa_i - j)\right).$
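The product defining $j_{\kappa}$ is easy to compute box by box. A minimal sketch (the helper name `j_kappa` is mine); note that at $\alpha = 1$ both factors reduce to the hook length $\kappa_j' - i + \kappa_i - j + 1$, so $j_{\kappa}$ becomes the square of the hook-length product:

```python
from fractions import Fraction

def j_kappa(kappa, alpha):
    """j_kappa as a product over the boxes (i,j) of the Young diagram of kappa."""
    conj = [sum(1 for p in kappa if p >= j) for j in range(1, kappa[0] + 1)]
    out = Fraction(1)
    for i, row in enumerate(kappa, 1):
        for j in range(1, row + 1):
            first = conj[j - 1] - i + alpha * (row - j + 1)
            second = conj[j - 1] - i + 1 + alpha * (row - j)
            out *= first * second
    return out
```

For instance, $\kappa = (2,1)$ has hook lengths $3, 1, 1$, so $j_{(2,1)} = 9$ at $\alpha = 1$.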

For $\alpha = 2$, $C_{\kappa}^{(2)}(x_1, \ldots, x_n)$ is often denoted by $C_{\kappa}(x_1, \ldots, x_n)$ and called the zonal polynomial.
The P normalization is given by the identity $J_{\lambda} = H'_{\lambda} P_{\lambda}$, where

$H'_{\lambda} = \prod_{s\in\lambda} (\alpha a_{\lambda}(s) + l_{\lambda}(s) + 1)$

where $a_{\lambda}$ and $l_{\lambda}$ denote the arm and leg length, respectively. Therefore, for $\alpha = 1$, $P_{\lambda}$ is the usual Schur function.
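The product over arms and legs is straightforward to compute; at $\alpha = 1$ it reduces to the hook-length product, consistent with $P_{\lambda}$ becoming the Schur function there. A small sketch (the name `H_prime` is mine):

```python
from fractions import Fraction

def H_prime(lam, alpha):
    """H'_lambda = product over boxes s of lam of (alpha * arm(s) + leg(s) + 1)."""
    conj = [sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1)]
    out = Fraction(1)
    for i, row in enumerate(lam, 1):
        for j in range(1, row + 1):
            arm = row - j            # boxes to the right in the same row
            leg = conj[j - 1] - i    # boxes below in the same column
            out *= alpha * arm + leg + 1
    return out
```

For example, $H'_{(2,1)} = 3$ at $\alpha = 1$, the product of the hooks $3, 1, 1$.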
Similar to Schur polynomials, $P_{\lambda}$ can be expressed as a sum over Young tableaux. However, one needs to add an extra weight to each tableau that depends on the parameter $\alpha$. Thus, a formula for the Jack function $P_{\lambda}$ is given by
$P_{\lambda} = \sum_T \psi_T(\alpha) \prod_{s\in\lambda} x_{T(s)}$

where the sum is taken over all tableaux of shape $\lambda$, and $T(s)$ denotes the entry in box $s$ of $T$.
The weight $\psi_T(\alpha)$ can be defined in the following fashion: each tableau $T$ of shape $\lambda$ can be interpreted as a sequence of partitions
$\emptyset = \nu_1 \to \nu_2 \to \cdots \to \nu_n = \lambda$

where $\nu_{i+1}/\nu_i$ defines the skew shape with content $i$ in $T$. Then
$\psi_T(\alpha) = \prod_i \psi_{\nu_{i+1}/\nu_i}(\alpha)$

where

$\psi_{\lambda/\mu}(\alpha) = \prod_{s\in R_{\lambda/\mu} - C_{\lambda/\mu}} \frac{\alpha a_{\mu}(s) + l_{\mu}(s) + 1}{\alpha a_{\mu}(s) + l_{\mu}(s) + \alpha} \cdot \frac{\alpha a_{\lambda}(s) + l_{\lambda}(s) + \alpha}{\alpha a_{\lambda}(s) + l_{\lambda}(s) + 1}$

and the product is taken only over boxes $s$ in $\lambda$ such that $s$ has a box from $\lambda/\mu$ in the same row, but not in the same column.
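The weighted tableau sum can likewise be prototyped by enumerating fillings, here taken to be semistandard (rows weakly increasing, columns strictly increasing), which is the standard reading of "tableaux" in this formula. Each tableau is unwrapped into its chain of shapes $\nu_k$ (the boxes with entries $\leq k$), and the $\psi$ factors are multiplied. The names `jack_P` and `psi_skew` are mine, and $\alpha$ is passed as a `Fraction` to keep the arithmetic exact:

```python
from fractions import Fraction
from itertools import product

def _conj(shape, j):
    """Length of column j of a partition: number of parts that are >= j."""
    return sum(1 for p in shape if p >= j)

def psi_skew(lam, mu, alpha):
    """psi_{lam/mu}(alpha): product over boxes of lam sharing a row,
    but not a column, with a box of the strip lam/mu (such boxes lie in mu)."""
    strip = [(i, j) for i, row in enumerate(lam, 1) for j in range(1, row + 1)
             if i > len(mu) or j > mu[i - 1]]
    rows = {i for i, _ in strip}
    cols = {j for _, j in strip}
    out = Fraction(1)
    for i in rows:
        for j in range(1, lam[i - 1] + 1):
            if j in cols:
                continue
            am, lm = mu[i - 1] - j, _conj(mu, j) - i      # arm/leg in mu
            al, ll = lam[i - 1] - j, _conj(lam, j) - i    # arm/leg in lam
            out *= (alpha * am + lm + 1) / (alpha * am + lm + alpha)
            out *= (alpha * al + ll + alpha) / (alpha * al + ll + 1)
    return out

def jack_P(lam, xs, alpha):
    """P_lambda via the weighted sum over semistandard tableaux with entries 1..n."""
    n = len(xs)
    boxes = [(i, j) for i, row in enumerate(lam, 1) for j in range(1, row + 1)]
    total = Fraction(0)
    for filling in product(range(1, n + 1), repeat=len(boxes)):
        T = dict(zip(boxes, filling))
        # semistandard: rows weakly increase, columns strictly increase
        if any((i, j + 1) in T and T[i, j] > T[i, j + 1] for (i, j) in boxes):
            continue
        if any((i + 1, j) in T and T[i, j] >= T[i + 1, j] for (i, j) in boxes):
            continue
        weight = Fraction(1)
        prev = []
        for k in range(1, n + 1):
            cur = [sum(1 for j in range(1, row + 1) if T[i, j] <= k)
                   for i, row in enumerate(lam, 1)]
            cur = [p for p in cur if p > 0]
            weight *= psi_skew(cur, prev, alpha)
            prev = cur
        monomial = Fraction(1)
        for s in boxes:
            monomial *= xs[T[s] - 1]
        total += weight * monomial
    return total
```

For $\lambda = (2)$ this yields $P_{(2)} = x_1^2 + x_2^2 + \frac{2}{\alpha+1} x_1 x_2$, which at $\alpha = 1$ is the Schur function $s_{(2)}$.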
Connection with the Schur polynomial
When $\alpha = 1$, the Jack function is a scalar multiple of the Schur polynomial:

$J_{\kappa}^{(1)}(x_1, x_2, \ldots, x_n) = H_{\kappa} s_{\kappa}(x_1, x_2, \ldots, x_n),$

where

$H_{\kappa} = \prod_{(i,j)\in\kappa} h_{\kappa}(i,j) = \prod_{(i,j)\in\kappa} (\kappa_i + \kappa_j' - i - j + 1)$

is the product of all hook lengths of $\kappa$.
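As a concrete check of the hook-length factor: for $\kappa = (2,1)$ the hooks are $3, 1, 1$, so $H_{(2,1)} = 3$. A minimal sketch (the name `hook_product` is mine):

```python
def hook_product(kappa):
    """H_kappa: product of hook lengths h(i,j) = kappa_i + kappa'_j - i - j + 1."""
    conj = [sum(1 for p in kappa if p >= j) for j in range(1, kappa[0] + 1)]
    H = 1
    for i, row in enumerate(kappa, 1):
        for j in range(1, row + 1):
            H *= row + conj[j - 1] - i - j + 1
    return H
```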
If the partition has more parts than the number of variables, then the Jack function is 0:

$J_{\kappa}^{(\alpha)}(x_1, x_2, \ldots, x_m) = 0 \quad \text{if } \kappa_{m+1} > 0.$

In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple: if $X$ is a matrix with eigenvalues $x_1, x_2, \ldots, x_m$, then
$J_{\kappa}^{(\alpha)}(X) = J_{\kappa}^{(\alpha)}(x_1, x_2, \ldots, x_m).$

References
- Demmel, James; Koev, Plamen (2006), "Accurate and efficient evaluation of Schur and Jack functions", Mathematics of Computation, 75 (253): 223–239, CiteSeerX 10.1.1.134.5248, doi:10.1090/S0025-5718-05-01780-1, MR 2176397.
- Jack, Henry (1970–1971), "A class of symmetric polynomials with a parameter", Proceedings of the Royal Society of Edinburgh, Section A. Mathematics, 69: 1–18, MR 0289462.
- Knop, Friedrich; Sahi, Siddhartha (19 March 1997), "A recursion and a combinatorial formula for Jack polynomials", Inventiones Mathematicae, 128 (1): 9–22, arXiv:q-alg/9610016, Bibcode:1997InMat.128....9K, doi:10.1007/s002220050134, S2CID 7188322
- Macdonald, I. G. (1995), Symmetric functions and Hall polynomials, Oxford Mathematical Monographs (2nd ed.), New York: Oxford University Press, ISBN 978-0-19-853489-1, MR 1354144
- Stanley, Richard P. (1989), "Some combinatorial properties of Jack symmetric functions", Advances in Mathematics, 77 (1): 76–115, doi:10.1016/0001-8708(89)90015-7, MR 1014073.