symmetric groups – Geometric or combinatorial interpretations of the (weak) Bruhat order?

$\DeclareMathOperator{\Inv}{Inv}$The weak Bruhat order on the symmetric group has a straightforward combinatorial interpretation: Consider a set of labelled balls $1,2,\dotsc,n$. Then for two permutations $\sigma_1,\sigma_2\in S_n$, $\sigma_1\leq\sigma_2$ in the weak Bruhat order iff the set of “upsets” in the ordering $1<2<\dotsb<n$ induced by the action of $\sigma_1$ on the balls is contained in the set of upsets induced by the action of $\sigma_2$ (more formally, $\sigma_1\leq\sigma_2$ iff $\Inv(\sigma_1)\subseteq\Inv(\sigma_2)$, where $\Inv(\sigma)=\{(i,j) \mid i< j,\ \sigma(i)>\sigma(j)\}$). This also gives an interpretation of the rank function for the poset of permutations ordered this way (the number of inversions).
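
Here is a minimal Python sketch (my own illustration, not from any reference) of this inversion-set criterion, comparing permutations given in one-line notation:

    from itertools import permutations

    def inv(sigma):
        """Inversion set of a permutation in one-line notation (0-indexed)."""
        n = len(sigma)
        return {(i, j) for i in range(n) for j in range(i + 1, n) if sigma[i] > sigma[j]}

    def weak_leq(s1, s2):
        """s1 <= s2 in the weak Bruhat order iff Inv(s1) is contained in Inv(s2)."""
        return inv(s1) <= inv(s2)

    # The rank of a permutation in this poset is its number of inversions.
    for s in permutations(range(3)):
        print(s, sorted(inv(s)), "rank =", len(inv(s)))
    print(weak_leq((0, 2, 1), (2, 1, 0)))  # True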

I’m wondering if anyone knows similar interpretations for the weak Bruhat order in more general Coxeter groups. For instance, is there always a set of labelled objects for which the Bruhat order tracks “upsets”?

combinatorics – Combinatorial Proof regarding Falling Factorials and sums of Falling Factorials

Prove that for every $n$ and $k$ satisfying $1 \le k \le n$,

$$(n)_k = \sum_{i = k}^n k \cdot (i-1)_{k-1}$$

I tried using a combinatorial proof as follows:

Assume we want to pick $k$ distinct balls from a group of $n$ balls. The LHS obviously counts the number of ways we can do this.

The RHS is the sum of the numbers of ways we can pick $k-1$ balls from smaller sets of balls, incrementally adding to the set until we reach our original size of $n$.

There are two things wrong with my interpretation of the right side:

  1. I completely ignore the $k$ in front of $(i-1)_{k-1}$. I am not sure of its relevance and why it works. I originally thought we were multiplying by the number of ways we can rearrange the balls, but I quickly realized that’s already taken care of by the $(i-1)_{k-1}$ term.
  2. My interpretation really does not mean anything in the context of the problem. Specifically the $k-1$ and why picking from a smaller group of balls allows us to get to our total number of ways.

I’d appreciate if anyone can provide some intuition in understanding what the right side is telling me and some steps in the right direction.
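
(A quick numerical sanity check of the identity, using Python's math.perm for the falling factorial; this is of course not the combinatorial argument being asked for.)

    from math import perm  # perm(n, k) = (n)_k, the falling factorial

    def lhs(n, k):
        return perm(n, k)

    def rhs(n, k):
        return sum(k * perm(i - 1, k - 1) for i in range(k, n + 1))

    # Check (n)_k = sum_{i=k}^n k * (i-1)_{k-1} for small n and k.
    assert all(lhs(n, k) == rhs(n, k) for n in range(1, 12) for k in range(1, n + 1))
    print("identity verified for all 1 <= k <= n <= 11")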

additive combinatorics – How to prove the combinatorial identity $\sum_{k=\ell}^{n}\binom{2n-k-1}{n-1}k2^k=2^\ell n\binom{2n-\ell}{n}$ for $n\ge\ell\ge0$?

With the aid of the simple identity
\begin{equation*}
\sum_{k=0}^{n}\binom{n+k}{k}\frac{1}{2^{k}}=2^n
\end{equation*}

in Item (1.79) on page 35 of the monograph

R. Sprugnoli, Riordan Array Proofs of Identities in Gould’s Book, University of Florence, Italy, 2006. (Has this monograph been formally published somewhere?)

I proved the combinatorial identity
$$
\sum_{k=1}^{n}\binom{2n-k-1}{n-1}k2^k=n\binom{2n}{n}, \quad n\in\mathbb{N}.
$$

My question is: how to prove the more general combinatorial identity
$$
\sum_{k=\ell}^{n}\binom{2n-k-1}{n-1}k2^k=2^\ell n\binom{2n-\ell}{n}
$$

for $n\ge\ell\ge0$?
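
(Before attempting a proof, one can at least confirm the conjectured identity numerically; a throwaway Python check:)

    from math import comb

    def lhs(n, ell):
        return sum(comb(2 * n - k - 1, n - 1) * k * 2 ** k for k in range(ell, n + 1))

    def rhs(n, ell):
        return 2 ** ell * n * comb(2 * n - ell, n)

    # Confirm the conjectured identity for small parameters.
    assert all(lhs(n, ell) == rhs(n, ell) for n in range(1, 15) for ell in range(n + 1))
    print("identity holds for all 1 <= n <= 14 and 0 <= ell <= n")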

combinatorial optimization – Upper bound in minimization problem

Suppose we have tight lower bounds (from infeasible relaxations or proven optimal values) for an optimization problem (minimization).

Is it important in a minimization problem to also generate a feasible upper bound? If it is, what are the best current methods for generating tight upper bounds for a minimization problem?

How can we benefit from both of them to generate a good solution? I need references or any examples of approaches that use them together.
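
(As a generic illustration of why both bounds are useful, independent of any particular method: a feasible upper bound together with a lower bound gives a certified optimality gap for the incumbent solution. A trivial sketch:)

    def optimality_gap(upper_bound, lower_bound):
        """Relative gap for a minimization problem: a feasible solution of cost
        upper_bound is proven to be within this fraction of the true optimum,
        since the optimum lies between lower_bound and upper_bound."""
        return (upper_bound - lower_bound) / abs(lower_bound)

    # Example: incumbent cost 105 with a relaxation bound of 100 => within 5% of optimal.
    print(optimality_gap(105, 100))  # 0.05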

combinatorics – “Unbalanced” combinatorial designs

A combinatorial design on a set $X$ (which I’ll call players) of size $n$ is a collection of subsets of $X$ (which I’ll call games) such that:

  • Each player is in exactly $r$ games.
  • Each game contains exactly $s$ players.
  • Each pair of players is together in exactly $t$ games.

(actually, I think a combinatorial design is something a bit more general than this, but it’s not relevant to this question)

Obviously, an application would be scheduling a tournament in which each game involves $s$ players. There are books out there on combinatorial designs, and one of the fundamental questions is: for which values of the parameters $(n, r, s, t)$ does a combinatorial design exist?

I would like to know the answer to that question, and the related question of how we can construct such designs in practice; however, I’d like to relax the last constraint significantly to:

  • Each pair of players is together in at least one game.

I can’t find anything on this version of the problem. Is it equivalent to something simpler with a different name?
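
(To make the relaxed conditions concrete, here is a small Python checker with my own naming conventions; the Fano plane is used only as a familiar example that happens to satisfy them.)

    from itertools import combinations

    def is_relaxed_design(games, n, r, s):
        """Check the relaxed conditions: every player lies in exactly r games,
        every game has exactly s players, and every pair of players shares
        at least one game.  Players are 0, ..., n-1; games are sets of players."""
        players = range(n)
        if any(len(g) != s for g in games):
            return False
        if any(sum(p in g for g in games) != r for p in players):
            return False
        return all(any({p, q} <= g for g in games) for p, q in combinations(players, 2))

    # The Fano plane: 7 players, 7 games of size 3, each player in 3 games,
    # each pair together exactly (hence at least) once.
    fano = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
    print(is_relaxed_design(fano, n=7, r=3, s=3))  # True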

combinatorial optimization – Integer programming for bin covering problem

I encounter an integer programming problem like this:

Suppose a student needs to take exams in $n$ courses {math, physics, literature, etc.}. To pass the exam in course $i$, the student needs to spend an amount of effort $e_i$ on course $i$. The student can graduate if she/he passes 60% of the $n$ courses (courses have different weights). The objective is to allocate her/his effort to the courses so that the student can graduate with the minimal total effort spent on courses.

I think this problem is similar to the bin covering problem when there is only one bin. The formulation is simple. Use $x_i\in\{0,1\}$ to denote whether the student allocates effort to course $i$. Let $w_i$ denote the weight of course $i$ in calculating the final score.

$$\min \sum_i x_i e_i$$

$$\text{s.t.} \quad \sum_i x_i w_i \ge 60\% \cdot n \quad \text{(or some other predetermined threshold)}$$

My question is, is there a simple heuristic solution for this problem?
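
(One natural baseline, offered only as a sketch and with variable names of my own choosing, is a ratio-greedy heuristic in the spirit of knapsack/covering heuristics: pick courses in increasing order of effort per unit weight until the graduation threshold is met. It is not guaranteed to be optimal.)

    def greedy_course_selection(efforts, weights, threshold):
        """Ratio-based greedy heuristic (not guaranteed optimal): repeatedly pick
        the course with the smallest effort-per-weight ratio until the
        accumulated weight reaches the threshold."""
        order = sorted(range(len(efforts)), key=lambda i: efforts[i] / weights[i])
        chosen, total_weight, total_effort = [], 0.0, 0.0
        for i in order:
            if total_weight >= threshold:
                break
            chosen.append(i)
            total_weight += weights[i]
            total_effort += efforts[i]
        return chosen, total_effort

    # Toy instance: 5 courses, pass-threshold 60% of the total weight.
    efforts = [4.0, 2.0, 5.0, 1.0, 3.0]
    weights = [1.0, 1.0, 2.0, 1.0, 1.0]
    print(greedy_course_selection(efforts, weights, threshold=0.6 * sum(weights)))

Since instances like this are tiny, an exact alternative is simply to hand the 0-1 model above to any integer programming solver.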

combinatorics – Combinatorial evaluation of a sum involving powers of 2

I’d like to figure out how to evaluate $\sum_{k=1}^n (n-k) 2^{k-1}$ using a counting argument. From differentiating the geometric series formula and simplifying, I know that the answer is $2^n - n - 1$, but I’d like to find a combinatorial way to arrive at this.

This is the second part of Problem 1.3 in A Course in Enumeration by Aigner. The first part is to show that $\sum_{k=0}^n 2^k = 2^{n+1} - 1$. Specifically, both parts are to be solved using the “sum rule,” which states that if a set $S$ is the union of pairwise disjoint sets $S_1, S_2, \ldots, S_n$, then $|S| = |S_1| + |S_2| + \cdots + |S_n|$.

I solved the first part as follows. Let $S = \{ a_1, a_2, \ldots, a_{n+1} \}$. Let $S_1$ be the set of subsets of $S$ that include $a_1$, so $|S_1| = 2^n$. Let $S_2$ be the set of subsets of $S$ that include $a_2$ but not $a_1$, so $|S_2| = 2^{n-1}$, and so on: $S_i$ is the set of subsets of $S$ that include $a_i$ but not any of $a_1, \ldots, a_{i-1}$, so $|S_i| = 2^{n+1-i}$, where $i$ ranges from $1$ to $n+1$, inclusive. Then $\sum_{i=1}^{n+1} |S_i| = \sum_{k=0}^n 2^k$ (reversing the order of the sum and shifting the index). But now observe that the $S_i$ are pairwise disjoint and their union is the set of all subsets of $S$ that have at least one element, i.e. the power set of $S$ with the empty set excluded. This has cardinality $2^{n+1} - 1$, establishing the identity.

I’ve thought about it for a while, but I’m having a lot of trouble finding a similar proof by counting for the second part of the problem. I’m inclined to interpret $2^n - n - 1$ as the cardinality of the power set of a set $S = \{ a_1, \ldots, a_n \}$, excluding the empty set and the one-element sets. I’m not sure about $\sum_{k=1}^n (n-k) 2^{k-1}$; I’m thinking something along the lines of the number of subsets of a $(k-1)$-element set for the $2^{k-1}$ factor and some choice of one among $n-k$ of the remaining elements for the $n-k$ factor; of course this leaves one element of $S$ left over. Where I’m running into a wall is figuring out how to find disjoint sets that involve these choices. Any help is appreciated!
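
(For what it's worth, a brute-force check of the closed form, just to make sure the target of the combinatorial argument is right:)

    def lhs(n):
        return sum((n - k) * 2 ** (k - 1) for k in range(1, n + 1))

    # Verify sum_{k=1}^n (n-k) 2^(k-1) = 2^n - n - 1 for small n.
    assert all(lhs(n) == 2 ** n - n - 1 for n in range(1, 20))
    print("closed form verified for n < 20")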

P.S. This is not for a course, it’s just part of my own reading. It seems I should know a combinatorial way of evaluating such a simple sum…

np complete – An unknown combinatorial optimization problem

I have $N$ available sensors and $M$ devices. Each device needs $a$ sensors. One sensor cannot be used on multiple devices. Each sensor has two properties defined by $H$ and $R$.

Let $\sigma_{i_H}$ be the standard deviation of property $H$ for the sensors on device $i$. Similarly, $\sigma_{i_R}$ is the standard deviation of property $R$ for the sensors on device $i$.

Now let $s_{H}=\sqrt{\sum_{i=1}^{M} \sigma_{i_H}^2}$ over all devices for the $H$ property, and $s_{R}=\sqrt{\sum_{i=1}^{M} \sigma_{i_R}^2}$ over all devices for the $R$ property.

The goal is to minimize $\mu=\frac{s_H + s_R}{2}$.

Looking for guidance on which type of optimization problem this might be and for inspiration for different search algorithms.
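
(To have something concrete to compare candidate methods against, here is a minimal random-swap local search, written under two assumptions of mine: the sensors assigned to device $i$ occupy the $i$-th block of $a$ entries in a permutation of all sensors, and $\sigma$ denotes the population standard deviation.)

    import random
    import statistics

    def objective(assignment, H, R, M, a):
        """mu = (s_H + s_R) / 2, where s_X = sqrt(sum over devices of sigma_{i,X}^2)
        and sigma_{i,X} is the (population) std dev of property X on device i."""
        s_H2 = s_R2 = 0.0
        for i in range(M):
            group = assignment[i * a:(i + 1) * a]
            s_H2 += statistics.pstdev(H[j] for j in group) ** 2
            s_R2 += statistics.pstdev(R[j] for j in group) ** 2
        return (s_H2 ** 0.5 + s_R2 ** 0.5) / 2

    def local_search(H, R, M, a, iters=10000, seed=0):
        """Random swap local search: the first M*a entries of a permutation of
        the sensors are the assigned ones; keep swaps that do not worsen mu."""
        rng = random.Random(seed)
        perm = list(range(len(H)))
        rng.shuffle(perm)
        best = objective(perm, H, R, M, a)
        for _ in range(iters):
            i, j = rng.randrange(len(perm)), rng.randrange(len(perm))
            perm[i], perm[j] = perm[j], perm[i]
            cand = objective(perm, H, R, M, a)
            if cand <= best:
                best = cand
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo worsening swap
        return best, perm[:M * a]

    # Example: 12 sensors, 3 devices needing 3 sensors each (3 sensors left unused).
    H = [random.gauss(0, 1) for _ in range(12)]
    R = [random.gauss(0, 1) for _ in range(12)]
    print(local_search(H, R, M=3, a=3)[0])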

co.combinatorics – Combinatorial models for the bivariate Hermite polynomials $H_n(x+y)$

This is really two questions in one. First, I need a proof or disproof of the switchboard/diner model conjectured below. Second, I would like to know of other models.

The unsigned, Chebyshev, or probabilist’s, Hermite polynomials $H_n(x)$ (OEIS A099174, Wikipedia, MathWorld, see bottom for the first few) have the exponential generating function

$e^{t^2/2} \; e^{tx} = e^{th.} \; e^{tx} = e^{t(h.+x)} = e^{tH.(x)} = \sum_{n \geq 0} H_n(x) \; \frac{t^n}{n!} $

where $e^{t^2/2} = e^{h.t}$ with $h_n$ the aerated, odd double factorials OEIS A001147

and

$H_n(x) = (h.+x)^n = \sum_{k=0}^n \; \binom{n}{k} \; h_{k} \; x^{n-k}.$
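
(A short SymPy sketch of this umbral formula, my own throwaway code, which reproduces the polynomials listed at the bottom and the values $H_n(2)$ from A005425:)

    from sympy import symbols, binomial, factorial2, expand

    x = symbols('x')

    def h(k):
        """Aerated odd double factorials h_k = 1, 0, 1, 0, 3, 0, 15, ... (A001147 aerated)."""
        return 0 if k % 2 else (factorial2(k - 1) if k else 1)

    def H(n, var=x):
        """Probabilist's Hermite polynomial via H_n(x) = sum_k binom(n,k) h_k x^(n-k)."""
        return expand(sum(binomial(n, k) * h(k) * var ** (n - k) for k in range(n + 1)))

    for n in range(7):
        print(H(n))                                  # 1, x, x**2 + 1, x**3 + 3*x, ...
    print([H(n).subs(x, 2) for n in range(8)])       # [1, 2, 5, 14, 43, 142, 499, 1850]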

They are classic Sheffer Appell polynomials (two other Appell sequences are the Bernoulli polynomials and the fundamental powers $p_n(x) = x^n$) with the raising op $R_H = x + d/dx = x + D$ and, as for all Appell sequences, the lowering op $L_H = D$; that is,

$R_H \; H_{n}(x) = (x+D) \; H_n(x) = H_{n+1}(x)$

and

$L_H \; H_n(x) = D \; H_n(x) = D \; (h.+x)^n = n \; H_{n-1}(x).$

Consequently,

$R_H^n 1 = (x +D)^n 1 = H_n(x),$

and it turns out (see OEIS A344678) that the normal ordering of $(x+D)^n$, i.e., ordering all derivatives to the right of any $x$ via the Leibniz Lie commutator $(D,x) = Dx - xD = 1$, gives an expression equivalent to $H_n(x+y)$ with all $y$‘s to the right. For example, the noncommutative operations

$(x+D)^2 = xx + xD + Dx + DD = x^2 +xD + xD +1 + D^2 = x^2 + 2xD + 1 + D^2$

give the same result as the commutative operations

$H_2(x+y) = (h.+x+y)^2 = (x +H.(y))^2 = x^2 + 2xH_1(y) + H_2(y) = x^2 + 2xy + 1 + y^2$

or

$H_2(x+y) = (x+y)^2 + 1 = x^2 + 2xy + 1 + y^2.$

I can prove this general equivalence between the results of the commutative calculation of the $H_n(x+y)$ polynomials and the normal ordering of the $2^n$ words in the symbols $x$ and $D$ obtained by expanding $(x+D)^n$, subject to the Leibniz commutator relation $(D,x) = 1$ of the Heisenberg-Weyl algebra.

This naturally generalizes to the ladder ops (the lowering/destruction/annihilation op $L$ and the raising/creation op $R$) of any Sheffer polynomial sequence, giving the equivalence in form between the monomial rep of $H_n(x+y)$ and the normal ordering of $(L+R)^n$.

Now for the combinatorial models:

The Donaghey ref in OEIS A005425 gives $H_n(2)$ as the number of ways $n$ subscribers to a switchboard could be talking either to another subscriber, someone else on an outside line, or not at all–no conference calls allowed; i.e., a person can be talking at most with one other person. This can be viewed as a dinner scenario with each diner among $n$ diners either exchanging seats with another diner; remaining seated; or getting up, changing plans, and sitting back down–at most only one exchange per person allowed.
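
(A direct count of these switchboard configurations, organized by the number of talking pairs, reproduces A005425; a quick Python check of my own:)

    from math import factorial

    def switchboard(n):
        """Choose k disjoint talking pairs among n subscribers, then give each of
        the remaining n-2k subscribers 2 states (outside line or offline)."""
        return sum(
            factorial(n) // (factorial(k) * 2 ** k * factorial(n - 2 * k)) * 2 ** (n - 2 * k)
            for k in range(n // 2 + 1)
        )

    print([switchboard(n) for n in range(8)])  # [1, 2, 5, 14, 43, 142, 499, 1850] = A005425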

Apparently, from examining the coefficients of the distinct monomials of the first few $H_n(x+y)$, the coefficients give a finer tabulation of these exchanges. The first six are

$H_0(x+y) = 1,$

$H_1(x+y) = x + y,$

$H_2(x+y) = x^2 + 2 x y + 1 + y^2,$

$H_3(x+y) = x^3 + 3 x^2 y + 3 x + 3 x y^2 + 3 y + y^3,$

$H_4(x+y) = x^4 + 4 x^3 y + 6 x^2 + 6 x^2 y^2 + 12 x y + 4 x y^3 + 3 + 6 y^2 + y^4,$

$H_5(x+y) = x^5 + 5 x^4 y + 10 x^3 + 10 x^3 y^2 + 30 x^2 y + 10 x^2 y^3 + 15 x + 30 x y^2 + 5 x y^4 + 15 y + 10 y^3 + y^5.$

Examples of the relation to the switchboard scenario:

The two terms of $H_1(x+y)$ correspond to one subscriber being either offline or talking to a non-subscriber via an outside line.

The coefficient $12$ in $H_4(x+y)$ corresponds to the number of ways that, among $4$ subscribers, one pair is in mutual conversation, another subscriber is on an outside line, and the remaining subscriber is offline. The $3$ in the polynomial corresponds to the number of ways two pairs among the four subscribers could be talking.

The Hermite polynomials and their relationships to diverse combinatorial and analytic scenarios have been fairly thoroughly researched, and much has been written on the various families of Hermite polynomials, so although a proof likely exists in the literature, it is rather hard to find a proof of the above conjecture. Can anyone provide a proof or a link to one?



Some analytics and refs that might prove useful:

Again $H_n(2)$ is given by A005425 = 1, 2, 5, 14, 43, 142, 499, 1850 … .

The number of monomials in each polynomial is 1, 2, 4, 6, 9, 12, 16, 20, 25, …, the quarter-squares A002620 (shifted in index).

$h_n$ are the aerated odd double factorials A001147 1, 0, 1, 0, 3, 0, 15, 0, 105, …

$H_n(x+y) = (H.(x)+y)^n = (h.+x+y)^n$.

The coefficient of $x^k y^m$ in $H_n(x+y)$ is $\frac{n!}{(n-k-m)! \; k! \; m!} \; h_{n-k-m}$.
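
(This formula reproduces the coefficients quoted above, e.g. the 12 and the 3 in $H_4(x+y)$; a throwaway check:)

    from math import factorial

    def h(k):
        """Aerated odd double factorials: 1, 0, 1, 0, 3, 0, 15, ..."""
        if k % 2:
            return 0
        prod = 1
        for j in range(1, k, 2):
            prod *= j
        return prod

    def coeff(n, k, m):
        """Coefficient of x^k y^m in H_n(x+y) per the stated formula."""
        return factorial(n) // (factorial(n - k - m) * factorial(k) * factorial(m)) * h(n - k - m)

    print(coeff(4, 1, 1), coeff(4, 0, 0))  # 12 3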

The first few Hermite polynomials are (unsigned A099174)

$H_0(x) = 1,$

$H_1(x) = x,$

$H_2(x) = x^2 + 1,$

$H_3(x) = x^3 + 3x,$

$H_4(x)= x^4 + 6 x^2 + 3,$

$H_5(x) = x^5 + 10x^3 + 15x,$

$H_6(x) = x^6 + 15x^4 + 45x^2 + 15.$

“Combinatorial Models of Creation-Annihilation” by Blasiak and Flajolet.