mathematical optimization – Iterating Minimize over a list of functions

I have a function $ f $ defined on $ (-1, 1) $.
The following definition is sufficient for a minimal example:

f[z_] := z^2 - 1

I need to find a list of points $ z_0, z_1, \dots $ such that $ f(z_0) = f(z_1) = \dots $, i.e. points whose images are "on the same level".
I proceeded by first finding the minimum,

  min = First@Minimize[f[z], {z}]

and the value of $ z $ at which it is attained,

argmin = Values@Last@Minimize[f[z], {z}]

I also built a list of levels with

  rang = Subdivide[a, 0, 10]

covering the range from the minimum up to the predefined value.

Now, for each element $ rang_j $ of this list, I want to find points $ z_0, z_1 $ such that $ f(z_0) = f(z_1) = rang_j $.

I couldn't think of a better plan than defining a list of functions $ fun_j = (f(z) + rang_j)^2 $. By shifting the original function and squaring it, I make sure the functions $ fun_j $, one for each item in the list $ rang $, are positive everywhere except at their roots.

I then wanted to loop over the list of functions and perform a constrained minimization via commands like the following (the argument $ f_j $ is only used to clarify my question; I understand that the actual syntax will be different, which is exactly what the question is about):

   Minimize[{f_j, z < argmin}, {z}]
   Minimize[{f_j, z > argmin}, {z}]

That is, two minimizations are performed, one to the left and one to the right of $ argmin $. I know for mathematical reasons that the two solutions are unique.

I create my list of functions as

 f1[z_, c_] := f[z] + c

and then build the whole list of shifted functions with

 f1[z, rang]

But I have problems with iterating the minimization over this list. Any suggestion would be helpful.

If I try

   Minimize[{f1[z, rang], z > b}, z]

I get an error message, because the objective passed to Minimize is expected to be a scalar function, not a list.
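To make the computation I am after concrete, here is a rough sketch of the same idea outside Mathematica, in Python/SciPy (purely illustrative: the names `levels` and `pairs` are mine, and I write $ (f(z) - lev)^2 $ so that the shifted, squared function vanishes exactly where $ f(z) = lev $):

    import numpy as np
    from scipy.optimize import minimize_scalar

    f = lambda z: z**2 - 1                 # the example function on (-1, 1)
    argmin, fmin = 0.0, -1.0               # its minimizer and minimum value
    levels = np.linspace(fmin, 0.0, 11)    # the analogue of the list rang

    pairs = []
    for lev in levels:
        g = lambda z: (f(z) - lev) ** 2    # zero exactly where f(z) == lev
        left = minimize_scalar(g, bounds=(-1.0, argmin), method='bounded').x
        right = minimize_scalar(g, bounds=(argmin, 1.0), method='bounded').x
        pairs.append((left, right))        # the two points on either side of argmin

What I am looking for is the Mathematica analogue of this loop.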
I would also like to hear about better approaches, both in general and specifically in Mathematica.
cheers

Optimization – How can I vectorize and optimize this C function that generates random numbers?

I have programmed a function that generates random numbers.
However, I cannot get my compiler to vectorize this function.
How can I transform this function so that my compiler can vectorize it and shorten the execution time of the program?

    unsigned int seed;
    unsigned int temp;
    #define val13 13
    unsigned int var1 = 214013;   /* LCG multiplier */
    unsigned int var2 = 2531011;  /* LCG increment */
    /* Linear congruential generator: seed = var1*seed + var2 (mod 2^32);
       the upper bits (seed >> 13) are returned as the random number. */
    inline int myRandom() {
      temp = var1*seed;
      seed = temp + var2;
      return (seed>>val13);
    }
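This is not C and not a compiler-specific answer, but the following sketch (Python/NumPy; `lcg_block` and `lanes` are names I made up) illustrates the transformation that typically makes such a generator vectorizable: split the recurrence into several independent sub-streams and advance them all in lockstep, so each lane depends only on its own previous state.

    import numpy as np

    A, C, MASK = 214013, 2531011, 0xFFFFFFFF   # constants from the question, 32-bit wrap

    def lcg_block(seed, n, lanes=8):
        """Return the first n outputs of `seed = A*seed + C; out = seed >> 13`,
        computed `lanes` values at a time."""
        # Step the scalar recurrence once per lane to obtain the lane start states.
        states, s = [], seed
        for _ in range(lanes):
            s = (A * s + C) & MASK
            states.append(s)
        states = np.array(states, dtype=np.uint64)

        # Jump constants: s(k+lanes) = A^lanes * s(k) + C*(A^(lanes-1)+...+A+1) (mod 2^32),
        # so every lane advances by `lanes` steps with a single multiply-add.
        A_k = np.uint64(pow(A, lanes, 2**32))
        C_k = np.uint64(C * sum(pow(A, i, 2**32) for i in range(lanes)) % 2**32)
        mask, shift = np.uint64(MASK), np.uint64(13)

        steps = -(-n // lanes)                     # ceil(n / lanes)
        out = np.empty((steps, lanes), dtype=np.uint64)
        for t in range(steps):
            out[t] = states >> shift               # same `>> 13` output as myRandom()
            states = (A_k * states + C_k) & mask   # all lanes advance together
        return out.reshape(-1)[:n]

The same idea carries over to C: keep an array of `lanes` seeds, update all of them inside one inner loop with no cross-iteration dependency, and that loop becomes a candidate for vectorization.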

Reference request – Computational complexity of optimization algorithms via the theory of randomized algorithms

A fundamental and undoubtedly much-studied problem is not only to determine whether an optimization algorithm converges to its optimum, but also how quickly it converges (see a discussion of how this can be measured here: https://mathoverflow.net/a/90920/47228). I am interested in whether techniques from the theory of randomized algorithms have been used to investigate this question (either in a very concrete or a very abstract setting). The type of question I think such an approach could answer would be the following:

If you draw a starting point uniformly at random from a given set of potential starting points, the algorithm converges with probability $ 1 - \epsilon $ in fewer than $ N $ iterations.
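One way to make this precise (purely for illustration; here $ x_k $ denotes the $ k $-th iterate started from $ x_0 $, $ x^\ast $ the optimum, $ S $ the set of potential starting points, and $ \delta $ a convergence tolerance):

$$ \Pr_{x_0 \sim \mathrm{Unif}(S)} \bigl( \exists\, k < N : \| x_k - x^\ast \| \le \delta \bigr) \;\ge\; 1 - \epsilon . $$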

As you can see, the question is not particularly specific, but this is deliberate: I am interested in ideas / references at any level of generality and for any type of optimization technique / algorithm.

Thank you in advance. 🙂

Global Optimization – Optimizing an instructor scheduling task

I am trying to solve an interesting math problem.

Let us imagine that we have a number of instructors, each available during different time spans of the day. We need to show students time slots that they can book within an instructor's available time. Within these spans we have to arrange the appointments so that the number of "idle" slots is minimized. Once an appointment has been made, we can no longer move it, and it must be scheduled without renegotiating with other students. Appointments have variable lengths that are not known in advance.

Any ideas guys?

oc.optimization and control – Advantages and disadvantages of using integer programming alone or combining integer and global optimization?

First, I'm not sure whether this is the right forum for this question. I have searched for answers for a long time and also asked the engineering professors at my university, but I can't seem to get a mathematical answer.

I'm trying to solve a complex optimization problem that involves network and shortest-path optimization while also solving for nonlinear pressures, flows, and diameters. I can use one of the following methods:

  1. The first method is to use mixed-integer linear programming (MILP)
    only, and to linearize the pressure, flow and diameter with a piecewise
    linear approximation. The linearization is necessary because MILP
    handles only linear equations while searching for an optimal solution.

  2. The second method is to use a combined local and global optimization
    approach. Local optimization would first use MILP to find a solution to
    sub-problems where continuous optimization would be too expensive to use
    (e.g. allocation of production). Then a global optimization method, a
    derivative-free genetic algorithm, would perturb the system (e.g. the
    network path) to search for a better global solution. Whenever the
    system is perturbed, the local MILP optimization is repeated (a rough
    sketch of this loop is given right after this list).
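Purely as a sketch of what I mean by the second method (none of these helper names come from any library; `perturb` and `solve_milp` are placeholders for the genetic operators and the MILP sub-problem solver):

    import random

    def combined_search(initial_network, n_generations=50, population=20):
        def perturb(network):       # placeholder: GA mutation/crossover of the network path
            return network
        def solve_milp(network):    # placeholder: piecewise-linear MILP sub-problem solve
            return {"network": network, "cost": random.random()}

        best = solve_milp(initial_network)
        candidates = [initial_network] * population
        for _ in range(n_generations):
            candidates = [perturb(c) for c in candidates]   # global, derivative-free step
            solved = [solve_milp(c) for c in candidates]    # local MILP step, repeated
            solved.sort(key=lambda s: s["cost"])
            if solved[0]["cost"] < best["cost"]:
                best = solved[0]
            survivors = [s["network"] for s in solved[: population // 2]]
            candidates = survivors * 2                      # crude selection / refill
        return best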

I'm after recommendations, heads-ups, and things to look out for when implementing either of these methods. Is the second method mathematically acceptable compared to the first method?

Optimization – How can this line-breaking algorithm treat spaces with a width other than 1.0?

The divide-and-conquer line-breaking algorithm described here is given below in both Python and Dart (a language similar to Java / C#).

Line breaking is also referred to as "word wrapping" or "paragraph formation", and this algorithm is used to achieve minimum raggedness.

This algorithm works, but it considers every space to have a width of exactly 1.0.

My question:

How can I change this algorithm to ignore spaces? In other words, to treat spaces as having width 0.0? (It would also work for me if I could define any desired width for the spaces, including 0.0; see the sketch after the two implementations below.)

Python implementation:

def divide(text, width):
    words = text.split()
    count = len(words)
    offsets = [0]
    for w in words:
        offsets.append(offsets[-1] + len(w))

    minima = [0] + [10 ** 20] * count
    breaks = [0] * (count + 1)

    def cost(i, j):
        w = offsets[j] - offsets[i] + j - i - 1
        if w > width:
            return 10 ** 10
        return minima[i] + (width - w) ** 2

    def search(i0, j0, i1, j1):
        stack = [(i0, j0, i1, j1)]
        while stack:
            i0, j0, i1, j1 = stack.pop()
            if j0 < j1:
                j = (j0 + j1) // 2
                for i in range(i0, i1):
                    c = cost(i, j)
                    if c <= minima[j]:
                        minima[j] = c
                        breaks[j] = i
                stack.append((breaks[j], j + 1, i1, j1))
                stack.append((i0, j0, breaks[j] + 1, j))

    n = count + 1
    i = 0
    offset = 0
    while True:
        r = min(n, 2 ** (i + 1))
        edge = 2 ** i + offset
        search(0 + offset, edge, edge, r + offset)
        x = minima[r - 1 + offset]
        for j in range(2 ** i, r - 1):
            y = cost(j + offset, r - 1 + offset)
            if y <= x:
                n -= j
                i = 0
                offset += j
                break
        else:
            if r == n:
                break
            i = i + 1

    lines = []
    j = count
    while j > 0:
        i = breaks[j]
        lines.append(' '.join(words[i:j]))
        j = i
    lines.reverse()
    return lines

Dart implementation:

import 'dart:collection';
import 'dart:math';

class MinimumRaggedness {

  /// Given some [boxWidths], break it into the smallest possible number
  /// of lines such that each line has width not larger than [maxWidth].
  /// It also minimizes the difference between the widths of the lines,
  /// achieving a "balanced" result.
  /// Spacing between boxes is 1.0.
  static List<List<int>> divide(List<num> boxWidths, num maxWidth) {

    int count = boxWidths.length;
    List<num> offsets = [0];

    for (num boxWidth in boxWidths) {
      offsets.add(offsets.last + min(boxWidth, maxWidth));
    }

    List<num> minimum = [0]..addAll(List.filled(count, 9223372036854775807));
    List<int> breaks = List.filled(count + 1, 0);

    num cost(int i, int j) {
      num width = offsets[j] - offsets[i] + j - i - 1;
      if (width > maxWidth)
        return 9223372036854775806;
      else
        return minimum[i] + pow(maxWidth - width, 2);
    }

    void search(int i0, int j0, int i1, int j1) {
      Queue<List<int>> stack = Queue()..add([i0, j0, i1, j1]);

      while (stack.isNotEmpty) {
        List<int> info = stack.removeLast();
        i0 = info[0];
        j0 = info[1];
        i1 = info[2];
        j1 = info[3];

        if (j0 < j1) {
          int j = (j0 + j1) ~/ 2;

          for (int i = i0; i < i1; i++) {
            num c = cost(i, j);
            if (c <= minimum[j]) {
              minimum[j] = c;
              breaks[j] = i;
            }
          }

          stack.add([breaks[j], j + 1, i1, j1]);
          stack.add([i0, j0, breaks[j] + 1, j]);
        }
      }
    }

    int n = count + 1;
    int i = 0;
    int offset = 0;

    while (true) {
      int r = min(n, pow(2, i + 1).toInt());
      int edge = pow(2, i).toInt() + offset;
      search(0 + offset, edge, edge, r + offset);
      num x = minimum[r - 1 + offset];

      bool flag = true;
      for (int j = pow(2, i).toInt(); j < r - 1; j++) {
        num y = cost(j + offset, r - 1 + offset);
        if (y <= x) {
          n -= j;
          i = 0;
          offset += j;
          flag = false;
          break;
        }
      }

      if (flag) {
        if (r == n) break;
        i = i + 1;
      }
    }

    int j = count;

    List<List<int>> indexes = [];

    while (j > 0) {
      int i = breaks[j];
      indexes.add(List.generate(j - i, (index) => index + i));
      j = i;
    }

    return indexes.reversed.toList();
  }
}
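For what it is worth, here is a minimal sketch of the change I have in mind, mirroring the Python version above (`make_cost` and `space_width` are names I introduce only for this sketch). The only place that assumes width-1.0 spaces is the `+ j - i - 1` term inside `cost`, since `offsets` accumulates only the word/box widths:

    def make_cost(offsets, minima, width, space_width=0.0):
        """Build a cost() that charges space_width per space instead of 1.0."""
        def cost(i, j):
            # width of the words i..j-1 plus (j - i - 1) spaces of the chosen width
            w = offsets[j] - offsets[i] + (j - i - 1) * space_width
            if w > width:
                return 10 ** 10
            return minima[i] + (width - w) ** 2
        return cost

Inside divide() the inner cost would then be built as cost = make_cost(offsets, minima, width, 0.0) and nothing else needs to change; space_width = 0.0 makes spaces free, and any other value gives each space that width. The Dart version would change the same `j - i - 1` term in its cost function.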