Which tree or graph structure should I use to solve this problem?

I have the following interface. It defines a loadable resource that depends on other resources, which must be loaded first so that it can load itself:

(screenshot of the interface; the code is reproduced below)

It does its job: the Find method searches for a specific dependency so that its Load method can be called.

Now I have another problem for which I have not found an elegant solution.

Here is an annotated example of what I would like to model:


(diagram of the level/content/file/storage hierarchy described in the legend below)

Legend of the elements, starting from the top:

  • Level represents a game level; it has an arbitrary amount of content
  • Sky, Scene and Track are some of that content
    • These can in turn consist of several contents, e.g. for Scene these are Scene 1 to Scene N
  • The next elements represent the different file types that each content needs
  • The last level, Storage, is where they are loaded from

Note: The direction of the arrows indicates what an object depends on so that it can load itself

Here's the implementation I'm currently using:

using System;
using System.Collections.Generic;
using System.Linq;
using JetBrains.Annotations;
using Random = UnityEngine.Random;

namespace ZeroAG.WorkInProgress.Resources
{
    public interface IResource
    {
        /// <summary>Gets direct dependencies of this instance.</summary>
        IEnumerable<IResource> Dependencies { get; }

        /// <summary>Gets the name of this instance.</summary>
        string Name { get; }

        /// <summary>Gets any dependencies of this instance that match a predicate.</summary>
        IEnumerable<T> Find<T>([NotNull] Predicate<T> predicate) where T : IResource;

        /// <summary>Loads this instance.</summary>
        void Load<T>(IResourceProgress<T> progress) where T : IResourceProgressInfo, new();
    }

    public abstract class Resource : IResource
    {
        protected Resource([NotNull] string name, [NotNull] params IResource[] dependencies)
        {
            Name = name ?? throw new ArgumentNullException(nameof(name));
            Dependencies = dependencies ?? throw new ArgumentNullException(nameof(dependencies));
        }

        public IEnumerable<IResource> Dependencies { get; }

        public string Name { get; }

        public IEnumerable<T> Find<T>(Predicate<T> predicate) where T : IResource
        {
            if (predicate == null)
                throw new ArgumentNullException(nameof(predicate));

            // breadth-first traversal of the dependency tree
            var queue = new Queue<IResource>(new[] { this });

            while (queue.Any())
            {
                var dequeue = queue.Dequeue();

                foreach (var dependency in dequeue.Dependencies)
                    queue.Enqueue(dependency);

                if (dequeue is T item && predicate(item))
                    yield return item;
            }
        }

        public virtual void Load<T>(IResourceProgress<T> progress) where T : IResourceProgressInfo, new()
        {
            // just a demo for all derived types

            var count = Random.Range(3, 5);

            for (var i = 0; i < count; i++)
            {
                var value = new T
                {
                    Sender = this,
                    Percentage = 1.0f / (count - 1) * i,
                    Message = $"{GetType().Name}: {i + 1} of {count}"
                };

                progress?.Report(value); // hand the progress info to the caller
            }
        }
    }
}

While IResource.Find allows me to find a child dependency, it does not allow me to ask for an IResource at the same hierarchical level or above. At some point I have to be able to ask questions like the one shown in red, i.e. ask for anything from anywhere.

Well, while it is very tempting, an IResource.Parent property does not really make sense, as there could be multiple parents, as for the Scene Atlas.

It seems that I'm drifting away from a typical tree structure toward something else, but I cannot identify that structure.


What am I looking for, and/or how could IResource be refactored to solve this problem?
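For what it's worth, a structure in which a node like Scene Atlas can have several parents is no longer a tree but a directed acyclic graph (DAG). A minimal Python sketch (all names are illustrative, not taken from the original C#) of nodes that keep back-references, so a search can start at any node and walk the graph in both directions:

```python
from collections import deque

class ResourceNode:
    """A node in a dependency DAG: links go both ways so that a search
    can start at any node and reach ancestors as well as descendants."""
    def __init__(self, name, dependencies=()):
        self.name = name
        self.dependencies = list(dependencies)  # what this node needs
        self.dependents = []                    # back-references ("parents")
        for dep in self.dependencies:
            dep.dependents.append(self)

    def find(self, predicate):
        """Breadth-first search over the connected graph, following
        edges in both directions."""
        seen, queue = {self}, deque([self])
        while queue:
            node = queue.popleft()
            if predicate(node):
                yield node
            for neighbor in node.dependencies + node.dependents:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)

storage = ResourceNode("Storage")
atlas = ResourceNode("Scene Atlas", [storage])
scene1 = ResourceNode("Scene 1", [atlas])
scene2 = ResourceNode("Scene N", [atlas])  # Scene Atlas now has two parents
level = ResourceNode("Level", [scene1, scene2])

# starting from a lower-level node, we can still reach the root
found = list(atlas.find(lambda n: n.name == "Level"))
```

The `seen` set is the one thing a tree walk like the current `Find` never needed: once edges are followed in both directions, the same node can be reached more than once.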

How do you solve this partial differential equation for $f$?

I want Mathematica to solve for the function $f$.

$f$ satisfies the following conditions:

$\frac{\partial}{\partial x} f(x, y) = y$

$\frac{\partial}{\partial y} f(x, y) = x$

$f(0, 0) = 0$

It seems obvious that $f(x, y) = xy$.

Nevertheless, I cannot persuade Mathematica to return this result.

Here is my try. Mathematica just returns the input unevaluated. What am I missing?

DSolve[{
    x == D[f[x, y], y],
    y == D[f[x, y], x],
    f[0, 0] == 0
  },
  f[x, y],
  {x, y}]
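As a sanity check (done here in Python/sympy rather than Mathematica): integrating $f_x = y$ in $x$ gives $f = xy + g(y)$; imposing $f_y = x$ forces $g' = 0$, and $f(0, 0) = 0$ kills the constant, so $f(x, y) = xy$ does satisfy all three conditions:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * y  # the candidate solution

# residuals of the two PDE conditions and the boundary condition
res_x = sp.diff(f, x) - y          # should be 0
res_y = sp.diff(f, y) - x          # should be 0
res_0 = f.subs({x: 0, y: 0})       # should be 0
print(res_x, res_y, res_0)
```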

Why can I not solve the limit by factoring out the variable?

This is from Spivak's Calculus, Ch. 22, Q2(vi): evaluate $\lim_{n \to \infty} n c^n$, $|c| < 1$.

$$\lim_{x \to \infty} x c^x = \lim_{x \to \infty} e^{\log x} e^{x \log c} = \lim_{x \to \infty} e^{\log x + x \log c}$$

… then …

$$\lim_{x \to \infty} \log x + x \log c = \lim_{x \to \infty} x \left( \frac{\log x}{x} + \log c \right) = \lim_{x \to \infty} x \log c = -\infty$$

So $\lim_{x \to \infty} x c^x = 0$; in particular, $\lim_{n \to \infty} n c^n = 0$. Note that he (I believe) used the fact that the limit of a product is the product of the limits in the last step (to evaluate $\frac{\log x}{x} \to 0$ independently and so dispose of one of the two log terms). I thought this only works if each limit in the product is finite, but I suppose it is usable here?

Now consider this limit from Q2(ii):

$\lim_{n \to \infty} n - \sqrt{n + a} \sqrt{n + b}$. The correct evaluation of this limit requires multiplying by $\frac{n + \sqrt{n + a} \sqrt{n + b}}{n + \sqrt{n + a} \sqrt{n + b}}$, which ultimately leads to a limit of $-\frac{a + b}{2}$.

But why can I not do this (it leads to the wrong answer):

$$\lim_{n \to \infty} n - \sqrt{n + a} \sqrt{n + b} = \lim_{n \to \infty} n - \sqrt{n^2 + n(a + b) + ab} = \lim_{n \to \infty} n \left( 1 - \sqrt{1 + \frac{a + b}{n} + \frac{ab}{n^2}} \right) = \lim_{n \to \infty} n (1 - \sqrt{1}) = 0$$

This seems to be exactly in line with the product-of-limits assumption used in the first problem, and I have solved other limits very similarly. Why does it fail here?
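The failure can also be seen numerically. With arbitrary example values $a = 1$, $b = 3$, the expression settles at $-\frac{a + b}{2} = -2$ rather than $0$; the $\frac{1}{n}$ terms inside the root cannot be discarded, because they are multiplied by the outer factor $n$:

```python
import math

a, b = 1.0, 3.0   # arbitrary example values
for n in (10, 1000, 10**6):
    value = n - math.sqrt((n + a) * (n + b))
    print(n, value)   # tends to -(a + b) / 2 = -2, not 0
```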

How do matrices magically solve simultaneous equations?

I know how to solve simultaneous equations with matrices.
But I do not understand how the answer suddenly comes out.

Something like this, right? (see attached picture)

What I do not understand:

1. What is the logic behind matrix multiplication?

I mean, to me matrix multiplication seems random and arbitrary.

(image: matrix multiplication)

Why not multiply column by column, or row by row?
Is that just part of the definition of a matrix or something?

The idea of multiplication for natural numbers is how many times you add something to itself:
3×3 = 3 + 3 + 3, or 3 groups of 3. What is matrix multiplication?

2. Why can matrices be treated like algebraic variables?

After rewriting the system in matrix form, the matrices can be used as if they were variables in ordinary algebra (I think). How does that work?

AX = C,

X = A ^ (-1) C,
and you just find X. It looks like simple algebra.
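The "simple algebra" view can be made concrete. A small Python/numpy example (the system here, $2x + y = 5$, $x + 3y = 10$, is made up for illustration):

```python
import numpy as np

# the system  2x + y = 5,  x + 3y = 10  written as AX = C
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
C = np.array([5.0, 10.0])

X_via_inverse = np.linalg.inv(A) @ C   # the literal X = A^(-1) C
X_via_solve = np.linalg.solve(A, C)    # same answer, without forming A^(-1)
print(X_via_inverse, X_via_solve)      # both give x = 1, y = 3
```

The only thing that makes this "work" is that matrix multiplication was defined precisely so that $AX$ reproduces the left-hand sides of the equations; from there on it really is ordinary algebra, with $A^{-1}$ playing the role of division.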

3. How do Gauss-Jordan elimination and Cramer's rule work?

I mean, if I follow the steps, I somehow get the inverse matrix. And when I check it, it does seem to be the inverse. How does this work?
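On the Gauss-Jordan part: every row operation is reversible, and applying to the identity the very same operations that turn $A$ into $I$ turns the identity into $A^{-1}$. A bare-bones Python sketch of that mechanism (no pivoting, so purely illustrative, not production code):

```python
def gauss_jordan_inverse(A):
    """Invert A by reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = len(A)
    # build the augmented matrix [A | I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # scale the pivot row so the pivot becomes 1
        pivot = M[col][col]
        M[col] = [v / pivot for v in M[col]]
        # eliminate this column from every other row
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [v - factor * p for v, p in zip(M[r], M[col])]
    # the right half of the augmented matrix is now A^-1
    return [row[n:] for row in M]

A = [[2.0, 1.0], [1.0, 3.0]]
inv = gauss_jordan_inverse(A)
print(inv)   # [[0.6, -0.2], [-0.2, 0.4]]
```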

I'm sorry if this does not make sense. I tried asking my friends, and they do not know what I'm talking about.

Simple way to solve this integral

I often have to solve a certain kind of integral, namely:

$$\int x \sin(n_1 x) \sin(n_2 x)\, dx$$

where $n_1$ and $n_2$ are integers.

The way I solve this integral is to first break $\sin(n_1 x) \sin(n_2 x)$ apart with the product-to-sum formula ($\cos C - \cos D$) and then integrate by parts.

Is there an easy way to solve this integral?

maximum – Solve a profit maximization with the Cobb-Douglas production function

I am considering a typical profit maximization problem:

$$\max_{K, L} \; P Y - r K - w L$$

where $r$ is the interest rate and $w$ is the wage rate.

The production function can be Cobb-Douglas,

$Y = A K^{\alpha} L^{\beta}$

where $0 \leq \alpha \leq 1$ and $0 \leq \beta \leq 1$. Or it can be a CES production function.

I would like to find the solution for $K$ and $L$ for either production function.

For example, my code for Cobb-Douglas is:

Y = A k^a L^b;
PROF = P Y - r k - w L;
z1 = D[PROF, k];
z2 = D[PROF, L];
Simplify[Solve[{z1 == 0, z2 == 0}, {k, L}], {0 <= a <= 1 && 0 <= b <= 1 && 
A > 0 && k > 0 && L > 0 && P > 0 && r > 0 && w > 0 && PROF > 0}]

And I get a very strange solution like this:
(screenshot of the output omitted)

Can someone help me figure out what went wrong? Thanks a lot!
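As a cross-check outside Mathematica: the first-order conditions have a clean closed form when $\alpha + \beta < 1$, while fully symbolic exponents tend to produce messy conditional output (which may be what the screenshot shows). A Python/sympy sketch with arbitrary numeric parameters satisfying $\alpha + \beta < 1$:

```python
import sympy as sp

# arbitrary example parameters (all assumed for illustration), with a + b < 1
P, A = sp.Integer(2), sp.Integer(1)
r, w = sp.Rational(1, 2), sp.Rational(1, 3)
a, b = sp.Rational(1, 3), sp.Rational(1, 3)

K, L = sp.symbols('K L', positive=True)
profit = P * A * K**a * L**b - r * K - w * L

# dividing the two first-order conditions gives L/K = (b r)/(a w);
# substituting back into the K condition yields the closed form below
ratio = (b * r) / (a * w)
K_star = (P * A * a * ratio**b / r) ** (1 / (1 - a - b))
L_star = ratio * K_star

# both first-order conditions vanish at (K*, L*)
res_K = sp.simplify(sp.diff(profit, K).subs({K: K_star, L: L_star}))
res_L = sp.simplify(sp.diff(profit, L).subs({K: K_star, L: L_star}))
print(K_star, L_star, res_K, res_L)   # 32/9  16/3  0  0
```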

Optimization – Does a decision variable that is just an upper bound on other decision variables make an integer program harder to solve?

I have several decision variables $max_s \in \mathbb{N}$, $s \in \mathcal{S}$, that are just upper bounds on other decision variables: $$x_{i,s} \leq max_s \quad \forall s \in \mathcal{S},\ \forall i \in \mathcal{I}.$$

I use these variables only to make it easy to read off the output caps and to simplify the explanation of the model, because these caps are very important in the problem context; I could instead compute the maximum values from the solution as $\max_i x_{i,s}$. I think the solver (with a standard branch-and-bound algorithm) effectively does this anyway, but I am not sure.

Do these upper-bound variables $max_s$, $s \in \mathcal{S}$, make the problem harder to solve than simply taking the maximum over the solution afterwards?
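One way to probe this is to solve a small instance and compare. A Python/scipy sketch (the toy data is entirely made up): a makespan-style model with one bound variable $m$, two integer variables with $x_1 + x_2 = 7$ and $x_i \leq m$, minimizing $m$. When the bound variable appears in the objective like this it has to stay a variable; when it does not, it can be dropped from the model and recovered afterwards as $\max_i x_{i,s}$:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# variables: x1, x2, m   (all integer, nonnegative)
c = np.array([0.0, 0.0, 1.0])                    # minimize m only
constraints = [
    LinearConstraint([[1, 1, 0]], 7, 7),         # x1 + x2 = 7
    LinearConstraint([[1, 0, -1]], -np.inf, 0),  # x1 <= m
    LinearConstraint([[0, 1, -1]], -np.inf, 0),  # x2 <= m
]
res = milp(c=c, constraints=constraints,
           integrality=np.ones(3), bounds=Bounds(0, np.inf))
x1, x2, m = res.x
print(x1, x2, m)   # at the optimum, m equals max(x1, x2)
```

The bound rows $x_{i,s} \leq max_s$ are ordinary linear constraints, so they enlarge the formulation but do not change its character; whether they hurt in practice is best checked by timing both variants of the real model.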