Real analysis – box dimension of the graph of an increasing function

This question on the Hausdorff dimension of the graph of an increasing function shows the following:

Let $f$ be a continuous, strictly increasing function from $[0,1]$ to $[0,1]$
with $f(0) = 0$, $f(1) = 1$. Then $\dim_H G = 1$, where $G$ is the graph of $f$.

I have in mind the casino function, which is described in Massopust's Interpolation and Approximation with Splines and Fractals as follows:

Let $X = [0,1] \times \mathbb{R}$, $N = 4$, and let $\{(x_i, y_i) : 0 = x_0 < \ldots < x_N = 1,\ 0 = y_0 < \ldots < y_N = 1\}$ be given. Define an IFS by
$$
f_i(x, y) =
\begin{pmatrix}
x_i - x_{i-1} & 0 \\
0 & y_i - y_{i-1}
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
+
\begin{pmatrix}
x_{i-1} \\
y_{i-1}
\end{pmatrix}
$$
for $i = 1, \ldots, N$.

The associated Read–Bajraktarević (RB) operator $T$ is contractive, and its unique fixed point is called a casino function $c : [0,1] \to [0,1]$. These functions are monotonically increasing, and therefore $\dim_H \operatorname{Graph}(c) = \dim_B \operatorname{Graph}(c) = 1$.
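To get a feel for the construction, here is a minimal sketch that iterates this IFS deterministically; the knots below are hypothetical choices of mine, since the construction only requires $0 = x_0 < \ldots < x_N = 1$ and $0 = y_0 < \ldots < y_N = 1$:

```python
import numpy as np

# Hypothetical interpolation data with N = 4 (any increasing knots work).
xs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
ys = np.array([0.0, 0.1, 0.3, 0.6, 1.0])

def apply_ifs(points):
    """One deterministic IFS step: apply every map f_i to all points."""
    images = []
    for i in range(1, len(xs)):
        scale = np.array([xs[i] - xs[i - 1], ys[i] - ys[i - 1]])
        shift = np.array([xs[i - 1], ys[i - 1]])
        images.append(points * scale + shift)
    return np.vstack(images)

# Start from the graph of the identity and iterate toward the attractor,
# which is the graph of the increasing fixed-point function c.
pts = np.array([[0.0, 0.0], [1.0, 1.0]])
for _ in range(8):
    pts = apply_ifs(pts)
```

Plotting `pts` (sorted by the first coordinate) shows the increasing interpolant through the $(x_i, y_i)$ that the fixed point of $T$ describes.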

I was wondering how to show that $\dim_B \operatorname{Graph}(c) = 1$, and whether there is a general argument:

Let $f$ be a continuous, strictly increasing function from $[0,1]$ to $[0,1]$
with $f(0) = 0$, $f(1) = 1$. Then $\dim_B G = 1$, where $G$ is the graph of $f$.

I cannot find an argument showing $\dim_B G \le 1$.
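For what it's worth, the kind of counting estimate I would expect such an argument to rest on (only a sketch, using nothing but monotonicity) bounds the number of $\delta$-boxes column by column. Taking $\delta = 1/n$: each strip of width $1/n$ needs at most $n\,(f(\tfrac{j}{n}) - f(\tfrac{j-1}{n})) + 2$ boxes, and the increments telescope to $f(1) - f(0) = 1$, so

$$
N_{1/n}(G) \;\le\; \sum_{j=1}^{n} \Bigl( n \bigl( f(\tfrac{j}{n}) - f(\tfrac{j-1}{n}) \bigr) + 2 \Bigr) \;=\; 3n,
\qquad\text{hence}\qquad
\dim_B G \;\le\; \lim_{n\to\infty} \frac{\log 3n}{\log n} \;=\; 1.
$$

(The lower bound $\dim_B G \ge 1$ is immediate, since $G$ projects onto $[0,1]$.)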

Algorithms – optimality of recursive best-first search in graph search

As mentioned in the answer to this question, the RBFS algorithm expands nodes in the same order as A*. I would expect RBFS, like A*, not to be optimal in graph search when the heuristic is admissible but inconsistent. However, the answer says that RBFS is optimal in graph search even when the heuristic is merely admissible. Why does this difference arise?
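To make the A* half of the comparison concrete, here is a minimal sketch (the graph, costs, and heuristic values are a hypothetical example of mine, not taken from the linked answer) of an admissible but inconsistent heuristic under which A* graph search without reopening closed nodes returns a suboptimal solution:

```python
import heapq

# Hypothetical 4-node example (my own, not from the linked answer):
# the optimal path S -> B -> A -> G costs 5, the direct S -> A -> G costs 6.
edges = {
    "S": {"A": 3, "B": 1},
    "B": {"A": 1},
    "A": {"G": 3},
    "G": {},
}
# Admissible (h <= true cost-to-go everywhere) but inconsistent:
# h(B) = 3 > c(B, A) + h(A) = 1 + 0.
h = {"S": 0, "A": 0, "B": 3, "G": 0}

def astar_no_reopen(start, goal):
    """A* graph search that never reopens closed nodes."""
    frontier = [(h[start], 0, start)]  # (f, g, node)
    closed = set()
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in edges[node].items():
            if nxt not in closed:  # a cheaper route to a closed node is lost here
                heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt))

print(astar_no_reopen("S", "G"))  # prints 6, although the optimal cost is 5
```

With this heuristic, $A$ is closed via the expensive edge $S \to A$ before the cheaper route through $B$ is discovered, so plain A* graph search is suboptimal here; the puzzle is why RBFS, which reportedly expands nodes in the same order, would not suffer in the same way.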

Combinatorics – consequences for a graph with no odd cycle in itself or in its complement

I was given the following statement:

"If a graphic $ G $ contains no odd circuits $ C_ {2k + 1} $ to the $ k geq $ 2or his supplement, then we have $ omega (G & # 39;) alpha (G & # 39;) geq | V (G & # 39;) | $ for every induced subgraph $ G & # 39; $ from $ G $ "

where $\omega$ is the clique number, $\alpha$ is the stability number, and $|V|$ is the number of vertices of the graph. This in effect shows that proving the strong perfect graph theorem already proves the perfect graph theorem.

I'm trying to prove the claim, and I can show that if there is a cycle $C_{2k+1}$ with $k \geq 2$, then $\alpha(C_{2k+1}) = k$ and $\omega(C_{2k+1}) = 2$, so the product gives $2k < 2k+1$ (and similarly for the complement); a brute-force check of these values appears after the list below. My questions, however, are:

  1. How can I show that the condition holds for an induced subgraph that is not a $C_{2k+1}$?
  2. Does it make a difference if I use circuits instead of cycles?
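Here is the brute-force check mentioned above (a small sketch of mine; `alpha` and `omega` are found by exhaustive search, which is fine at these sizes):

```python
from itertools import combinations

def cycle_edges(n):
    """Edge set of the cycle C_n on vertices 0, ..., n-1."""
    return {frozenset((i, (i + 1) % n)) for i in range(n)}

def is_clique(vs, edges):
    return all(frozenset(p) in edges for p in combinations(vs, 2))

def is_independent(vs, edges):
    return all(frozenset(p) not in edges for p in combinations(vs, 2))

def largest(test, n, edges):
    """Size of the largest vertex set satisfying `test`, by brute force."""
    for size in range(n, 0, -1):
        if any(test(c, edges) for c in combinations(range(n), size)):
            return size

for k in (2, 3, 4):
    n = 2 * k + 1
    e = cycle_edges(n)
    a = largest(is_independent, n, e)  # stability number alpha
    w = largest(is_clique, n, e)       # clique number omega
    print(f"C_{n}: alpha = {a}, omega = {w}, product = {a * w} < {n}")
```

The output matches $\alpha(C_{2k+1}) = k$, $\omega(C_{2k+1}) = 2$, and $\alpha \cdot \omega = 2k < 2k+1$.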

Graph theory – the dual of a geometric dual graph

I wondered if anyone could show me how to draw the dual of a geometric dual graph. I know how to draw the first dual graph $G^*$; however, I am not sure how to take the second dual $G^{**}$.

Is it enough to prove that each face of $G^*$ contains no more than one vertex of $G$, so that $n^{**} = f^* = n$, where $n^{**}$ is the number of vertices of $G^{**}$?
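For reference, the counts I would sanity-check this against are the standard duality relations for a connected plane graph (writing $n, m, f$ for the numbers of vertices, edges, and faces of $G$):

$$
n^{*} = f, \qquad m^{*} = m, \qquad f^{*} = 2 - n^{*} + m^{*} = 2 - f + m = n,
$$

where the last step uses Euler's formula $n - m + f = 2$; this is exactly what would give $n^{**} = f^{*} = n$.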

And finally, is it possible to reverse the process and recover $G$ from $G^*$?

Note: $G$ is a connected planar graph.

Coloring – chromatic polynomial of a simple, disconnected graph

I am working on the following exercise.

Prove that if $G$ is a disconnected simple graph, then its chromatic polynomial $P_c(k)$ is the product of the chromatic polynomials of its components. What can you say about the lowest non-vanishing term?

My idea is to compute the chromatic polynomial of each connected component of $G$ separately, so that $P_c(k)$ would be their product, but I'm not at all sure about this approach. I am also not sure what "lowest non-vanishing term" means. Thanks in advance for any hint or help.
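As a concrete sanity check (a toy example of mine, not from the exercise): if the product formula holds, then for $G = K_2 \cup K_2$, the disjoint union of two edges, each component contributes $k(k-1)$, so

$$
P_c(k) = \bigl(k(k-1)\bigr)^2 = k^4 - 2k^3 + k^2,
$$

and the lowest non-vanishing term is $k^2$, whose exponent equals the number of components.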