Number theory proof with integers

Let $n$ be a positive integer. Consider all numbers $\left\lfloor \frac{n}{k} \right\rfloor$, where $k$ is a positive integer. Show that there are no more than $2\sqrt{n} + 1$ distinct values among these numbers.
Note: For a real number $x$, by $\lfloor x \rfloor$ we denote the largest integer not greater than $x$.
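As a quick sanity check of the claimed bound (an illustrative Python sketch, not part of a proof): collecting $\lfloor n/k \rfloor$ for $k = 1, \ldots, n+1$ captures every possible value, since every $k > n$ gives $0$.

```python
import math

def distinct_floor_values(n):
    """All distinct values of floor(n/k) over positive integers k.
    For k > n the value is 0, so k = 1..n+1 already covers everything."""
    return {n // k for k in range(1, n + 2)}

for n in (1, 10, 100, 10_000):
    vals = distinct_floor_values(n)
    # The claimed bound: at most 2*sqrt(n) + 1 distinct values.
    assert len(vals) <= 2 * math.isqrt(n) + 1
```
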

Number Theory – Representation $k = S(a) - S(b)$ where $a \ne b + 1$

Let $a, b$ be positive integers,

$S(a) = 1 + 2 + \cdots + a$

and $S(b) = 1 + 2 + \cdots + b.$


Show that every $k \in \mathbb{N}$ with $k \ne 2^t$ for all $t \in \mathbb{N}_0$ can be represented as

$$k = S(a) - S(b)$$ where $a \ne b + 1.$

My attempt

For odd $k > 1$, $k$ can be written as

$k = 2r + 1 = r + (r + 1) = S(r + 1) - S(r - 1).$

Now we prove that if $k = 2^t$, then $k \ne S(a) - S(b)$:


Let's assume $$k = n + (n + 1) + \cdots + (n + u)$$

where $u \ge 1$. Then

$$k = \sum_{i = 0}^{u} (n + i) = \frac{(u + 1)(2n + u)}{2} = 2^t,$$

so that $$(u + 1)(2n + u) = 2^{t + 1}.$$

Case 1: If $u$ is odd, then $u + 1$ is even and $2n + u$ is odd, so even × odd $\ne 2^{t+1}$, since $2^{t+1}$ has $2$ as its only prime factor while the odd factor $2n + u \ge 3$.

Case 2: If $u$ is even, then $u + 1$ is odd and $2n + u$ is even, so odd × even $\ne 2^{t+1}$, as in Case 1 (here the odd factor is $u + 1 \ge 3$).

Thus both cases together complete the proof that $k \ne 2^t$.


$ 6 = 1 + 2 + 3, 10 = 1 + 2 + 3 + 4, $

$ 12 = 3 + 4 + 5, 14 = 2 + 3 + 4 + 5, $

$ 18 = 5 + 6 + 7, 20 = 2 + 3 + 4 + 5 + 6 $

$ 22 = 4 + 5 + 6 + 7, 24 = 7 + 8 + 9, $

$ 26 = 5 + 6 + 7 + 8, … $
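The whole claim can be checked numerically for small $k$ (a brute-force Python sketch): solving $(u+1)(2n+u) = 2k$ for $n$ shows that $k$ has such a representation exactly when it is not a power of two.

```python
def is_sum_of_consecutive(k):
    """True if k = n + (n+1) + ... + (n+u) for some n >= 1, u >= 1,
    i.e. k = S(a) - S(b) with a = n + u and b = n - 1, so a != b + 1."""
    for u in range(1, k):
        # From (u+1)(2n+u) = 2k:  n = (2k - u(u+1)) / (2(u+1))
        num = 2 * k - u * (u + 1)
        if num <= 0:
            break
        if num % (2 * (u + 1)) == 0:
            return True
    return False

for k in range(1, 200):
    power_of_two = k & (k - 1) == 0
    assert is_sum_of_consecutive(k) == (not power_of_two)
```
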

You can check similar problems here

Simplification of Expressions – Making terms with derivatives vanish in perturbation theory

Following the question asked here, I would now like to make terms with derivatives vanish. To explain briefly, a variable is decomposed as follows:

$a(t, r) = a(r) + \delta a(t, r)$

In my long calculations, several derivatives of the term $\delta a(t, r)$ appear, such as $\partial_t \delta a(t, r)$ or $\partial_r \delta a(t, r)$, but I need products of two such $\delta$-terms to vanish, like

$\partial_r \delta a(t, r) \, \partial_r \delta b(t, r) = 0,$
$(\partial_r \delta a(t, r))^2 = 0,$
$\partial_r \delta a(t, r) \, \partial_t \delta a(t, r) = 0.$

In my previous question, the method shown only works if there are no derivative terms, and I would like to generalize it. If I follow that method, Mathematica treats the derivative of the variable $\delta a(t, r)$ as

partial_r delta(a(t,r)) = delta'(a(t,r)) partial_r a(t,r).

by the chain rule. And, of course, the product of two such terms will not become zero.
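One workaround that sidesteps the chain-rule parsing entirely (sketched here in Python/SymPy rather than Mathematica, with hypothetical names `a0`, `da`, `b0`, `db`) is to tag every perturbation with a formal bookkeeping parameter $\epsilon$ and truncate at first order, which drops all products of two $\delta$-terms automatically:

```python
import sympy as sp

t, r, eps = sp.symbols('t r epsilon')
a0 = sp.Function('a0')(r)        # background part a(r)
da = sp.Function('da')(t, r)     # perturbation delta a(t, r)
b0 = sp.Function('b0')(r)
db = sp.Function('db')(t, r)

a = a0 + eps * da                # decomposed variables
b = b0 + eps * db

expr = sp.diff(a, r) * sp.diff(b, r)

# Keep only the O(1) and O(eps) parts: products of two perturbation
# derivatives carry eps**2 and are discarded.
expanded = sp.expand(expr)
linear = expanded.coeff(eps, 0) + expanded.coeff(eps, 1)
```

Here `linear` retains $\partial_r a_0 \, \partial_r b_0$ and the two cross terms, while $\partial_r \delta a \, \partial_r \delta b$ has been dropped, as desired.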

Set Theory – Is there a characterization of the $\omega^\omega$-base in the context of $S_\omega$?

For a topological space $X$ and a point $x \in X$, we say the cofinal type of neighborhood bases of $x$ is cofinally finer than an $\omega^\omega$-base if, for a neighborhood base $\mathfrak{N}$ of $x$, there is a map $f: \mathfrak{N} \rightarrow \omega^\omega$ such that for any $\alpha \in \omega^\omega$ there exists $A \in \mathfrak{N}$ with $\alpha \leq f(B)$ for each $B \subseteq A$.

If the cofinal type of neighborhood bases of $x$ is cofinally finer than an $\omega^\omega$-base, is there an embedding $h: S_\omega \rightarrow (X, x)$ with $h(\infty) = x$? Here $S_\omega$ is the sequential fan and $\infty$ is the unique non-isolated point in $S_\omega$.

Thank you in advance.

Complexity Theory – How to prove NP-hardness from the ground up?

I'm working on a problem whose complexity is unknown.
Due to the nature of the problem I cannot use long edges, so 3SAT and its variants are almost impossible to use.

Finally, I chose the most primitive method: Turing machines.

Strangely, I could not find an example of an NP-hardness proof achieved directly by modeling the problem as a language and showing that a deterministic Turing machine cannot decide in polynomial time whether a given instance belongs to that language (I might have misunderstood the terminology here).

Assuming that no NP-hardness reduction is available, how can one prove from first principles that a problem is NP-hard? Are there any publications that do this?

Relational Theory – Design problem with transitive dependencies between tables

This is a fictional scenario, but I imagine it may be of general interest, so I'll post it here. Imagine the following business rule:

  1. A course has a grading scale that determines what grades a student can receive for their diploma

How would this BR be implemented (without procedural logic) while preserving 3NF? The DBMS does not support general expressions in CHECK constraints, so we are limited to row expressions.

A naive approach would be something like:

create table gradesscales
( gradescale_id int not null primary key );

create table grades
( grade char(1) not null primary key );

create table grades_in_gradescales
( gradescale_id int not null
    references gradesscales (gradescale_id)
, grade char(1) not null
    references grades (grade)
, primary key (gradescale_id, grade)
);

create table courses
( course_code char(5) not null primary key
, gradescale_id int not null
    references gradesscales (gradescale_id)
);

create table diplomas
( student_no int not null
, course_code char(5) not null
    references courses (course_code)
, grade char(1)
    references grades (grade)
, primary key (student_no, course_code)
);

insert into gradesscales (gradescale_id) values (1),(2);

insert into grades (grade) values ('1'),('2'),('3'),('A'),('B');

insert into grades_in_gradescales (gradescale_id, grade)
values (1,'1'),(2,'2'),(1,'3'),(2,'A'),(2,'B');

insert into courses (course_code, gradescale_id)
values ('MA101', 1),('FY201', 2);

So far so good, but nothing prevents us from adding a grade from a grading scale that is not related to the course:

insert into diplomas (student_no, course_code, grade)
values (1,'MA101','B');

A pragmatic approach is to add gradescale_id to diplomas and reference grades_in_gradescales instead of grades:

alter table diplomas
    add column gradescale_id int not null;
-- drop the foreign key against grades, then:
alter table diplomas
    add foreign key (gradescale_id, grade)
        references grades_in_gradescales (gradescale_id, grade);

but I'm not very happy about that. Other thoughts?
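For what it's worth, the composite-key approach can be exercised end to end in any DBMS that enforces composite foreign keys; here is a minimal, pared-down sketch using Python's built-in sqlite3 (table names as above, unrelated columns omitted):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('pragma foreign_keys = on')   # SQLite needs this explicitly
con.executescript('''
create table grades_in_gradescales
( gradescale_id int not null
, grade char(1) not null
, primary key (gradescale_id, grade) );

create table diplomas
( student_no int not null
, course_code char(5) not null
, gradescale_id int not null
, grade char(1) not null
, primary key (student_no, course_code)
, foreign key (gradescale_id, grade)
    references grades_in_gradescales (gradescale_id, grade) );
''')
con.executemany('insert into grades_in_gradescales values (?, ?)',
                [(1, '1'), (1, '3'), (2, 'A'), (2, 'B')])

con.execute("insert into diplomas values (1, 'MA101', 1, '1')")  # accepted
try:
    # grade 'B' does not belong to grading scale 1 -> rejected
    con.execute("insert into diplomas values (2, 'MA101', 1, 'B')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The composite foreign key makes the combination (scale, grade) the unit of reference, so a diploma can only carry a grade that actually belongs to the course's grading scale.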

I could not use the sql tag, so I used relational-theory instead.

Complexity Theory – An interesting analogy for why P != NP, with a labyrinth

I am currently taking an algorithms course and have just come across NP-completeness in computational complexity. Apparently P != NP is a question that has been heavily explored without a proof to date. In class we were asked to discuss ideas related to the topic.

I was answering a classmate and thought of an interesting possible argument. I am not naively trying to claim that I have found an answer to P != NP. To be down to earth, I just find it too exciting to think about the topic and would like to share my thoughts with you, in the hope of getting insight into whether my thoughts can be expanded or should simply be let go.

The classmate mentioned the case of gerrymandering, which was tackled using graph theory, and the problem itself is NP-hard. I thought that these kinds of graph-theory applications are intuitively quite difficult to solve and fit into the NP-complete / NP-hard category.

The analogy I mentioned was running through a labyrinth, trying to get from point A to point B. It is somewhat intuitive that, given the random sequence of turns and dead ends, there is no algorithmic shortcut through the entire labyrinth. It is like trying to cross a boundary between space and time when the two are closely connected, as in the example of the labyrinth.

Even more abstractly, it is impossible to move from the "space" at the beginning of the maze to the "space" at the end of the maze without passing through a significant amount of "time". Since space and time are bound together, it would be physically impossible to traverse the entire maze in polynomial time.

If you have really read this whole thing, thank you! And if you can offer any feedback, thank you as well.

Number Theory – Is an integer sum of periodic vectors always a sum of integer periodic vectors?

The background of this question involves cyclotomic fields, but the statement does not require algebraic number theory. I'm just confused by this (maybe stupid) little question …

Let $n > 1$ be an integer and consider the vector space $\mathbb{C}^n$.

A vector $v = (v_1, \cdots, v_n) \in \mathbb{C}^n$ is called

  • periodic, if there is a proper divisor $d$ of $n$ such that $v_i = v_{i + d}$ for all $i$ (indices taken mod $n$);
  • integral, if all $v_i$ are integers.

Question: If an integral vector can be written as a finite sum of periodic vectors, is it true that it can always be written as a finite sum of integral periodic vectors?

It should be clear that the field $\mathbb{C}$ could be replaced by any field of characteristic zero (e.g. $\mathbb{Q}$).
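The setup is easy to experiment with numerically (an illustrative Python/NumPy sketch; `periodic_basis` is a hypothetical helper, not from the question): the vectors with some proper period $d \mid n$ span a subspace, and membership in the sum of these subspaces can be tested by least squares.

```python
import numpy as np

def periodic_basis(n):
    """Columns spanning all vectors with some proper period d | n.
    A d-periodic vector is determined by its first d entries."""
    cols = []
    for d in range(1, n):
        if n % d == 0:
            for j in range(d):
                e = np.zeros(n)
                e[j::d] = 1.0    # entries j, j+d, j+2d, ... are equal
                cols.append(e)
    return np.array(cols).T

def is_sum_of_periodic(v):
    """True if v lies in the span of all periodic vectors."""
    B = periodic_basis(len(v))
    x, *_ = np.linalg.lstsq(B, v, rcond=None)
    return bool(np.allclose(B @ x, v))
```

For instance, for $n = 4$ the vector $(1, 0, 1, 0)$ is $2$-periodic, while $(1, 0, 0, 0)$ is not a sum of periodic vectors at all; the question concerns integral vectors that pass this membership test.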

I would guess that the claim is true, but I cannot convince myself with a proof …

So far I can only prove the case where $n$ has at most $2$ distinct prime factors, which does not help much in the general case.

I also tried viewpoints from cyclotomic fields or even from representation theory – but again, I am not clever enough to turn the question into something familiar …

That's why I add all the potentially relevant tags.

Are there any dark numbers in set theory?

There are more real numbers (uncountably many) than finite definitions (countably many). This shows that some real numbers cannot have finite definitions.

A similar argument shows that there are also natural numbers that have no finite definition, so-called dark numbers. They are needed in set theory.

Let $E_n = \{n, n+1, n+2, \ldots\}$ be the $n$-th end segment of $\mathbb{N}$. Then

$\forall n \in \mathbb{N}: E_1 \cap E_2 \cap \ldots \cap E_n \neq \emptyset. \quad (1)$

The infinite intersection, however, gives

$E_1 \cap E_2 \cap E_3 \cap \ldots = \emptyset. \quad (2)$

An infinite set contains more elements than every finite set. It is therefore a valid question which end segments make the collection of sets intersected in (2) infinite and the result empty.

It is impossible to define an end segment that appears in (2) but not in (1). However, it is not sufficient to observe that every finite set of end segments in (1) is missing a further end segment $E_{n+1}$, because adding it does not change anything:

$E_1 \cap E_2 \cap \ldots \cap E_n \cap E_{n+1} \neq \emptyset.$
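For what it is worth, the finite case (1) can be checked mechanically (a Python sketch using finite truncations of the end segments): the intersection of the first $n$ end segments is simply $E_n$, hence nonempty.

```python
def E(n, limit=100):
    """Finite truncation {n, n+1, ..., limit-1} of the end segment E_n."""
    return set(range(n, limit))

for n in range(1, 20):
    inter = set.intersection(*(E(k) for k in range(1, n + 1)))
    assert inter == E(n)   # the intersection collapses to the last segment
    assert inter != set()  # and is therefore nonempty, as in (1)
```
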

It seems that only dark numbers, which are not covered by the universal quantification in (1), can contribute the infinite set of end segments necessary to produce the empty intersection.

What alternative would be possible?

Complexity Theory – Convex quadratic approximation to binary linear programming

Munapo (2016, American Journal of Operations Research, ajor.2016.61001) offers a proof that binary linear programming is solvable in polynomial time, and therefore that P = NP.

Unsurprisingly, it does not really show that.

His result is based on a convex quadratic approximation to the problem with a penalty whose weight $\ell$ must become infinitely large for the approximation to recover the true problem.

My questions are the following:

  1. Is this an approximation that already existed in the literature (I suspect it did)?
  2. Is this approach useful in practice? For example, could one solve a mixed-integer linear programming problem by homotopy continuation, gradually increasing the weight $\ell$?
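On question 2, the homotopy idea can be sketched generically (a Python illustration using the standard, non-convex penalty $\sum_i x_i(1 - x_i)$ — explicitly not Munapo's convex construction): solve the box-constrained relaxation while gradually increasing the penalty weight, which pushes the minimizer toward a binary point.

```python
import numpy as np

# Toy objective: minimize c @ x over x in {0,1}^3 (optimum at x = [0, 1, 0]).
c = np.array([1.0, -2.0, 0.5])

x = np.full(3, 0.5)                  # start from the relaxed midpoint
for lam in (0.0, 1.0, 10.0):         # homotopy: grow the penalty weight
    for _ in range(2000):            # projected gradient descent on [0,1]^3
        grad = c + lam * (1.0 - 2.0 * x)   # gradient of c@x + lam*sum(x*(1-x))
        x = np.clip(x - 0.01 * grad, 0.0, 1.0)
```

The penalty vanishes exactly at binary points, so as the weight grows the relaxed minimizers are driven into $\{0,1\}^n$; on non-convex penalties like this one, however, the method only converges to a local optimum in general.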

Note: After writing this question, I discovered this related question: time complexity of binary linear programming. That question considers a specific binary linear programming problem, but mentions the paper above.