## Discrete Mathematical Programming Language

Which of the following statement(s) is true?

``````(∀x ∀y P(x, y)) ≣ (∀y ∀x P(x, y))
(∃x ∃y P(x, y)) ⊨ (∃y ∃x P(x, y))
(∃x ∀y P(x, y)) ⊨ (∀y ∃x P(x, y))
(∃x ∀y P(x, y)) ≣ (∀y ∃x P(x, y))
(∀x ∀y P(x, y)) ⊨ (∀y ∀x P(x, y))
``````

“Everyone who likes coding will be asked by some people to help. They will become welcomed (or become an ‘errand boy’ in the end), always being asked for help.” The first-order logic form of the sentence is written below:

∀x ( ∃y Coding(x) ∧ AskHelp(x, y) ) ⇒ ( ∃y Welcomed(x) ⋁ AskHelp(x, y) )

Convert this sentence into CNF. Which of the following clauses is in that CNF? (F(x) is a Skolem function.)

``````( ¬Coding(x) ⋁ AskHelp(x, y) )
( Welcomed(x) ⋁ AskHelp(x, F(x)) )
( ¬Coding(x) ⋁ ¬AskHelp(x, y) )
``````

Consider the Horn KB:

SpeakChinese(FatherOf(x)) ⇒ SpeakChinese(x)
LiveInTaiwan(x) ⇒ SpeakChinese(x)
SpeakChinese(Li)
LiveInTaiwan(Su)

where x is a variable, Li and Su are constants, and FatherOf is a function. Suppose we use a “breadth-first” forward chaining algorithm that repeatedly adds the consequences of currently satisfied rules, and a “depth-first” backward chaining algorithm that tries clauses in the order the sentences are listed above. Which of the following statement(s) is true?

The forward chaining will infer the result SpeakChinese(Su)
Given the query SpeakChinese(Su), the backward chaining will loop forever
The forward chaining will infer the result LiveInTaiwan(Li)
If the forward chaining cannot infer a query, it does not mean it cannot be entailed by the KB
If the backward chaining does not return True for a given query, then it is not entailed by the KB
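For intuition, the breadth-first forward chaining behavior can be sketched as a toy Java program (my own illustration, not part of the question): facts are plain strings and the two rules are hard-coded, so this is only a sketch of the fixpoint iteration, not a general inference engine.

```java
import java.util.*;

// Toy breadth-first forward chaining over the Horn KB above.
public class ForwardChaining {
    static Set<String> forwardChain(Set<String> facts) {
        Set<String> known = new LinkedHashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            List<String> derived = new ArrayList<>();
            for (String f : known) {
                // Rule: LiveInTaiwan(x) => SpeakChinese(x)
                if (f.startsWith("LiveInTaiwan(")) {
                    String arg = f.substring("LiveInTaiwan(".length(), f.length() - 1);
                    derived.add("SpeakChinese(" + arg + ")");
                }
                // Rule: SpeakChinese(FatherOf(x)) => SpeakChinese(x)
                if (f.startsWith("SpeakChinese(FatherOf(")) {
                    String arg = f.substring("SpeakChinese(FatherOf(".length(), f.length() - 2);
                    derived.add("SpeakChinese(" + arg + ")");
                }
            }
            for (String d : derived) {
                if (known.add(d)) changed = true;  // fixpoint: stop when nothing new
            }
        }
        return known;
    }

    public static void main(String[] args) {
        Set<String> kb = forwardChain(new LinkedHashSet<>(
                Arrays.asList("SpeakChinese(Li)", "LiveInTaiwan(Su)")));
        System.out.println(kb.contains("SpeakChinese(Su)"));  // derived via the second rule
        System.out.println(kb.contains("LiveInTaiwan(Li)"));  // never derived: no rule concludes LiveInTaiwan
    }
}
```

Note that forward chaining terminates here because no derived fact matches the SpeakChinese(FatherOf(x)) premise, whereas depth-first backward chaining on SpeakChinese(Su) keeps expanding that first rule into ever-deeper FatherOf terms.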

## visualization – How can I change the Mathematica programming for creating a Venn Diagram?

I am working on using Venn diagrams to explain logic and logical connections between sets. After coming upon these answers: Create a Venn Diagram and
How to plot Venn diagrams with Mathematica?,
which I found very helpful, I was wondering whether there is a way to convert the statements “All Bananas are Tasty” (all Bananas are elements of Tasty, or Bananas $$\iff$$ Tasty) and “All Apples are Tasty” (Apples $$\iff$$ Tasty) into a Venn diagram with three labeled circles?

The difference between this question and the ones above is that I am wondering how to label an area of the Venn diagram, and also how to use the values Bananas, Apples, and Tasty instead of $$A_1$$ and $$A_2$$, as explained in this well-explained answer https://mathematica.stackexchange.com/a/2557/76873 by user https://mathematica.stackexchange.com/users/495/fjra.

I am also wondering if there is a way to color the Venn diagram, possibly using PlotStyle or a graphics primitive?

## dynamic programming – How to create a subset with a given length and mean?

For every $$i \in \{0,\ldots,|P|\}$$, $$j \in \{0,\ldots,|S|\}$$, and $$T \in \{0,\ldots,\mathcal{T}\}$$ (I discuss $$\mathcal{T}$$ below), compute whether there is a subset of size $$j$$ of $$p_1,\ldots,p_i$$ which sums to $$T$$. You should take $$\mathcal{T}$$ to be your maximum allowable answer – $$\max(P) \cdot |S|$$ would definitely do, but you can probably pick something which is closer to $$\mu_S |S|$$, say $$\mu_P |S|$$. The running time is $$O(|P| |S| \mathcal{T})$$. In your case, assuming you choose $$\mathcal{T} \approx \mu_P |S|$$, then $$|P| |S| \mathcal{T} \approx 2^{40}$$, so this is barely feasible.

This is a naive dynamic programming algorithm, which probably can be improved, especially since you’re only looking for an approximation.
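The naive table described above can be sketched in Java as follows (my own sketch, not from the original answer; the inputs and the `maxSum` bound standing in for $$\mathcal{T}$$ are hypothetical):

```java
// DP sketch: reach[j][t] is true iff some subset of size j of the elements
// processed so far sums to t. Running time O(|P| * k * maxSum).
public class SubsetDP {
    static boolean subsetOfSizeWithSum(int[] p, int k, int target, int maxSum) {
        boolean[][] reach = new boolean[k + 1][maxSum + 1];
        reach[0][0] = true;  // the empty subset has size 0 and sum 0
        for (int value : p) {
            // Iterate j and t downwards so each element is used at most once.
            for (int j = k; j >= 1; j--) {
                for (int t = maxSum; t >= value; t--) {
                    if (reach[j - 1][t - value]) reach[j][t] = true;
                }
            }
        }
        return reach[k][target];
    }

    public static void main(String[] args) {
        int[] p = {5, 1, 7, 3, 9};
        // A subset of size 3 summing to 15 exists: {5, 1, 9} (or {5, 7, 3}).
        System.out.println(subsetOfSizeWithSum(p, 3, 15, 25)); // true
        // No pair of these elements sums to 3.
        System.out.println(subsetOfSizeWithSum(p, 2, 3, 25));  // false
    }
}
```

To recover the subset itself rather than just a yes/no answer, one would additionally record, for each reachable `(j, t)`, which element was last added.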

For example, you could only look for solutions such that $$S \cap \{0,\ldots,i\} \approx \frac{|S|}{|P|} i$$ and $$\sum(S \cap \{0,\ldots,i\}) \approx \frac{|S|}{|P|} \mu_S$$, which will reduce the running time significantly. While this is only a heuristic, it might be possible to show that you get a decent approximation on average, if you randomize the order of $$P$$.

Another possible optimization is to quantize the partial sums, quantizing more aggressively as $$i$$ gets larger. If done carefully, you won’t lose much in accuracy but will have a significant gain in running time.

## type theory – Curry–Howard correspondence and functional programming “reliability”

The first time I heard about functional programming, someone told me “it’s more reliable to code in a functional style because your type system is like a proof of correctness”.

I recently learnt about the Curry-Howard (CH) correspondence and I think this is what he was using as a basis for his assertion about functional languages.

However, I have trouble understanding how far this leads to “more reliable programs”.

Especially, here is my understanding:

• In the CH view, a function type `A -> B` is the same as an implication $$A \implies B$$ in some constructive view of logic, and so on (union types, product types, etc.)
• So if we end up being able to build an instance `t` of a type `T` it means we used only the available function types / implications using the available variables / assumptions.

Still, there are many ways a program written this way could be incorrect:

• If I have several variables `a`, `b`, `c`, … of the same type `A` available, I may use a transformation `A -> B` on the wrong one.
• I may have different functions with the same signature `A -> B` but different use cases, and the type system won’t be able to detect if I use the wrong one.
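To make the second bullet concrete, here is a small hypothetical Java illustration (my own example, not from any particular language's standard practice): two functions share the signature `int -> int`, so the type checker cannot catch a mixup, while single-field wrapper types restore the distinction.

```java
public class SameSignature {
    // Both functions have type int -> int; the compiler cannot tell them apart.
    static int celsiusToFahrenheit(int c) { return c * 9 / 5 + 32; }
    static int fahrenheitToCelsius(int f) { return (f - 32) * 5 / 9; }

    // Single-field wrapper ("newtype") classes make the two domains distinct,
    // so mixing them up becomes a compile-time error.
    static final class Celsius { final int value; Celsius(int v) { value = v; } }
    static final class Fahrenheit { final int value; Fahrenheit(int v) { value = v; } }

    static Fahrenheit toFahrenheit(Celsius c) {
        return new Fahrenheit(c.value * 9 / 5 + 32);
    }

    public static void main(String[] args) {
        int temp = 100; // meant to be a Celsius reading
        // Nothing stops us from applying the wrong int -> int function:
        System.out.println(fahrenheitToCelsius(temp)); // compiles, but semantically wrong here
        // The wrapper version would reject a Fahrenheit argument at compile time:
        System.out.println(toFahrenheit(new Celsius(temp)).value); // 212
    }
}
```

This newtype discipline is one common partial answer to the "same signature, different meaning" problem, though it does not address the first bullet (picking the wrong variable of the same type).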

My questions are:

• In practice, do functional programming languages enforce things like “there should be no more than one instance of a type”, or some variant, to avoid those kinds of errors?
• Is there a theoretical background in which we enforce more restrictions on functional languages and give the “more reliable” claim a stronger meaning?

## parallel programming – What is the most fitting thread model to redesign a sequential linear search algorithm?

Let’s say that there is a simple sequential algorithm that performs a linear search of a unique key in an array:

``````public class SearchSeq {

    public static int search(int[] a, int key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key)
                return i;
        }
        return -1;
    }

    public static void main(String[] args) {

        int[] a = …; // an array with n elements
        int key = …; // the key to be searched within a

        long start = System.currentTimeMillis();

        int pos = search(a, key);

        long stop = System.currentTimeMillis();
        long duration = stop - start;

        if (pos >= 0) // note: index 0 is a valid hit, so compare with >=
            System.out.print("found " + key + " at " + pos);
        else
            System.out.print(key + " not found");
        System.out.println(" within " + duration + "ms");
    }
}
``````

What will be the most fitting thread model in order to redesign the algorithm to run in parallel?

In my opinion the most fitting thread model would be Master/Worker, because this way we would divide the array into segments and search for the key in parallel inside each segment. Smaller segment size -> faster results.
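That master/worker idea can be sketched like this (my own illustration, not a definitive design): the master splits the array into one segment per worker, each worker scans its segment sequentially, and the master collects the results.

```java
import java.util.*;
import java.util.concurrent.*;

public class SearchPar {

    public static int search(int[] a, int key, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            int segment = (a.length + workers - 1) / workers; // ceiling division
            List<Future<Integer>> results = new ArrayList<>();
            // Master: hand one segment to each worker.
            for (int w = 0; w < workers; w++) {
                final int lo = w * segment;
                final int hi = Math.min(lo + segment, a.length);
                results.add(pool.submit(() -> {
                    // Worker: sequential scan of [lo, hi).
                    for (int i = lo; i < hi; i++) {
                        if (a[i] == key)
                            return i;
                    }
                    return -1;
                }));
            }
            // Master: collect results; return the first hit found.
            for (Future<Integer> result : results) {
                int pos = result.get();
                if (pos >= 0)
                    return pos;
            }
            return -1;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        int[] a = new int[1_000_000];
        for (int i = 0; i < a.length; i++)
            a[i] = i;
        System.out.println(search(a, 765_432, 4)); // 765432
    }
}
```

One refinement worth considering: since the key is unique, a worker that finds it could set a shared flag (e.g. an `AtomicBoolean`) so the other workers stop scanning early instead of finishing their segments.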

What do you think?

## programming challenge – Advent of Code 2020 – Day 3: tobogganing down a slope

I decided to take a shot at Advent of Code 2020 to exercise my Rust knowledge. Here’s the task for Day 3:

(…)

Due to the local geology, trees in this area only grow on exact
integer coordinates in a grid. You make a map (your puzzle input) of
the open squares (`.`) and trees (`#`) you can see. For example:

``````..##.......
#...#...#..
.#....#..#.
..#.#...#.#
.#...##..#.
..#.##.....
.#.#.#....#
.#........#
#.##...#...
#...##....#
.#..#...#.#
``````

These aren’t the only trees, though; due to something you read about
once involving arboreal genetics and biome stability, the same pattern
repeats to the right many times:

``````..##.........##.........##.........##.........##.........##....... --->
#...#...#..#...#...#..#...#...#..#...#...#..#...#...#..#...#...#..
.#....#..#..#....#..#..#....#..#..#....#..#..#....#..#..#....#..#.
..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#
.#...##..#..#...##..#..#...##..#..#...##..#..#...##..#..#...##..#.
..#.##.......#.##.......#.##.......#.##.......#.##.......#.##..... --->
.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#
.#........#.#........#.#........#.#........#.#........#.#........#
#.##...#...#.##...#...#.##...#...#.##...#...#.##...#...#.##...#...
#...##....##...##....##...##....##...##....##...##....##...##....#
.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.# --->
``````

You start on the open square (`.`) in the top-left corner and need to
reach the bottom (below the bottom-most row on your map).

The toboggan can only follow a few specific slopes (you opted for a
cheaper model that prefers rational numbers); start by counting all
the trees
you would encounter for the slope right 3, down 1:

From your starting position at the top-left, check the position that
is right 3 and down 1. Then, check the position that is right 3 and
down 1 from there, and so on until you go past the bottom of the map.

The locations you’d check in the above example are marked here with
`O` where there was an open square and `X` where there was a tree:

``````..##.........##.........##.........##.........##.........##....... --->
#..O#...#..#...#...#..#...#...#..#...#...#..#...#...#..#...#...#..
.#....X..#..#....#..#..#....#..#..#....#..#..#....#..#..#....#..#.
..#.#...#O#..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#
.#...##..#..X...##..#..#...##..#..#...##..#..#...##..#..#...##..#.
..#.##.......#.X#.......#.##.......#.##.......#.##.......#.##..... --->
.#.#.#....#.#.#.#.O..#.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#
.#........#.#........X.#........#.#........#.#........#.#........#
#.##...#...#.##...#...#.X#...#...#.##...#...#.##...#...#.##...#...
#...##....##...##....##...#X....##...##....##...##....##...##....#
.#..#...#.#.#..#...#.#.#..#...X.#.#..#...#.#.#..#...#.#.#..#...#.# --->
``````

In this example, traversing the map using this slope would cause you
to encounter `7` trees.

Starting at the top-left corner of your map and following a slope of
right 3 and down 1, how many trees would you encounter?

(…)

### Part Two

Time to check the rest of the slopes – you need to minimize the
probability of a sudden arboreal stop, after all.

Determine the number of trees you would encounter if, for each of the
following slopes, you start at the top-left corner and traverse the
map all the way to the bottom:

• Right 1, down 1.
• Right 3, down 1. (This is the slope you already checked.)
• Right 5, down 1.
• Right 7, down 1.
• Right 1, down 2.

In the above example, these slopes would find `2`, `7`, `3`, `4`, and
`2` tree(s) respectively; multiplied together, these produce the
answer `336`.

What do you get if you multiply together the number of trees
encountered on each of the listed slopes?

The full story can be found on the website.

src/day_3.rs

``````use {
    anyhow::{anyhow, bail, ensure, Result},
    itertools::Itertools,
    ndarray::prelude::*,
    std::io::{self, prelude::*},
};

pub const PATH: &str = "./data/day_3/input";

#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
pub enum Pixel {
    Empty,
    Tree,
}

impl Pixel {
    pub fn from_char(c: char) -> Result<Self> {
        match c {
            '.' => Ok(Self::Empty),
            '#' => Ok(Self::Tree),
            _ => bail!("invalid pixel"),
        }
    }
}

#[derive(Clone, Debug, Eq, Hash, PartialEq)]
pub struct Terrain {
    pixels: Array2<Pixel>,
}

impl Terrain {
    pub fn parse_from<R: BufRead>(reader: R) -> Result<Self> {
        TerrainParser::parse(reader.lines())
    }

    pub fn slope_count(&self, delta_x: usize, delta_y: usize) -> usize {
        assert!(delta_y != 0, "delta_y is zero");

        let pixels = &self.pixels;

        (0..pixels.nrows())
            .step_by(delta_y)
            .zip((0..).step_by(delta_x).map(|x| x % pixels.ncols()))
            .filter(|pos| pixels[*pos] == Pixel::Tree)
            .count()
    }
}

#[derive(Debug)]
struct TerrainParser {
    pixels: Vec<Pixel>,
    width: usize,
    height: usize,
}

impl TerrainParser {
    fn parse<R: BufRead>(mut lines: io::Lines<R>) -> Result<Terrain> {
        let first_line =
            lines.next().ok_or_else(|| anyhow!("empty terrain"))??;
        let mut parser = Self::parse_first_line(&first_line)?;

        for line in lines {
            parser = parser.parse_line(&line?)?;
        }

        let TerrainParser {
            pixels,
            width,
            height,
        } = parser;

        Ok(Terrain {
            pixels: Array2::from_shape_vec((height, width), pixels)?,
        })
    }

    fn parse_first_line(line: &str) -> Result<Self> {
        let pixels: Vec<_> =
            line.chars().map(Pixel::from_char).try_collect()?;

        let width = pixels.len();
        ensure!(width != 0, "zero-width terrain");

        Ok(Self {
            pixels,
            width,
            height: 1,
        })
    }

    fn parse_line(mut self, line: &str) -> Result<Self> {
        let expected_len = self.pixels.len() + self.width;
        self.pixels.reserve_exact(self.width);

        itertools::process_results(
            line.chars().map(Pixel::from_char),
            |pixels| self.pixels.extend(pixels),
        )?;
        ensure!(self.pixels.len() == expected_len, "jagged terrain");

        self.height += 1;
        Ok(self)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn pixel_from_char() {
        assert_eq!(Pixel::from_char('.').unwrap(), Pixel::Empty);
        assert_eq!(Pixel::from_char('#').unwrap(), Pixel::Tree);
        assert!(Pixel::from_char(' ').is_err());
    }

    #[test]
    fn terrain_parse_from() -> anyhow::Result<()> {
        fn parse(input: &str) -> Result<Terrain> {
            Terrain::parse_from(input.as_bytes())
        }

        let expected = Array2::from_shape_vec(
            (3, 3),
            [Pixel::Empty, Pixel::Tree]
                .iter()
                .copied()
                .cycle()
                .take(9)
                .collect(),
        )?;
        assert_eq!(parse(".#.\n#.#\n.#.\n")?.pixels, expected);

        assert!(parse("").is_err());
        assert!(parse(". #").is_err());
        assert!(parse(".\n##").is_err());

        Ok(())
    }

    #[test]
    fn terrain_slope_count() -> anyhow::Result<()> {
        // .#.
        // #.#
        // .#.
        // #.#
        let pixels = Array2::from_shape_vec(
            (4, 3),
            [Pixel::Empty, Pixel::Tree]
                .iter()
                .copied()
                .cycle()
                .take(12)
                .collect(),
        )?;
        let terrain = Terrain { pixels };

        assert_eq!(terrain.slope_count(1, 1), 1);
        assert_eq!(terrain.slope_count(2, 1), 3);
        assert_eq!(terrain.slope_count(3, 1), 2);
        assert_eq!(terrain.slope_count(1, 2), 1);

        Ok(())
    }
}
``````

src/bin/day_3_1.rs

``````use {
    anyhow::Result,
    aoc_2020::day_3::{self as lib, Terrain},
};

fn main() -> Result<()> {
    let file = std::io::BufReader::new(std::fs::File::open(lib::PATH)?);

    let count = Terrain::parse_from(file)?.slope_count(3, 1);
    println!("{}", count);

    Ok(())
}
``````

src/bin/day_3_2.rs

``````use {
    anyhow::Result,
    aoc_2020::day_3::{self as lib, Terrain},
};

const SLOPES: &[(usize, usize)] = &[(1, 1), (3, 1), (5, 1), (7, 1), (1, 2)];

fn main() -> Result<()> {
    let file = std::io::BufReader::new(std::fs::File::open(lib::PATH)?);
    let terrain = Terrain::parse_from(file)?;

    let product: usize = SLOPES
        .iter()
        .map(|&(delta_x, delta_y)| terrain.slope_count(delta_x, delta_y))
        .product();
    println!("{}", product);

    Ok(())
}
``````

Crates used: `anyhow` 1.0.37, `itertools` 0.10.0, `ndarray` 0.14.0.

`cargo fmt` and `cargo clippy` have been applied.

## lo.logic – Defining squares in integer linear programming

1. Is the following definition of squares allowed (it may not be a known construction, but I want to know whether it is disallowed) in integer linear programming (Presburger arithmetic)?

$$\{z\in\mathbb Z:\exists x,x_1,\dots,x_t\in\mathbb Z\mbox{ such that }A(x,x_1,\dots,x_t,z)'\leq b\wedge z=x^2\}$$

where $$t$$ is a constant, $$A$$ is a fixed matrix with rational entries and a constant number of rows and columns, and $$b$$ is a rational vector with a constant number of rows.

2. If not, then to define all squares up to $$2^r$$, how many rows and columns do you need in $$A$$?

## programming languages – Why didn’t == operator string value comparison make it to Java?

Consistency within the language. Having an operator that acts differently can be surprising to the programmer. Java doesn’t allow users to overload operators – therefore reference equality is the only reasonable meaning for `==` between objects.

Within Java:

• Between numeric types, `==` compares numeric equality
• Between boolean types, `==` compares boolean equality
• Between objects, `==` compares reference identity
• Use `.equals(Object o)` to compare values

That’s it. Simple rule and simple to identify what you want. This is all covered in section 15.21 of the JLS. It comprises three subsections that are easy to understand, implement, and reason about.

Once you allow overloading of `==`, the exact behavior isn’t something you can pin down by pointing to a specific item in the JLS and saying “that’s how it works,” and the code can become difficult to reason about. The exact behavior of `==` may surprise a user. Every time you see it, you have to go back and check what it actually means.

Since Java doesn’t allow for overloading of operators, one needs a way to have a value equality test whose base definition you can override. Thus, it was mandated by these design choices. `==` in Java tests numeric equality for numeric types, boolean equality for boolean types, and reference equality for everything else (and classes can override `.equals(Object o)` to do whatever they want for value equality).

This is not an issue of “is there a use case for a particular consequence of this design decision” but rather “this is a design decision to facilitate these other things, this is a consequence of it.”

String interning is one such example of this. According to the JLS 3.10.5, all string literals are interned. Other strings are interned if one invokes `.intern()` on them. That `"foo" == "foo"` is true is a consequence of design decisions made to minimize the memory footprint taken up by String literals. Beyond that, String interning is something that lives at the JVM level with a little bit of exposure to the user, but in the overwhelming majority of cases it should not be something that concerns the programmer (and use cases for programmers weren’t high on the list for the designers when considering this feature).
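The interning rules described above can be observed directly in a small self-contained demo:

```java
public class StringEq {
    public static void main(String[] args) {
        String a = "foo";              // literal: interned (JLS 3.10.5)
        String b = "foo";              // same interned object as a
        String c = new String("foo");  // freshly allocated object, same value

        System.out.println(a == b);          // true  (both refer to the interned literal)
        System.out.println(a == c);          // false (reference identity: different objects)
        System.out.println(a.equals(c));     // true  (value equality)
        System.out.println(a == c.intern()); // true  (intern returns the canonical object)
    }
}
```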

People will point out that `+` and `+=` are overloaded for String. However, that is neither here nor there. It remains the case that if `==` has a value equality meaning for String (and only String), one would need a different method (that only exists in String) for reference equality. Furthermore, this would needlessly complicate methods that take Object and expect `==` to behave one way and `.equals()` to behave another requiring users to special case all those methods for String.

The consistent contract for `==` on Objects is that it is reference equality only and that `.equals(Object o)` exists for all objects which should test for value equality. Complicating this complicates far too many things.

## functional programming – Sum of numbers divisible by 3 in Prolog

Give two Prolog implementations to calculate the sum of the first N numbers divisible by 3. For example: N=4 -> 30 (3+6+9+12).

``````% sum(N, I, Acc, R): R is Acc plus the first N multiples of 3 at or above I.
sum(0, _, C, C) :- !.        % collected N multiples: the accumulator is the result
sum(N, I, C, R) :-
    I mod 3 =:= 0, !,        % I is divisible by 3: add it and decrement the counter
    C1 is C + I,
    N1 is N - 1,
    I1 is I + 1,
    sum(N1, I1, C1, R).
sum(N, I, C, R) :-           % I is not divisible by 3: just move on
    I1 is I + 1,
    sum(N, I1, C, R).

% ?- sum(4, 1, 0, R).
% R = 30.
``````

I tried like this, but it’s not a good way of thinking. Can somebody help me, please?

## procedural programming – The max memory used in the loop calculation

The max memory used is apparently different between the first and the second run of an identical Do loop. The following is a simple example,

``````In[1]:= tmp = 0;
AbsoluteTiming[Do[tmp = tmp + i, {i, 10^8}];]
MaxMemoryUsed[]

Out[2]= {33.7427,Null}

Out[3]= 68741504

In[4]:= tmp = 0;
AbsoluteTiming[Do[tmp = tmp + i, {i, 10^8}];]
MaxMemoryUsed[]

Out[5]= {33.0666,Null}

Out[6]= 116270120.
``````

Furthermore, in some other cases I can do the calculation with a While loop, but the Do loop will exhaust my memory. The max memory used in the Do loop is even more than twice the cost of the While loop.

What is the principle of memory use during loop calculations in Mathematica? My Mathematica version is 12.0.