turing completeness – How do we define the term “computation” across models of computation?

How do we define the term computation / computable function generically across models of computation?

Beginning with the textbook definitions of “computable function” from Linz, Sipser, and Kozen:

A function f with domain D is said to be Turing-computable, or just computable, if there exists some Turing machine M = (Q, Σ, Γ, δ, q0, □, F)
such that q0w ⊢*_M q_f f(w), with q_f ∈ F, for all w ∈ D. (Linz:1990:243)

Computable Functions
A Turing machine computes a function by starting with the input to the function on the tape and halting with the output of the function on the tape. (Sipser:1997:190)

Definition 5.12
A function f: Σ* → Σ* is a computable function if some Turing machine M, on every input w, halts with just f(w) on its tape. (Sipser:1997:190)

A partial function σ: Σ* → Σ* is said to be a computable function if σ(x) = M(x) for some Turing machine M. (Kozen:1997:347)

I need the term computation / computable function defined generically across models of computation, so that I can determine whether a specific C function counts as computing a computable function. I have a C function that uses static local data, and some reviewers say that this does not count as a computation / computable function.
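
To make the reviewers' objection concrete, here is a hypothetical sketch (the function name and values are invented, not from the original post) of a C function whose static local state makes its output depend on call history rather than on its argument alone:

```c
/* Hypothetical illustration: because of the static local variable, the
 * value returned for the same argument changes from call to call, so the
 * function by itself does not denote a fixed mapping from inputs to
 * outputs in the sense of the definitions quoted above. */
int count_calls(int x) {
    static int calls = 0;  /* retained across invocations */
    calls++;
    return x + calls;
}
```

Calling count_calls(5) twice yields two different results; whether such a procedure still "computes a function" (for example, a function of the pair argument-plus-call-number) is exactly what a generic definition of computation would need to settle.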

Linz, Peter 1990. An Introduction to Formal Languages and Automata. Lexington/Toronto: D. C. Heath and Company.

Sipser, Michael 1997. Introduction to the Theory of Computation. Boston: PWS Publishing Company.

Kozen, Dexter 1997. Automata and Computability. New York: Springer-Verlag.

c++ – Is it best practice to define a member function directly in a class?

When a function is defined as part of the class definition, it is implicitly inline (regardless of whether you use that keyword). This has a number of consequences.

  • The function may be inlined by the compiler, potentially (but not necessarily) resulting in a faster program.
  • The One Definition Rule (ODR), which normally requires that each function or object have exactly one definition in the entire program, does not apply. Instead, you promise that all copies of the function are equivalent. (This promise is violated if you change the function in the header but don’t recompile all compilation units that include the header.)
  • As a consequence, different compilation units can get different copies of this function.
  • Changing an inline function breaks ABI compatibility, requiring recompilation of everything that uses it.

Defining the function as part of the class definition is equivalent to defining it outside the class definition, but still within the header, marked inline:

class C {
    int func();
};

inline int C::func() { return ...; }

Now when we put the function definition into a separate compilation unit, we have a different set of consequences:

  • The function cannot be inlined (unless your compiler performs link-time optimization), which might be slightly less efficient.
  • The function is only defined in that compilation unit. To call it from other compilation units, the object code has to be combined by the linker.

The big practical difference is what happens when you modify this function.

  • For an inline function, you have to recompile all code that uses this definition.
  • For a non-inline function, you only have to recompile the one compilation unit that defines it, and then re-link it with the other compilation units.
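
As a minimal sketch of the non-inline arrangement described above (the file names and the function body are placeholders):

```cpp
// C.h -- the header contains only the declaration, so editing the body of
// func() never forces clients of this header to recompile.
class C {
public:
    int func();
};

// C.cpp -- only this one translation unit is recompiled when func() changes;
// the result is then re-linked with the rest of the program.
int C::func() { return 42; }
```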

On large projects, avoiding recompilation is very important in order to enable fast feedback when making changes. This leads to various strategies:

  • keep header files as minimal as possible, possibly splitting them so that dependent compilation units pull in only the declarations they need
  • prefer defining functions in separate compilation units
  • if compile time performance and ABI stability are more important than run time performance, the pImpl idiom can be used
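
A minimal sketch of the pImpl idiom mentioned above (all names are placeholders): the header exposes only an opaque pointer, so the private members can change without recompiling code that includes the header, and the layout of the public class stays stable:

```cpp
// Widget.h
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                     // declared here, defined where Impl is complete
    int value() const;
private:
    struct Impl;                   // incomplete type in the header
    std::unique_ptr<Impl> pImpl;
};

// Widget.cpp -- private details live here; changing Impl does not touch Widget.h
struct Widget::Impl {
    int data = 7;
};

Widget::Widget() : pImpl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;       // defined here, where Impl is a complete type
int Widget::value() const { return pImpl->data; }
```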

However, inline functions still have their uses:

  • the function definition is trivial, unlikely to change, and should be inlined, e.g. getters or setters
  • templates generally have to be defined in headers, so they behave like inline functions
  • the function should be available for inlining due to performance reasons

If you want to enable inlining optimizations within a compilation unit, declaring the function inline is neither necessary nor particularly helpful. It matters more that the function has “internal linkage”: free functions (that are not class members) have internal linkage when declared static, and everything inside an anonymous namespace has internal linkage. This is useful for helper functions within a .cpp file.
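
For example, a sketch of internal-linkage helpers in a single .cpp file (names invented):

```cpp
// helpers.cpp -- both helpers have internal linkage, so they are invisible to
// other translation units and the compiler is free to inline them here.
namespace {
    int triple(int x) { return 3 * x; }        // anonymous namespace
}

static int quadruple(int x) { return 4 * x; }  // static free function

// An ordinary (external-linkage) function using both helpers.
int combined(int x) { return triple(x) + quadruple(x); }
```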

If performance and compilation times and ABI stability are no concern, then defining your functions as part of the class definition is perfectly fine. This is the case in many smaller projects that are internally used libraries or executables. Some people prefer having the function definitions right in the class. Other people prefer keeping the definitions separate, so that it’s easier to get an overview of all members of the class.

This answer applies to all versions of the C++ standard, with the caveat that the C++20 module system changes this: if a function is defined within a class definition that is part of a module, it is not implicitly inline.

lisp – Scheme allow uppercase define

To define a square in scheme I can do the following:

(define (square x) (* x x))

However, if I use a capital DEFINE, I get an invalid identifier:

(DEFINE (square x) (* x x))

Is there a way to do something like the following?

#lang sicp
; using DrRacket
(define (DEFINE) define) ; sort of a macro where DEFINE expands to define
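
One possible sketch (assuming plain Racket; whether #lang sicp provides define-syntax-rule is an open assumption): the alias has to be a macro rather than a function, because define is a special form and cannot be passed around as a value:

```scheme
; Hypothetical sketch: forward DEFINE to define with a macro.
(define-syntax-rule (DEFINE form ...) (define form ...))

(DEFINE (square x) (* x x))
(square 4) ; evaluates to 16 in plain Racket
```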

apache spark – Is there a way to define pure SQL UDF’s that still take advantage of all of Pyspark’s optimizations?

I’m repeating a lot of code that looks like the following:

trim(concat(ifnull(`field1`, ''), ' ', ifnull(`field2`, ''))) as my_field

It would be nice to be able to define a function called trim_and_concat that takes an arbitrary number of fields, concatenates them, and trims the result. I could write a UDF, but then I would lose out on all the PySpark optimization.

Is it possible to define a function that combines native Spark SQL methods such that the loss of optimization typically associated with UDFs is avoided?

I know about the CREATE FUNCTION syntax, but as far as I can tell this is just another way to create UDFs, and it still requires that the functions be written in Scala or Python.
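
One hedged sketch (the helper name is invented): instead of a UDF, generate the native SQL expression as a string, so the optimizer only ever sees built-in functions:

```python
def trim_and_concat_sql(*fields):
    """Build a native Spark SQL expression (a plain string) that trims the
    space-joined, null-safe concatenation of the given fields. The result
    can be passed to selectExpr() or expr(), so no Python UDF is involved
    and the usual Catalyst optimizations are preserved."""
    parts = ", ' ', ".join(f"ifnull(`{f}`, '')" for f in fields)
    return f"trim(concat({parts}))"
```

For example, df.selectExpr(trim_and_concat_sql("field1", "field2") + " as my_field") reproduces the hand-written expression above for any number of fields.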

replacement – How to define an arbitrarily long index list of local variables

Suppose initially I have a list that looks like

list1 = {A, B, C}

where the elements A, B, C are all matrices. I want to replace all three elements with list1 itself:

list2 = ReplacePart[list1, {i_} -> list1]

which gives me something that looks like

list2 = {{A, B, C}, {A, B, C}, {A, B, C}}

Then I want to do this one more time, replacing every element of list2 with list1, so I write

list3 = ReplacePart[list2, {i_, j_} -> list1]

Eventually, suppose I want to do this 10 times. At the end of the day, I would have to write a list of 10 local variables:

{i_, j_, k_, l_, m_, n_, ...}

My question is: how can I define the index list of local variables to be arbitrarily long, so that I just tell Mathematica the length of the list and don’t have to write the variables out explicitly?
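
One hedged sketch (assumption: ReplacePart accepts pattern position specifications, as it does in modern Mathematica): since the pattern names i, j, k, … are never used on the right-hand side, anonymous blanks suffice, and a length-k list of blanks can be built programmatically:

```mathematica
(* Table[_, {k}] produces {_, _, ..., _} with k blanks. *)
indexPattern[k_Integer] := Table[_, {k}]

(* Same effect as {i_, j_} -> list1, without naming the variables. *)
list3 = ReplacePart[list2, indexPattern[2] -> list1]
```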

Thank you so much!

symbolic – How to define distributivity of CenterDot on bras and kets

I am working on code for a coupled quantum harmonic oscillator and hit a snag when trying to evaluate inner products of linear combinations of bras and kets. I initially defined the CenterDot of a bra and a ket as follows:

CenterDot[
  Bra[n_Integer, m_Integer, o_Integer, p_Integer, q_Integer, r_Integer],
  Ket[n1_Integer, m1_Integer, o1_Integer, p1_Integer, q1_Integer, r1_Integer]] :=
 KroneckerDelta[n - n1] KroneckerDelta[m - m1] KroneckerDelta[o - o1] *
  KroneckerDelta[p - p1] KroneckerDelta[q - q1] KroneckerDelta[r - r1]

Although this works for most of my code, I am having trouble extending the definition to make CenterDot distributive. I was able to define a distributive function of one variable in the same code as follows (thanks to another thread), and it seemed to work:

Fac[x__Plus] := Distribute[Unevaluated[Fac[x]]];

I tried using something similar for CenterDot, but I am unsure how to specify that the arguments are composed of bras and kets, or whether this even works in principle.

Also, Expand and Distribute work well at simplifying linear combinations of bras and kets, but they seem to stop when they see parentheses on either side of the CenterDot. For example:

Expand[CenterDot[2*Bra[1, 0, 0, 0, 0, 0] + 1/2*Bra[0, 1, 0, 0, 0, 0],
  Ket[1, 0, 0, 0, 0, 0]]]

gives

(1/2 Bra[0, 1, 0, 0, 0, 0] + 2 Bra[1, 0, 0, 0, 0, 0]) \[CenterDot]
 Ket[1, 0, 0, 0, 0, 0]

If anyone has ideas on how to make this CenterDot distributive on any linear combinations of bras and kets, please share.
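
A hedged sketch of one common approach (assuming the Bra/Ket setup above; untested): add rules that distribute CenterDot over sums and pull numeric scalars out, so that the KroneckerDelta rule can then fire on the individual bra–ket pairs:

```mathematica
(* Distribute CenterDot over Plus appearing in any argument slot. *)
CenterDot[a___, b_Plus, c___] := Distribute[Unevaluated[CenterDot[a, b, c]]]

(* Pull numeric coefficients like 2 or 1/2 out of either slot. *)
CenterDot[a___, k_?NumericQ b_, c___] := k CenterDot[a, b, c]
```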

Cheers

real analysis – Can we define a metric on the Hilbert cube using any lp metric?

This is the definition given for the Hilbert cube:

[definition shown as an image in the original post; it equips the Hilbert cube with what is essentially the l² metric]

The metric defined here is basically just the l² metric. I was wondering whether we can use any of the lp metrics instead, as long as p > 1. It can be shown using the comparison test that the relevant series all converge, so any of those should also give a well-defined metric, correct?
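
For concreteness, assuming the common textbook definition of the Hilbert cube as the product of the intervals [0, 1/n] with the l² metric (this may differ from the definition in the image), the comparison-test argument reads:

```latex
% Assumed definition:
%   H = \prod_{n \ge 1} [0, 1/n], \qquad
%   d_2(x, y) = \Big( \sum_{n=1}^{\infty} (x_n - y_n)^2 \Big)^{1/2}.
% Candidate l^p metric for p > 1:
d_p(x, y) = \Big( \sum_{n=1}^{\infty} |x_n - y_n|^p \Big)^{1/p}
          \le \Big( \sum_{n=1}^{\infty} n^{-p} \Big)^{1/p} < \infty,
% since |x_n - y_n| \le 1/n and \sum n^{-p} converges exactly when p > 1
% (which is why p > 1 is needed); the metric axioms then follow from
% Minkowski's inequality.
```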