Image processing – Individual test paper generation (and marking)

I would like to use Mathematica to create individualized tests that are printed for students to complete by hand. I would also like to generate individualized printed answer keys.

As a bonus, I want to be able to scan student responses and grade them. I understand this would require handwriting recognition, which is problematic, but I could make it easier by restricting the answers (e.g., to numbers) and collecting samples of each student's handwriting.

The specific case I have in mind is to give the students the coordinates of 3 random integer points in the plane and have them work out various triangle centers using coordinate geometry. The answers would be points in the plane, given as pairs of mixed fractions or suitably rounded decimal numbers. I would also have the students draw them on paper and with GeoGebra or Desmos.
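For the answer key, each triangle center is a short coordinate-geometry formula over the three points. A minimal sketch of two such computations (in TypeScript rather than Mathematica; all names here are illustrative, not from the post):

```typescript
type Point = [number, number];

// Centroid: the mean of the three vertices.
function centroid([ax, ay]: Point, [bx, by]: Point, [cx, cy]: Point): Point {
  return [(ax + bx + cx) / 3, (ay + by + cy) / 3];
}

// Circumcenter via the standard determinant formula.
function circumcenter([ax, ay]: Point, [bx, by]: Point, [cx, cy]: Point): Point {
  const d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by));
  const a2 = ax ** 2 + ay ** 2;
  const b2 = bx ** 2 + by ** 2;
  const c2 = cx ** 2 + cy ** 2;
  const ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d;
  const uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d;
  return [ux, uy];
}

console.log(centroid([0, 0], [6, 0], [0, 6]));     // [2, 2]
console.log(circumcenter([0, 0], [6, 0], [0, 6])); // [3, 3]
```

With random integer inputs the results are rationals, so a Mathematica version would get the exact-fraction answers for free; here decimals would need to be rounded per the test's convention.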

I could use a mail merge in Excel and Word. There is also https://www.auto-multiple-choice.net for LaTeX. But I have not pursued these options.

There seem to be web versions of Mathematica available: https://community.wolfram.com/groups/-/m/t/803498

Testing Turing completeness by writing a compiler in a Turing-complete language

No. It is usually relatively easy to compile a language (Turing complete or not). At a very simple level, a compiler flattens an abstract syntax tree into a list of statements. This can usually be done by a simple structural induction over the syntax tree. Optimization passes often perform data-flow or control-flow analyses, which would be harder to express as a simple structural induction.
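The "flatten by structural induction" idea can be sketched in a few lines. This is an illustrative toy (names are mine, not from the answer): an arithmetic AST is compiled to a tiny stack machine with exactly one case per AST constructor.

```typescript
// A tiny source AST and a tiny stack-machine instruction set.
type Expr =
  | { kind: "lit"; value: number }
  | { kind: "add"; left: Expr; right: Expr };

type Instr = { op: "push"; value: number } | { op: "add" };

// One case per constructor: the essence of structural induction.
function compile(e: Expr): Instr[] {
  switch (e.kind) {
    case "lit":
      return [{ op: "push", value: e.value }];
    case "add":
      return [...compile(e.left), ...compile(e.right), { op: "add" }];
  }
}

// A straightforward interpreter for the target machine.
function run(prog: Instr[]): number {
  const stack: number[] = [];
  for (const i of prog) {
    if (i.op === "push") stack.push(i.value);
    else {
      const b = stack.pop()!;
      const a = stack.pop()!;
      stack.push(a + b);
    }
  }
  return stack.pop()!;
}

// (1 + 2) + 3
const e: Expr = {
  kind: "add",
  left: { kind: "add", left: { kind: "lit", value: 1 }, right: { kind: "lit", value: 2 } },
  right: { kind: "lit", value: 3 },
};
console.log(run(compile(e))); // 6
```

Nothing in `compile` needs unbounded search or self-reference, which is why Turing completeness of the host language is beside the point.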

You can give a hands-on demonstration by programming a compiler from the untyped lambda calculus to a simple stack-based Turing-complete machine in Agda or another total (non-Turing-complete) language.

As an "extreme example", the "compiler" could be the identity function. If you thought you could write a compiler any However, this is not possible because the semantics of the language requires the execution of arbitrary code (eg, Common Lisp macros) to work out the syntax or otherwise produce a correct compiler.

Defense – Rules for production data in test systems

When I look at best practices for using test systems, two topics come to mind:

One best practice for having production data in test systems is, for example, to have a retention period before the data is actually deleted, together with complete logging.

Again and again I hear the argument that log files require a lot of storage capacity, which costs a fair amount of money. It has therefore been proposed to enable logging only for critical actions, e.g., deletions and exports. My question: is this not still a problem from a risk perspective, because you then lack complete traceability of user actions?

Deleting production data in test systems:
I know that because of GDPR erasure periods, production data in the test system, especially PII, plays an important role. But what is the risk if you do not have GDPR-relevant data in the test system and do not delete it regularly?

Can you help me write a kernel test?

We posted https://www.drupal.org/files/issues/2019-04-12/adding-custom-access-2904546-24.patch in https://www.drupal.org/project/drupal/issues/2904546 with the request for "a kernel test with and without an active view". The patch itself is very simple, so I would assume the test would be very easy too. I just have no experience writing tests, so I do not really understand what I would test / assert here, or how I should test both the existing and the non-existing views.

javascript – Writing a unit test case to verify that localStorage is empty after logging out of the web app

I am trying to clear my local storage when I log out of the application. I want to write a component test case in Jasmine to check that this happens when the logout function runs. I am writing test cases for the first time, so I am stuck at the very beginning.

In my component.ts file I have a logout function:

logout() {
  location.href = "/";
  localStorage.clear();
}

spec.ts file

beforeEach(function () {
  var store = {};
  spyOn(localStorage, 'getItem').and.callFake(function (key) {
    return null;
  });
});

I do not know whether this is the right approach to writing a test case for this requirement, or whether a unit test or an integration test actually applies to this situation.

Linux – Testing LDAP authentication against a specific LDAP master from a Solaris client

I have OpenDS LDAP on Linux, consisting of two nodes, sea-ldap.xx.yy.ss and phx-ldap.xx.yy.ss. There are Solaris clients whose authentication order depends on whether the client is in Seattle or Phoenix:

For Seattle clients -> uri ldaps://sea-ldap.xx.yy.ss/ ldaps://phx-ldap.xx.yy.ss/

For Phoenix clients -> uri ldaps://phx-ldap.xx.yy.ss/ ldaps://sea-ldap.xx.yy.ss/

We did a switch upgrade, so all connectivity now comes from the Seattle side. I want to check all servers to see whether they can authenticate against Phoenix, regardless of their location.

On an OpenLDAP Linux client I can run this command: ldapsearch -D "cn=ldapadm,dc=xx,dc=yy,dc=xx" -W -H ldaps://phx-ldap.xx.yy.ss/ -b "dc=xx,dc=yy,dc=yy" "cn=johnp"

I do not have ldapsearch on the Solaris 10 client. How else can I check this? For LDAP I only have ldapaddent, ldapclient & ldaplist.

Many thanks