Basic interactive 2D puzzles for kids

I’m aiming to create some basic 2D puzzles for my 3-year-old niece: for example, jigsaw puzzles (moving pieces around), matching animals to their sounds, finding herself in customized “Where’s Waldo” scenes, etc. Preferably playable on a touch screen (Android phone or tablet), since she still has to learn mouse control! Basic background music, a little animation (maybe some physics), good bright colors, and that’s about it.

I’ve looked around and most people seem to love Unity, but that looks like overkill for my case. GDevelop looks about right, but I don’t know what pros think about using HTML5. I looked at Pygame; it’s easy to use, but a Google search tells me not everyone thinks it’s a good choice.

I’m overwhelmed by the sheer number of choices. Please help me get started; I’m happy to code (not a constraint at all) and put in the hours to learn.
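Whichever engine you end up with, the heart of a drag-and-snap jigsaw is just hit-testing and snapping. Here is a minimal, engine-agnostic sketch in Python (the class, coordinates, and the 30-pixel snap threshold are all illustrative, not from any particular engine):

```python
import math

SNAP_DISTANCE = 30  # pixels within which a piece "clicks" into its slot

class Piece:
    def __init__(self, x, y, w, h, target_x, target_y):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.target_x, self.target_y = target_x, target_y
        self.placed = False

    def contains(self, px, py):
        """Hit-test: is the touch point inside this piece?"""
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    def drag_to(self, px, py):
        """Centre the piece under the finger while dragging."""
        if not self.placed:
            self.x, self.y = px - self.w / 2, py - self.h / 2

    def try_snap(self):
        """On release, snap to the target slot if close enough."""
        dist = math.hypot(self.x - self.target_x, self.y - self.target_y)
        if dist <= SNAP_DISTANCE:
            self.x, self.y = self.target_x, self.target_y
            self.placed = True
        return self.placed
```

In GDevelop or Pygame these three steps map directly onto touch-down (hit-test), touch-move (drag), and touch-up (snap) events.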

python – Interactive troubleshooting tool

I am looking to build a web-app-based troubleshooting tool for my work. I already have an Excel version of the tool, but I am looking to make it interactive.

Ideally the program would ask a series of questions before displaying the relevant troubleshooting info (including images).

I have some experience in Python, so I was thinking of using Tkinter for this. Is this the best option?

Thanks for any help received
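Tkinter can certainly present this, but it helps to separate the question flow from the UI. A minimal sketch in Python of the decision tree behind such a tool (all node names, questions, and image file names below are made up for illustration):

```python
# Each node is either a question with answer -> next-node links,
# or a leaf holding the troubleshooting info (text plus image) to show.
TREE = {
    "start":         {"question": "Does the device power on?",
                      "answers": {"yes": "check_network", "no": "check_power"}},
    "check_power":   {"result": "Check the power cable and fuse.",
                      "image": "power.png"},
    "check_network": {"question": "Is the network light green?",
                      "answers": {"yes": "no_fault", "no": "reset_router"}},
    "reset_router":  {"result": "Reset the router and retry.",
                      "image": "router.png"},
    "no_fault":      {"result": "No fault found."},
}

def describe(node_id):
    """Return (text, is_leaf, valid_answers) for the given node."""
    node = TREE[node_id]
    if "result" in node:
        return node["result"], True, []
    return node["question"], False, list(node["answers"])

def next_node(node_id, answer):
    """Follow an answer to the next node."""
    return TREE[node_id]["answers"][answer]
```

A Tkinter front end would keep a current-node variable, show describe(node) in a label, and create one button per valid answer that calls next_node; a web version would map the same two functions onto request handlers.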

Super-resolution photos by interactive focus stacking and image stitching?

My own experience with doing similar things suggests that you should take all the pictures you need in one go using a tripod (note that tripods are cheap). The workflow for the projects I’ve done looks as follows.

You take pictures with a tripod and remote control at the lowest ISO setting available. You should use manual focus and check for optimal focus using the zoom feature of live view. The choice of F-number is more complicated in the case of super-resolution. Normally you could safely take pictures at F/6: this isn’t so high as to give unsharpness due to diffraction, while it is high enough to give a decent DoF and reduce the effects of lens imperfections. However, at F/6 you will stray well within the diffraction limit, because the effective pixels you will be working with are much closer together than the physical pixels on the image sensor. So you should use a lower F-number; how much lower then depends on the quality of the lens. You will still end up with some unsharpness due to diffraction, but the less unsharpness you get, the less image quality will be lost when correcting for it.

Then you put an empty memory card of 32 GB or more in your camera to take the pictures (otherwise you would have to change the memory card, or offload the data to the computer, too frequently). For each fixed setting, take at least 25 pictures. Then change the exposure, and after that the focus. Finally, you point the camera at another part of the object, making sure there is a reasonable amount of overlap to do the stitching later.

The post-processing workflow looks as follows. Let’s first forget about super-resolution and just consider focus stacking + HDR. Using your raw processor, you convert the raw files to TIFF; you should turn off noise reduction. Then you use the align_image_stack program of the Hugin panorama stitcher to create aligned TIFF files for each set of pictures that was taken at the same settings. Even if you take pictures on a tripod, there will still be shifts, typically a fraction of a pixel, but even such small shifts must be eliminated.

There are different choices you can make for the options you must specify to run the align_image_stack program. I typically use the following command:

align_image_stack -a al -C -t 0.3 -c 20 im1.tif im2.tif im3.tif....

The -a al argument tells the program that all the remapped files will get the prefix al with a number attached to it. The -C argument will crop all the remapped images to the same size. The -t 0.3 option tells the program that the control points it attempts to match to each other must be within 0.3 pixels. The -c 20 option sets the number of control points to 20. The program aligns the images in the order you type the file names; in general this order matters, because when aligning pictures with different exposures you want to put the images with small differences in exposure next to each other, but in this case it doesn’t matter.

You then average over each such set to eliminate the noise; I use ImageMagick for that. You put all the files you need to average over in one directory. The command is then of the form:

convert *.tif -poly "w,1,w,1,w,1,w,1..." av.tif

You take w = 1/(number of pictures); the second argument of each pair is the power, and since we don’t want to raise the values to a power it is set equal to 1. You need to give the weight and power for each picture; the output ends up in av.tif.
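Typing out the weight/power pairs by hand is error-prone for 25+ frames, so you may want to generate the argument with a short script. A sketch in Python (it only builds the command line for ImageMagick’s convert; the file names are illustrative):

```python
def build_poly_weights(n):
    """Weight/power pairs for ImageMagick's -poly: each frame gets
    weight 1/n and power 1, i.e. a plain average."""
    return ",".join(f"{1.0 / n:g},1" for _ in range(n))

def average_command(files, output="av.tif"):
    """Full convert command line for averaging `files` into `output`."""
    return ["convert", *files, "-poly", build_poly_weights(len(files)), output]

# You could then run it with, e.g.:
#   import glob, subprocess
#   subprocess.run(average_command(sorted(glob.glob("al*.tif"))), check=True)
```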

Then, with all the averages taken at different focus settings, you can do focus stacking. You first have to align all these averages: crop them all to the same size, and then use the command:

align_image_stack -a al -m -z -t 0.3 -c 20 im1.tif im2.tif im3.tif...

In this case you don’t use -C to crop; instead, the options -m and -z are needed to maximize the field of view and to correct for the magnification of the individual images due to the different focus settings. Then, using the enfuse program of the Hugin panorama stitcher, you combine the remapped images into an image with an extended DoF using the command:

enfuse --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 --hard-mask *.tif

You repeat this for all the different exposures and for the pictures of the different parts of the scene. You then combine the different exposures of the same part by first aligning them and then running enfuse with its default settings (the command is then simply enfuse *.tif). You then have HDR pictures with enhanced DoF for each part. Finally, you combine the pictures for the different parts using the Hugin panorama stitcher.

The workflow to get super-resolution requires you to split the images taken with the same settings into groups whose alignment shifts, taken modulo 1 pixel, differ by less than the desired resolution in either direction. So, if we want to double the resolution, we group the pictures according to whether the shift in alignment is closer to a half-integer or to an integer in the x and y directions. We then get 4 groups of pictures, each of which is processed as above, except for the HDR processing. The alignment of the averages of the pictures in the different groups must then be calculated precisely (I use the ImageJ program for that), and then via interpolation you shift them to the desired values. You then combine them into a picture with 4 times the resolution. A complication here is that the shifts are typically not uniform enough for whole pictures to fit in any particular group (I usually only use super-resolution for small objects like the Moon that are no more than about 100 pixels across). What you then need to do is cut the picture into small parts and treat them separately.
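The grouping step for a 2× resolution increase can be sketched as follows: reduce each frame’s alignment shift modulo 1 pixel and bin it to the nearest of 0 or ½ in each direction, giving 4 groups. A Python sketch (the shift values would come from your aligner; the ones used here are synthetic):

```python
def subpixel_group(shift_x, shift_y):
    """Assign a frame to one of 4 groups for 2x super-resolution.

    Each shift modulo 1 pixel is binned to the nearest of 0 or 0.5,
    giving groups (0, 0), (0, 1), (1, 0), (1, 1).
    """
    def bin_half(s):
        frac = s % 1.0
        # Nearest of 0, 0.5, 1.0 -- a bin of 1.0 wraps back to 0.
        return int(round(frac * 2)) % 2
    return bin_half(shift_x), bin_half(shift_y)

def group_frames(shifts):
    """Map group -> list of frame indices for (dx, dy) alignment shifts."""
    groups = {}
    for i, (dx, dy) in enumerate(shifts):
        groups.setdefault(subpixel_group(dx, dy), []).append(i)
    return groups
```

Each resulting group is then averaged as above; frames whose shifts fall near the bin boundaries are the ones that force you to cut the picture into smaller parts.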

You’ll then see that the combined super-resolution pictures are unsharp. Using deconvolution you can sharpen them. This requires transforming to linear color space, estimating the point spread function, running a deconvolution algorithm, and then transforming back to sRGB. You can then combine the different exposures using enfuse.
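As a sketch of what the deconvolution step does, here is a toy 1-D Richardson-Lucy iteration in plain Python; real tools work in 2-D on linear-light data, and the spike signal and 3-tap PSF here are synthetic:

```python
def convolve(signal, psf):
    """Circular 1-D convolution with the PSF centred at index len(psf)//2."""
    n, k = len(signal), len(psf)
    half = k // 2
    return [sum(psf[j] * signal[(i + j - half) % n] for j in range(k))
            for i in range(n)]

def richardson_lucy(observed, psf, iterations=30):
    """Richardson-Lucy deconvolution: iteratively sharpen `observed`."""
    psf_flipped = psf[::-1]
    estimate = [1.0] * len(observed)
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# Blur a spike with a 3-tap PSF, then recover it.
psf = [0.25, 0.5, 0.25]
sharp = [0.0] * 16
sharp[8] = 1.0
blurred = convolve(sharp, psf)
restored = richardson_lucy(blurred, psf)
```

The same idea extends to 2-D: estimate the PSF (for example from a star or a sharp edge), then iterate the multiplicative correction until the image stops improving.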

Note that instead of running enfuse to do HDR, you can let Hugin do that in one go when it stitches the panorama, though you can then end up with a large number of pictures for Hugin to process. In principle, this should lead to a better result, as Hugin transforms to linear colorspace to do the processing, and that part of the computation isn’t going to be accurate when you feed it HDR-processed pictures. This doesn’t affect the alignment of the panorama, only the final HDR output. But I’ve never seen any significant difference between the two methods.

SharePoint Online and adding an interactive page (Javascript)

I haven’t developed in SharePoint for a while and am trying to get back up to speed on how to create an interactive SharePoint Online page, either classic or modern.

The requirement is to display/edit a simple page with some items from lists/libraries, so I thought of a web part page. Once the solution is created, I want to turn it into a template and re-create it on other site(s).

The solution needs to limit certain lists/libraries to 1-3 items;
i.e. a user can only create one item in a certain list, otherwise a warning is displayed.

I would have expected JavaScript to allow this validation, but it seems Microsoft no longer allows custom JS and instead advocates SPFx? (I have tried SharePoint Designer and I cannot seem to add a Script Editor web part.)

Is there a simple way to achieve my requirement without building a full SPFx solution? I was thinking the bulk of the work could be done in the UI, with maybe a small validation script added.

plugins – Interactive Web App in WordPress?

I am very new to WordPress and have just created my first website. I have created a machine learning model in Python that predicts an output based on the user’s inputs, and I would like to make an interactive web app on my WordPress site where the user can enter these inputs, the model makes its prediction, and the result is shown to the user.

I was wondering what the best method / tools would be to create such a web application. Is this something that is possible in WordPress or would it be best to do it another way?

Thanks!
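One common pattern is to keep the model out of WordPress entirely: run it behind a small Python HTTP endpoint and call that endpoint from the WordPress page with a bit of JavaScript (fetch). A minimal sketch using only the standard library (the linear “model”, the port, and the JSON shape are all illustrative):

```python
import json
from wsgiref.simple_server import make_server

def predict(features):
    """Stand-in for the real model: a toy linear predictor."""
    weights = [0.5, 2.0, -1.0]  # illustrative coefficients
    return sum(w * x for w, x in zip(weights, features))

def app(environ, start_response):
    """WSGI endpoint: POST a JSON body {"features": [...]} to get a prediction."""
    size = int(environ.get("CONTENT_LENGTH") or 0)
    body = json.loads(environ["wsgi.input"].read(size) or b"{}")
    result = json.dumps({"prediction": predict(body.get("features", []))})
    start_response("200 OK", [("Content-Type", "application/json")])
    return [result.encode()]

# To serve locally: make_server("", 8000, app).serve_forever()
# The WordPress page would then call it with fetch("http://host:8000", ...).
```

On the WordPress side, a custom HTML block with a form and a fetch() call to this endpoint is enough; any plugin that allows custom JavaScript can host that snippet.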

r – Arrange kable interactive table by clicking on the column names

I’d like to be able to arrange (sort) the table by clicking a column name, like I can with View(.) in R.

This is my code:

library(knitr)
library(kableExtra)

titanic %>% 
  kbl() %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                fixed_thead = T) %>% 
  scroll_box(width = "1000px", height = "1000px")

At the moment, clicking the column names doesn’t reorder anything, and that’s what I’d like to change.

xterm – Opening a terminal, running a command, and leaving an interactive terminal open

I don’t have high hopes for this, but what I’m trying to do is create a Desktop shortcut that, when clicked, opens a terminal (preferably mate-terminal, but I’m not picky), runs a command, and leaves the terminal open in an interactive session. Ideally, I would be able to arrow up in the new window to rerun the command.

Things I have tried:

  • mate-terminal --command "bash -c 'ls;$SHELL'": I cannot see what command was run and I cannot arrow up to rerun the command.
  • Changing the MATE terminal preferences to Keep Open: It keeps the window open, but it’s not interactive. All I can do is close it.
  • xterm -hold -e 'bash -c "ls"&&$SHELL': I cannot see what command was run and I cannot arrow up to rerun the command.

What I’m trying to do is create a shortcut for a command-line application that opens a terminal and runs whatevertool -h and then just waits for the user to do what they need to do. Is what I want possible?

usability – Interactive maps using d3?

I’m a freshman interaction design student in a Web Development I class. I have a pretty strong understanding of HTML & CSS already, but I have never used JavaScript. My professor asked me to lead a special web design project that will involve creating interactive maps to render economics-related data.

Here are some requirements for the project:

•   The local map (covering 2 counties) will include a visualization of: foreign direct investment per county (per year) and exports by industry per county (per year).

•   The world map will provide a visualization of: export value to foreign nations, import value from foreign nations, and service export value to foreign nations (all per year).

•   Each map will pull from a JSON database that will be compiled for me.

•   As the user mouses over each county or country, a popover including the $ value should appear.

•   I need to create the SVG containers and properly position the data within the containers.

•   I would like to add drop-down or radio options for each map that will include the ability to switch between the sets of data, such as: years, exports, foreign direct investments, etc.

•   I also believe I should create a drop-down or some type of list allowing the user to choose the specific country or industry they wish to find without having to click on the map.

•   I would like to color the areas of the map according to each value, for example: light blue for $100M - dark blue for $100B. 

I have been reading about jQuery on W3Schools, I have watched multiple YouTube videos on how to use D3, and I have tried it myself using some examples from GitHub. I have not received the data yet, but I do want to understand what my limitations are before I start the design for the rest of the website, so I will be trying out a CSV file from the census to start learning how to position the elements within the container.

My question is, does anyone have experience creating similar maps? I have read through other threads here, and it’s given me a better understanding. However, I still feel a little lost. Does anyone know of any resources I could use to learn more?

I know this question is a bit vague, so I understand if there isn’t much to say. But really, I am just relieved that there is a place for me to even ask it! So, thank you for your time!

programming practices – Building an interactive futuristic like interface for wide touch screen

I’m a newcomer looking to build an interactive, futuristic-style interface. It will be a wide touch screen with touchable elements and background animation, just like the futuristic screens in the background animations of the videos here and here. I don’t want it to be a one-time setup for my desktop, but a project where I can add sub-screens, images, etc. How do I get started building this project? What resources, software, tools, etc. should I use to implement it? Which programming language is suited to building such an interface?

java – Interactive Mandelbrot set pictures

The purpose of this project is to generate an interactive Mandelbrot set. The user can specify the degree of magnification from the command-line and click on the produced picture to magnify the picture at that point.

Here is a data-type implementation for Complex numbers:

public class Complex
{
    private final double re;
    private final double im;

    public Complex(double re, double im)
    { 
        this.re = re; 
        this.im = im; 
    }
    public double re() 
    { 
        return re; 
    }
    public double im()
    { 
        return im; 
    }
    public double abs()
    { 
        return Math.sqrt(re*re + im*im); 
    }
    public Complex plus(Complex b)
    {
        double real = re + b.re;
        double imag = im + b.im;
        return new Complex(real, imag);
    }
    public Complex times(Complex b)
    {
        double real = re*b.re - im*b.im;
        double imag = re*b.im + im*b.re;
        return new Complex(real, imag);
    }
    public Complex divide(Complex b)
    {
        double real = (re*b.re + im*b.im) / (b.re*b.re + b.im*b.im);
        double imag = (im*b.re - re*b.im) / (b.re*b.re + b.im*b.im);
        return new Complex(real, imag);
    }
    public boolean equals(Complex b)
    {
        if (re == b.re && im == b.im) return true;
        else                          return false;
    }
    public Complex conjugate() 
    {
        return new Complex(re, -1.0*im);
    }
    public String toString()
    {
        return re + " + " + im + "i";
    }
}

Here is my program:

import java.awt.Color;

public class InteractiveMandelbrot {
    private static int checkDegreeOfDivergence(Complex c) {
        Complex nextRecurrence = c;
        
        for (int i = 0; i < 255; i++) {
            if (nextRecurrence.abs() >= 2) return i;
            nextRecurrence = nextRecurrence.times(nextRecurrence).plus(c);
        }
        return 255;
    }
    private static Color[] createRandomColors() {
        Color[] colors = new Color[256];
        double r = Math.random();
        int red = 0, green = 0, blue = 0;
        
        for (int i = 0; i < 256; i++) {
            red = 13*(256-i) % 256;
            green = 7*(256-i) % 256;
            blue = 11*(256-i) % 256;
            colors[i] = new Color(red, green, blue);
        }
        return colors;
    }
    private static void drawMandelbrot(double x, double y, double zoom) {
        StdDraw.enableDoubleBuffering();
        
        Color[] colors = createRandomColors();
        
        int resolution = 1000;
        int low = -resolution / 2;
        int high = resolution / 2;
        double xLowScale = x + zoom*(1.0 * low / resolution);
        double xHighScale = x + zoom*(1.0 * high / resolution);
        double yLowScale = y + zoom*(1.0 * low / resolution);
        double yHighScale = y + zoom*(1.0 * high / resolution);
        StdDraw.setXscale(xLowScale, xHighScale);
        StdDraw.setYscale(yLowScale, yHighScale);

        for (int i = low; i < high; i++) {
            for (int j = low; j < high; j++) {
                double realPart = zoom*(1.0 * i / resolution) + x;
                double imaginaryPart = zoom*(1.0 * j / resolution) + y;
                Complex c = new Complex(realPart,imaginaryPart);
                int degreeOfDivergence = checkDegreeOfDivergence(c);
                Color color = colors[degreeOfDivergence];

                StdDraw.setPenColor(color);
                double radius = 1.0 / (resolution * 2 / zoom);
                StdDraw.filledSquare(realPart, imaginaryPart, radius);
            }
        }
        StdDraw.show();
    }
    public static void main(String[] args) {
        double x = Double.parseDouble(args[0]);
        double y = Double.parseDouble(args[1]);
        int magnifier = Integer.parseInt(args[2]);
        double zoom = 1;

        drawMandelbrot(x, y, zoom);

        while (true) {
            if (StdDraw.isMousePressed()) {
                x = StdDraw.mouseX();
                y = StdDraw.mouseY();
                zoom = zoom/magnifier;

                drawMandelbrot(x, y, zoom);
            }
        }
    }
}

StdDraw is a simple API written by the authors of the book Computer Science: An Interdisciplinary Approach. I checked my program and it works. Here is one instance of it.

Input: 0.5 0.5 10

Output (actually a succession of outputs):

(Four screenshots of the successive outputs.)

I used paint to show where I clicked.

From my own previous posts, I already know how to improve Complex. In this post, I am solely interested in improving InteractiveMandelbrot. Is there any way I can improve my program?

Thanks for your attention.