differential equations – Asymptotic Output Tracking – Where to Place the Input Control Signal?

Asymptotic Output Tracking: Code Issues

I am asking for help from specialists in differential equations, dynamical systems, optimal control, and general control theory.

I have the following system of differential equations:

$$\begin{cases} \frac{dx(t)}{dt}=G(t) \\ \frac{dz(t)}{dt}+z(t)=\frac{df}{dt} \\ \frac{dG(t)}{dt}+G(t)=z(t)\cdot \alpha\sin(\omega t) \\ \frac{dH(t)}{dt}+H(t)=z(t)\cdot\frac{16}{\alpha^{2}}\left(\sin(\omega t)-\frac{1}{2}\right) \\ \frac{dX(t)}{dt}+X(t)=\frac{dx(t)}{dt} \end{cases}$$

where $x,z,G,H,X$ are the variables, $f=-(x(t)+\alpha\sin(\omega t)-x_e)^2$, and $\alpha, \omega$ are parameters.

As an output $y$, I assign:

$y=\tanh(k \cdot H(t))$

As the reference signal $r_1$, I assign:

$r_1=-1$

As the time constant $p_1$, I assign:

$p_1=-1$

I tried to program this in Mathematica and ran into a difficulty that I can’t get past yet. Question: in which of the equations should the control signal $u(t)$ be placed?

I chose the first equation, so the original system of equations becomes:

$$\begin{cases} \frac{dx(t)}{dt}=G(t)+u(t) \\ \frac{dz(t)}{dt}+z(t)=\frac{df}{dt} \\ \frac{dG(t)}{dt}+G(t)=z(t)\cdot \alpha\sin(\omega t) \\ \frac{dH(t)}{dt}+H(t)=z(t)\cdot\frac{16}{\alpha^{2}}\left(\sin(\omega t)-\frac{1}{2}\right) \\ \frac{dX(t)}{dt}+X(t)=\frac{dx(t)}{dt} \end{cases}$$

(***)

Clear["Derivative"]

ClearAll["Global`*"]

Needs["Parallel`Developer`"]

S[t] = \[Alpha] Sin[\[Omega] t]

M[t] = 16/\[Alpha]^2 (Sin[\[Omega] t] - 1/2)

f = -(x[t] + S[t] - xe)^2

Parallelize[
 asys = AffineStateSpaceModel[{x'[t] == G[t] + u[t],
     z'[t] + z[t] == D[f, t], G'[t] + G[t] == z[t] S[t],
     H'[t] + H[t] == z[t] M[t],
     1/k X'[t] + X[t] == D[x[t], t]}, {{x[t], xs}, {z[t], 0.1}, {G[t],
       0}, {H[t], 0}, {X[t], 0}}, {u[t]}, {Tanh[k H[t]]}, t] //
   Simplify]

pars1 = {Subscript[r, 1] -> -1, Subscript[p, 1] -> -1}

Parallelize[
 fb = AsymptoticOutputTracker[asys, {-1}, {-1, -1}] // Simplify]

pars = {xs = -1, xe = 1, \[Alpha] = 0.3, \[Omega] = 2 Pi*1/2/Pi,
  k = 100, \[Mu] = 1}

Parallelize[
 csys = SystemsModelStateFeedbackConnect[asys, fb] /. pars1 //
    Simplify // Chop]

plots = {OutputResponse[{csys}, {0, 0}, {t, 0, 1}]}

At the end, I get an error.

At t == 0.005418556209176463`, step size is effectively zero; 
singularity or stiff system suspected

It seems to me that this is either because there is a singularity somewhere in the system, or because I have put the control input signal in the wrong equation. I need the support of a theorist who can help me choose the right sequence of steps to solve the problem.
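One check I have in mind (a sketch only; it assumes the numeric values from pars have already been assigned, so that asys evaluates to a fully numeric model) is to simulate the plant alone with zero input, to see whether the stiffness already appears without the tracking feedback:

(* open-loop response with u(t) = 0, before connecting the tracker *)
openLoop = OutputResponse[asys, 0, {t, 0, 1}];
Plot[openLoop, {t, 0, 1}, PlotRange -> All]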

I would be glad of any advice and help.

apparmor – How to customize a Docker container profile to implement fine-grained network access control

1. Materials

apparmor policy reference https://gitlab.com/apparmor/apparmor/-/wikis/AppArmor_Core_Policy_Reference#AppArmor_globbing_syntax

2. My profile

#include <tunables/global>

profile docker-test flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  deny /data/** rwl,
  deny /usr/bin/top mrwklx,
  deny /usr/bin/hello mrwklx,

  deny network,
  file,
  capability,

  deny network inet tcp,
  deny network bind inet tcp src 192.168.1.1:80 dst 170.1.1.0:80,
}

3. My error

syntax error, unexpected TOK_ID, expecting TOK_END_OF_RULE

The error comes from the last line, which contains specific IP addresses. I tested this on Ubuntu 18.04 with kernel 5.4.0-42-generic and AppArmor 3.0.1, which I compiled from source.
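For reference, I load the profile with apparmor_parser (a sketch; the path below is just where I keep the profile), and the parser only complains once that last rule with the src/dst addresses is present:

# load / replace the profile; fails with the syntax error shown above
sudo apparmor_parser -r /etc/apparmor.d/docker-test
# with the last rule commented out, the same command succeeds:
# deny network bind inet tcp src 192.168.1.1:80 dst 170.1.1.0:80,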

settings – Is it possible to get finer volume control on a rooted Lenovo M8 tab (HTC Sense UI)?

There’s a popular mod to get finer volume control on a device by adding the property ro.config.media_vol_steps to /system/etc/prop.default (Android 10) and assigning your preferred number of steps, e.g. ‘= 30’, but on my device this results in multiple volume ranges divisible by 15.
Apparently this affects the HTC Sense UI as well.
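For reference, the mod amounts to appending a single line like this (30 is just an example value):

# appended to /system/etc/prop.default (Android 10), then reboot
ro.config.media_vol_steps=30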

For example, if I assign the value 37 to the above property, I do get 37 steps, but when I test by playing something on Spotify and increasing the volume with the buttons I get:

step 0 = mute

step 15 = max volume

step 16 = mute

step 30 = max volume

step 31 = mute

step 37 = volume 6

Anything below 15 works as expected.
The same happens if you’ve rooted with Magisk and do it via prop editor or any other editor.

I’ve been trying to find a solution for this for months with no success, hence I’m up the creek without a paddle.

I have coding abilities, but for me x86 assembly is easier than understanding the various toolchains / libraries / compilers required to debug an Android device.

I would like to at least know where in the architecture I should be focusing.

  • Is it vendor or system related?

  • A persistent setting?

  • Is the setting pre-compiled, so it would need a custom ROM, i.e. a library file change (.so)?

I believe it must be solvable; I just hoped I could get at least a hypothetical view of the problem from someone more familiar with the source of Android!

You can get fine volume control from the GUI (swipe down); how can this be mapped to the Volume up / Volume down buttons?

PS: I should mention I don’t have TWRP as it’s not available for this device.

I can now add that installing the EdXposed framework along with GravityBox (version Q) also results in the same problem! The modded line above is not present in the prop.default file even though I have the same broken functionality; the same goes for Magisk modules.

It seems Xposed / GravityBox / Magisk are the most powerful tools available to developers for stock software. I’m sure there are experts out there who could build custom software with a simple volume control patch for this device, so I guess we’ll just have to wait. Thanks

permissions – Full control, Limited Access given through a group where the user does not exist?

I have a SharePoint site where, when searching for a user at site level (using Check Permissions), it comes back saying the user has “Full Control, Limited Access – Given through the ‘Site Name’ Owners Group”.

However, the user is not an Owner of the site and when I click into the Owners group, they are not listed in there either.

How is this user being provided with access at site level from the specified group when they don’t appear to be in that group and how would I go about visualising and removing these user permissions?

I should add that when I click “There are limited access users on this site. Users may have limited access if an item or document under the site has been shared with them. Show users”, it doesn’t show any additional users, but instead replaces ‘Full Control’ with ‘Full Control, Limited Access’ against the Site Owners group.

gm.general mathematics – Control & Experimental Group Selection Methodology using STDEV and t-tests?

I would like to know if my methodology was ‘correct’:

I am trying to conduct an experiment on my stores.
I would like to find out the effect of a marketing campaign on the number of transactions.

Only about 20% of the stores are participating in the marketing campaign.

The original methodology was to use the entire 20% as the experimental group and the remaining 80% as the control group. Unfortunately, these two groups are incomparable in terms of number of transactions: when plotted as box and whisker plots next to each other, their distributions are clearly different (mean, median, quartiles, min, max, etc.).

So what I did was filter out the ‘outlier stores’ at each end until the box and whisker plots for each group were practically identical. I then ran a t-test on the filtered groups, and we failed to reject the null (meaning we found no statistically significant difference between the groups prior to the promotion).

Now that we have two comparable groups at time −1, we run the promotion for a month.
After the promotion month is over, we take the number of transactions from each group and run another t-test.
We rejected the null in favor of the alternative hypothesis, which is that these two groups are now statistically different at an alpha of 0.05.

My first question is: is this methodology okay?

My second question is: as an alternative to using box and whisker plots and removing outliers until both groups’ descriptive stats are similar, can I assume a normal distribution and use the standard deviation (STDEV) to remove outliers and create comparable groups within my population?
The box and whisker method worked to get comparable groups, as confirmed by the t-test, but it is very manual. So I would like to create an automated method, and was wondering whether removing outliers by standard deviation would be plausible (see the sketch below)?
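A minimal sketch of what I have in mind, in Python (the store numbers and the 3-sigma cutoff are just placeholders):

import numpy as np
from scipy import stats

def filter_by_stdev(x, n_sigma=3.0):
    """Keep only observations within n_sigma standard deviations of the mean."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std(ddof=1)
    return x[np.abs(x - mu) <= n_sigma * sd]

# pre-promotion transactions per store (made-up numbers)
test_stores = np.array([120, 95, 110, 300, 105, 98])            # the 20% in the campaign
control_stores = np.array([100, 115, 90, 108, 102, 500, 97, 111])

test_f = filter_by_stdev(test_stores)
control_f = filter_by_stdev(control_stores)

# pre-promotion check: a large p-value means no detectable difference between the groups
t_pre, p_pre = stats.ttest_ind(test_f, control_f, equal_var=False)
print(p_pre)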

Sorry for the long read.
Thank you


version control – What is the difference between git-repo and meta tools in the context of multiple git source code repositories?

Regarding the tools repo and meta, in the context of multiple git source code repositories, what is the difference?

Or do they basically do the same thing?

An article back in 2012 says: “Repo is a tool created by Google (…) works by providing a way to check out multiple projects (Git repositories) (…) provides a way to submit an atomic changeset that includes changes to multiple different projects. (…)” and https://github.com/mateodelnorte/meta says: meta is a tool for managing multi-project systems and libraries. It answers the conundrum of choosing between a mono repo or many repos by saying “both”, with a meta repo! (…) letting you execute them against some or all of the repos in your solution at once
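To make the comparison concrete, here is roughly what driving each tool looks like (a sketch from memory; project names and URLs are made up). repo is driven by an XML manifest, while meta is driven by a .meta JSON file in the meta repo:

<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <remote name="origin" fetch="https://git.example.com/"/>
  <default remote="origin" revision="main"/>
  <project name="group/liba" path="libs/liba"/>
  <project name="group/app" path="app"/>
</manifest>

{
  "projects": {
    "libs/liba": "git@git.example.com:group/liba.git",
    "app": "git@git.example.com:group/app.git"
  }
}

If I remember the commands right, repo sync then checks out everything in the manifest into one working tree, while meta git clone pulls down the meta repo plus its children and meta exec "some command" runs a command across them.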

EDIT: My earlier attempt to ask this was deleted for being of type “find or recommend tools…”. I am not asking for a recommendation now; I am just asking whether there is any concrete difference between them! Thanks!

gnome – Lost brightness control after plugging in USB headset

Recently I spent a few nights installing and fine-tuning Ubuntu 20.04 on my brand new laptop (Lenovo ThinkBook 14 G2). The result was awesome… just until today, when something bad happened: the brightness controls won’t work anymore.
The probable root cause was plugging in a USB headset (Hama USB 300).

Symptoms:

  • Brightness up/down mostly shows the brightness overlay with seemingly random brightness level values underneath, without actually changing the brightness. Sometimes it toggles the microphone on/off, showing the microphone overlay too. For reasons I cannot pin down, I’ve also witnessed cycles like brightness down – blank screen – brightness down – blank screen – … (or something similar).
  • Brightness can still be controlled from the command line via sudo brightnessctl -d 'amdgpu_bl0' set 20%, so it is not the functionality but some configuration that got broken.

My best bet is that since the USB headset has buttons

Apr 12 13:31:39 gep2 /usr/lib/gdm3/gdm-x-session(1112): (II) event14 - USB PnP Sound Device USB PnP Sound Device: is tagged by udev as: Keyboard
Apr 12 13:31:39 gep2 /usr/lib/gdm3/gdm-x-session(1112): (II) event14 - USB PnP Sound Device USB PnP Sound Device: device is a keyboard

it is also registered as a keyboard, and so the settings of the actual keyboard got scrambled.

At this point I’m out of ideas.

What would take me closer to a resolution is knowing the mechanism between pressing such a control button and seeing the on-screen overlay (including the mappings and configuration involved).
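For reference, these are the commands I plan to use to watch that path layer by layer (a sketch; it assumes the libinput-tools package is installed, and event3 is just my internal keyboard from the evtest list below):

# udev properties/tags for a given input device
sudo udevadm info /dev/input/event3 | grep -i -E 'ID_INPUT|KEY'
# what the libinput layer (used by Xorg/GNOME) sees, including keycodes
sudo libinput debug-events --show-keycodes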


That was the meaningful part. Now some extracts from my investigation, only for real enthusiasts. Warning: it won’t add much!

sudo showkey -k has shown codes for {volume mute, volume up, volume down, mic mute, brightness down, brightness up, calculator}, respectively as {113, 114, 115, 190, 224, 225, 140}. The physical keystrokes are Fn + {F1, F2, F3, F4, F5, F6, F12}

xmodmap -pke maps the above entities consistently to numbers greater by 8 than the ones provided by the showkey command. For example keycode 232 = XF86MonBrightnessDown NoSymbol XF86MonBrightnessDown NoSymbol XF86MonBrightnessDown. I think it is still okay.

xev -event keyboard | sed -Ene 's/.*keycode\s*([0-9]*)\s*\(keysym\s*\w*,\s*(\w*)\).*/keycode \1 (\2)/' -e '/keycode/p' normally won’t display anything for these Fn combos, but for the brightness controls it displays some garbage(?):

    request MappingKeyboard, first_keycode 8, count 248
    request MappingKeyboard, first_keycode 8, count 248
keycode 156 (XF86Launch1)
keycode 156 (XF86Launch1)
    request MappingKeyboard, first_keycode 8, count 248
    request MappingKeyboard, first_keycode 8, count 248
    request MappingKeyboard, first_keycode 8, count 248
    request MappingKeyboard, first_keycode 8, count 248
keycode 156 (XF86Launch1)
keycode 156 (XF86Launch1)
    request MappingKeyboard, first_keycode 8, count 248
    request MappingKeyboard, first_keycode 8, count 248
    request MappingKeyboard, first_keycode 8, count 248
    request MappingKeyboard, first_keycode 8, count 248
keycode 156 (XF86Launch1)
keycode 156 (XF86Launch1)

which corresponds with xmodmap output keycode 156 = XF86Launch1 NoSymbol XF86Launch1 NoSymbol XF86Launch1

acpi_listen displays output correlating with the above; I mean the brightness-control-related lines differ a bit from the others:

button/mute MUTE 00000080 00000000 K
button/volumedown VOLDN 00000080 00000000 K
button/volumeup VOLUP 00000080 00000000 K
button/f20 F20 00000080 00000000 K
video/brightnessdown BRTDN 00000087 00000000
video/brightnessup BRTUP 00000086 00000000
button/f20 F20 00000080 00000000 K
(no entry for Fn-F12, i.e. the calculator key)

And yes, button/f20 stands for the mic mute, and randomly appears around the brightness control related actions.

A little addition: evtest shows that my keyboard is put together from two keyboards. See event3 and event7:

Available devices:
/dev/input/event0:  Lid Switch
/dev/input/event1:  Power Button
/dev/input/event2:  Power Button
/dev/input/event3:  AT Translated Set 2 keyboard
/dev/input/event4:  Video Bus
/dev/input/event5:  MSFT0002:00 04F3:3140 Mouse
/dev/input/event6:  MSFT0002:00 04F3:3140 Touchpad
/dev/input/event7:  Ideapad extra buttons
/dev/input/event8:  HD-Audio Generic HDMI/DP,pcm=3
/dev/input/event9:  HD-Audio Generic HDMI/DP,pcm=7
/dev/input/event10: HD-Audio Generic HDMI/DP,pcm=8
/dev/input/event11: HD-Audio Generic Mic
/dev/input/event12: HD-Audio Generic Headphone
/dev/input/event13: Integrated Camera: Integrated C

java – Should you use your own implementation of inversion of control instead of a dependency injection container?

Writing your own implementation of dependency inversion is a great way to teach yourself the concepts. However, in a real-world scenario, using a framework to take care of that for you will almost always be the better option. It is only “almost”, because, as always, there are situations in which you will need to rely on your own implementation, e.g. when programming something for an embedded system which cannot accommodate an entire framework due to the system’s hardware restrictions.
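To make that concrete, a hand-rolled setup usually boils down to constructor injection plus a composition root that does the wiring by hand; a container such as Spring or Guice automates exactly this wiring. A minimal sketch (the class names are purely illustrative):

// the dependency is expressed as an interface...
interface MessageSender { void send(String msg); }

class EmailSender implements MessageSender {
    public void send(String msg) { System.out.println("email: " + msg); }
}

// ...and injected through the constructor instead of being constructed inside
class NotificationService {
    private final MessageSender sender;
    NotificationService(MessageSender sender) { this.sender = sender; }
    void notifyUser(String msg) { sender.send(msg); }
}

class Main {
    public static void main(String[] args) {
        // the "container" is just this hand-written composition root
        NotificationService service = new NotificationService(new EmailSender());
        service.notifyUser("hello");
    }
}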

There’s really nothing wrong with using Spring even for small projects, it being as popular as it is. It’s true that Spring is capable of a lot, and it may seem like you’re bringing too much into the project right from the start. If that’s not your preference, you could grab a framework which deals solely with the IoC problem, e.g. Google’s Guice. However, be advised that Spring, although heavy, prepares the ground for your project to seamlessly grow into something bigger without you needing to change the underlying technology, as Spring has a lot to offer even for enterprise-level applications.

power automate – How to design a flow which can control other independent flows without building a parent-child relation

How can we, in MS Flow, control the execution of other flows?
To be more specific:
How can we do the following in a flow:

1. Check the status of other flows, e.g. running or idle?
2. If they are running, for how long have they been running?
3. Turn the other flows on or off?
4. A bit greedy, but can we check the value of a variable in another flow while it is in a running state? (E.g., can we check the variable 'var_a' of flow B, which is running, while we are in flow A, with no parent-child relation between these flows?)