How to automatically clean up / remove / garbage collect helper/builder images in multistage Docker builds

I’m new to Docker and trying to wrap my head around it thoroughly. I have an app using a multistage build, which may get published frequently, and I do not want to end up using too much drive space, or have to remember to come back to clean things up.

There is a similar question here, but there is no feedback regarding a convenient automated cleanup mechanism:

The builder image is created with a different ID each time, but no tag or name. I can call docker image prune, and it wipes them all out, which is nice, but I’m not the only one using this server, so that could be dangerous.

My multistage build looks like this:

# builder
FROM node:14.17.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci
RUN npm install react-scripts@4.0.3 -g
COPY . ./
RUN npm run build

# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]

I can tag the production environment from the call to build this Dockerfile in a PowerShell script:

docker build -f ./Dockerfile -t sbc.hackermon:$($DateTime) ./

(On a side note, I think this creates a duplicate image every time: although each build gets a new tag, the image ID is always the same. If so, there’s no reason for me to tag them distinctly, unless they do happen to change at some point, which makes me wonder, since I’m using ‘stable’ rather than a version number, whether they could change without any code changes on my end. But that’s a different question I may ask separately.)

Now the builder/helper images get a new random ID every time, but show <none> for both repository and tag. Being random, I have no direct way to reference their IDs to remove them after each build, so they just keep piling up.

Is there a direct way to tag these manually so I know the ID, or a way to find out the ID afterwards so I know which image to remove? It appears there is no way to tag a stage from within the Dockerfile, so that’s not an option.
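One common workaround (a sketch, not something from the question itself): give the builder stage a LABEL, then prune only the images carrying that label, which leaves other users’ dangling images alone. The label name `stage=builder` is arbitrary.

```shell
# In the Dockerfile, label the first stage (illustrative label name):
#   FROM node:14.17.0-alpine as build
#   LABEL stage=builder

# After each build, remove only the dangling images that carry that label:
docker image prune --force --filter label=stage=builder
```

Note that with BuildKit enabled (DOCKER_BUILDKIT=1), intermediate stages live in the build cache rather than appearing as <none> images, and `docker builder prune` manages that cache instead.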

dnd 5e – How to improve passive character builds?

How can characters which have powerful passive/static abilities be improved to be more fun and dynamic during game play?

In this case, I have an 8th-level 5e Forge Cleric. AC 24, Fire/Poison/Cold resistance, DC 17 spell save; WIS 18, Warcaster. These are good attributes, to be sure, but in many gaming sessions, there is not a ton the character can do that is both helpful to the group and fun/creative for the player. The most useful actions are to keep Aura of Vitality or Spirit Guardians up, tossing in Healing Word or a bit of other damage here and there. Turn Undead is of course powerful, and this character revels in its invocation, but it’s not always applicable.

Once in a while, spells like Heat Metal, Command and Water Walk are valuable, but these are situational, and of course require some foresight to have memorized. This is a 6-8 player group, where other characters have flying abilities, barbarian rages, invisibility, Misty Step, polymorph and other interesting ways to interact with the scenario. Others have super-useful spells such as illusions and Mage Hand.

What are some options to make this type of powerful cleric (or similar class) more dynamic and fun to play vs. more-or-less the same spell options in every situation?

ios – TestFlight: Why isn’t individual tester getting notifications when new TestFlight builds are pushed to store?

I have an app on TestFlight, and I added an individual tester to version 0.1.3. I then pushed versions 0.1.4 and 0.1.5, and although this tester automatically appears under “Individual testers” for these last two builds, they didn’t receive any notifications about them, so they still have only version 0.1.3. They have auto-update set to ON. Does anyone know how I can resolve this?

version control – How much should we archive for reproducible builds?

A few alternative twists on the question title to contextualize further:

  • What to archive of the “sources” for a given software build?
  • Should I include all transitive packages in my repository?
  • Is it OK to rely on the package manager to be able to reproduce a build at all?
  • Should I archive a ZIP file of my git repo release tag?
  • Should we archive the build tools?

Context: Building an application that will be installed on multiple users’ machines / devices.

OK, so here’s the problem:

Obviously, all of our source code lives in source control. However, this is NOT enough to build the software.

When you want to create a binary build of the application, you need:

  • Install Visual Studio in the correct version (we automate this via Chocolatey).
  • (a) Check out the correct SCC “release tag”.
  • Run “the build script”:
    • (b.1) Run a nuget restore against our internal package server
    • (b.2) Fetch 3rd-party sources that are not checked into the primary repo (think vcpkg or something similar)
    • (c) Build the actual software (call msbuild in our case)
    • (d) Package the created application binaries into something that can be passed downstream

Note: Normally all of the above, and more, is run in the automated CI System (using Jenkins here).

Some here think we should create and archive a “ZIP file” before step (c), so that we have a base for a “reproducible build” and can reproduce a given build on any dev machine without relying on our source code server and/or our package management server. More specifically, that would mean not relying on the scripted parts of steps (b.#), since those have to get all the infrastructure settings (server names etc.) correct, and these could change over time.
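A sketch of that pre-(c) archive, using the tools named above; the solution name `MySolution.sln`, the tag `v1.2.3`, and the `third_party/` folder are illustrative assumptions:

```shell
# (a) snapshot the sources at the release tag
git archive --format=zip -o sources.zip v1.2.3
# (b.1) pull every NuGet dependency into a local folder
nuget restore MySolution.sln -PackagesDirectory packages
# (b.2) assume 3rd-party sources were already fetched into third_party/
# record the installed toolchain so it can be re-provisioned later
choco list --local-only > toolchain.txt
# bundle everything step (c) needs into one self-contained archive
zip -r build-input-v1.2.3.zip sources.zip packages/ third_party/ toolchain.txt
```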

Some here think that’s a waste of time and space, as the whole build system is critical infra anyway, so having something that “works without it” doesn’t make sense.

Is there some accepted norm with regard to this?

open source – Nightly Builds Test against real database

Although Arseni Mourzenko’s answer covers a lot of the points, it’s important to make a distinction between different databases.

Tests, especially those that manipulate data, should probably be run against a database that is instantiated for those tests. Assuming that your pre-prod environment is also used by people to perform manual testing or demonstrations, you don’t know what the state of the database is. If you ensure that the database starts in a known state, you can make stronger assertions about what the end state will be. If the database isn’t in a known state at the start of the test, the test could erroneously fail or succeed depending on existing data, or the assertions will need to account for unknown data in the database.

I do think that it makes sense to run tests against a real database on a regular basis, especially if there are stored procedures in the database that need to be tested in conjunction with the application code. However, this should be a database that you have good control over so you can assert a state at the start and end of tests.
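As a minimal sketch of “instantiated for those tests”: a throwaway SQLite database created in a known state, asserted against, and discarded (this assumes the sqlite3 CLI is installed; the schema is illustrative):

```shell
# create an empty, private database file for this test run only
db=$(mktemp /tmp/test-db-XXXXXX)
sqlite3 "$db" <<'SQL'
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO users (name) VALUES ('alice'), ('bob');
SQL
# because the start state is known, the assertion is deterministic
count=$(sqlite3 "$db" 'SELECT COUNT(*) FROM users;')
echo "rows: $count"   # -> rows: 2
rm -f "$db"           # discard the instance; nothing leaks between runs
```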

❕NEWS – Man Builds Mining Rig in his BMW | Proxies-free

The world of gaming and mining saw something extraordinary when a man identified as Bryne built six high-end graphics cards into the trunk of his BMW i8. He did that so that he can mine even while driving.

This seems to be the first of its kind, and it comes with the challenge of keeping the trunk always open, lest the mining machine overheat.

command line – How to install older builds of Oracle VirtualBox VM on Lubuntu 20.04 (details provided)

I need to install an older build of Oracle VirtualBox VM – 5.2.40 or 5.2.xx.
These are the older builds.

On Oracle VirtualBox, they told me, in a nutshell, to install the newest version, but I can’t use it.
In the links above you can also see what I did in order to try to install Oracle VirtualBox VM on Lubuntu 20.04.
In addition to that, I also tried to install the specified build by using these instructions from the official page:

To install VirtualBox, do

sudo apt-get update
sudo apt-get install virtualbox-6.1

Replace virtualbox-6.1 by virtualbox-6.0 or virtualbox-5.2 to install
the latest VirtualBox 6.0 or 5.2 build.
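Since that apt repository only carries the newest build per series, one way to get a specific old build (a sketch; the exact filename is illustrative and should be taken from the directory listing at download.virtualbox.org) is to install the per-version .deb directly. Note that 5.2.x was never packaged for focal (20.04), so the closest fit is the bionic (18.04) package, which may need extra dependencies:

```shell
# Browse https://download.virtualbox.org/virtualbox/5.2.44/ for the real
# filename; the one below is illustrative.
wget https://download.virtualbox.org/virtualbox/5.2.44/virtualbox-5.2_5.2.44_amd64.deb
sudo apt install ./virtualbox-5.2_5.2.44_amd64.deb   # apt resolves dependencies
```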

What to do when experiencing “The following signatures were invalid:
BADSIG …” when refreshing the packages from the repository?

I’ve tried to install it via the QApt Package Installer.
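A commonly suggested first step for BADSIG errors (a sketch, not guaranteed for every cause): the cached package lists are often the corrupted part, so clear them and let apt re-download fresh, correctly signed ones.

```shell
sudo apt-get clean                 # empty the local package cache
sudo rm -rf /var/lib/apt/lists/*   # drop the cached (possibly corrupt) index files
sudo apt-get update                # fetch fresh, signed lists
```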

Best regards

adbd cannot run as root in production builds

I try to run adb root on my TV emulator image in my Windows 10 Pro terminal, and I get:

adbd cannot run as root in production builds

I also applied the solutions provided here, but the following did not help:

  • Android emulator image has no Google API
  • Android emulator image has no Google Play
  • run adb shell and su –> /system/bin/sh: su: inaccessible or not found
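For background: `adb root` is only honored by userdebug/eng system images, and emulator images that include Google Play are production (“user”) builds. A quick check, plus the usual workaround of creating an AVD from a “Google APIs” or AOSP image instead of a “Google Play” one:

```shell
# Shows which build variant the emulator is running.
adb shell getprop ro.build.type   # "user" means adb root will refuse
# On a Google APIs / AOSP (userdebug) image this succeeds instead:
adb root
```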

bitcoin core – Reproducible Gitian builds, but not the same hash as the bitcoin v0.20.1 tarball available for download

When they do it


They also say:

reproducing a binary for yourself will provide you with the highest
level of assurance currently available

So, it seems like the current method to do this is the “gitian build” maneuver.

When I do it


I went through the gitian process with Ubuntu 20.04 and Debian 10.
Both produced the same (incorrect?) tarball: two different setups, two operating systems, same tarball.

Both my assert files are uploaded here for Debian and Ubuntu.

As a pleb, I can think of one of two things: either something is wrong/off with some of my packages / versions / env / OS / setup, or all the verifiers are lying and bitcoin’s been compromised. Help me narrow down the possibilities.

I would like to go through this exercise properly and reproduce the same binaries, but I am unsure of the best way forward. What are the next steps to verifying the build for v0.20.1? I can’t seem to reproduce the tarball advertised on bitcoincore.org and bitcoin.org. I can, however, make a reproducible build, which makes me question a lot of things.
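One way to compare against the official release (a sketch; the URL and filenames are the ones published for 0.20.1, and the release signing key must already be imported into gpg) is to check your gitian-built tarball’s hash against the signed SHA256SUMS.asc:

```shell
# Fetch the signed hash list for the 0.20.1 release and verify its signature.
wget https://bitcoincore.org/bin/bitcoin-core-0.20.1/SHA256SUMS.asc
gpg --verify SHA256SUMS.asc
# Hash of the locally built tarball...
sha256sum bitcoin-0.20.1-x86_64-linux-gnu.tar.gz
# ...versus the published hash it should match:
grep x86_64-linux-gnu SHA256SUMS.asc
```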

I would love it if someone could go through their process from scratch and see if they can produce the same tarballs seen in this repo by these people: /*/ bitcoin-core-linux-0.20-build.assert’s.
And share their complete process with me. After reproducing it myself, I would be happy to write a current walkthrough of the method and update the docs, which (links below) are dated and broken for nearly all methods.

NOTE: If this is indeed the de facto way to do reproducible builds currently, it would be nice to have documentation that is precise and current. I understand Guix is something being worked on, but it would be nice to have a current doc in the interim for people who want to verify builds. I would also be very satisfied if someone pointed out something I was missing.

reference links:

reference github issue