graphics – Computing density of particles and distances from microscopic images

I am analysing microscopic images of particles of the kind shown below. Primarily, I am interested in whether one can extract structural properties of the particles in Mathematica using the built-in image analysis tools.

More precisely,

  • Is it possible, for instance, to compute the nearest-neighbour distance distribution?

  • More importantly, can one measure the density of particles from the image (that is, the number of particles per unit area when binning the image)?

I admit similar questions may have been asked before, but I could not pinpoint one that tackles this problem, so any help would be much appreciated.

Image example (source):


The kind of analysis I have managed so far is to use the ridge lines separating the particles in order to detect them, and then to find their centres of mass as the point of maximum distance to a ridge line (per particle). Here’s an example:

img = Import[""]
ridgelines = RidgeFilter[-img, 4];
distanceRidges =
    DistanceTransform[ridgelines] (*distance transform image based on the ridge filter*)

distMaximum =
    MaxDetect[distanceRidges, 4] (*find centres of mass using max ridge distances*)

which yields:

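Once particle centroids are available (from Mathematica or any other tool), both requested quantities follow directly from the coordinate list. A minimal sketch in Python with NumPy, assuming `centres` is an (N, 2) array of centroid positions in pixels; the array below is made-up example data:

```python
import numpy as np

def nearest_neighbour_distances(centres):
    """Distance from each point to its closest other point."""
    diff = centres[:, None, :] - centres[None, :, :]   # (N, N, 2) pairwise offsets
    d = np.sqrt((diff ** 2).sum(axis=-1))              # (N, N) pairwise distances
    np.fill_diagonal(d, np.inf)                        # ignore self-distances
    return d.min(axis=1)

def binned_density(centres, image_shape, bins=4):
    """Particles per unit area on a bins x bins grid over the image."""
    h, w = image_shape
    counts, _, _ = np.histogram2d(centres[:, 0], centres[:, 1],
                                  bins=bins, range=[[0, h], [0, w]])
    cell_area = (h / bins) * (w / bins)
    return counts / cell_area

# Made-up example data: four particle centres in a 100 x 100 image
centres = np.array([[10.0, 10.0], [10.0, 20.0], [80.0, 80.0], [85.0, 80.0]])
nnd = nearest_neighbour_distances(centres)
density = binned_density(centres, (100, 100), bins=2)
```

The brute-force pairwise matrix is fine for a few thousand particles; for much larger images a k-d tree would be the usual replacement.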

python – Microservice for scraping images with celery

I made a project that scrapes images asynchronously and saves them in a container; I have access to them through a volume. Scrapy finds the images on a given web page.

Any tips are welcome, but first I would like to focus on the docker-compose file; I would appreciate tips on how to improve it.

files tree

My project looks like this: `image_collector` holds the Celery config, the `scrap_images` directory holds the Scrapy project, and the final images are stored under `pictures` in one of four size directories.

├── docker-compose.yaml
├── image_collector
│   ├──
│   ├── ENV_FILE
│   ├── image_collector   # overlapping
│   ├── pictures
│   │   ├── big
│   │   ├── medium
│   │   ├── small
│   │   └── tiny
│   ├── pipeline.log     # scrapy pipeline log
│   ├── scrap_images
│   │   ├──
│   │   ├──
│   │   ├──
│   │   ├──
│   │   ├──
│   │   ├──
│   │   └── spiders
│   │       ├──
│   │       └──
│   ├── scrapy.cfg
│   ├── spider.log   # scrapy spider log
│   └──
├── requirements.txt
└── Worker


I had a problem with paths, so in docker-compose the Worker gets two volumes that overlap, in order to start Celery and Scrapy from the same workdir. I start Celery with the Docker command, and within the same workdir I start the Scrapy spiders when a task is received.

version: '3.3'

services:
  worker:
    build:
      context: .
      dockerfile: Worker
    image: collector_worker:3.8.5
    environment:
      - HOSTNAME=broker
      - PORT=5672
      - DB_ACCES_NAME=postgres_db
    env_file:
      - ./image_collector/ENV_FILE
    volumes:
      - ./image_collector:/app/image_collector
      - ./image_collector:/app
    depends_on:
      - broker
    restart: always
    command: bash -c "mkdir -p pictures && chmod -R 777 pictures && celery -A image_collector worker --loglevel=info --autoscale=5,1"

  broker:
    image: rabbitmq:3.6.6-management
    hostname: broker
    restart: always
    ports:
      - "5673:5672"
      - "15673:15672"
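One way to avoid the two overlapping mounts entirely, sketched here under the assumption that both the Celery package and the Scrapy project live under `./image_collector`, is to mount the directory once and set the working directory explicitly on the service:

```yaml
    volumes:
      - ./image_collector:/app/image_collector
    working_dir: /app/image_collector
```

With a single mount there is no ambiguity about which volume wins, and both `celery` and `scrapy` resolve relative paths from the same place.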


The container that runs Celery and starts the spiders from the current workdir:

FROM python:3.8.5

WORKDIR /app

COPY requirements.txt /app/requirements.txt

RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen

RUN pip install -r requirements.txt

The Celery configuration for connecting to the RabbitMQ broker:

from __future__ import absolute_import, unicode_literals
from celery import Celery
import os

user = os.getenv('LOGIN', 'admin')
password = os.getenv('PASSWORD', 'mypass')
hostname = os.getenv('HOSTNAME', 'localhost')
port = os.getenv('PORT', '5673')

broker_url = f'amqp://{user}:{password}@{hostname}:{port}'
app = Celery("tasks", broker=broker_url, namespace="image_celery", include=("image_collector.tasks",))

__all__ = ("app",)

I start the spiders with a system command, passing the URL so the spider scrapes that page and processes it in the pipeline:

#from scrapy.crawler import CrawlerProcess
#from .scrap_images.spiders.image_spider import ImageSpider
#from scrapy.cmdline import execute
from .celery import app

import os

@app.task
def start_spider(url):
    """Task for crawling the web"""
    os.system(f"python -m scrapy crawl image_spider -a url={url}")

if __name__ == "__main__":
    url = r''
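A note on the `os.system` call above: it ignores the spider's exit status and passes the URL through a shell, where metacharacters such as `&` can break the command. A hedged alternative sketch using the standard library's `subprocess`; the helper name `build_spider_command` is made up for illustration:

```python
import subprocess

def build_spider_command(url):
    """Build the scrapy CLI invocation as an argv list (no shell, no quoting issues)."""
    return ["python", "-m", "scrapy", "crawl", "image_spider", "-a", f"url={url}"]

def start_spider_checked(url):
    # check=False: return the spider's exit code so the Celery task can record failures
    return subprocess.run(build_spider_command(url), check=False).returncode
```

Because the argv list is passed without a shell, a URL containing `?a=1&b=2` reaches Scrapy intact.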

I send tasks to Celery manually at the moment, nothing special:

from image_collector.tasks import start_spider

for pages in range(1000):
    url = f',0,0,3{2216 - pages:>04}.html'
    start_spider.delay(url)

import scrapy
from ..items import ImageItem
from ..custom_loger import define_logger

class ImageSpider(scrapy.Spider):
    name = 'image_spider'
    my_logger = define_logger("spider")

    # allowed_domains = ('')

    def start_requests(self):
        url = getattr(self, "url", None)

        if url:
            self.my_logger.debug(f"Starting image parsing at: {url}")
            yield scrapy.Request(url=url, callback=self.find_images, errback=self.err_hanlder)
        else:
            self.my_logger.error("URL is empty")

    def parse(self, response):
        print(f"Parsing: {self.url}")

    def find_images(self, response):
        images = response.css("img::attr(src)").extract()
        item = ImageItem()
        item['image_urls'] = []
        item['title'] = response.css("title::text").extract()[0]
        self.my_logger.debug(f"Found: {len(images)} images")
        for img in images:
            img = str(img)
            im_url = img if img.startswith("http") else "http:" + img
            item['image_urls'].append(im_url)
            self.my_logger.debug(f"Got image: {im_url}")

        yield item

    # def save_images(self, response):
    #     image = ImageItem()
    #     self.my_logger.debug(f"Got image: {response.url}")

    def err_hanlder(self, failure):
        url = failure.request.url
        callback = failure.request.callback
        errback = failure.request.errback  # should work same way as callback... ?
        # status = failure.value.response.status
        self.my_logger.error(f"Failed request at: {url}")
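One more reviewable detail in `find_images`: prefixing `"http:"` only handles protocol-relative `//cdn...` sources and breaks for relative paths like `/img/a.png`. A sketch using the standard library's `urljoin` (in Scrapy, `response.url` would be the base):

```python
from urllib.parse import urljoin

def absolutize(base_url, src):
    """Resolve an <img src> against the page URL, handling //host, /path, and relative forms."""
    return urljoin(base_url, src)
```

Inside the spider this would replace the `startswith("http")` branch with `item['image_urls'].append(absolutize(response.url, img))`.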

media – Download all images from one single post

I’ve used Toolset custom fields and custom post types to create a post type with an image field that can have multiple images uploaded to a post.

Is there a way to generate a “download all images” link to download all images from a single post on the edit page?

This way a website Admin, when viewing the edit page of a post, can download all the images for one single post with one click and doesn’t have to save all the images one by one.

Upload multiple Standalone Media Entities (Images) without Reference Field

The Media Bulk Upload module will allow you to upload many images at once.

This is the Drupal 8 module to bulk upload files and create the media
entities automatically for them. It uses DropzoneJS
to quickly upload multiple files.

The bulk uploader will allow you to edit fields on the media item before saving. You’ll want to set alt text for each image, and your Taxonomy Term reference.

I will find images for your website social media or online store for $10

I will find images for your website social media or online store

If you are looking for good photos, then this is the right gig for you.

My gig of 24 photos includes:

  • Cars
  • Home
  • Clothes
  • Luxury items
  • Models
  • People
  • Utensils
  • Decor
  • Movie theater
  • Trending
  • Pets


And many other photos you are looking for.

Inbox me first before placing the order


javascript – Correct way to delete posts and unlink images using NODE JS

I have the following tables in MySQL:

create table PostSchema
(id int not null primary key auto_increment,
categories int not null,
title varchar(50),
status varchar(20),
comments bool,
body varchar(255),
filename varchar(255));

create table commentSchema (
id int not null primary key auto_increment,
userID int,
message varchar(255),
image varchar(255),
postID int);


When a post is created, its image is saved in '/uploads' and the filename is referenced in PostSchema.

Also, when a comment is created with an image, that image is saved in '/uploads' and the filename is referenced in commentSchema.

My Idea

I have a router set up to delete the record from PostSchema and any comments relevant to that post, and on top of that to delete any images referenced by both tables.

I have working code; I just want to hear some reviews on whether there is anything I can improve 🙂

Current working code:

router.delete('/:id', (req, res) => {
  // Execute query for deleting the post

  // Get the post's filename from the database
  db.query("SELECT filename FROM PostSchema WHERE id = ?", [req.params.id], (err, fileResult) => {
    // If error
    if (err) return res.status(500).end(err.message);
    // If not an error continue...

    try {

      // Unlink the comment images
      db.query("SELECT image FROM commentSchema WHERE postID = ?", [req.params.id], (cErr, cRes) => {
        if (cErr) return res.status(500).end(cErr.message);

        cRes.forEach((row) => {
          if (row.image && fs.existsSync(uploadDir + row.image)) {
            fs.unlink(uploadDir + row.image, (unlinkErr) => {
              if (unlinkErr) throw unlinkErr;
            });
          }
        });

        // Unlink the PostSchema image
        fs.unlink(uploadDir + fileResult[0].filename, (fileErr) => {
          if (fileErr) return res.status(500).end(fileErr.message);

          // Delete the post together with its comments
          db.query("DELETE t1, t2 FROM PostSchema t1 JOIN commentSchema t2 ON t1.id = t2.postID WHERE t1.id = ?", [req.params.id], (errors, results) => {
            if (errors) return res.status(500).end(errors.message);

            db.query("DELETE FROM PostSchema WHERE id = ?", [req.params.id], (er, delResult) => {
              if (er) return res.status(500).end(er.message);

              // Redirect to the all-posts page
              req.flash('success_message', 'Post was successfully deleted');
              res.redirect(303, '/admin/posts');
            });
          });
        });
      });
    } catch (e) {
      return res.status(500).end(e.message);
    }
  });
});

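One schema-level simplification worth considering in a review, sketched here under the assumption that you are free to alter the tables: declare `commentSchema.postID` as a foreign key with `ON DELETE CASCADE`, so deleting the post row removes its comments in a single statement and the router only has to unlink the files:

```sql
alter table commentSchema
    add constraint fk_comment_post
    foreign key (postID) references PostSchema (id)
    on delete cascade;
```

With the cascade in place, the `DELETE ... JOIN` and the second `DELETE FROM PostSchema` collapse into one `DELETE FROM PostSchema WHERE id = ?`.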
Which poll / survey app has ALL of the following options: images, timer, long text descriptions, mc questions

I am not sure whether it is the correct place to ask this question.

I am looking for a poll app or plugin which has all of the following options:

  • set a time limit for each question
  • include images
  • add descriptive text in addition to the question of approx. 500 characters
  • multiple choice questions
  • if possible: real-time evaluation with plots

I had a look at quiz/poll apps that are meant to be used in classrooms or to get feedback from an audience:

And I also tried several survey apps:

  • SurveyMonkey
  • Crowd Signal
  • Google Forms

None of them has all of these options together. Does anyone have a recommendation?

hooks – Optimizing images before saving

I have a Drupal 8 website with customer-contributed content. The images uploaded by customers are not optimized, so I am using an image optimization pipeline to take care of the responsive images. However, I found that the original image is stored without compression or optimization.

While fully understanding the intention of keeping the original image intact, I need to optimize the images uploaded by users having the role ‘xyz’, using the custom_entity_presave hook.

The following code segment, however, makes no changes to the images, and all the images are stored at full size; i.e. $pipeline->applyToImage($uri); is either not executed or has no effect.

Can you please help me crack this problem?

PS: The image optimization pipeline “Local Binaries” is working well, as I can see that the generated responsive images are optimized. Only the code segment below is not working!

if ($entity->getEntityTypeId() == 'file') {
  if (\Drupal\user\Entity\User::load(\Drupal::currentUser()->id())->hasRole('xyz')) {
    $uri = $entity->getFileUri();
    $image = \Drupal::service('image.factory')->get($uri);

    if ($image->isValid()) {
      $pipeline = \Drupal\imageapi_optimize\Entity\ImageAPIOptimizePipeline::load('local_binaries');
      if ($pipeline instanceof ImageAPIOptimizePipeline) {