java – How best to structure my Service/Repository layers when persisting a Many to One object?

I’m working on a project where I need to do CRUD operations on Book and Library objects. Naturally the relationship between Book and Library is Many to One, like so:

@Entity
@Getter
@Setter
@AllArgsConstructor
@NoArgsConstructor
public class LibraryDao {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(mappedBy = "library")
    private List<BookDao> books = new ArrayList<>();

}
@Entity
@Getter
@Setter
@AllArgsConstructor
@NoArgsConstructor
public class BookDao {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private LibraryDao library;

    private String name;

    public BookDao(LibraryDao library, String name) {
        this.library = library;
        this.name = name;
    }

}

I’m trying to keep these as separate as possible in each layer, which is where my question comes from. It’s easy to keep the creation and fetching of Library objects separate, but I’m finding it hard to create and persist Books without mixing the two up, mainly because I need a LibraryDao object to create a BookDao. Please see my comment in the BookServiceImpl class.

@Service
public class LibraryServiceImpl implements LibraryService {

    @Autowired
    LibraryRepository libraryRepository;

    @Override
    public Library getLibrary(long libraryId) {
        LibraryDao libraryDao = libraryRepository.findById(libraryId).orElseThrow(LibraryNotFoundException::new);
        return new Library(
                libraryDao.getId(),
                libraryDao.getBooks().stream()
                        .map(book -> new Book(book.getId(), book.getName()))
                        .collect(Collectors.toList()));
    }

    @Override
    public Library createLibrary() {
        LibraryDao libraryDao = libraryRepository.save(new LibraryDao());
        return new Library(libraryDao.getId(), List.of());
    }

}
@Service
public class BookServiceImpl implements BookService {

    @Autowired
    BookRepository bookRepository;

    @Autowired
    LibraryService libraryService;

    // Injected only for option 2 below; this is the part that feels wrong to me.
    @Autowired
    LibraryRepository libraryRepository;


    @Override
    public Book createBook(Long libraryId, String name) {

        // Which option below is less destructive to the MVC pattern? Either I can
        // have a LibraryRepository in my BookService class (this feels wrong),
        // or else make libraryService.getLibrary() return a DAO object. Isn't it
        // expected that DAO objects stay in their own service class?

        // Option 1: make the service return the DAO (would mean changing its return type):
        // LibraryDao libraryDao = libraryService.getLibrary(libraryId);

        // Option 2: inject LibraryRepository into this service:
        LibraryDao libraryDao = libraryRepository.findById(libraryId)
                .orElseThrow(LibraryNotFoundException::new);

        BookDao bookDao = new BookDao(libraryDao, name);
        BookDao saved = bookRepository.save(bookDao);

        return new Book(saved.getId(), saved.getLibrary().getId(), saved.getName());
    }

}
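
For completeness, here is a sketch of the repository-in-service option using getReferenceById (this assumes LibraryRepository extends Spring Data JPA’s JpaRepository, where getReferenceById is available; it returns a lazy proxy, so the Library row is never actually loaded just to set the foreign key):

    // Sketch only: assumes LibraryRepository extends JpaRepository<LibraryDao, Long>.
    @Override
    public Book createBook(Long libraryId, String name) {
        // getReferenceById returns a lazy proxy; no SELECT is issued for the
        // Library, the proxy just supplies the foreign key on the INSERT.
        LibraryDao libraryRef = libraryRepository.getReferenceById(libraryId);
        BookDao saved = bookRepository.save(new BookDao(libraryRef, name));
        return new Book(saved.getId(), saved.getLibrary().getId(), saved.getName());
    }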

tensorflow – How does Keras determine the batch_size automatically in layers (i.e. Conv2D)?

I created a custom Keras layer class that takes in a tensor and returns a tensor (it’s used as an intermediate layer in the middle of my model). While trying to fix shaping errors, I have been looking at the Keras docs and other StackOverflow questions such as Keras LSTM input dimension setting and Custom Layer in Keras – Dimension Problem.

My question is: how does Keras automatically handle the batch_size, and at what step does it set it?

For further clarification, I mean: how does it populate the None value in a shape like (None, 1, 128, 1), as shown by model.summary(), for example?

P.S. sorry if the formatting or explanation is not clear, it’s my first StackOverflow question. Thank you for the help! 🙂

photo editing – How to move multiple layers to another tab in Photoshop

I usually don’t work with photography, so I apologize if my question sounds stupid, but I am doing a small project, and I need help with Photoshop and layers.

I have one image with multiple different layers and groups. I would like to treat everything as one unit and move it to a new tab in the same Photoshop window. Is it possible to use the layered project as a whole in the second tab, so that if I modify something in the original tab, the copied “image” in the second tab is automatically updated? That way I wouldn’t need to save the first image every time and manually import it into the second one. Or is there any other solution for this?

clothing – Is a leather jacket + layers good enough for a European winter? (Backpacking)

It should definitely be enough, but you should take the warm jacket with you just in case.

But you can judge for yourself. Average temperatures in Paris in November are about 7°C (41°F), and it can be quite rainy (15 rainfall days in December, according to holiday-weather.com). A rain jacket is advisable, but it depends on your personal preference; if you don’t mind holding an umbrella, then of course you don’t need one.

In the other places the temperature will probably be similar or a bit higher (up to a 10°C/50°F average in Sicily in December), with a similar number of rainfall days.

Keep in mind that these are the average temperatures and it can get colder than that.

I live in Switzerland, where it can get quite cool in November/December; on a cold day, a T-shirt, hoodie, and warm jacket keep me warm. But then again, you are probably used to warmer temperatures, so one more layer won’t hurt.
If you can easily take the warm jacket, then do it, because it does get chilly, especially if you plan on visiting “mountain-y areas”.

Better safe than sorry, so pack the warm jacket as well and test whether or not you need it.

Have a good and warm time in Europe.

machine learning – How to assemble layers to perform different linear transformations on each column of a matrix?

I have a matrix $A_{m\times n}$ and I want to assemble a network that performs $n$ different trainable linear transformations $T_{p\times m}(i)$, $i \in \{1,2,\ldots,n\}$, on the corresponding columns (i.e. $B^i = T(i)A^i$). However, LinearLayer[] in Mathematica applies the same transformation to all the columns. It is possible to use $n$ PartLayer[] nodes to extract every column of $A$, but that may not work well when $n$ is big. Is there an easy way to implement this?
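
For concreteness, the whole operation can be restated as a single block-diagonal map acting on the vectorized matrix (my own rewriting of the equation above):

$$\operatorname{vec}(B) = \begin{pmatrix} T(1) & & \\ & \ddots & \\ & & T(n) \end{pmatrix} \operatorname{vec}(A), \qquad \operatorname{vec}(A) = \begin{pmatrix} A^{1} \\ \vdots \\ A^{n} \end{pmatrix},$$

so mathematically one $np \times nm$ linear map with the off-diagonal blocks held at zero would realize the same thing.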

My solution looks like this: [image]
This is my first time using StackExchange; thank you so much 🙂

shaders – (Unity hdrp) Internal mesh visibility: Single sided (backface) emission, or render layers, or …?

So my problem is a bit hard to explain. I have a character made out of a jello shell, with an internal skeleton. So let me introduce you to ma boi “Chonker McJello”

[Image: Chonker 1]

[Image: Chonker 2]

The problem is that I want a rich green, slightly transparent jello without losing too much detail and “whiteness” on the internal skeleton. If I turn the transparency or saturation of the jello down, the jello looks ugly; if I set the jello up the way I want, the skeleton loses too much detail and turns green. I have thought of different approaches, but I do not really know how to achieve the effect I want.

  1. Single-sided emission:
    Is there a way to make the jello glow internally and light the skeleton up? The emission should only come from the backfaces.

  2. Doing something with the render layers, like rendering the skeleton with transparency on top of the jello again, to make it appear “less affected” by its color.

Everything I have in mind sounds easy but is probably pretty complicated, and I don’t have a lot of experience with Shader Graph and Unity’s HDRP. If you have any idea how to achieve the desired effect (maybe not in the way I thought), I would really appreciate your help!

Thanks a lot!

How can I simulate a Bayer filter (or just RGB channels) using Photoshop layers?

How can I essentially combine pure red, green, and blue info in 3 layers to create full color?

You have to start with pure ‘Red’, pure ‘Green’, and pure ‘Blue’ color information. But that’s not what you can get from a Bayer masked sensor, since the actual colors of each set of filters are not ‘Red’, ‘Green’, and ‘Blue’.

It’s not what we get from the cones in our retinas, either.

Keep in mind that there’s no specific color intrinsic in any wavelength of visible light, or in other wavelengths of electromagnetic radiation for that matter. The color we see in a light source at a specific wavelength is a product of our perception of it, not of the light source itself. A different species may well not perceive wavelengths included in the human-defined visible spectrum, just as many species of bugs and insects can perceive light at near-infrared wavelengths that do not produce a chemical response in human retinas.

Color is a construct of how our eye-brain system perceives electromagnetic radiation at certain wavelengths.

Our Bayer masks mimic our retinal cones far more than they mimic our RGB output devices.

The actual colors to which each type of retinal cone is most sensitive:

[Image: sensitivity curves for the three types of retinal cones]

Compare that to the typical sensitivity measurements of digital cameras (I’ve added vertical lines where our RGB – and sometimes RYGB – color reproduction systems output the strongest):

[Image: typical sensitivity curves of digital camera sensors, with vertical lines marking the output primaries]

The Myth of “only” red, “only” green, and “only” blue

If we could create a sensor so that the “blue” filtered pixels were sensitive to only 420nm light, the “green” filtered pixels were sensitive to only 535nm light, and the “red” filtered pixels were sensitive to only 565nm light, it would not produce an image that our eyes would recognize as anything resembling the world as we perceive it. To begin with, almost all of the energy of “white light” would be blocked from ever reaching the sensor, so it would be far less sensitive to light than our current cameras are. Any source of light that didn’t emit or reflect light at one of the exact wavelengths listed above would not be measurable at all, so the vast majority of a scene would be very dark or black. It would also be impossible to differentiate objects that reflect a LOT of light at, say, 490nm and none at 615nm from objects that reflect a LOT of 615nm light but none at 490nm, if they both reflected the same amounts of light at 535nm and 565nm. It would be impossible to tell apart many of the distinct colors we perceive.

Even if we created a sensor so that the “blue” filtered pixels were only sensitive to light below about 480nm, the “green” filtered pixels were only sensitive to light between 480nm and 550nm, and the “red” filtered pixels were only sensitive to light above 550nm, we would not be able to capture and reproduce an image that resembles what we see with our eyes. Although it would be more efficient than the sensor described above as sensitive to only 420nm, only 535nm, and only 565nm light, it would still be much less sensitive than the overlapping sensitivities provided by a Bayer masked sensor. The overlapping nature of the sensitivities of the cones in the human retina is what gives the brain the ability to perceive color from the differences in the responses of each type of cone to the same light. Without such overlapping sensitivities in a camera’s sensor, we wouldn’t be able to mimic the brain’s response to the signals from our retinas. We would not be able to, for instance, discriminate at all between something reflecting 490nm light and something reflecting 540nm light. In much the same way that a monochromatic camera cannot distinguish between any wavelengths of light, but only between intensities of light, we would not be able to discriminate the colors of anything that is emitting or reflecting only wavelengths that all fall within one of the three color channels.

Think of how it is when we are seeing under very limited spectrum red lighting. It is impossible to tell the difference between a red shirt and a white one. They both appear the same color to our eyes. Similarly, under limited spectrum red light anything that is blue in color will look very much like it is black because it isn’t reflecting any of the red light shining on it and there is no blue light shining on it to be reflected.

The whole idea that red, green, and blue would be measured discretely by a “perfect” color sensor is based on oft-repeated misconceptions about how Bayer masked cameras reproduce color (the green filter only allows green light to pass, the red filter only allows red light to pass, etc.). It is also based on a misconception of what ‘color’ is.

How Bayer Masked Cameras Reproduce Color

Raw files don’t really store any colors per pixel. They only store a single brightness value per pixel.

It is true that with a Bayer mask over each pixel the light is filtered with either a “Red”, “Green”, or “Blue” filter over each pixel well. But there’s no hard cutoff where only green light gets through to a green filtered pixel or only red light gets through to a red filtered pixel. There’s a lot of overlap. A lot of red light and some blue light gets through the green filter. A lot of green light and even a bit of blue light makes it through the red filter, and some red and green light is recorded by the pixels that are filtered with blue. Since a raw file is a set of single luminance values for each pixel on the sensor, there is no actual color information in a raw file. Color is derived by comparing adjoining pixels that are filtered for one of three colors with a Bayer mask.

Each photon vibrating at the corresponding frequency for a ‘red’ wavelength that makes it past the green filter is counted just the same as each photon vibrating at a frequency for a ‘green’ wavelength that makes it into the same pixel well.

It is just like putting a red filter in front of the lens when shooting black and white film. It doesn’t result in a monochromatic red photo, nor in a B&W photo where only red objects have any brightness at all. Rather, when photographed in B&W through a red filter, red objects appear a brighter shade of grey than green or blue objects that are the same brightness in the scene as the red object.

The Bayer mask in front of monochromatic pixels doesn’t create color either. What it does is change the tonal value (how bright or how dark the luminance value of a particular wavelength of light is recorded) of various wavelengths by differing amounts. When the tonal values (gray intensities) of adjoining pixels filtered with the three different color filters used in the Bayer mask are compared then colors may be interpolated from that information. This is the process we refer to as demosaicing.
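
As a toy illustration of that comparison step, here is a minimal bilinear-style demosaic of an RGGB mosaic (a sketch only, written in Java for concreteness; real raw converters use far more sophisticated, edge-aware algorithms):

/**
 * Toy demosaic for an RGGB Bayer mosaic. Each photosite in raw[y][x] holds a
 * single luminance value, as described above; the two channels missing at each
 * site are estimated by averaging the neighbouring sites that carry them.
 */
public final class ToyDemosaic {

    // RGGB layout: (even row, even col) = R, (odd row, odd col) = B, rest = G.
    static int channelAt(int x, int y) {
        if ((y & 1) == 0) return (x & 1) == 0 ? 0 : 1; // R or G row
        return (x & 1) == 0 ? 1 : 2;                   // G or B row
    }

    /** rgb[y][x][c]: the site's own channel is copied, the others averaged. */
    public static double[][][] demosaic(double[][] raw) {
        int h = raw.length, w = raw[0].length;
        double[][][] rgb = new double[h][w][3];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int own = channelAt(x, y);
                rgb[y][x][own] = raw[y][x]; // measured directly at this site
                for (int c = 0; c < 3; c++) {
                    if (c == own) continue;
                    double sum = 0;
                    int count = 0;
                    // Scan the 8 neighbours; keep those carrying channel c.
                    for (int dy = -1; dy <= 1; dy++) {
                        for (int dx = -1; dx <= 1; dx++) {
                            int nx = x + dx, ny = y + dy;
                            if ((dx != 0 || dy != 0) && ny >= 0 && ny < h
                                    && nx >= 0 && nx < w && channelAt(nx, ny) == c) {
                                sum += raw[ny][nx];
                                count++;
                            }
                        }
                    }
                    rgb[y][x][c] = count > 0 ? sum / count : 0;
                }
            }
        }
        return rgb;
    }
}

The point of the sketch is only that every output color is interpolated from the differing gray intensities of neighboring, differently filtered photosites; no photosite ever records “a color.”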

What Is ‘Color’?

Equating certain wavelengths of light with the “color” humans perceive for that specific wavelength is a bit of a false assumption. “Color” is very much a construct of the eye/brain system that perceives it and doesn’t really exist at all in the portion of the range of electromagnetic radiation that we call “visible light.” While it is the case that light of only a single discrete wavelength may be perceived by us as a certain color, it is equally true that some of the colors we perceive are not possible to produce with light of only a single wavelength.

The only difference between “visible” light and other forms of EMR that our eyes don’t see is that our eyes are chemically responsive to certain wavelengths of EMR while not being chemically responsive to other wavelengths. Bayer masked cameras work because their sensors mimic the trichromatic way our retinas respond to visible wavelengths of light and when they process the raw data from the sensor into a viewable image they also mimic the way our brains process the information gained from our retinas. But our color reproduction systems rarely, if ever, use three primary colors that match the three respective wavelengths of light to which the three types of cones in the human retina are most responsive.

Can you match color levels of semi-overlapping layers in GIMP using a single pixel color?

I am putting together fluorescence images taken on a microscope. I have taken sequential images that overlap by anywhere between 20 and 50 pixels at the edges. I have set up grids so that I have perfect overlap between them. These images are taken at different focus depths because our slide holder is slightly bent (we cannot get it fixed just yet), so the overlapping colors are slightly different.

Is there a method in GIMP to take a single pixel (or several) in the overlapping region of one layer and match it to the same pixel in an adjacent layer, so that the entire color level of the adjacent layer is adjusted to match the first layer based on that pixel selection?
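
In terms of the underlying arithmetic, what I have in mind is a per-channel gain computed from one reference pixel and applied to the whole adjacent layer, something like

$$s_c = \frac{p_c^{\text{ref}}}{p_c^{\text{adj}}}, \qquad I'_c(x,y) = s_c \, I_c(x,y), \qquad c \in \{R,G,B\},$$

where $p^{\text{ref}}$ is the chosen pixel in the first layer and $p^{\text{adj}}$ is the same pixel in the layer to be adjusted. (As far as I can tell this would correspond to a per-channel Curves/Levels adjustment in GIMP rather than a single built-in command.)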

The first image is my whole image with the guides removed. You can see the change in color between the various overlapping layers.

The second image shows two overlapping layers I separated. The thin rectangle on the right side of the left image and the thin rectangle on the left side of the right image will overlap when put together. I would like to use one bright spot in one of these rectangular regions to adjust the colors of the other layer based on the same spot.

I only recently started using GIMP and I am poorly experienced in image editing in general.

[Image: whole image, guides removed]

[Image: overlapping layers separated]

collider – How to prevent the OnTriggerEnter2D function from being called for specific pairs of Layers in Unity?

Is there maybe something like a trigger matrix similar to the collision matrix?

To avoid the XY problem, let me describe my use case.

I have a lot of different colliders and triggers in my game. There are pairs for which I want the OnTriggerEnter2D and OnTriggerExit2D callbacks to be called, and pairs for which I don’t. I can check the pairs inside the callbacks, of course, but I wondered whether there is a better way to do it.

I do not have a specific issue in my case; I just want to broaden my knowledge. Since I can achieve the desired result for OnCollisionEnter2D and OnCollisionExit2D with the collision matrix, I thought that maybe the same could be done for triggers.