Java – Should algorithms and data structures be avoided in business web applications?

I am creating a Java and Spring web application that will scrape data from a website and then publish it through an API. Some of the scraped raw data comes in the form of a Set, which I then convert to another Set. The thing is, there is a field that in many cases has a repeating value, and I want to avoid converting it to SomeObject several times – it's not an expensive operation, but still.

So far I have found two solutions:

  • Extract this field into a Map of SomeObject values – that's easy to do from the website I'm scraping (about 5 lines of code).
  • Leave the field inside SomeObject and create an internal service with a cache to convert this field, which is also easy with Spring.

The possible values are not that many; 7 values cover 95% of the entities, but sometimes other values turn up beyond the ones I can account for.

I like the second solution in that the code stays cleaner – no complicated map-and-set structure – but in my opinion it is also more complex machinery than is really necessary.
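For reference, the second solution doesn't even need Spring's caching machinery – a minimal memoizing converter is enough. In the sketch below, SomeObject is a stand-in record for the real type and the String key is an assumption:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of solution 2: memoize the conversion so each distinct
// raw value is converted only once, however often it repeats.
public class CachingConverter {

    // placeholder standing in for the question's SomeObject
    public record SomeObject(String value) {}

    private final Map<String, SomeObject> cache = new ConcurrentHashMap<>();

    public SomeObject convert(String rawValue) {
        // computeIfAbsent runs the conversion at most once per distinct value
        return cache.computeIfAbsent(rawValue, SomeObject::new);
    }
}

In a Spring service the same effect can come from annotating the conversion method with @Cacheable, which is presumably what "an internal service with a cache" means here.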

Which do you think is better?

Bluetooth – is it possible to adjust video/audio synchronization for elements in a web browser?

My Bluetooth headphones have a slight delay, probably around 250 ms, though it seems to vary from day to day. Otherwise I like them, but the delay makes watching movies very annoying because the lips don't match the sound.

When I watch movies in VLC, I can easily adjust the audio/video sync so that everything lines up. Is it possible to do the same in any browser? YouTube seems to compensate for this (somehow), but other, less capable sites like Disney+ are not smart enough. If I could just delay the video by about 200 ms, it would all line up.

Is that possible? I've seen some extensions that do this by creating and syncing a second video element, but that seems wasteful.
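For what it's worth, the mechanism behind those extensions can be sketched in a few lines of page JavaScript. This is only an illustration under assumptions – the 0.2 s delay, the selector, and the re-sync interval are made up, layout handling is omitted, and a naive clone like this will not work on DRM/MSE players such as Disney+:

// Rough sketch of the "second video element" trick: the original element
// keeps supplying the audio while a muted clone, played ~200 ms behind,
// supplies the picture.
const DELAY = 0.2; // seconds of video delay; tune to your headphones

const original = document.querySelector('video');
const clone = original.cloneNode(true);
clone.muted = true;                      // audio stays with the original
clone.src = original.currentSrc;         // only works for plain <video src>
original.after(clone);
original.style.visibility = 'hidden';    // watch the delayed clone instead

clone.currentTime = Math.max(0, original.currentTime - DELAY);
clone.play();

// re-seek once a second so the clone stays DELAY seconds behind
setInterval(() => {
  const drift = original.currentTime - DELAY - clone.currentTime;
  if (Math.abs(drift) > 0.05) clone.currentTime += drift;
}, 1000);

Smarter implementations presumably nudge playbackRate instead of seeking, to avoid visible jumps.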

I'm not interested in solutions that try to diagnose the root cause. If anyone knows how to stream Disney+ through VLC (or use VLC as the media player inside a browser), I would be interested in that too.

Precise measurement of the execution times of ASP.NET Core 3.x actions (web API project)?

I want to be able to log the time spent by a particular web API action in an ASP.NET Core 3.x application.

There is a very old ASP.NET question about this, based on global action filters, but in ASP.NET Core I think middleware is more appropriate.

From a customer perspective, I want to measure the following time as accurately as possible:

Time to first byte minus the time spent sending the request

So I implemented the following, slightly adapted from code on c-sharpcorner:

using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

/// <summary>
/// Tries to measure request processing time.
/// </summary>
public class ResponseTimeMiddleware
{
    // Name of the response header; custom headers start with "X-"
    private const string ResponseHeaderResponseTime = "X-Response-Time-ms";

    // Handle to the next middleware in the pipeline
    private readonly RequestDelegate _next;

    public ResponseTimeMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public Task InvokeAsync(HttpContext context)
    {
        // Skip measuring non-actual work like OPTIONS (CORS preflight) requests
        if (context.Request.Method == "OPTIONS")
            return _next(context);

        // Start the timer
        var watch = Stopwatch.StartNew();

        context.Response.OnStarting(() => {
            // Stop the timer and calculate the elapsed time
            watch.Stop();
            var responseTimeForCompleteRequest = watch.ElapsedMilliseconds;
            // Add the response time to the response headers
            context.Response.Headers[ResponseHeaderResponseTime] = responseTimeForCompleteRequest.ToString();

            var logger = context.RequestServices.GetService<ILogger<ResponseTimeMiddleware>>();
            string fullUrl = $"{context.Request.Scheme}://{context.Request.Host}{context.Request.Path}{context.Request.QueryString}";
            logger?.LogDebug($"(Performance) Request to {fullUrl} took {responseTimeForCompleteRequest} ms");

            return Task.CompletedTask;
        });

        // Call the next delegate/middleware in the pipeline
        return _next(context);
    }
}

Startup.cs (insert middleware)

public void Configure(IApplicationBuilder app, IWebHostEnvironment env, ILoggerFactory loggerFactory,
    ILoggingService logger, IHostApplicationLifetime lifetime, IServiceProvider serviceProvider)
{
    app.UseResponseCaching();

    app.UseMiddleware<ResponseTimeMiddleware>();

    // ...
}

Is that a good approach? I am mainly interested in accuracy and in not wasting server resources.
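On the resource side: if allocating a Stopwatch per request ever shows up in profiling, InvokeAsync could capture raw timestamps instead. Here is a sketch of the same middleware body using only APIs available in .NET Core 3.x (note the OnStarting closure still allocates, so the saving is small):

public Task InvokeAsync(HttpContext context)
{
    if (context.Request.Method == "OPTIONS")
        return _next(context);

    // raw tick count instead of a Stopwatch instance
    long startTimestamp = Stopwatch.GetTimestamp();

    context.Response.OnStarting(() =>
    {
        double elapsedMs = (Stopwatch.GetTimestamp() - startTimestamp)
            * 1000.0 / Stopwatch.Frequency;
        context.Response.Headers[ResponseHeaderResponseTime] = elapsedMs.ToString("F1");
        return Task.CompletedTask;
    });

    return _next(context);
}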

Storing usernames and passwords – Web Hosting Talk

It's Friday … little jokes …

What is wrong with people who do not know their own username and password, or who don't feel obliged to know them?

Don't get me wrong, I'm using a password manager and no, I can't tell you what all of my passwords are without opening it. But at least I understand that there is an obligation to know my password. I do not expect to get at my bank details on my bank's website just because I am me and the website should somehow know that.

Perhaps people store their passwords in their browsers and simply rely on them being filled in automatically and correctly (I do not agree with this practice), but is there no obligation to know what the password is? Did you keep it somewhere? In a password manager? In a notebook in your desk drawer? Somewhere.

Am I right?

20 WEB 2.0 and 10 DA 50+ backlinks for $10

20 WEB 2.0 backlinks and 10 DA 50+

Campaign details:

  • 20 Web 2.0 blogs (dedicated accounts): high-quality Web 2.0 backlinks – unique articles containing contextual backlinks with your exact keyword as the anchor, pointing to your link(s)/keyword(s).
  • 10 DA (Domain Authority) 50+: all links in this service come from sites with DA 50+, with a complete detailed report including all links/accounts created.


Secure your website with a web scanner | Get 20% OFF | ESDS VTMScan – advertising, offers

Standards also help as we fight to ensure that the costs of sharing do not outweigh the benefits


A cartoon published years ago in The New Yorker summed it up: "On the Internet, nobody knows you're a dog." If that cartoon were drawn today, the caption might be: "On the internet, nobody knows you're a scam."

Scammers, snake-oil sellers, sock puppets, bot armies and bullies – every time we look up, we seem to discover another form of dishonesty that has grown to global scale through the wonderful but terrifying combination of internet and smartphone.

None of this should surprise us. People are both wonderful and terrible. The network we have built for ourselves serves the honest and the liar alike. But we have no infrastructure for managing a planet of thieves.

Navigating this stuff goes far beyond "caveat emptor" and into the darker arts of spear phishing and social engineering, which play on our better selves for the basest of reasons. It is no longer an African prince offering you a hundred million dollars for your help. It is a customer who has carefully recorded all of her transactions and registration numbers in a Word document, delivered in a very helpful email.

Security concerns have been pushed to extremes. If things continue as they are, the costs of connectivity could come to outweigh the benefits, and at that point the already frayed web culture of sharing and knowledge would unravel completely, as people and businesses retreat behind defensible borders and call it a day.

All of this served as a subtext at the 26th International Conference on the World Wide Web – never spoken aloud, but always in mind. In a broader sense, this is the web's flaw – the shadow of its culture of sharing. So could it be a problem that the web itself can fix?

This question preoccupied the hundreds of doctoral students who presented papers and posters at the conference. Insofar as the submissions of the web's core research community are a reliable indicator of where the web is heading, the future will focus on learning how to recognize lies.

Detecting false advertising, bullies and bots – all of it can be machine-learned. It can even be applied to a politician's tweets – to work out whether and when they are being straight about where they stand.

This flood of research goes back to one of the oldest problems in computer science – the Turing test. Can you tell whether the party at the other end of a text-based connection is a person or a computer? What questions do you ask? How do you analyze the answers? Take the same ideas and apply them to a seller on Alibaba or an account on Twitter – ask the questions, analyze and review the answers – and then decide: truth or lie.

When Sir Tim Berners-Lee won the ACM A.M. Turing Award last week, the timing for this next evolution of his web could not have been more appropriate. The web must build a meta-layer of error checking and truth finding. That will likely slow things down a bit, but in exchange we can feel more confident that the counterfeit can be suppressed.

It will never be as reliable as we would like. Once a lie-detecting system becomes widespread, the least honest and the smartest will work to undermine its algorithmic determination of the truth, find its weaknesses, and exploit them. It was ever thus; in the long run, the search for truth has always been an act of persistence and dedication.

Machines can help us in this fight – but machines are deployed on both sides, to deceive and to uncover deception. Still, there is hope: there is too much money on the table to let the forces of darkness prevail. Chaos is bad for business.

Any alignment of commerce with the common good is a rare and powerful combination, which means the resources for this fight will be available for the foreseeable future. These students, with their fraud- and bot-detection algorithms, will be snapped up by the giant companies whose profits depend on a web that is truthful enough for trade. What is good for Google and Facebook is good for the rest of us.

8 – How do I display the current page title in the Twig template of a webform?

I've seen some questions about this (this one is the most interesting I could find), but couldn't find a satisfactory answer.

I have a Drupal website where the webmaster can create a certain type of content. A webform is embedded in this content type, with its subject field adjusted so that the webmaster can tell which page the webform was submitted from: the subject looks like New submission has been made - (page title). With Drupal tokens I got this working without a problem, but I also need to display the page title in the Twig template that is sent.

So far I've tried the following:

- [current-page:title] {# without webform_token #}

- {{ webform_token('[current-page:title]') }} {# displays '[current-page:title]' #}

- {{ current_page }} and {{ current_page.title }} {# doesn't display anything #}

- {{ webform_token('[current-page:title]', current_page) }} {# can't even save the change, I get an AJAX error #}

So … is there a solution? I'm considering the "create a hidden field with the title of the current page" workaround, but my lead developer won't be fond of that solution unless there are no other options.
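One possible middle ground is to expose the title from a preprocess hook rather than a hidden field. The sketch below is an assumption to verify, not a known-good recipe: mymodule is a hypothetical module name, and whether hook_preprocess_webform fires for this particular template depends on how the output is themed. The title_resolver service itself is standard Drupal 8 core:

function mymodule_preprocess_webform(array &$variables) {
  // Resolve the current page title the way Drupal's page templates do.
  $request = \Drupal::request();
  $route = \Drupal::routeMatch()->getRouteObject();
  $variables['current_page_title'] = $route
    ? \Drupal::service('title_resolver')->getTitle($request, $route)
    : NULL;
}

The template could then print {{ current_page_title }} directly.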

Thanks in advance.

Set up and run a geo-targeted, low-bounce, 6-month web traffic campaign for $6

Set up and run a low-bounce, geo-targeted web traffic campaign for 6 months

KEYWORD-TARGETED website traffic with a low bounce rate and long visits for 6 months

► 300+ daily visitors for 6 months
★ Audience-targeted visitors from our advertising network, with thousands of visitors for a high-quality audience – spread around the clock across your website
★ High visitor diversity, which can have a positive impact on your website
★ No proxy or VPN visitors (we have the strongest filters)


$6 SERVICE FEATURES

  • 300+ KEYWORD VISITORS PER DAY FOR 6 months ON 1x URL
  • TARGET 5x KEYWORDS on 1x SEARCH ENGINE: GOOGLE, YAHOO or BING
  • WORLDWIDE or GEO-TARGETED visitors from: USA, EUROPE, ASIA or AFRICA
  • TARGET VISITOR PLATFORM: DESKTOP, MOBILE or MIXED
  • LOW BOUNCE RATE (0-20%) *
  • LONG VISIT DURATION (> 60 seconds) **
  • All orders are processed within 48 hours, regardless of how many orders are in the queue.

Please note: in order to maintain high quality in our network, we strictly do not accept websites with pornography, exit popups, frame breakers (unless the extra service is added), any illegal content, social media pages (with exceptions – contact us), illegal download/streaming sites, scams, copyright infringement, gambling, Java applets, hacking, fraud, fishy stuff, or websites that redirect to such content. We reserve the right to reject websites that we consider inappropriate. Of course, we cannot guarantee an increase in sales, clicks, leads, or interaction from the visitors with your website.
