Integration of a Squid proxy with anonymous open proxies to cache HTML responses

I need to cache HTML responses from anonymous proxies using a Squid caching server. However, my requirement is as follows:
from the client machine I need to connect to an anonymous proxy with credentials, IP, and port. All my requests are routed through a local Squid proxy server.
I tried the configuration given below, but I am not able to cache the response when I connect to the origin like this:
squidclient -h <IP-Anonymous_Proxy> -p -u -w https://www.example.com

However, I am able to cache using the following method:
squidclient -h <IP-squid_proxy> -p -u -w

My squid.conf file:

# General

http_port 3128
visible_hostname Proxy
forwarded_for delete
via off

logformat squid %tg.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
access_log /var/log/squid/access.log squid

cache_dir aufs /var/cache/squid 1024 16 256
coredump_dir /var/spool/squid

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network

acl SSL_ports port 443 # https
acl SSL_ports port 563 # snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 8080 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl purge method PURGE
acl CONNECT method CONNECT
###Cache Peer
cache_peer <IP-Anonymous_Proxy> parent <port> 0 no-query default login=username:password
never_direct allow all

http_access allow all
icp_access allow all
#always_direct allow all

request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access User-Agent allow all
request_header_access Cookie allow all
request_header_access All deny all

reply_header_access Via deny all
reply_header_access X-Cache deny all
reply_header_access X-Cache-Lookup deny all
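
One note on the failing case: clients request https:// URLs through a proxy with CONNECT, which Squid can only tunnel, so encrypted responses are never cached unless SSL bumping is configured; only plain http:// traffic can be stored with this configuration. Below is a minimal Python sketch (assuming the `requests` library and the placeholders from the question) to exercise the cache; since this squid.conf strips the Via and X-Cache reply headers, confirm hits in /var/log/squid/access.log (TCP_HIT / TCP_MEM_HIT) rather than in response headers:

import requests

# <IP-squid_proxy> is the placeholder from the question.
proxies = {
    "http": "http://<IP-squid_proxy>:3128",
    "https": "http://<IP-squid_proxy>:3128",
}

# Fetch the same URL twice; the second request should be a cache hit
# for plain HTTP. HTTPS goes through a CONNECT tunnel and bypasses
# the cache unless SSL bumping is configured.
for attempt in (1, 2):
    r = requests.get("http://www.example.com/", proxies=proxies)
    print(attempt, r.status_code, r.elapsed.total_seconds())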

dnd 5e – If I ready an action (spell) in response to a companion’s attack, what is a fair GM ruling on the order of events?

@Medix2 provides a well-cited answer for the RAW scenario, but I will propose some solutions to turn this ruling in your party’s favor.

The cleric is putting a lot on that readied action: concentrating until the trigger occurs, and potentially losing a spell slot if the trigger never occurs. I think there are opportunities here to translate your cleric player’s intent into the game such that it both works the way they want and plays nicely enough with the RAW.

From ‘Other Activity on Your Turn’ in the basic rules:

[…] You can communicate however you are able, through brief utterances and gestures, as you take Your Turn. […]

One way you could make this work, and potentially invite some combat roleplay, is to rule that the characters need to verbally coordinate such attacks, or create some other system of communicating this intent. The cleric holds their spell for the fighter’s signal and yells to the fighter ‘Let me know when!’; then the fighter, on their turn, yells to the cleric “Now!” while making a mad dash toward their target, sword in tow. This seems a clear enough ‘perceivable circumstance’ to act as a trigger, and eliminates the vague wording of “when the fighter starts to attack”.

If you have particularly roleplay-averse players, or if you want to introduce this method to them naturally, this character interaction could be described by you, the DM, to explain how the intended actions of the players can actually play out in the world in a way that is friendly with the rules. I feel that this is likely the best solution, as it both explains why the previous trigger didn’t work and sets a model to translate player intent to character action moving forward.

This second option is less strictly rules-friendly, but it is the way I rule readied-action triggers in my own games. The term ‘perceivable circumstance’ is not a defined game term, and is subject to your interpretation. Because of this, I would allow an ambiguous trigger such as ‘when X starts to attack’, but consider the triggered spell/attack/movement a simultaneous effect with the attack. Xanathar’s Guide proposes an excellent way to handle Simultaneous Effects in Chapter 2:

[…] If two things happen at the same time on a character or monster’s turn, the person at the game table – whether player or DM – who controls that creature decides the order in which those things happen. […]

If you handle it this way, then the fighter can choose to allow the bolt to go off before their attack. And, as a bonus, if the cleric attempts to ready their action for when an enemy is about to attack, that enemy will probably decide that their attack resolves first, as the original readied action rules would have dictated. In RP terms, a cleric and a fighter who have been travelling and fighting together for some time would likely understand each other’s intents in combat better than they’d understand that random enemy they stumbled upon; in short: coordinating with teammates is easy, but enemies are unpredictable. This ruling changes very little about how readied actions work, yet allows a lot of Rules as Fun combat interactions that might not otherwise work.

Toggles awaiting response

I have toggles in my app that communicate with the backend. Is it bad to show a loader on click of the button until you have a response? It somehow doesn’t feel right.

Is it an option to just switch it on and, if it fails, turn it back off? That also doesn’t seem right to me, since it might frustrate the user.

Am I not supposed to use toggles for something awaiting a response from the backend?

physics – 3D collision response advice for complex meshes

I am trying to use my own collision detection system in Unity to speed up my game. My problem is that when rigid bodies sit on top of static bodies, they will balance unrealistically on one edge or a corner. I really just need a better way of fixing this issue, but I will explain further.

I have tried adding a force at the point of collision to “settle” the rigid body into place, and this sort of works; the problem is that once the body has settled on top of a static mesh, it vibrates wildly while just sitting there. I can get it either to float down slowly into a settled position, or to settle quickly but vibrate wildly afterwards.
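
For what it’s worth, the trade-off you describe (slow floating vs. wild vibration) is the classic symptom of a penalty contact force with no damping term. Below is a minimal spring-damper sketch in plain Python (the constants, names, and signature are illustrative assumptions, not Unity API); the damping term bleeds off the energy that otherwise keeps the body oscillating:

STIFFNESS = 500.0  # spring constant k; tune to your scene's scale
DAMPING = 25.0     # damping constant c; too low -> jitter, too high -> mushy contacts

def contact_force(penetration_depth, normal, velocity_at_contact):
    # Push the body out along the contact normal, proportional to penetration.
    spring = STIFFNESS * penetration_depth
    # Oppose the velocity along the normal to damp the oscillation.
    v_normal = sum(v * n for v, n in zip(velocity_at_contact, normal))
    magnitude = max(spring - DAMPING * v_normal, 0.0)  # never pull into the surface
    return [magnitude * n for n in normal]

Applying a force like this at the contact point each physics step, instead of a raw outward push, usually lets bodies come to rest without the jitter.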

I would appreciate any ideas at all, even if they are bad ones. Thank you.

response time – Admin interface – use of “Please be patient”

In a recent update to one of our admin interfaces (1 user), we added the following warning at the top to remind the user that once they press the button, the operation can take several minutes to complete, and to keep them from reloading the page and re-submitting, etc. (which had happened before the message was added):

[screenshot of the admin warning message, ending with “Please be patient”]

The user, however, told us that they don’t get the point of saying “Please be patient”, especially in a business/admin interface, and asked if it implies that the person operating it is “impatient”. The user also mentioned that this message isn’t “helpful” and comes across as an incriminating statement.

I believe I’ve seen “Please be patient” in other software applications, and I thought it was a standard message to display when a process can take a long or unpredictable time.

Was the use of “Please be patient” a bad idea here?
How would you formulate the message above?

python – How do you perform accumulation on large data sets and pass the results as a response to a REST API?

I have around 125 million event records on s3. The s3 bucket structure is: year/month/day/hour/*. Inside each hour directory, we have files for every minute. A typical filename looks like this: yy_mm_dd_hh_min.json.gz
Each file contains subscription events in json. The subscription record has the following fields:

  • Time of creation
  • Time of arrival
  • User id
  • Valid_Until
  • And other user data such as: age, gender, country, state, etc. Consider these to be filters.

I was to derive the following things from the data based on a given date range:

  1. Opening active subscribers: the closing active of the previous day. Of course, the opening active of the first day is 0. The first day was 01/01/2017.
  2. Acquired Subscription: All the subscriptions that occurred on the day.
  3. Renewed Subscription: All subscriptions that occurred on that day by users who had subscribed before.
  4. Churned Subscription: All subscriptions that expired on that day.
  5. Closing Active subscribers: Opening active + Acquired subscription + Renewed subscription – Churned Subscription.

This closing active will be the opening active for the next day. So you see there is a recursive pattern here. Closing active needs the opening active. Opening active is the closing active of the previous day.
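
To make that dependency concrete, here is a minimal sketch of the rollup (field names are hypothetical illustrations, not the original code):

def daily_metrics(days):
    # days: date-ordered list of dicts with the per-day 'acquired',
    # 'renewed' and 'churned' counts already aggregated.
    opening = 0  # the opening active of the first day (01/01/2017) is 0
    results = []
    for day in days:
        closing = opening + day["acquired"] + day["renewed"] - day["churned"]
        results.append({"opening_active": opening, "closing_active": closing, **day})
        opening = closing  # the closing active feeds the next day's opening
    return results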

I was to provide a REST API which, upon receiving a date range, could provide these 5 metrics for each day in the range, so that we could plot a graph from them.

Approach 1:

The first approach was to run a batch process on the s3 data, calculate these results for each day, and store them in a database. We used MongoDB for storage (we tried Cassandra but didn’t get too far with it because we lacked expertise and the client wanted the solution very quickly) and pyspark for data processing.
Upon a query to the REST API, the API would simply query MongoDB with the date range and get the results.
We ran the pyspark job on the entire data on s3, and once it finished we would simply monitor new events and add the calculations to MongoDB.

Problem with approach 1:

  1. There was a problem with backfilling. We were told to use time of creation for the calculation, and sometimes data that had been created a long time ago would arrive late.
    Since the late-arriving data would impact the closing active of a previous day, the opening active and closing active of every day after that would be affected.
    For this, we had a condition in the pyspark code: every record with a difference of more than a day between its time of creation and time of arrival would be handled by a different function. That function would update the calculations for the affected day and then for every day after it. The worst case was getting backfill data for the first day, because after updating the calculations for the first day, we would have to update the calculations for every other day up to 02/01/2019.

This approach was painfully slow, but that was acceptable because everything happened in the background and did not impact the performance of the REST API. The REST API would simply yield correct results once the update was complete.

  2. Filters. Like I mentioned above, we had user data such as age, gender, country, state, etc., and we were told to filter the results based on these values. The REST API would now also receive filters along with the dates. This might seem like no problem at all at first glance: simply apply the filters to the results returned by MongoDB. But the problem was the opening and closing active. The closing active would change based on the filter, and with it the opening and closing active of every day after it. This meant that for every filter combination, we would have to recalculate the whole thing.

So with the introduction of filters, we could no longer store precalculated results in the database, because the calculations would change with the filter, and they would change for the whole data set.

Approach 2:

Instead of storing calculations, we decided to store the entire data set from s3 (125 million records) in MongoDB (we had to shard Mongo). We simply could not store calculated results for each and every filter combination in MongoDB, as the filters would keep growing as more user data was added to the json, so we had to query the data source itself. We decided to store the data in MongoDB and, once it was there, first apply the filters and then use aggregation queries to calculate the opening and closing active.
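
A hedged sketch of that filter-then-aggregate step with pymongo (collection and field names are assumptions for illustration; the recursive rollup from the first record still has to run on top of these per-day counts):

from pymongo import MongoClient

def per_day_acquired(filters, start, end):
    # filters, e.g. {"country": "US", "gender": "f"}, come straight from the API.
    coll = MongoClient()["subscriptions"]["events"]
    pipeline = [
        {"$match": {"created_at": {"$gte": start, "$lte": end}, **filters}},
        {"$group": {
            "_id": {"$dateToString": {"format": "%Y-%m-%d", "date": "$created_at"}},
            "acquired": {"$sum": 1},
        }},
        {"$sort": {"_id": 1}},
    ]
    return list(coll.aggregate(pipeline))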

Problem with Approach 2:

Remember, the calculation of opening and closing active has to happen from the first record. This process took around 4-10 minutes in total.
Since a REST API cannot wait that long, the process happened in the background. The results were stored in Redis as key-value pairs, and the front end would periodically query another REST endpoint, which would in turn query Redis for updates and return the results.
This process was a hack, and it wasn’t accepted. The client wanted the latest data to appear first, but the latest data took the longest to calculate. This meant the client had to wait 4-10 minutes for the latest correct calculations to appear.
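
For clarity, the polling half of that hack looks roughly like this (Flask and redis-py are assumptions; the original stack was not specified):

import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
store = redis.Redis()

@app.route("/metrics/<job_id>")
def poll_metrics(job_id):
    # The background job writes the finished calculation under this key.
    raw = store.get(f"metrics:{job_id}")
    if raw is None:
        return jsonify({"status": "pending"}), 202  # the front end keeps polling
    return jsonify({"status": "done", "result": json.loads(raw)})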

Approach 3:

This approach was to use pyspark dataframes over MongoDB to calculate the results, and then do the same thing we did in approach 2: upload the results asynchronously into Redis. For some reason, my boss thought it would work. Luckily, I never got to try this solution, as I left the company.

So obviously, I lack expertise in the domain of big data. I went from building REST APIs to suddenly building these huge data systems, which none of us in the company had any experience with. Obviously, I made a lot of bad choices in the design of the system.

I am currently working with pyspark and Kafka a lot, but I am still no expert, and I have not encountered a scenario like this since that company. So I ask the community: what would be the correct approach to building a system that solves a problem like this?

java – Hiding sensitive information that is in the response

I have a method that needs to return the list of emails that one user has. I am looking for a way to hide the response from this specific method, because the user’s emails are very sensitive information in this case. I tried adding @JsonIgnore (org.codehaus.jackson.annotate.JsonIgnore) to the getter inside the User class and to the field as well, but that is not the solution in my case. This is the method that I created:

public List<String> getUsersEmails(String username) {
    User user = userDao.getUserByUsername(username);
    if (user != null && user.getEmails() != null && !user.getEmails().isEmpty()) {
        return user.getEmails();
    }
    return null;
}

python – Cannot access ADLS data, error ‘response’

Hello, I’m using this code to access an Azure Data Lake Gen2 and read a CSV:

from azure.datalake.store import core, lib, multithread
import pandas as pd

tenant_id = '<your Azure AD tenant id>'
username = '<your username in AAD>'
password = '<your password>'
store_name = '<your ADL name>'
token = lib.auth(tenant_id, username, password)
# Or you can register an app to get client_id and client_secret to get token
# If you want to apply this code in your application, I recommended to do the authentication by client
# client_id = '<client id of your app registered in Azure AD, like xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx'
# client_secret = '<your client secret>'
# token = lib.auth(tenant_id, client_id=client_id, client_secret=client_secret)

adl = core.AzureDLFileSystem(token, store_name=store_name)
f = adl.open('<your csv file path, such as data/test.csv in my ADL>')
df = pd.read_csv(f)

When I insert all the correct data, I receive this error:

----> 5 token = lib.auth(tenant_id, username, password)
      6 # Or you can register an app to get client_id and client_secret to get token
      7 # If you want to apply this code in your application, I recommended to do the authentication by client

/local_disk0/.ephemeral_nfs/envs/pythonEnv--------------/lib/python3.7/site-packages/azure/datalake/store/lib.py in auth(tenant_id, username, password, client_id, client_secret, resource, require_2fa, authority, retry_policy, **kwargs)
    148             raise ValueError("No authentication method found for credentials")
    149         return out
--> 150     out = get_token_internal()
    151 
    152     out.update({'access': out['accessToken'], 'resource': resource,

/local_disk0/.ephemeral_nfs/envs/pythonEnv-----------------/lib/python3.7/site-packages/azure/datalake/store/retry.py in f_retry(*args, **kwargs)
    104                     except:
    105                         pass
--> 106                 request_successful = last_exception is None or (response is not None and response.status_code == 401)  # 401 = Invalid credentials
    107                 if request_successful or not retry_policy.should_retry(response, last_exception, retry_count):
    108                     break

UnboundLocalError: local variable 'response' referenced before assignment

I have followed the indications and it still gives me this error; does anyone know how to solve it?
Is the problem with the key, or is there some configuration I have to do on the data lake?
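
Two hedged observations, since the account details aren’t shown: the UnboundLocalError in retry.py looks like the library masking an earlier failure inside the token request (often bad credentials or a 2FA-enabled account), and azure.datalake.store is the Gen1 SDK, while for a Gen2 account the azure-storage-file-datalake package is the intended client. A minimal sketch with placeholder names:

import io

import pandas as pd
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

# tenant_id, client_id and client_secret come from an app registration,
# as in the commented-out part of the question's code.
credential = ClientSecretCredential(tenant_id, client_id, client_secret)
service = DataLakeServiceClient(
    account_url="https://<your storage account>.dfs.core.windows.net",
    credential=credential,
)
file_client = service.get_file_client("<your filesystem>", "data/test.csv")
df = pd.read_csv(io.BytesIO(file_client.download_file().readall()))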