Algorithms – Producer-Consumer Problem / Pipes and Filters

In computer science, operating-systems design and internals cover classic concurrency problems such as:
Producer-consumer problem
Readers-writers problem
Bounded buffer

Software architecture, in turn, teaches the pipes-and-filters pattern.

Scenario:
You have a god class with over 100 functions. Each is called with an argument as input (i.e., a message) and returns an output (i.e., a transformed message).

Approach:
You split it into 100 classes. Each is an independent producer, a consumer, or both at once. They communicate via a queue (publish-subscribe).

How do you solve the remaining challenge, load balancing? The producer generates 100 msg/s and the consumer handles 50 msg/s (assuming it is CPU-bound, not I/O-bound, so more threads would get more done).

A slow consumer is recognized by its growing queue: messages pile up. You have a multi-core CPU, and to start with, each consumer is a single thread.

Comment or answer: how would you achieve automatic scaling of a slow (CPU-bound) consumer from one thread to multiple threads, and even downscaling from multiple threads of the same consumer back to one when the queue is not full enough to keep all threads busy?
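One way to sketch the idea (a minimal illustration, not a production design; the function names, thresholds, and sampling interval below are my own assumptions, not from the post) is a controller thread that periodically samples the queue depth and grows or shrinks a pool of consumer threads between a minimum and a maximum:

```python
import queue
import threading
import time


def autoscale(q, consume, min_workers=1, max_workers=4,
              high_water=50, low_water=5, interval=0.1):
    """Grow the consumer pool while the queue backs up; shrink it when idle.

    `consume` is the per-message handler. All thresholds are illustrative.
    Returns a shutdown event for the controller and the live worker list.
    """
    workers = []
    stop_flags = []

    def worker(stop):
        # Consume until this worker's individual stop flag is set.
        while not stop.is_set():
            try:
                msg = q.get(timeout=0.05)
            except queue.Empty:
                continue
            consume(msg)
            q.task_done()

    def add_worker():
        stop = threading.Event()
        t = threading.Thread(target=worker, args=(stop,), daemon=True)
        t.start()
        workers.append(t)
        stop_flags.append(stop)

    for _ in range(min_workers):
        add_worker()

    def controller(shutdown):
        # Sample queue depth; grow on backlog, shrink when nearly idle.
        while not shutdown.is_set():
            depth = q.qsize()
            if depth > high_water and len(workers) < max_workers:
                add_worker()
            elif depth < low_water and len(workers) > min_workers:
                stop_flags.pop().set()
                workers.pop()
            time.sleep(interval)

    shutdown = threading.Event()
    threading.Thread(target=controller, args=(shutdown,), daemon=True).start()
    return shutdown, workers
```

The same feedback loop (queue depth as the control signal, hysteresis between a high and a low watermark to avoid thrashing) carries over directly to a C++ thread pool or to process-level scaling behind a message broker.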

8 – How can a token be generated for the consumer with simple_oauth?

I'm trying to generate a token with simple_oauth, but I'm lost in our Drupal installation. We use the Headless Lightning distribution with JSON:API, Consumers, and Simple OAuth for a decoupled project. Previously, our middleware was unauthenticated because it only retrieved published content. But now we need to require authentication.

I added a secret to the consumer and specified the scope for a role that has the permissions to execute the requests we need. Before configuring the middleware, I am testing my requests with curl. The main API Access page shows this documentation in the header:

Anonymous access to the API is granted in the same way that Drupal allows anonymous access to content. Generally, published content is available and unpublished content is not. If your application needs more privileged access (for example, accessing unpublished content or creating new content), you need to authenticate. Authentication involves a client associated with a role and a user assigned the same role as the client. Once you've set up a client and a user, you can get an access token as follows:

curl -X POST -d "grant_type=password&client_id={CLIENT_ID}&client_secret={SECRET}&username={USERNAME}&password={PASSWORD}" https://{YOURDOMAIN}/oauth/token

But I could not find any further information. I tried to make this request with my consumer's details:

curl -X POST -d "grant_type=password&client_id=3ed1bd1b-e25f-40af-aaed-54100eaf45d0&client_secret=test&username=API&password=test" https://local.test/oauth/token

But I get this error:

{"error":"invalid_client","error_description":"Client authentication failed","message":"Client authentication failed"}

I also tried this request:

curl -X POST -d "grant_type=client_credentials&client_id=3ed1bd1b-e25f-40af-aaed-54100eaf45d0&client_secret=test" http://local.test/oauth/token

But I get the same error. I do not understand why my client_id is not valid, and I cannot find any documentation on generating a token with my consumer.

c++ – Producer-Consumer with Threads and Boost Ring Buffer

I have two threads, one producer and one consumer. My consumer is always late (due to a costly function call, simulated with a sleep in the following code), so I use a ring buffer because I can afford to lose some events.

I would like to know whether my locking is okay, plus general C++ review comments.

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>
#include <boost/circular_buffer.hpp>

std::atomic<bool> mRunning;
std::mutex m_mutex;
std::condition_variable m_condVar;

class VecBuf {
    private:
    std::vector<int> vec;

    public:
    VecBuf() = default;
    VecBuf(std::vector<int> v)
    {
        vec = v;
    }
};

std::vector<int> data{ 10, 20, 30 };

class Detacher {
    public:
    template <typename Function, typename... Args>
    void createTask(Function &&func, Args&& ... args) {
        m_threads.emplace_back(std::forward<Function>(func), std::forward<Args>(args)...);
    }
    }

    Detacher() = default;
    Detacher(const Detacher&) = delete;
    Detacher & operator=(const Detacher&) = delete;
    Detacher(Detacher&&) = default;
    Detacher& operator=(Detacher&&) = default;

    ~Detacher() {
        for (auto& thread : m_threads) {
            thread.join();
        }
    }

    private:
    std::vector<std::thread> m_threads;
};

void foo_1(boost::circular_buffer<VecBuf> *cb)
{
    while (mRunning) {
        std::unique_lock<std::mutex> mlock(m_mutex);

        m_condVar.wait(mlock, [=]() { return !cb->empty(); });

        VecBuf local_data(cb->front());
        cb->pop_front();
        mlock.unlock();
        if (!mRunning) {
            break;
        }
        //simulate time consuming function call and consume local_data here
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    while (cb->size()) {
        VecBuf local_data(cb->front());
        cb->pop_front();
        if (!mRunning) {
            break;
        }
    }
}

void foo_2(boost::circular_buffer<VecBuf> *cb)
{
    while (mRunning) {
        std::unique_lock<std::mutex> mlock(m_mutex);

        while (cb->full()) {
            mlock.unlock();
            /* can we do better than this? */
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            mlock.lock();
        }
        cb->push_back(VecBuf(data));
        m_condVar.notify_one();
    }
}

int main()
{
    mRunning = true;
    boost::circular_buffer<VecBuf> cb(100);
    Detacher thread_1;
    thread_1.createTask(foo_1, &cb);
    Detacher thread_2;
    thread_2.createTask(foo_2, &cb);
    std::this_thread::sleep_for(std::chrono::milliseconds(20000));
    mRunning = false;
}

Dell R610 with consumer SSDs in RAID 10

Hello

We have two old servers that have been running consumer-grade 1 TB SSDs for the last year without any problems. However, we now want to move to Intel DC enterprise SSDs.

Since the servers run RAID 10, I would like to know: if we remove one SSD and insert a different SSD of similar size but a different brand, say replacing a Crucial MX500 1 TB with an Intel DC 960 GB, will the array rebuild successfully, and can we continue that way until all drives have been replaced with Intel DC SSDs?

Is this a feasible option, or a dangerous one?

Extend the functionality of existing classes without breaking consumer code

Here are a number of classes that are used to create WHERE clauses for SQL Server and Oracle for different field types, e.g. text, numeric, and date:

public interface IConditionBuilder
{
    bool CanHandle(FilterAction filterAction);

    string BuildCondition(SearchCondition filterCondition);
}

public abstract class ConditionBuilder<TContext> : IConditionBuilder where TContext : FieldSearchContext
{
    public abstract string OperatorSymbol { get; }

    public string BuildCondition(SearchCondition searchCondition)
    {
        var conditionBuilder = new StringBuilder();

        var context = searchCondition.GetContext<TContext>();

        conditionBuilder.Append(context.FieldId);
        conditionBuilder.Append(OperatorSymbol);
        conditionBuilder.Append(GetValue(context));

        return conditionBuilder.ToString();
    }

    public abstract bool CanHandle(FilterAction filterAction);

    public abstract object GetValue(TContext context);
}

public class TextLikeConditionBuilder : ConditionBuilder<TextContext>
{
    public override string OperatorSymbol => "LIKE";

    public override bool CanHandle(FilterAction action) => action == FilterAction.TextLike;

    public override object GetValue(TextContext context)
    {
        if (context.Text == null)
        {
            return null;
        }

        return string.Concat("%", context.Text, "%");
    }
}

public class TextEqualsConditionBuilder : ConditionBuilder<TextContext>
{
    public override string OperatorSymbol => "=";

    public override bool CanHandle(FilterAction action) => action == FilterAction.TextEqual;

    public override object GetValue(TextContext context)
    {
        if (context.Text == null)
        {
            return null;
        }

        return "'" + context.Text + "'";
    }
}

public class NumericLessThanConditionBuilder : ConditionBuilder<NumericContext>
{
    public override string OperatorSymbol => " < ";

    public override bool CanHandle(FilterAction action) => action == FilterAction.NumericLessThan;

    public override object GetValue(NumericContext context)
    {
        return context.Number;
    }
}

public class DateGreaterThanAndLessThanEqualConditionBuilder : IConditionBuilder
{
    public const string GREATER_THAN = ">";

    public const string LESS_THAN_EQUAL = "<=";

    public string BuildCondition(SearchCondition filterCondition)
    {
        var conditionBuilder = new StringBuilder();

        var context = filterCondition.GetContext<DateContext>();

        conditionBuilder.Append(context.FieldId);
        conditionBuilder.Append(GREATER_THAN);
        conditionBuilder.Append("'" + context.FromDate + "'");
        conditionBuilder.Append(LESS_THAN_EQUAL);
        conditionBuilder.Append("'" + context.EndDate + "'");
        return conditionBuilder.ToString();
    }

    public bool CanHandle(FilterAction action) => action == FilterAction.DateGreaterThanLessThan;
}

I would like to extend the functionality to sanitize context.FieldId before the condition statement is created. For example, these classes currently create a statement like Name = 'Aashish'; I want them to create [Name] = 'Aashish'. These classes are used by other developers, so I do not want to break consumer code with the changes I am about to make; basically, the open-closed principle applies. Here is how I implemented the changes. Note how I added a virtual method, SanitizeFieldId, in ConditionBuilder and DateGreaterThanAndLessThanEqualConditionBuilder:

public abstract class ConditionBuilder<TContext> : IConditionBuilder where TContext : FieldSearchContext
{
    public abstract string OperatorSymbol { get; }

    public string BuildCondition(SearchCondition searchCondition)
    {
        var conditionBuilder = new StringBuilder();

        var context = searchCondition.GetContext<TContext>();

        conditionBuilder.Append(SanitizeFieldId(context.FieldId));
        conditionBuilder.Append(OperatorSymbol);
        conditionBuilder.Append(GetValue(context));

        return conditionBuilder.ToString();
    }

    public abstract bool CanHandle(FilterAction filterAction);

    public abstract object GetValue(TContext context);

    protected virtual string SanitizeFieldId(string fieldId)
    {
        return fieldId;
    }
}

public class DateGreaterThanAndLessThanEqualConditionBuilder : IConditionBuilder
{
    public const string GREATER_THAN = ">";

    public const string LESS_THAN_EQUAL = "<=";

    public string BuildCondition(SearchCondition filterCondition)
    {
        var conditionBuilder = new StringBuilder();

        var context = filterCondition.GetContext<DateContext>();

        conditionBuilder.Append(SanitizeFieldId(context.FieldId));
        conditionBuilder.Append(GREATER_THAN);
        conditionBuilder.Append("'" + context.FromDate + "'");
        conditionBuilder.Append(LESS_THAN_EQUAL);
        conditionBuilder.Append("'" + context.EndDate + "'");
        return conditionBuilder.ToString();
    }

    public bool CanHandle(FilterAction action) => action == FilterAction.DateGreaterThanLessThan;

    protected virtual string SanitizeFieldId(string fieldId)
    {
        return fieldId;
    }
}

public class SanitizedFieldConditionBuiler<TContext> : ConditionBuilder<TContext> where TContext : FieldSearchContext
{
    private ConditionBuilder<TContext> _baseConditionBuilder;
    private IColumnSanitizer _columnSanitizer;

    public SanitizedFieldConditionBuiler(ConditionBuilder<TContext> baseConditionBuilder, IColumnSanitizer columnSanitizer)
    {
        _baseConditionBuilder = baseConditionBuilder;
        _columnSanitizer = columnSanitizer;
    }

    public override string OperatorSymbol => _baseConditionBuilder.OperatorSymbol;

    public override bool CanHandle(FilterAction filterAction) => _baseConditionBuilder.CanHandle(filterAction);

    public override object GetValue(TContext context) => _baseConditionBuilder.GetValue(context);

    protected override string SanitizeFieldId(string fieldId)
    {
        return _columnSanitizer.Sanitize(fieldId);
    }
}

public class SanitizedDateFieldGreaterThanAndLessThanEqualConditionBuilder : DateGreaterThanAndLessThanEqualConditionBuilder
{
    private IColumnSanitizer _columnSanitizer;

    public SanitizedDateFieldGreaterThanAndLessThanEqualConditionBuilder(IColumnSanitizer columnSanitizer)
    {
        _columnSanitizer = columnSanitizer;
    }

    protected override string SanitizeFieldId(string fieldId)
    {
        return _columnSanitizer.Sanitize(fieldId);
    }
}

I use extension methods to initialize SanitizedFieldConditionBuiler and SanitizedDateFieldGreaterThanAndLessThanEqualConditionBuilder, as shown below:

public static class Extensions
{
    public static SanitizedFieldConditionBuiler<TContext> SanitizeField<TContext>(this ConditionBuilder<TContext> source, IColumnSanitizer columnSanitizer) where TContext : FieldSearchContext
    {
        return new SanitizedFieldConditionBuiler<TContext>(source, columnSanitizer);
    }

    public static SanitizedDateFieldGreaterThanAndLessThanEqualConditionBuilder SanitizeField(this IConditionBuilder source, IColumnSanitizer columnSanitizer)
    {
        return new SanitizedDateFieldGreaterThanAndLessThanEqualConditionBuilder(columnSanitizer);
    }
}

Sanitizing is exposed through an interface, IColumnSanitizer, which has two implementations, one for SQL Server and one for Oracle:

public interface IColumnSanitizer
{
    string Sanitize(string columnName);
}

public class SqlSanitizer : IColumnSanitizer
{
    public string Sanitize(string columnName)
    {
        return "[" + columnName + "]";
    }
}

public class OracleSanitizer : IColumnSanitizer
{
    public string Sanitize(string columnName)
    {
        return "\"" + columnName + "\"";
    }
}

The context classes are implemented as follows:

public abstract class FieldSearchContext
{
    public virtual string FieldId { get; }

    protected FieldSearchContext(string fieldId)
    {
        FieldId = fieldId;
    }
}

public class DateContext : FieldSearchContext
{
    public DateContext(string fieldId, DateTime? fromDate, DateTime? endDate) : base(fieldId)
    {
        FromDate = fromDate;
        EndDate = endDate;
    }

    public DateTime? FromDate { get; }

    public DateTime? EndDate { get; }
}

public class TextContext : FieldSearchContext
{
    public TextContext(string fieldId, string text) : base(fieldId)
    {
        Text = text;
    }

    public string Text { get; }
}

public class NumericContext : FieldSearchContext
{
    public NumericContext(string fieldId, decimal number) : base(fieldId)
    {
        Number = number;
    }

    public decimal Number { get; }
}

These changes work fine, but I would like to find out whether this can be achieved in a different, better way.

The following code shows it in action:

class Program
{
    static void Main(string[] args)
    {
        var conditions = new List<SearchCondition>()
        {
            new SearchCondition(new NumericContext("Numeric Field", 1234), FilterAction.NumericLessThan),
            new SearchCondition(new TextContext("Text Field", "ASDF"), FilterAction.TextEqual),
            new SearchCondition(new TextContext("Text Field", "QWERTY"), FilterAction.TextLike),
            new SearchCondition(new DateContext("Date Field", DateTime.Now, DateTime.Now.AddYears(1)), FilterAction.DateGreaterThanLessThan)
        };

        Console.WriteLine(BuildWhereClause(Operation.AND, conditions));
        Console.Read();
    }

    private static string BuildWhereClause(Operation operation, IList<SearchCondition> conditions)
    {
        var returnValue = new List<string>();
        var conditionBuilders = new List<IConditionBuilder>()
        {
            new TextEqualsConditionBuilder().SanitizeField(new SqlSanitizer()),
            new NumericLessThanConditionBuilder().SanitizeField(new SqlSanitizer()),
            new TextLikeConditionBuilder().SanitizeField(new SqlSanitizer()),
            new DateGreaterThanAndLessThanEqualConditionBuilder().SanitizeField(new SqlSanitizer())
        };

        foreach (var condition in conditions)
        {
            var conditionBuilder = conditionBuilders.FirstOrDefault(u => u.CanHandle(condition.FilterAction));
            returnValue.Add(conditionBuilder.BuildCondition(condition));
        }

        if (returnValue.Any())
            return string.Join(" " + operation + " ", returnValue);

        return string.Empty;
    }
}

enum Operation : int
{
    AND = 1,
    OR = 2
}

[GET] Digital Marketing Analytics: Making consumer data tangible in a digital world (Que Biz-Tech)

Good news: your competitors have none of this. It is hard! But digital marketing analytics is 100% doable, it offers tremendous opportunity, and all the data is accessible to you. Chuck Hemann and Ken Burbary help you right-size the problem, solve every piece of the puzzle, and integrate an almost seamless system for moving from data to decision-making, from actions to results! Seize the opportunities, choose your tools, learn to listen, set the right metrics, and distill your digital data for maximum value in everything from R&D to CRM to social media marketing!
* Prioritize – because you can't measure, listen to, and analyze everything
* Use analytics to gain deep insight into each customer's needs, expectations, and behaviors
* Measure real social media ROI: sales, leads, and customer satisfaction
* Track the performance of all paid, earned, and owned social media channels
* Use "listening data" far beyond PR and marketing: for strategic planning, product development, and HR
* Start optimizing web and social content in real time
* Implement advanced tools, processes, and algorithms to accurately measure impact
* Integrate paid and organic social data to get more from both
* Make use of surveys, focus groups, and synergies from offline research
* Focus new marketing and social media investments where they deliver the most value

Foreword by Scott Monty
Global Head of Social Media at Ford Motor Company
DOWNLOAD


Streaming – Problem configuring Spark as a Kafka consumer in Scala

Hello all, here is my code. I am trying to configure Spark as a Kafka consumer, but I get an exception. A first problem is that the web UI binds to 0.0.0.0 or a local IP on port 4040, which I cannot reach in the browser. The main exception is quoted below the code. Thanks for your help:

################################################

import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession

object Teal extends App {
  val spark = SparkSession
    .builder()
    .master("local[*]")
    .appName("teal")
    .getOrCreate()
  import spark.implicits._

  val df = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "127.0.0.1:9090")
    .option("subscribe", "test")
    .load()

  df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .as[(String, String)]

  val query = df.writeStream
    .outputMode("complete")
    .format("console")
    .start()
}

################################################

The problem is: Exception in thread "main" org.apache.spark.sql.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of the Structured Streaming + Kafka Integration Guide.
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:652)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:161)

java – Producer-Consumer solution with JDBI, MySQL, HikariCP

The app is supposed to solve the producer-consumer problem. It would be great to get feedback on the overall design, testability, and general guidance for further improvement. I have doubts about choosing the right approach for testing a temporary database failure.

@Slf4j
public class DataSource {

    private static final String CONFIG_FILE = "src/main/resources/db/db.properties";
    private static HikariConfig config = new HikariConfig(CONFIG_FILE);

    private static HikariDataSource ds;

    static {
        try {
            ds = new HikariDataSource(config);
        } catch (CJCommunicationsException | HikariPool.PoolInitializationException e) {
            log.info(e.getMessage());
        }
    }

    private static class Holder {
        static DataSource INSTANCE = new DataSource();
    }

    private DataSource() {}

    public static DataSource getInstance() {
        return Holder.INSTANCE;
    }

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }

    public static HikariDataSource getDs() {
        return ds;
    }
}
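On the "testing a temporary database failure" doubt, one common approach is to wrap the connection attempt in a retry with exponential backoff, and to test it by injecting an operation that fails a fixed number of times before succeeding. The sketch below is language-agnostic and shown in Python for brevity; the helper name and parameters are illustrative assumptions, not part of JDBI or HikariCP:

```python
import time


def with_retry(op, retries=3, base_delay=0.01, transient=(ConnectionError,)):
    """Call op(); on a transient error, back off exponentially and retry.

    Re-raises the last error once the allowed attempts are exhausted.
    `transient` lists the exception types considered temporary failures.
    """
    for attempt in range(retries):
        try:
            return op()
        except transient:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

A test can then simulate a transient outage deterministically: a stub that raises ConnectionError twice and succeeds on the third call should produce a successful result after exactly three invocations, with no real database involved.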

Shutting down Google+ for personal (consumer) accounts on April 2, 2019

This is a discussion about the shutdown of Google+ for personal (consumer) accounts on April 2, 2019 within the Search Engine Optimization forums, part of the Internet Marketing category: https://support.google.com/plus/answ…=&&&authuser=0 …
