linux – What is the practical approach for CIS Benchmark (Ensure permissions on all logfiles are configured)?

I am trying to harden RHEL with CIS benchmark. One of the items states the following:

Ensure permissions on all logfiles are configured

Description: Log files stored in /var/log/ contain logged information
from many services on the system and, on log hosts, from other
systems as well.

Rationale: It is important to ensure that log files have the correct
permissions to ensure that sensitive data is archived and protected.
Other/world should not have the ability to view this information.
Group should not have the ability to modify this information.
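In practice the remediation boils down to clearing the group-write/execute bits and all "other" bits on files under /var/log (a commonly cited shell form is `find /var/log -type f -exec chmod g-wx,o-rwx "{}" +`; newer benchmark versions specify per-file modes instead of one blanket rule). A minimal Python sketch of the blanket version — `restrict_log_permissions` is my own helper name, and it needs root to be useful on a real system:

```python
import os
import stat


def restrict_log_permissions(root):
    """Clear group write/execute and all 'other' permission bits on every
    regular file under root -- the blanket form of the CIS remediation."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue  # don't follow symlinks out of the log tree
            mode = stat.S_IMODE(os.lstat(path).st_mode)
            new_mode = mode & ~(stat.S_IWGRP | stat.S_IXGRP | stat.S_IRWXO)
            if new_mode != mode:
                os.chmod(path, new_mode)
                changed.append(path)
    return changed


# restrict_log_permissions('/var/log')  # run as root
```

This leaves already-tight files alone and returns the list of files it changed, which is handy for an audit log.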

performance – Simple Python Benchmark

The following is a quick attempt at testing some limits of my computer:

import multiprocessing
import time


def main():
    # One lock-free unsigned 64-bit counter per CPU core.
    numbers = tuple(multiprocessing.Value('Q', 0, lock=False) for _ in range(multiprocessing.cpu_count()))
    processes = tuple(multiprocessing.Process(target=count, args=(number,)) for number in numbers)
    for process in processes:
        process.start()
    time.sleep(10)  # let the workers count for ten seconds
    for process in processes:
        process.terminate()
    print(sum(number.value for number in numbers))


def count(number):
    # Increment the shared counter as fast as possible until terminated.
    while True:
        number.value += 1


if __name__ == '__main__':
    main()

Without changing the overall design (adding one to a variable as many times as possible within a certain time limit), is there a way to improve the performance of the code? In this case, having a higher number printed out on the same computer is better.

benchmarking – Make a Simple Benchmark of an Academic Compiler

I hope this question is within the forum guidelines.

I developed an academic compiler for a compilers course, and in the report I would like to include some information about compilation time, from when I invoke the compiler until I have an executable file.
I don’t care about the execution time of the executable itself, because it is generated with LLVM, and a microbenchmark of it would make no sense without a real comparison against code generated by a different implementation of the same compiler.

In summary, my question is: are there tools like Google Benchmark that can help me take measurements of my compiler, which is written in OCaml? (The language is not important at the moment; I would just like to exchange information about this field.)
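I'm not aware of a Google Benchmark equivalent for whole-process timing (hyperfine is a popular off-the-shelf option for this), but end-to-end compile time mostly just needs repeated wall-clock timing of the compiler process. A minimal sketch in Python — the `./mycompiler main.src -o main` invocation is hypothetical, substitute your own:

```python
import subprocess
import time


def time_command(cmd, runs=10):
    """Run cmd repeatedly and return per-run wall-clock times in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return samples


# Hypothetical invocation -- substitute your compiler's real command line:
# times = time_command(['./mycompiler', 'main.src', '-o', 'main'])
# print(f'min {min(times):.3f}s  mean {sum(times) / len(times):.3f}s')
```

Reporting the minimum over many runs filters out scheduler noise; for a report, also record the machine, the size of the input program, and whether the filesystem cache was warm.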

c++ – Framework to benchmark your program's memory consumption

To benchmark the performance of my C++ code I use Google Benchmark. As we all know, it runs the code I want to benchmark many times and prints the time. In combination with a profiler, this is a super useful tool.
The question is whether there is such a tool for memory benchmarking. My particular need is to develop a reproducible procedure for measuring the memory used to perform some computations. Right now I use valgrind tools manually; I’m sure someone has automated this.
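The manual valgrind workflow can be scripted: massif writes snapshots whose `mem_heap_B=` lines record heap sizes, so the peak is just the maximum of those. A minimal driver sketch in Python (it assumes valgrind is on PATH; `measure_peak_heap` is my own helper, not part of any framework):

```python
import re
import subprocess
import tempfile


def massif_peak_bytes(massif_text):
    """Return the largest heap snapshot recorded in massif output text."""
    return max(int(n) for n in re.findall(r'^mem_heap_B=(\d+)$',
                                          massif_text, re.MULTILINE))


def measure_peak_heap(cmd):
    """Run cmd under valgrind's massif tool and return peak heap in bytes.

    Hypothetical automation of the manual workflow; requires valgrind.
    """
    with tempfile.NamedTemporaryFile(mode='r', suffix='.massif') as out:
        subprocess.run(['valgrind', '--tool=massif',
                        f'--massif-out-file={out.name}', *cmd],
                       check=True, capture_output=True)
        return massif_peak_bytes(out.read())
```

Wiring `measure_peak_heap` into a test suite with a threshold assertion gives you a reproducible memory regression check, which seems to be what you are after.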

$100 a day seems to be the benchmark

A little about myself

I have been a member here since May of 2010.

During that time I have been reading and making failed attempt after failed attempt. In the process I lost my job and went into a deep depression. My wife divorced me to be with another man and I was only able to see my kids every other weekend. To top it off I reached 400 pounds. What a bummer!

I truly hit rock bottom. I needed to pick myself up and do something about it. The first thing I did was blame only myself. I realize when you blame others you are only masking what the real problem is. To make a long story short, here is my current situation.

I have a job working for my ex-father-in-law

I am back with my ex-wife and enjoying her and my kids

I lost 60 pounds although I gained 20 back since being back with my ex-wife (spring will be here and it will drop again like crazy)

I am back to building websites

Now for my IM journey

I started looking for ways to make money online and dabbled with this and that throughout the years. I became a little more serious about it in 2009 when I was laid off from my job. I gave it a half-assed attempt over the last three years. Now I am ready to give it a go the right way.

I realized all the jumping around was no good for me. I needed a firm action plan. Now I have a short term plan and long term plan.

My short term goal and why I picked it
Niche websites

I am going to be building niche websites with either adsense, amazon, or clickbank products on it. I am also in the process of developing my own SEO service website. I plan on using this to generate some instant income while the niche websites will generate long term income.

I picked this method since it seems to be something I really can do, and I have proof: I have been building websites for my ex-father-in-law, and he let me put my AdSense account on his sites. I am not making much from them, but I am getting at least a few cents a day. In fact, I calculated it, and it averages 87 cents a day. Sure, that is nothing to write home about, but it tells me this is a doable system. Oh yeah, that was 87 cents a day on a couple of poorly optimized sites with piss-poor keyword research.

Now I am working on building 1 site a day if I can, or at least 1 site every two days. Each site will have 5-10 pages of content on it. I want to be completely random. I am building content using IAW combined with TBS. With those two pieces of software I can create a 500-word article that is unique in about 5 to 10 minutes. Then I post it to my site and optimize it in another 5 minutes. I also make one super spun article for AMR. I also have Proxy Goblin and Captcha Sniper. I figure with these tools I should be able to create and backlink a site in one day.

Once I start earning about 10 to 20 dollars per day, I will work on my SEO services site. I hope to offer keyword research, article creation, backlinks, and website building for others as well. I just want to make sure I have the process down before I offer such services.

Long term goal

I will start to use Flippa to multiply my earnings. I should be able to sell my lower-earning niche sites for a decent profit. Then I will sell my SEO services website with all the customers, along with an explanation of how I do everything on there to make things run smoothly. I will then turn to creating WSOs, and you can expect to find them here for free.

———————————————————————

This is completely a side note. I am thinking about creating a how-to-do-keyword-research-for-free e-book. I have bought a few different software packages, downloaded a couple from here, and read countless e-books on the subject. Now I have finally cracked the code on how to do effective keyword research and retrieve an EMD for the newly found keyword. I used to spend hours and find nothing. Now I can find something in a few minutes. I have a process I follow, and it helps me find these golden little nuggets.

Here is one little hint. Most people look for at least 1,000 local searches a month and at least $1 CPC. Let us assume 1% click on your ad. AdSense pays about 68%, which means you would earn about $6.80 in one month. (Good thing we can rank for more than one keyword in the search engines.) Now let's say we filtered out anything lower than a dollar. I came along and set the filter to anything above $0.50 CPC. I found a keyword with 5k searches a month at $0.53 per click. According to my calculations, with the above example I would make $18.02. That is almost 3 times as much as the minimum requirement most people have.
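The arithmetic above as a quick sketch — the 1% CTR and 68% revenue share are the assumptions stated in the post, not guarantees:

```python
def monthly_adsense_estimate(searches, cpc, ctr=0.01, revenue_share=0.68):
    # monthly clicks = searches * CTR; earnings = clicks * CPC * publisher share
    return searches * ctr * cpc * revenue_share


print(round(monthly_adsense_estimate(1000, 1.00), 2))  # the $6.80 example
print(round(monthly_adsense_estimate(5000, 0.53), 2))  # the $18.02 example
```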

Of course other factors come into play, such as ranking the same page for multiple keywords. Plus, ad placement affects click-through rates. My point is: do not follow the herd unless you want to follow them to the slaughterhouse.

performance – Benchmark of a query's execution time versus the number of rows retrieved

We currently have a financial report that performs a lot of complex aggregate functions, joins 10+ tables, runs for around 3.5 hours, and retrieves around 500,000 records.
One of our clients wants this report to run faster, but our development manager thinks that 3.5 hours is reasonable given our current table structures.

I know a query’s performance largely depends on the number of joins, indexes, partitions, aggregate functions, etc. But I’m curious whether there’s a benchmark for acceptable execution time for a given number of rows retrieved. I remember someone mentioning a document about it, but I’ve searched all over and couldn’t find any.
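I'm not aware of any standard rows-versus-seconds reference; acceptable time depends entirely on the plan, hardware, and data. What you can do is measure your own query's throughput reproducibly and track it across schema changes. A minimal sketch, using an in-memory SQLite table as a stand-in for the real report query:

```python
import sqlite3
import time


def rows_per_second(connection, sql):
    """Time a query, returning (row count, elapsed seconds, rows/second)."""
    start = time.perf_counter()
    rows = connection.execute(sql).fetchall()
    elapsed = time.perf_counter() - start
    return len(rows), elapsed, len(rows) / elapsed if elapsed else float('inf')


# Demo: an in-memory table standing in for the report's result set.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')
conn.executemany('INSERT INTO t VALUES (?)', [(i,) for i in range(10_000)])
count, elapsed, rate = rows_per_second(conn, 'SELECT x FROM t')
print(f'{count} rows in {elapsed:.4f}s ({rate:,.0f} rows/s)')
```

Run against the production query before and after an index or schema change, the throughput number gives you a concrete baseline to argue from, which is more defensible than any external benchmark.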

Perhaps someone here has any articles to share?

Thank you!