c – Which searching algorithm will be better?

Count the number of iterations for both algorithms (binary search and linear search) on the same array, using the C language.

#include <stdio.h>

int main(void)
{
    printf("solve");
    return 0;
}
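
As a starting point, here is a minimal sketch of the counting idea, shown in Python for brevity (the sample array, target value, and counter names are placeholders I chose; the same loop counters translate directly into a C version):

def linear_search_count(arr, target):
    iterations = 0
    for value in arr:
        iterations += 1          # one comparison per loop pass
        if value == target:
            break
    return iterations

def binary_search_count(arr, target):
    iterations = 0
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        iterations += 1          # one probe of the middle element per pass
        mid = (lo + hi) // 2
        if arr[mid] == target:
            break
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return iterations

data = [2, 5, 8, 12, 16, 23, 38, 56]   # binary search needs a sorted array
print(linear_search_count(data, 23))   # 7 iterations
print(binary_search_count(data, 23))   # 2 iterations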

python – Making This “Fancy” Sort Algorithm a Little Better

This is my code:

def add_to_destination_array(destination_arr, src1, iSrc1, src2, iSrc2):
    temp_array_length = len(destination_arr)
    while len(destination_arr) - temp_array_length < len(src1) + len(src2):
        if iSrc1 < len(src1):
            if iSrc2 == len(src2):
                destination_arr.append(src1[iSrc1])
                iSrc1 += 1
            else:
                if src1[iSrc1] <= src2[iSrc2]:
                    destination_arr.append(src1[iSrc1])
                    iSrc1 += 1
                elif src2[iSrc2] <= src1[iSrc1]:
                    destination_arr.append(src2[iSrc2])
                    iSrc2 += 1
        elif iSrc2 < len(src2):
            if iSrc1 == len(src1):
                destination_arr.append(src2[iSrc2])
                iSrc2 += 1
            else:
                if src2[iSrc2] <= src1[iSrc1]:
                    destination_arr.append(src2[iSrc2])
                    iSrc2 += 1
                elif src1[iSrc1] <= src2[iSrc2]:
                    destination_arr.append(src1[iSrc1])
                    iSrc1 += 1


def fancy_sort(array_to_sort):
    destination_array = []
    source1 = [array_to_sort[0]]
    source2 = []
    iSource1 = 0
    iSource2 = 0
    got_first_sublist = False
    first_sublist_start = 0
    second_sublist_start = 0
    array_sorted = False

    for i in range(1, len(array_to_sort)):
        if len(source1) == 0:
            source1.append(array_to_sort[i])
            first_sublist_start = i
        elif not got_first_sublist:
            if source1[i-first_sublist_start-1] <= array_to_sort[i]:
                # Add elements to the first sublist
                source1.append(array_to_sort(i))
            else:
                # First sublist found, now we start finding the second sublist
                source2.append(array_to_sort[i])
                second_sublist_start = i
                got_first_sublist = True
        else:
            if source2[i-second_sublist_start-1] <= array_to_sort[i]:
                # Add elements to the second sublist
                source2.append(array_to_sort[i])
            else:
                # Start adding elements to the destination_array
                add_to_destination_array(destination_array, source1, iSource1, source2, iSource2)
                
                # Reset the variables and start over
                first_sublist_start = i
                source1 = [array_to_sort[i]]
                source2 = []
                iSource1 = 0
                iSource2 = 0
                got_first_sublist = False
    
    # If it didn't completely finish "sorting"
    if len(source1) > 0:
        if len(source1) == len(array_to_sort):
            array_sorted = True
        elif len(source2) > 0:
            add_to_destination_array(destination_array, source1, iSource1, source2, iSource2)
        else:
            for i in source1:
                destination_array.append(i)
    
    # Call the function again to continue sorting
    if not array_sorted:
        return fancy_sort(destination_array)
    return array_to_sort

sorting_array = [31, 72, 32, 10, 95, 50, 25, 18]


print(f"Before: {sorting_array}")

sorted_array = fancy_sort(sorting_array)

print(f"After:  {sorted_array}")

I know it’s not the best, but it still works. This algorithm is supposed to be similar to merge sort, except that it doesn’t actually split the array and then merge the halves back together. Instead, it is supposed to find two already-sorted sub-arrays (or sub-lists, if you want to call them that) and then “merge” them into another array (called the destination array).

Here is the info I was given to create this algorithm:

Algorithm in Plain English
A teacher is trying to alphabetize a collection of papers. She picks up the papers in her hand and, starting at the top of her stack, works her way down until she finds the first paper out of order. That sub-stack is sorted, and is set aside. She does the same thing to find the next sorted sub-stack. These two sub-stacks are then combined into a single sorted stack that she places on the table. She continues through the original stack in her hand, combining pairs of sorted sub-stacks and putting the results on top of the stack on the table. When she is finished, only the stack on the table remains. Now she picks up the stack on the table and again searches for sorted sub-stacks. When she finds a pair of these, they are combined and placed on the table again. This process continues until again all the papers are on the table. When she picks the stack up off of the table and everything is sorted (there is only one sorted sub-stack), then she is done!

Detailed Description of the Algorithm
Start at the beginning of the array and continue until you discover the first element that is not in sorted order. This sub-array may consist of one element, or it may consist of hundreds. We will call this sub-array source1. Now, starting in the slot after source1, find the next sub-array. Again, this may consist of one element or hundreds. We will call this sub-array source2. Note that source1 and source2 are sorted individually. These two sub-arrays correspond to the small piles in the previous paragraph.

Now we will combine these two sub-arrays to form a single sorted sub-array. We will not do this in place. Instead, we will move them into a new sub-array called destination. Now since the two source arrays are sorted, we can assume that the smallest element in the two arrays is at the beginning of one of the two source arrays. In other words, it is at source1[0] or source2[0]. Therefore destination[0] must be filled with either source1[0] or source2[0]. If, for example, source2 has the lower of the two elements, then that element is moved to destination and the index of source2 (which we will call iSource2) is incremented by one. This process is continued until each element from source1 and source2 is moved into destination.

Once the first two sorted sub-arrays are moved to the destination array, then the next two sub-arrays are identified and combined into the destination array. This process continues until each element in the source array is moved into the destination array. Here, we can say that we have completed one pass of the source array.

Note that the sort is not finished yet. We have not sorted the array when we have completed just one pass. The only thing that a single pass accomplishes is to double the size of the sorted sub-arrays. We have to do many passes until the entire destination array becomes a single sorted sub-array. At that point, we can say that the array is sorted.
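
To make the “one pass” structure concrete, here is a minimal sketch of a pass driven purely by indices into the source array, with the destination array as the only extra storage (the helper and variable names are my own, not part of the assignment):

def merge_pass(array_to_sort):
    destination_array = []
    n = len(array_to_sort)
    i = 0
    while i < n:
        # Find the first sorted run: array_to_sort[start1:end1].
        start1 = i
        while i + 1 < n and array_to_sort[i] <= array_to_sort[i + 1]:
            i += 1
        end1 = i + 1
        # Find the second sorted run: array_to_sort[start2:end2] (may be empty).
        start2 = end1
        i = start2
        while i + 1 < n and array_to_sort[i] <= array_to_sort[i + 1]:
            i += 1
        end2 = min(i + 1, n)
        # Merge the two runs into destination_array using two read indices.
        i1, i2 = start1, start2
        while i1 < end1 or i2 < end2:
            if i2 >= end2 or (i1 < end1 and array_to_sort[i1] <= array_to_sort[i2]):
                destination_array.append(array_to_sort[i1])
                i1 += 1
            else:
                destination_array.append(array_to_sort[i2])
                i2 += 1
        i = end2
    return destination_array

def natural_merge_sort(array_to_sort):
    result = list(array_to_sort)
    # Repeat passes until only one sorted run remains.
    while any(result[k] > result[k + 1] for k in range(len(result) - 1)):
        result = merge_pass(result)
    return result

Each call to merge_pass roughly halves the number of sorted runs, so repeating passes (as in natural_merge_sort) eventually leaves a single sorted run, which is the finished array.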


I would really like to get down to using just array_to_sort and destination_array, and then use indices for everything else instead of the two separate source arrays.

Here is a working example.

Help me understand this terrain intersection algorithm

I was looking for a fast way to get a ray intersection point of a terrain defined by a heightmap and I stumbled upon this: https://publications.lib.chalmers.se/records/fulltext/250170/250170.pdf

In part 3.2, I don’t quite understand why we would get an intersection point at the beginning of the while loop, since it starts with the first quadtree node, which is basically the whole map, so there shouldn’t be an intersection point with the ray (except if the ray starts outside the AABB, higher than the maximum height of the terrain, but that is never the case for me).

Thank you very much if someone could make this thing clear.

algorithm – Assessing BigO/small o for forming a dynamic dictionary based on searched key and values pair

I am trying to create a dictionary from a file containing text, based on a matched pattern. Lines containing key_str should become keys, and subsequent lines not matching key_str should become the values associated with those keys in the dictionary. I have the code below working, but I need help with the Big-O analysis. How could I say whether my logic is worst case, best case, or average case? Also, which case is little o and which is big O?

File: file2dict.result-soa1

ml1
/var
/home

cpuml2
/var
/home


Output

my_dict: {ml1: ['/var','/home'], cpuml2: ['/var','/home']}

Code:

import os
homedir = os.environ.get('HOME')

key_str = "ml"
my_dict = {}
val_list = []
key = ''
with open(homedir + '/backup/file2dict.result-soa1') as file2dict:
    for line in file2dict:
        words = line.split()
        for aWord in words:
            if key_str in aWord:
                if key:
                    my_dict[key] = val_list
                    val_list = []
                    key = aWord
                else:
                    key = aWord
            else:
                val_list.append(aWord)
    my_dict[key] = val_list
print(my_dict)

optimization – Why do I need to use the golden ratio in the Golden Section Search algorithm?

The following diagram illustrates essentially how GSS optimization works:
[Diagram: an interval from $a$ to $b$ with interior points $x_1$ and $x_2$.]

  • $a$ is the lower-bound and $b$ is the upper-bound. $x_1$ and $x_2$ are values in between $a$ and $b$.

  • If $f(x_2) > f(x_1)$, then $x_2$ becomes the new $a$ in the next iteration. If $f(x_2) < f(x_1)$, then $x_1$ becomes the new $b$ in the next iteration.

  • $d = x_1 - a = b - x_2$. This is to circumvent the “Law of Bad Luck”, so that the new interval length, $d$, after every iteration is the same regardless of which bound gets moved.
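
For reference, here is a minimal sketch of the scheme those bullet points describe, written for minimization (the function, tolerance, and variable names are placeholders I chose, and the update convention may be mirrored relative to the diagram):

import math

INV_PHI = (math.sqrt(5) - 1) / 2   # 1/phi, roughly 0.618

def golden_section_search(f, a, b, tol=1e-6):
    # Place the two interior points symmetrically inside [a, b].
    x1 = b - INV_PHI * (b - a)
    x2 = a + INV_PHI * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            # The minimum lies in [a, x2]; x1 becomes the new x2, so f1 is reused.
            b, x2, f2 = x2, x1, f1
            x1 = b - INV_PHI * (b - a)
            f1 = f(x1)
        else:
            # The minimum lies in [x1, b]; x2 becomes the new x1, so f2 is reused.
            a, x1, f1 = x1, x2, f2
            x2 = a + INV_PHI * (b - a)
            f2 = f(x2)
    return (a + b) / 2

print(golden_section_search(lambda x: (x - 2) ** 2, 0, 5))   # roughly 2.0

Note how each branch keeps one of the previous interior points (and its already-computed function value) as an interior point of the shrunken interval.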

Question 1: Why does $d = \phi \cdot (b-a)$? Why couldn’t I just use any fraction, like $d = 0.75 \cdot (b-a)$?

Question 2: How do we derive the golden ratio as our fraction of choice to define $d$ each iteration?

According to this resource:

$\frac{a + b}{b} = M$, $\frac{b}{a} = N$, and $M = N$

How do we know that $M = N$ is true? Do we have to assume that to be true? If so, why?
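
For what it’s worth, here is how the ratio falls out once $M = N$ is imposed (a sketch in the resource’s notation, where $a$ and $b$ are the two pieces the interval is split into, not the bounds from the diagram):

$$\frac{a + b}{b} = \frac{b}{a} \;\Rightarrow\; a^2 + ab = b^2 \;\Rightarrow\; \left(\frac{b}{a}\right)^2 - \frac{b}{a} - 1 = 0 \;\Rightarrow\; \frac{b}{a} = \frac{1 + \sqrt{5}}{2} \approx 1.618,$$

which is the golden ratio $\varphi$. On the usual reading, $M = N$ is a self-similarity condition: the whole interval is to its larger piece as the larger piece is to the smaller one, which is what lets one interior point be reused after the interval shrinks.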

complexity theory – Efficient algorithm to compute the diameter of a convex set?

Is there a polynomial algorithm that can compute the diameter (the distance between the furthest points) of a convex set?

It is possible to do this efficiently for a finite set of points, but imagine that the set is described by the intersection of linear inequalities (half-spaces) in a high-dimensional space, so the number of vertices of the set can be exponential in the number of inequalities.

Given a direction, we can easily compute the distance between the two farthest points along that direction using linear programming. The question is: how do we find that direction?
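
As an illustration of that per-direction step, here is a minimal sketch using scipy’s LP solver (the polytope and direction below are placeholders; the set is assumed to be given as Ax <= b):

import numpy as np
from scipy.optimize import linprog

def extent_along_direction(A, b, d):
    # For the polytope {x : A x <= b}, the extent along direction d is
    # max d.x - min d.x; each term is one linear program (linprog minimizes).
    n = A.shape[1]
    lo = linprog(c=d, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    hi = linprog(c=-d, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    return (-hi.fun) - lo.fun

# Example: the unit square [0, 1]^2, measured along the x-axis.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([1, 0, 1, 0], dtype=float)
print(extent_along_direction(A, b, np.array([1.0, 0.0])))   # 1.0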

Subroutine for an algorithm

The data in the table below has been collected by a teacher at the end of two unit assessments. Produce a subroutine for an algorithm that could be used to count the number of students who achieved a grade 8 in both assessments. The data is stored in parallel arrays, and array indexing begins at 0.

Name      Assessment 1   Assessment 2
Ryan      5              5
Liam      8              8
Sophie    5              7
Micheal   3              4
Robert    8              8
Rebecca   5              3
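
A minimal sketch of such a subroutine (the names and parallel lists below mirror the table; grade 8 in both assessments is the counting condition from the prompt):

def count_grade_8_in_both(names, assessment1, assessment2):
    count = 0
    for i in range(len(names)):          # array indexing begins at 0
        if assessment1[i] == 8 and assessment2[i] == 8:
            count += 1
    return count

names = ["Ryan", "Liam", "Sophie", "Micheal", "Robert", "Rebecca"]
assessment1 = [5, 8, 5, 3, 8, 5]
assessment2 = [5, 8, 7, 4, 8, 3]

print(count_grade_8_in_both(names, assessment1, assessment2))   # 2 (Liam and Robert)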

context free – CYK algorithm in theory of computation

For any context-free grammar, there is a parser that takes at most n^3 time to parse a string of length n.

Doubt: I marked it false in a national-level exam. I think it should be any null-free context-free grammar, not any CFG, because we first have to convert the CFG to CNF, and only a null-free CFG can be converted to CNF.

Is this statement correct?