AI 1


Problem 1: Worst Case Time Complexity

Let's break down the given code step by step:

def fun(arr):
    n = len(arr)  # Step 1: Find the length of the array
    if n % 2 == 0:  # Step 2: Check if the length is even
        while n > 1:  # Step 3: If even, repeatedly divide n by 4 until n becomes 1 or less
            n = n // 4
            print(arr[n])  # Step 4: Print the element at the new index 'n'
    else:  # Step 5: If n is odd
        for i in range(0, n):  # Step 6: Loop over the entire array
            print(arr[i])  # Step 7: Print each element
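
For a quick sanity check of both branches (the sample arrays are my own illustration, not part of the problem):

fun([0, 1, 2, 3, 4, 5, 6, 7])  # even length 8: n becomes 2, then 0; prints arr[2], then arr[0]
fun([1, 2, 3, 4, 5])           # odd length 5: prints every element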

Code Behavior:

  1. Input: The function takes an array arr as input.

  2. Length Calculation: n = len(arr) calculates the number of elements in the array.

  3. Even Case (n % 2 == 0):

    • If the array length is even, the code enters a while loop where it repeatedly divides n by 4 until n becomes 1 or smaller.

    • It prints the element at index n after each division.

  4. Odd Case (n % 2 != 0):

    • If the array length is odd, the code uses a for loop to iterate over all elements and prints each one.


Time Complexity Analysis

Case 1: When n is Even

The code enters a while loop where n is repeatedly divided by 4. The loop condition is n > 1, so let's calculate how many times n can be divided by 4.

  • The while loop divides n by 4 at each step, which means that the loop will run until n becomes less than or equal to 1.

  • The number of times you can divide n by 4 is approximately:

$\log_4(n) = \frac{\log_2(n)}{2}$

Time Complexity:

  • Since the loop runs for a logarithmic number of iterations, the time complexity for the even case is O(log n); the quick check below confirms the count.
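
A minimal sketch (my own check, not part of the original problem) that reproduces the even-case loop and compares its iteration count with $\log_4(n)$:

import math

def even_case_iterations(n):
    # Mirror the while loop from fun(): divide n by 4 until it is 1 or less
    count = 0
    while n > 1:
        n = n // 4
        count += 1
    return count

for n in [4, 16, 64, 1024, 4096]:
    print(n, even_case_iterations(n), math.log(n, 4))

For powers of 4 the count matches $\log_4(n)$ exactly (up to floating-point rounding in the printed logarithm).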

Case 2: When n is Odd

In this case, the code enters a for loop that iterates over each element in the array. The loop runs from i = 0 to i = n - 1, so it performs n iterations.

Time Complexity:

  • Since the loop iterates through the entire array, the time complexity for the odd case is O(n).


Combined Time Complexity

  • Even Case: O(log n) (due to the logarithmic behavior of dividing n by 4 in the while loop).

  • Odd Case: O(n) (due to the linear iteration through the array).

Since the running time depends on the parity of n, the two cases must be stated separately:

  • O(n) when n is odd.

  • O(log n) when n is even.

In the worst-case scenario, when the length of the array is odd, the time complexity will be O(n), because the code needs to iterate through every element.


Explanation of How Time Complexity is Calculated

To find the time complexity of a function, you need to analyze the number of times loops are executed or recursive calls are made relative to the input size n. In this case, the time complexity is determined by how many times the loop runs, either iterating through the whole array or reducing the problem size by a factor in each step.

  • For the even case, the while loop reduces n exponentially (dividing by 4), so the number of iterations is logarithmic.

  • For the odd case, the for loop directly iterates through the array, so the number of iterations is linear.

Thus, time complexity depends on the structure of the loops and how they manipulate the input size during execution.

Problem 2: Identical Elements

Sorting a list of identical elements with insertion sort is the best case for the algorithm: each pass performs a single comparison, finds the element already in place, and stops immediately, so the time complexity is O(n). The implementation below counts comparisons to confirm this.

class Sorter:
    def __init__(self, data):
        self.data = data
    
    def insertion_sort(self):
        n = len(self.data)
        if n < 1:
            return self.data  # Empty list: nothing to sort
        
        # Counting the number of comparisons
        comparison_count = 0
        
        for i in range(1, n):
            j = i
            while j > 0:
                comparison_count += 1
                if self.data[j] >= self.data[j - 1]:
                    break  # Element already in place; stop scanning left early
                # Otherwise the pair is out of order: swap and keep moving left
                self.data[j], self.data[j - 1] = self.data[j - 1], self.data[j]
                j -= 1
        print(f"Total comparisons: {comparison_count}")
        return self.data

# Example usage with identical elements
L = [7, 7, 7, 7, 7]
sorter = Sorter(L)
sorted_list = sorter.insertion_sort()
print(sorted_list)
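
Running the same sorter on a reverse-sorted list of the same length makes the contrast with the worst case visible (a quick experiment added for illustration):

# Identical elements: best case, n - 1 = 4 comparisons
Sorter([7, 7, 7, 7, 7]).insertion_sort()

# Reverse-sorted input: worst case, n(n-1)/2 = 10 comparisons
Sorter([5, 4, 3, 2, 1]).insertion_sort()

With identical elements every inner-loop check hits the break immediately, so each pass costs exactly one comparison.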

Problem 3: Recurrence Relation and Time Complexity of Quick Sort

The worst-case time complexity for Quick Sort occurs when the chosen pivot divides the list into highly unbalanced partitions, such as when the pivot is always the smallest or largest element in the sublist. In this scenario, one of the partitions contains nearly all the elements, while the other partition contains none or just one element. This results in the algorithm taking more time to sort the list.

Recurrence Relation for Quick Sort

In the worst case, the partitioning always produces one empty (or single-element) partition and another of size n−1, so the recurrence relation is:

$T(n) = T(n-1) + O(n)$

Where:

  • T(n) is the time complexity for sorting $n$ elements.

  • T(n−1) is the time complexity for sorting n−1 elements (since one element is partitioned out, the remaining elements are recursively sorted).

  • O(n) is the time taken for partitioning (comparing each element to the pivot).

This recurrence expands as follows:

$T(n) = T(n-1) + O(n)$
$T(n-1) = T(n-2) + O(n-1)$
$\dots$
$T(1) = O(1)$

Summing these gives:

$T(n) = O(n) + O(n-1) + O(n-2) + \dots + O(1)$

This results in:

$T(n) = O(n^2)$

Worst-Case Time Complexity: O(n^2)

Thus, the worst-case time complexity for Quick Sort is O(n^2).

Example of Worst-Case

Let's take an example where the array is already sorted, and the pivot is always the smallest element:

Array: [1,2,3,4,5]

  1. First Partition: Pivot = 1

    • Left Sublist: Empty

    • Right Sublist: [2,3,4,5]

  2. Second Partition: Pivot = 2

    • Left Sublist: Empty

    • Right Sublist: [3,4,5]

  3. Third Partition: Pivot = 3

    • Left Sublist: Empty

    • Right Sublist: [4,5]

  4. Fourth Partition: Pivot = 4

    • Left Sublist: Empty

    • Right Sublist: [5]

In this case, each partition step compares every remaining element to the pivot, and roughly $n$ such partitioning steps are performed. Thus, the time complexity is O(n^2).
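
A minimal Quick Sort sketch that always picks the first element as pivot (an assumption made here purely to force the worst case) confirms the quadratic comparison count on sorted input:

def quicksort_count(arr):
    # Returns (sorted list, number of element-to-pivot comparisons)
    if len(arr) <= 1:
        return arr, 0
    pivot, rest = arr[0], arr[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    sorted_left, c_left = quicksort_count(left)
    sorted_right, c_right = quicksort_count(right)
    # Every element in rest was compared against the pivot at this level
    return sorted_left + [pivot] + sorted_right, len(rest) + c_left + c_right

_, comparisons = quicksort_count(list(range(1, 11)))
print(comparisons)  # 45, i.e. n(n-1)/2 for n = 10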

Average Case Time Complexity: O(n log n)

In the average case, Quick Sort performs much better. If the pivot divides the array into two roughly equal parts, the recurrence relation is:

$T(n) = 2T\left(\frac{n}{2}\right) + O(n)$

This solves to:

$T(n) = O(n \log n)$

Explanation: The recurrence relation for Quick Sort in the average and best-case scenarios is:

$T(n) = 2T\left(\frac{n}{2}\right) + O(n)$

This recurrence relation implies that:

  • We are dividing the problem into two subproblems, each of size $\frac{n}{2}$.

  • The term $O(n)$ accounts for the time it takes to perform the partitioning (i.e., dividing the array around the pivot).

Solving the Recurrence Relation: We can solve this using the recursion tree method or Master Theorem. Let's explore both approaches.


Approach 1: Using Recursion Tree

Let's build a recursion tree for $T(n) = 2T\left(\frac{n}{2}\right) + O(n)$:

  1. At the first level (original problem):

    • Work done: $O(n)$

    • Subproblems: 2 subproblems of size $\frac{n}{2}$

  2. At the second level:

    • For each subproblem $T\left(\frac{n}{2}\right)$, the work done at this level is $O\left(\frac{n}{2}\right)$ per subproblem, and there are 2 subproblems.

    • Total work: $2 \times O\left(\frac{n}{2}\right) = O(n)$

  3. At the third level:

    • Each of the two subproblems from the second level splits into two more subproblems of size $\frac{n}{4}$.

    • Work done at this level: $4 \times O\left(\frac{n}{4}\right) = O(n)$

  4. This process continues until the subproblems reach size 1.

At each level of the recursion tree, the total work is $O(n)$. Since we are halving the size of the subproblems at each level, the depth of the recursion tree is $\log_2(n)$.

Thus, the total work done is:

$T(n) = \underbrace{O(n) + O(n) + \cdots + O(n)}_{\log n \text{ levels}} = O(n \log n)$
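
The recurrence can also be checked numerically. A short script (my own illustration, taking the $O(n)$ term as exactly $n$ and $T(1) = 1$) evaluates it for powers of two:

import math

def T(n):
    # T(n) = 2T(n/2) + n, with T(1) = 1
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in [16, 256, 4096]:
    print(n, T(n), round(n * math.log2(n)))

For powers of two this yields exactly $T(n) = n \log_2 n + n$, which is $O(n \log n)$.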

Approach 2: Using Master Theorem

The recurrence relation follows the general form of the Master Theorem:

$T(n) = aT\left(\frac{n}{b}\right) + O(n^d)$

For Quick Sort:

  • $a = 2$ (there are two subproblems),

  • $b = 2$ (each subproblem is half the size of the original problem),

  • $d = 1$ (the partitioning step takes linear time, $O(n)$).

The critical exponent is $\log_b{a} = \log_2{2} = 1$.

Now, we compare $d$ with $\log_b{a}$:

  1. If $d < \log_b{a}$, the time complexity is $O(n^{\log_b{a}})$.

  2. If $d = \log_b{a}$, the time complexity is $O(n^d \log n)$.

  3. If $d > \log_b{a}$, the time complexity is $O(n^d)$.

In this case, $d = \log_b{a} = 1$, so the time complexity is:

$T(n) = O(n^1 \log n) = O(n \log n)$

Thus, the recurrence relation:

$T(n) = 2T\left(\frac{n}{2}\right) + O(n)$

solves to:

$T(n) = O(n \log n)$

This shows that the average and best-case time complexity of Quick Sort is $O(n \log n)$, which is significantly better than the worst-case time complexity $O(n^2)$.


Summary of Quick Sort Time Complexities:

  • Worst Case: O(n^2) (unbalanced partitions, e.g., sorted arrays).

  • Average Case: O(n log n) (balanced partitions).

  • Best Case: O(n log n) (perfectly balanced partitions).

Problem 4: Correct Order of Increasing Growth Rate

Let's arrange the given complexities in order of increasing growth rate:

  1. $\log(\log n)$ — Double Logarithmic Growth

  2. $\log n \cdot \log n$ — Logarithmic Squared Growth

  3. $n \cdot \log n$ — Linearithmic Growth

  4. $n \cdot \sqrt{n}$ — Linear-Root Growth

Now, let's explain the reasoning behind this ordering:


1. $\log(\log n)$ — Double Logarithmic Growth

  • Growth Rate: This is the slowest-growing function in the list. Double logarithmic functions grow extremely slowly as $n$ increases.

  • Explanation: The logarithm itself grows slowly, and applying the logarithm again makes the growth even slower. For large $n$, $\log(\log n)$ barely increases compared to any other term in the list.

2. $\log n \cdot \log n$ — Logarithmic Squared Growth

  • Growth Rate: This grows faster than $\log(\log n)$, but still slower than any function involving $n$.

  • Explanation: The logarithm grows slowly, but squaring the logarithm makes it grow a bit faster than a single logarithmic term. It's still much slower than linearithmic or linear terms.

3. $n \cdot \log n$ — Linearithmic Growth

  • Growth Rate: This is a combination of linear and logarithmic growth. While it grows faster than pure logarithmic functions, it is still slower than polynomial growth.

  • Explanation: Linearithmic functions like $n \cdot \log n$ appear in many efficient algorithms (such as Quick Sort and Merge Sort). The logarithmic term grows slowly, but when multiplied by $n$, the growth becomes faster than purely logarithmic, though still slower than quadratic.

4. $n \cdot \sqrt{n}$ — Linear-Root Growth

  • Growth Rate: This is the fastest-growing function in this list. It is equivalent to $n^{1.5}$, i.e., sub-quadratic growth.

  • Explanation: This grows faster than $n \cdot \log n$ because $\sqrt{n}$ grows faster than $\log n$, and multiplying both by $n$ preserves that ordering. The result is larger than linearithmic but smaller than quadratic growth.


Final Ordered List (from slowest to fastest growth):

  1. $\log(\log n)$

  2. $\log n \cdot \log n$

  3. $n \cdot \log n$

  4. $n \cdot \sqrt{n}$

This order is based on the relative growth rates of logarithmic, linearithmic, and root functions as $n$ increases.
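
Evaluating the four functions at a few sample values of $n$ (base-2 logarithms, chosen just for this check) makes the ordering concrete:

import math

for n in [2**10, 2**20, 2**30]:
    lg = math.log2(n)
    print(f"n = 2^{int(lg)}: "
          f"log(log n) = {math.log2(lg):.2f}, "
          f"(log n)^2 = {lg**2:.0f}, "
          f"n log n = {n * lg:.2e}, "
          f"n*sqrt(n) = {n * math.sqrt(n):.2e}")

Each column grows strictly faster than the one before it, and the gaps widen as $n$ increases.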

Problem 5: Stack Usage

To check if a word is a palindrome (a word that reads the same forwards and backwards), we can use a stack to help with the reversal of the word. Since a stack operates on a Last In, First Out (LIFO) principle, we can push each character of the word into the stack and then pop the characters out to reverse the word. If the reversed word is the same as the original word, it's a palindrome.

Plan:

  1. Push each character of the word onto the stack.

  2. Pop each character from the stack to reverse the word.

  3. Compare the reversed word with the original word. If they are the same, the word is a palindrome.

Code Implementation:

class Stack:
    def __init__(self):
        self.stack = []  # Initialize an empty list to represent the stack

    def isempty(self):
        # Return True if the stack is empty, else return False
        return len(self.stack) == 0

    def push(self, v):
        # Add an element to the top of the stack
        self.stack.append(v)

    def pop(self):
        # Remove and return the top element of the stack if it's not empty
        if not self.isempty():
            return self.stack.pop()
        return None  # Return None if the stack is empty

    def __str__(self):
        # Custom string representation of the stack
        return str(self.stack)


def is_palindrome(word):
    # Create an instance of Stack
    stack = Stack()

    # Step 1: Push all characters of the word onto the stack
    for char in word:
        stack.push(char)

    # Step 2: Pop characters from the stack to get the reversed word
    reversed_word = ''
    while not stack.isempty():
        reversed_word += stack.pop()

    # Step 3: Compare the original word with the reversed word
    return word == reversed_word


# Example usage
word1 = "racecar"
word2 = "hello"

print(f"{word1} is a palindrome: {is_palindrome(word1)}")  # Output: True
print(f"{word2} is a palindrome: {is_palindrome(word2)}")  # Output: False

Explanation:

  1. Stack Class:

    • This class is the same as the stack code from before. It allows us to push and pop elements from a stack (which is just a Python list).

  2. is_palindrome Function:

    • Step 1: We push all characters of the input word into the stack.

    • Step 2: We pop characters from the stack one by one, which effectively reverses the order of the characters, and we store these in a new string reversed_word.

    • Step 3: We compare the original word with the reversed word. If they are equal, the word is a palindrome, and we return True. Otherwise, we return False.

Example Output:

racecar is a palindrome: True
hello is a palindrome: False

  • Word 1: "racecar" is a palindrome because it reads the same forwards and backwards.

  • Word 2: "hello" is not a palindrome because it does not read the same forwards and backwards.

Time Complexity:

  • Pushing all characters into the stack: O(n), where n is the length of the word.

  • Popping characters from the stack: O(n), where n is the length of the word.

  • Total time complexity: O(n), where n is the length of the word.

This is an efficient way to check for palindromes using the stack data structure.
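
For comparison, plain Python can perform the same check in one line using slice reversal; the stack version above is still the point of the exercise, since it makes the LIFO mechanics explicit:

def is_palindrome_slice(word):
    # word[::-1] reverses the string; the equality test is O(n)
    return word == word[::-1]

print(is_palindrome_slice("racecar"))  # True
print(is_palindrome_slice("hello"))    # False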

Problem 6: Evaluate Expressions

You can use the stack data structure to evaluate arithmetic expressions, particularly postfix expressions (also known as Reverse Polish Notation). In postfix notation, the operator comes after the operands, and it eliminates the need for parentheses in mathematical expressions.

Plan:

  1. Traverse the postfix expression from left to right.

  2. If an operand (number) is encountered, push it onto the stack.

  3. If an operator is encountered, pop the top two elements from the stack, apply the operator to them, and push the result back onto the stack.

  4. At the end of the expression, the result will be the only element left in the stack.

Example of Postfix Expression:

"3 4 + 2 * 7 /" This corresponds to the infix expression: ((3 + 4) * 2) / 7

Code Implementation:

class Stack:
    def __init__(self):
        self.stack = []  # Initialize an empty stack

    def isempty(self):
        return len(self.stack) == 0  # Check if the stack is empty

    def push(self, v):
        self.stack.append(v)  # Push an element onto the stack

    def pop(self):
        if not self.isempty():
            return self.stack.pop()  # Pop and return the top element
        return None  # If the stack is empty, return None

    def peek(self):
        if not self.isempty():
            return self.stack[-1]  # Return the top element without removing it
        return None

    def __str__(self):
        return str(self.stack)  # Custom string representation of the stack


def evaluate_postfix(expression):
    # Create an instance of Stack
    stack = Stack()

    # Split the expression by spaces to get individual elements (operands and operators)
    tokens = expression.split()

    for token in tokens:
        if token.isdigit():
            # If the token is a digit, push it to the stack
            stack.push(int(token))
        else:
            # If the token is an operator, pop the top two elements
            val2 = stack.pop()  # Second operand (top of stack)
            val1 = stack.pop()  # First operand (just below the top of stack)

            # Perform the operation based on the operator
            if token == '+':
                stack.push(val1 + val2)
            elif token == '-':
                stack.push(val1 - val2)
            elif token == '*':
                stack.push(val1 * val2)
            elif token == '/':
                stack.push(val1 // val2)  # Integer division for simplicity
            elif token == '^':
                stack.push(val1 ** val2)  # Exponentiation

    # The final result will be the only element left in the stack
    return stack.pop()


# Example usage
expression = "3 4 + 2 * 7 /"
result = evaluate_postfix(expression)
print(f"Result of the expression '{expression}' is: {result}")

Explanation:

  1. Stack Class:

    • This class remains the same as before. It allows us to push and pop values onto/from the stack.

  2. evaluate_postfix Function:

    • Step 1: We split the expression into tokens (numbers and operators).

    • Step 2: If a token is a number (operand), we push it onto the stack.

    • Step 3: If a token is an operator, we pop the top two numbers from the stack and apply the operator. The result is then pushed back onto the stack.

    • Step 4: At the end of the evaluation, the result is popped from the stack and returned.

Example Output:

Result of the expression '3 4 + 2 * 7 /' is: 2

This evaluates the expression ((3 + 4) * 2) / 7, which results in 2.

Postfix Expression:

  • Postfix: "3 4 + 2 * 7 /"

  • Infix: ((3 + 4) * 2) / 7

  • Steps:

    1. 3 + 4 = 7

    2. 7 * 2 = 14

    3. 14 / 7 = 2

Time Complexity:

  • Traversal of the expression: O(n), where n is the number of tokens in the expression.

  • Push and pop operations: Each push or pop is O(1), and since we perform these operations for each token, the overall complexity remains O(n).

Use Cases of Stack in Expression Evaluation:

  • Postfix (Reverse Polish Notation): Easier to evaluate using a stack because operators follow operands.

  • Infix Expressions: Stack can also be used for evaluating infix expressions by converting them to postfix.

  • Prefix (Polish Notation): Similar to postfix but with operators preceding operands.

This approach demonstrates how stacks can be used for efficient evaluation of arithmetic expressions, particularly in postfix notation, where the order of operations is naturally respected.
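
The infix case above relies on converting the expression to postfix first. Below is a minimal sketch of the classic shunting-yard algorithm for that conversion, assuming space-separated tokens, parentheses, and left-associative binary operators (the right-associativity of ^ is deliberately ignored for brevity):

def infix_to_postfix(expression):
    # Operator precedence; higher binds tighter
    precedence = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
    output, ops = [], []
    for token in expression.split():
        if token.isdigit():
            output.append(token)  # Operands go straight to the output
        elif token == '(':
            ops.append(token)
        elif token == ')':
            while ops and ops[-1] != '(':
                output.append(ops.pop())
            ops.pop()  # Discard the matching '('
        else:
            # Pop operators of higher or equal precedence before pushing this one
            while ops and ops[-1] != '(' and precedence[ops[-1]] >= precedence[token]:
                output.append(ops.pop())
            ops.append(token)
    while ops:
        output.append(ops.pop())
    return ' '.join(output)

print(infix_to_postfix("( ( 3 + 4 ) * 2 ) / 7"))  # 3 4 + 2 * 7 /

Feeding this output into evaluate_postfix above evaluates the original infix expression.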
