hakk

software development, devops, and other drivel

Learn

Big O Time & Space

The concept of Big O notation is fundamental in computer science and software engineering for analyzing and comparing the efficiency of algorithms and data structures. It provides a standardized way of describing the upper bounds of the time and space complexity of an algorithm or data structure as the input size grows.

Big O Time Complexity:

Big O time complexity describes the worst-case scenario for the amount of time an algorithm takes to complete as a function of the input size (n). It represents an upper bound on the time required for an algorithm to execute, ignoring constant factors and lower-order terms. In simpler terms, it tells us how the runtime of an algorithm scales with the input size.
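As a quick illustration (my sketch, not code from the post), here are three functions whose runtimes grow at different rates with the input size:

```python
def first_element(items):
    # O(1): constant time, independent of input size
    return items[0]

def contains(items, target):
    # O(n): worst case scans every element once
    for x in items:
        if x == target:
            return True
    return False

def has_duplicate(items):
    # O(n^2): nested scans compare every pair of elements
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Doubling the input roughly doubles the work for `contains` but quadruples it for `has_duplicate`; `first_element` is unaffected.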

Read more...

Dynamic Programming (DP)

Dynamic Programming (DP) is a problem-solving technique used to solve problems by breaking them down into simpler subproblems. It is particularly useful when the problem can be divided into overlapping subproblems, meaning that the same subproblems are solved repeatedly. DP stores the results of solved subproblems in a table (usually an array or a matrix) to avoid redundant computations and improve efficiency.
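The overlapping-subproblem idea can be sketched with a memoized Fibonacci (an illustrative example, not taken from the post):

```python
def fib(n, memo=None):
    # Memoization: store each solved subproblem so it is computed only once.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n < 2:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Without the memo table, `fib(n - 1)` and `fib(n - 2)` would recompute the same subproblems over and over, giving exponential time; with it, each value is computed once, giving linear time.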

Here are some key concepts and characteristics of dynamic programming:

Read more...

Hash Tables

Hash tables, also known as hash maps or dictionaries in some programming languages, are data structures that store key-value pairs. They use a technique called hashing to efficiently retrieve and store data based on keys.

Here’s a basic overview of how hash tables work:

  1. Hash Function: A hash function is used to convert keys into array indices. It takes a key as input and computes a hash code, which is typically an integer. This hash code is then mapped to an index in the hash table’s underlying array.

    Read more...
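The hash-function-to-index mapping can be sketched like this (a toy example assuming separate chaining for collisions; not the post's code):

```python
NUM_BUCKETS = 8
table = [[] for _ in range(NUM_BUCKETS)]  # each bucket is a list of [key, value] pairs

def bucket_index(key):
    # hash() computes an integer hash code; modulo maps it into the array.
    return hash(key) % NUM_BUCKETS

def put(key, value):
    bucket = table[bucket_index(key)]
    for pair in bucket:
        if pair[0] == key:
            pair[1] = value  # key exists: overwrite its value
            return
    bucket.append([key, value])

def get(key):
    for k, v in table[bucket_index(key)]:
        if k == key:
            return v
    raise KeyError(key)
```

Two different keys can hash to the same bucket; chaining resolves that by searching the short list within the bucket.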

Heap Data Structure

The heap data structure is a binary tree-based data structure that satisfies the heap property. It is commonly implemented as an array, where the parent-child relationships are determined by the indices of the array elements. There are two main types of heaps: max heaps and min heaps.

Max Heap: In a max heap, every parent node has a value greater than or equal to the values of its children. The maximum value is stored at the root of the heap.
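Python's standard-library `heapq` module gives a quick way to see the heap property in action. It implements a min heap on a plain list; a common trick for a max heap is to push negated values:

```python
import heapq

heap = []
for value in [5, 1, 8, 3]:
    heapq.heappush(heap, value)  # sift up to preserve the heap property

smallest = heapq.heappop(heap)   # min heap: the root always holds the minimum
```

After the pushes, the list satisfies the array form of the heap: the children of index i live at indices 2i + 1 and 2i + 2.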

Read more...

Merge Sort

Merge Sort is another efficient sorting algorithm that follows the divide-and-conquer strategy. It divides the input array into two halves, recursively sorts each half, and then merges them back together.

Here’s how Merge Sort works:

  1. Divide: Divide the unsorted array into two halves.

  2. Conquer: Recursively sort each half.

  3. Merge: Merge the two sorted halves into a single sorted array.

  4. Base Case: The base case of the recursion is when the sub-array has zero or one element, in which case it is already considered sorted.

    Read more...
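The four steps above can be sketched as follows (an illustrative version, not necessarily the post's code):

```python
def merge_sort(arr):
    # Base case: zero or one element is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # Divide and conquer: sort each half
    right = merge_sort(arr[mid:])
    return merge(left, right)      # Merge the two sorted halves

def merge(left, right):
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one side is exhausted; append the rest
    merged.extend(right[j:])
    return merged
```

The merge step does linear work per level and the halving gives log n levels, which is where the O(n log n) running time comes from.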

Quick Sort

Quick Sort is a widely used sorting algorithm known for its efficiency and simplicity. It follows the divide-and-conquer strategy to sort an array or list of elements.

Here’s how Quick Sort works:

  1. Partitioning: Choose a pivot element from the array. Rearrange the array so that all elements less than the pivot come before it, and all elements greater than the pivot come after it. After this partitioning step, the pivot element is in its final sorted position.

    Read more...
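The partitioning idea can be sketched with a simple list-comprehension version (illustrative and not in-place, unlike many production implementations; not the post's code):

```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]                    # choose a pivot element
    less = [x for x in arr if x < pivot]          # elements before the pivot
    equal = [x for x in arr if x == pivot]        # pivot(s) in final position
    greater = [x for x in arr if x > pivot]       # elements after the pivot
    return quick_sort(less) + equal + quick_sort(greater)
```

After one partition, every element in `equal` is already in its final sorted position; the recursion only has to sort the two smaller partitions.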

Recursion

Recursion is a programming technique where a function calls itself directly or indirectly to solve a problem. It’s a powerful and elegant approach commonly used to solve problems that can be broken down into smaller, similar subproblems.

Here are some key points about recursion:

  1. Base Case: Recursion involves breaking down a problem into smaller instances of the same problem. Each recursive function call works on a smaller or simpler version of the original problem. Eventually, the problem becomes small enough that it can be solved directly without further recursion. This stopping condition is called the base case, and it prevents the recursion from continuing indefinitely, leading to a termination of the recursive process.

    Read more...
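The role of the base case is easiest to see in the classic factorial example (my sketch, not from the post):

```python
def factorial(n):
    if n == 0:
        return 1                       # base case: stops the recursion
    return n * factorial(n - 1)        # recursive case: a smaller instance
```

Each call works on a smaller input until `n` reaches 0; without that base case, the calls would never terminate and would eventually exhaust the call stack.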

Stack vs. Heap Memory

Stack and heap are two memory regions used by computer programs to manage memory allocation, each with its own characteristics and purposes.

Stack Memory:

  • Purpose: The stack is used for automatic memory allocation and stores variables that are declared and initialized in functions and methods. It is a region of memory that follows a Last-In-First-Out (LIFO) structure.
  • Allocation: Memory on the stack is automatically allocated and deallocated as functions are called and return. When a function is called, space is allocated on the stack for local variables, function parameters, return addresses, and other bookkeeping information.
  • Access: Access to stack memory is fast and efficient because it involves simple pointer manipulation. Local variables can be accessed directly by their memory addresses relative to the stack pointer.
  • Size Limitations: The size of the stack is typically limited and fixed, defined by the operating system or runtime environment. Exceeding the stack size can lead to a stack overflow, causing program termination.
  • Lifetime: Stack-allocated memory is short-lived and automatically freed when the function or scope in which it is allocated exits.
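Even in a managed language like Python, the stack's fixed size is observable: every function call pushes a frame onto the call stack, and unbounded recursion exhausts it. A small demo (illustrative only; Python surfaces the overflow as a catchable `RecursionError` rather than crashing):

```python
import sys

def recurse(n):
    # each call pushes a new frame onto the call stack
    return recurse(n + 1)

sys.setrecursionlimit(1000)   # cap the allowed stack depth
try:
    recurse(0)
except RecursionError:
    print("call stack exhausted")
```

Heap allocations, by contrast, are not tied to call depth: an object allocated inside a function can outlive the call that created it.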

Heap Memory:

Read more...

Binary Search

Binary Search is a searching algorithm for finding an element’s position, if it exists, in a sorted array.

Implementation
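A typical iterative implementation looks like this (a sketch of the standard approach, not necessarily the post's code):

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid               # found: return the index
        elif arr[mid] < target:
            low = mid + 1            # discard the left half
        else:
            high = mid - 1           # discard the right half
    return -1                        # target is not in the array
```

Each comparison halves the remaining search range, which is what makes the algorithm logarithmic rather than linear.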

Complexity

Time Complexities

Read more...

Queue Data Structure

A Queue is a linear data structure that is open at both ends. One end is always used to insert data (enqueue) and the other is used to remove data (dequeue). It can be thought of as a queue of people waiting to get tickets: the first person in line will be the first one called to purchase their ticket.

Waiting in Queue

Basic Operations
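Python's standard-library `collections.deque` gives O(1) operations at both ends, so it works well as a queue (a quick sketch to show enqueue and dequeue):

```python
from collections import deque

queue = deque()
queue.append("alice")      # enqueue at the rear of the line
queue.append("bob")
first = queue.popleft()    # dequeue from the front: first in, first out
```

A plain Python list also works but `list.pop(0)` is O(n), since every remaining element must shift left; `deque.popleft()` avoids that.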