Cheatsheet for DSA

Discover the journey of building a comprehensive programming cheatsheet for Data Structures and Algorithms

10/14/2023 · 6 min read


In the world of competitive programming, Data Structures and Algorithms (DSA) are the backbone of problem-solving. Whether you're preparing for coding interviews, tackling complex programming tasks, or simply looking to strengthen your coding skills, having a reliable cheatsheet can be a game-changer. In this blog post, I'll share my journey of creating a comprehensive programming cheatsheet for DSA and LeetCode, with the goal of making it a valuable resource for you and the programming community.

The Motivation Behind the Cheatsheet

I believe in the power of knowledge sharing, and as I embarked on my journey to master DSA and LeetCode, I recognized the need for a structured and concise reference. The cheatsheet serves as a quick guide, summarizing key concepts and common coding patterns. It's designed to save time during problem-solving and provide clarity when dealing with complex algorithms.

The cheatsheet follows a logical structure, making it easy to navigate. It includes the following sections:

Arrays and Strings:

  • Pros:

    • Fast and constant-time access by index.

    • Efficient memory usage for a fixed size.

  • Cons:

    • Fixed size (not dynamic).

    • Costly insertion/deletion in the middle (shift required).

  • Time Complexity:

    • Access: O(1)

    • Search: O(n)

    • Insertion/Deletion (at end): O(1)

    • Insertion/Deletion (at arbitrary position): O(n)
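
A quick sketch of those array operations using a Python list as a dynamic array (the variable names here are illustrative, not from any particular problem):

```python
# Dynamic array sketch: access is O(1), search is O(n),
# and a mid-array insert is O(n) because elements shift right.
arr = [10, 20, 30, 40]

third = arr[2]        # access by index: O(1) -> 30
idx = arr.index(40)   # linear search: O(n) -> 3

arr.append(50)        # insert at the end: amortized O(1)
arr.insert(1, 15)     # insert at position 1: O(n)
# arr is now [10, 15, 20, 30, 40, 50]
```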

Linked Lists:

  • Pros:

    • Dynamic size (easy insertion/deletion).

    • No pre-allocated capacity; memory grows and shrinks with the list.

  • Cons:

    • Slower access times (O(n)).

    • Additional memory overhead for pointers.

  • Time Complexity:

    • Access: O(n)

    • Search: O(n)

    • Insertion/Deletion (at head): O(1)

    • Insertion/Deletion (at arbitrary position): O(n)
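
A minimal singly linked list illustrating the O(1) head insert and the O(n) search (class and function names are my own, for illustration):

```python
class Node:
    """One link in a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push_front(head, value):
    """Insert at the head: O(1)."""
    return Node(value, head)

def find(head, value):
    """Linear search: O(n)."""
    while head is not None:
        if head.value == value:
            return head
        head = head.next
    return None

head = None
for v in (3, 2, 1):   # builds the list 1 -> 2 -> 3
    head = push_front(head, v)
```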

Stacks and Queues:

  • Pros:

    • Efficient for managing function calls (stack).

    • FIFO order for queues.

  • Cons:

    • Limited random access (stack).

    • Queues may require resizing (array-based).

  • Time Complexity (for both stacks and queues):

    • Push/Enqueue: O(1)

    • Pop/Dequeue: O(1)

    • Peek: O(1)
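
In Python, a plain list works as a stack and collections.deque as a queue, giving the O(1) operations listed above (a sketch, not tied to any specific problem):

```python
from collections import deque

# Stack (LIFO) via a list.
stack = []
stack.append(1)           # push: O(1)
stack.append(2)
top = stack[-1]           # peek: O(1) -> 2
popped = stack.pop()      # pop: O(1) -> 2

# Queue (FIFO) via deque, so dequeue is O(1) (a list's pop(0) is O(n)).
queue = deque()
queue.append("a")         # enqueue: O(1)
queue.append("b")
front = queue[0]          # peek: O(1) -> "a"
served = queue.popleft()  # dequeue: O(1) -> "a"
```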


Hash Maps:

  • Pros:

    • Fast key-value lookup.

    • Versatile and widely used.

  • Cons:

    • Hash collisions may occur.

  • Time Complexity:

    • Insertion/Access/Deletion (average): O(1)
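
Python's dict is a hash map, so the classic frequency-count pattern runs in average O(1) per operation (an illustrative sketch):

```python
# Count word frequencies with a hash map.
counts = {}
for word in ["a", "b", "a", "c", "a"]:
    counts[word] = counts.get(word, 0) + 1   # insert/update: O(1) average

has_b = "b" in counts   # membership test: O(1) average
del counts["c"]         # deletion: O(1) average
```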

Recursion:

  • Pros:

    • Elegant for solving complex problems.

    • May lead to shorter, more readable code.

  • Cons:

    • May result in high memory usage.

    • Slower in some cases.

  • Time Complexity varies depending on the problem.
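A classic example: the recursive factorial. Each call adds a stack frame, which is exactly where the memory-usage caveat above comes from:

```python
def factorial(n):
    """n! defined recursively; O(n) time and O(n) call-stack depth."""
    if n <= 1:            # base case stops the recursion
        return 1
    return n * factorial(n - 1)
```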

Trees:

  • Pros:

    • Efficient searching, insertion, deletion (BST).

    • Balanced trees guarantee logarithmic operations.

  • Cons:

    • Unbalanced BSTs (e.g., built from sorted input) degrade to O(n) operations.

    • Memory overhead (pointers).

  • Time Complexity:

    • Traversal (in-order, pre-order, post-order): O(n)

    • Search/Insertion/Deletion (BST): O(log n) on average, O(n) worst case

    • Heapify: O(n)
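
A minimal BST insert/search sketch (class and function names are my own). Both operations follow one root-to-leaf path, hence O(log n) on a balanced tree and O(n) if it degenerates:

```python
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(root, key):
    """Insert a key, keeping the BST ordering; duplicates are ignored."""
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root

def bst_search(root, key):
    """Walk left/right by comparison until the key is found or we fall off."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

root = None
for k in (5, 3, 8, 1):
    root = bst_insert(root, k)
```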

Graphs:

  • Pros:

    • Modeling complex relationships.

    • Multiple representations for different use cases.

  • Cons:

    • Traversal can be slow on dense graphs (E can approach V^2).

    • Space complexity depends on the representation.

  • Time Complexity:

    • Traversal (BFS, DFS): O(V + E)

    • Shortest path: Dijkstra's O(V^2) (array-based) or O(E log V) (binary heap); Bellman-Ford O(VE)

    • Minimum spanning tree (Prim's, Kruskal's): O(E log V)
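
One common representation is the adjacency list, a dict mapping each vertex to its neighbors, with O(V + E) space (the graph below is a made-up example):

```python
# Directed graph as an adjacency list.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

num_vertices = len(graph)
num_edges = sum(len(nbrs) for nbrs in graph.values())  # directed edges
```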

Sorting Algorithms:

  • Pros, Cons, and Time Complexity:

    • Bubble Sort:

      • Pros: Simple, space-efficient.

      • Cons: Slow for large datasets.

      • Time Complexity: Best O(n), Average O(n^2), Worst O(n^2)

    • Selection Sort:

      • Pros: Simple, space-efficient.

      • Cons: Slow for large datasets.

      • Time Complexity: Best O(n^2), Average O(n^2), Worst O(n^2)

    • Insertion Sort:

      • Pros: Simple, efficient for small datasets.

      • Cons: Slow for large datasets.

      • Time Complexity: Best O(n), Average O(n^2), Worst O(n^2)

    • Merge Sort:

      • Pros: Stable, efficient for large datasets.

      • Cons: Extra memory required.

      • Time Complexity: Best O(n log n), Average O(n log n), Worst O(n log n)

    • Quick Sort:

      • Pros: Efficient in practice, in-place.

      • Cons: O(n^2) worst case with poor pivot choices (e.g., already-sorted input).

      • Time Complexity: Best O(n log n), Average O(n log n), Worst O(n^2)

    • Heap Sort:

      • Pros: In-place, no extra memory.

      • Cons: Not as fast as Quick Sort for small datasets.

      • Time Complexity: Best O(n log n), Average O(n log n), Worst O(n log n)
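
As one worked example, here is a sketch of merge sort, showing both its guaranteed O(n log n) split-and-merge structure and the extra memory it needs for merging:

```python
def merge_sort(items):
    """Stable O(n log n) sort; allocates O(n) extra memory while merging."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```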

Searching Algorithms:

  • Pros, Cons, and Time Complexity:

    • Linear Search:

      • Pros: Simple, works on unsorted data.

      • Cons: Linear time complexity.

      • Time Complexity: Best O(1), Average O(n), Worst O(n)

    • Binary Search:

      • Pros: Efficient on sorted data.

      • Cons: Requires sorted data.

      • Time Complexity: Best O(1), Average O(log n), Worst O(log n)

    • Hashing:

      • Pros: Fast access using keys.

      • Cons: Collisions may occur (requires collision resolution).

      • Time and Memory Complexity depends on the specific hashing technique.
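
The binary search entry above can be sketched in a few lines; note the precondition that the input is already sorted:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1. O(log n):
    each comparison halves the remaining search range."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```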

Dynamic Programming:

  • Pros:

    • Solves problems with overlapping subproblems efficiently.

    • Optimizes time complexity by storing results (memoization).

  • Cons:

    • Requires more memory due to storing solutions.

  • Time and Memory Complexity varies based on the specific problem and approach.
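
The classic illustration is Fibonacci: memoization turns the exponential naive recursion into O(n) time at the cost of O(n) stored results, exactly the trade-off described above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each subproblem is solved once and cached -> O(n) time, O(n) memory,
    versus O(2^n) for the uncached recursion."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```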

Greedy Algorithms:

  • Pros:

    • Simple and efficient for some optimization problems.

    • Often provide near-optimal solutions.

  • Cons:

    • May not guarantee the global optimum.

    • Choice of greedy strategy can be critical.

  • Time and Memory Complexity varies based on the specific algorithm and problem.
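
Greedy coin change is a compact illustration of both points: it is fast and optimal for canonical coin systems like US coins, but the same strategy can miss the global optimum for other coin sets (the function name is my own):

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Repeatedly take the largest coin that fits. Optimal for canonical
    systems such as US coins, but NOT for arbitrary coin sets."""
    used = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used
```

For example, greedy_change(30, coins=(25, 10, 1)) yields 25 + 1+1+1+1+1 (six coins), while the optimum is 10+10+10 (three coins) — the greedy choice of 25 was locally best but globally wrong.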

Tree Traversal:

  • In-order Traversal:

    • Technique:

      • Visit the left subtree

      • Visit the current node

      • Visit the right subtree

    • In-order traversal visits the nodes in a binary tree in the order of left-child, current node, right-child.

    • Use Cases:

      • In-order traversal is often used to visit nodes of binary search trees (BSTs) in ascending order. It helps you retrieve data in sorted order.

      • When you need to process nodes in a way that depends on the order of traversal, such as validating a BST.

  • Pre-order Traversal:

    • Technique:

      • Visit the current node

      • Visit the left subtree

      • Visit the right subtree

    • Pre-order traversal visits the current node before its children, i.e., current node, left-child, right-child.

    • Use Cases:

      • When you want to create a copy of a tree (deep copy) or serialize a tree into a format that can be deserialized.

      • In scenarios where you need to perform operations on the current node before its descendants.

  • Post-order Traversal:

    • Technique:

      • Visit the left subtree

      • Visit the right subtree

      • Visit the current node

    • Post-order traversal visits the current node after its children, i.e., left-child, right-child, current node.

    • Use Cases:

      • When you want to delete a tree or free up memory, post-order traversal ensures that child nodes are processed before the current node.

      • In scenarios where you need to compute some aggregate value from the descendants before processing the current node.
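
The three traversal orders above can be sketched recursively; only the position of the current node's visit changes (a minimal illustration with a three-node tree):

```python
class TreeNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(node):
    """left, node, right -- yields BST keys in ascending order."""
    if node is None:
        return []
    return inorder(node.left) + [node.key] + inorder(node.right)

def preorder(node):
    """node, left, right -- useful for copying or serializing a tree."""
    if node is None:
        return []
    return [node.key] + preorder(node.left) + preorder(node.right)

def postorder(node):
    """left, right, node -- children are processed before the parent."""
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.key]

#       2
#      / \
#     1   3
root = TreeNode(2, TreeNode(1), TreeNode(3))
```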

Brute Force Algorithms:

  • Brute force algorithms are straightforward but inefficient methods to solve problems. They work well for small input sizes.

Divide and Conquer Algorithms:

  • Divide and conquer algorithms solve problems by breaking them into smaller subproblems and combining the solutions.

Heaps and Heapsort:

  • Heaps are efficient for managing priority queues. Heapsort is a sorting algorithm with a guaranteed O(n log n) time complexity.
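
In Python, the standard-library heapq module implements a binary min-heap on a plain list, which also gives heapsort in a few lines:

```python
import heapq

# Min-heap basics.
nums = [5, 1, 8, 3]
heapq.heapify(nums)             # O(n)
smallest = heapq.heappop(nums)  # O(log n) -> 1
heapq.heappush(nums, 0)         # O(log n)

def heap_sort(items):
    """Heapsort sketch: heapify, then pop until empty -> O(n log n)."""
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```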

Graph Search Algorithms:

  • Depth-First Search (DFS):

    • DFS explores a graph by traversing as far as possible along each branch before backtracking.

    • It uses a stack or recursion for implementation.

    • Use Cases:

      • Detecting cycles in a graph.

      • Topological sorting.

      • Solving puzzles like mazes.

  • Breadth-First Search (BFS):

    • BFS explores a graph by visiting all the neighbors of a node before moving on to their neighbors.

    • It uses a queue for implementation.

    • Use Cases:

      • Finding the shortest path in an unweighted graph.

      • Web crawling.

      • Social network analysis.

  • Dijkstra's Algorithm:

    • Dijkstra's algorithm finds the shortest path in a weighted graph with non-negative edge weights.

    • It uses a priority queue or min-heap for implementation.

    • Use Cases:

      • Finding the shortest path in road networks.

      • Network routing.

      • GPS navigation.

  • A* Search Algorithm:

    • A* is an informed search algorithm that combines uniform-cost search with a heuristic to find the shortest path.

    • It uses a priority queue or min-heap for implementation.

    • Use Cases:

      • Pathfinding in video games.

      • Robot navigation.

      • Map applications.

  • Bellman-Ford Algorithm:

    • Bellman-Ford is used to find the shortest path in a graph with negative edge weights and detect negative cycles.

    • It uses dynamic programming and relaxation.

    • Use Cases:

      • Network routing with the possibility of negative weights.

      • Detecting negative cycles in financial networks.

  • Floyd-Warshall Algorithm:

    • Floyd-Warshall finds the shortest paths between all pairs of nodes in a weighted graph.

    • It uses dynamic programming and a matrix.

    • Use Cases:

      • Computing all-pairs shortest paths in a network.

      • Identifying reachability in directed graphs.

  • Depth-Limited Search (DLS):

    • DLS is a variant of DFS that limits the depth of exploration.

    • It's used to bound exploration in very deep or infinite search spaces.

    • Use Cases:

      • Iterative deepening depth-first search (IDDFS).

  • Uniform-Cost Search (UCS):

    • UCS is a variant of Dijkstra's algorithm used in graph traversal with variable edge costs.

    • It considers the cost of reaching a node.

    • Use Cases:

      • Navigation applications with varying road conditions.

  • Bidirectional Search:

    • Bidirectional search explores the graph from both the start and target nodes and meets in the middle.

    • It's used to reduce search space and improve efficiency.

    • Use Cases:

      • Finding shortest paths in large graphs.

  • Greedy Best-First Search:

    • Greedy best-first search selects the node closest to the target based on a heuristic.

    • It doesn't guarantee the shortest path but can be efficient.

    • Use Cases:

      • Heuristic-based search problems.
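
To make the two workhorses above concrete, here is a sketch of BFS shortest path (unweighted, adjacency lists of neighbors) and Dijkstra with a min-heap (weighted, adjacency lists of (neighbor, weight) pairs); the example graphs are made up for illustration:

```python
from collections import deque
import heapq

def bfs_shortest_path(graph, start, goal):
    """Fewest-edges path in an unweighted graph; O(V + E)."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nbr in graph[path[-1]]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(path + [nbr])
    return None  # goal unreachable

def dijkstra(graph, start):
    """Shortest distances from start; non-negative weights; O(E log V)."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already improved
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Example inputs (note the two different adjacency-list shapes).
road = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
weighted = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
```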

The Evolution of the Cheatsheet

I acknowledge that learning and mastery are ongoing processes. As I continue my journey, I plan to enhance this cheatsheet by:

  • Adding more data structures and algorithms

  • Providing in-depth explanations and real-world use cases

  • Offering solutions to specific LeetCode problems

  • Incorporating visual aids and diagrams for better understanding

How You Can Benefit

My vision is to create a cheatsheet that benefits both beginners and experienced programmers. Whether you're preparing for technical interviews, honing your coding skills, or simply looking for a quick reference, this cheatsheet is designed to be your go-to resource. It's not just about the code but also about understanding the "why" behind it.

Conclusion

Creating a comprehensive programming cheatsheet is a labor of love, and I'm excited to share this journey with you. As I continue to build and refine this resource, your feedback and suggestions are invaluable. Together, we can create a powerful tool that simplifies the world of DSA and LeetCode, making it more accessible to all.

Stay tuned for updates, and let's embark on this learning journey together.

Feel free to reach out to me, Sharareh Keshavarzi, with feedback or suggestions.

Happy coding!

Sharareh