Cool Stuff About Big O Notation and Data Structures

Computer science fundamentals help us understand how to solve problems efficiently using different tools and techniques.

Big O notation for algorithm analysis is a way to measure how fast a program runs as the input size grows. Think of it like measuring how long it takes to find a book in different sizes of libraries. In a small library with 10 books, searching might be quick. But in a huge library with millions of books, some search methods will be much slower than others. Big O helps us compare these methods mathematically and choose the best one.

Stacks and queues are essential data structures that organize information in specific ways. A stack works like a pile of plates: you can only add or remove from the top (Last-In-First-Out). This is useful in programs that need to track things like browser history or undo operations. A queue works like a line at a store: the first person in line gets served first (First-In-First-Out). This helps manage tasks in order, such as printing documents or handling customer service requests.

Binary search trees give us a way to organize data so we can find things quickly. Imagine a tree where each branch point holds a value, with smaller values going to the left and larger values to the right. This organization lets us find any value much faster than checking every item one by one. For example, to find a number between 1 and 1000, we can start at 500, then go left if our number is smaller or right if it is larger, cutting the search space in half each time.

These concepts work together to help programmers create efficient solutions. When we understand how fast our programs run (Big O), how to organize data (stacks and queues), and how to search through information quickly (binary search trees), we can build better software that solves real-world problems effectively. This knowledge is crucial for creating everything from simple mobile apps to complex systems that handle millions of users.

1/23/2023


[Attached image: Component 2.3 revision notes on Big O notation, listing O(1), O(log n) (the binary logarithm, O(log₂ n)), O(n), O(n…]

Understanding Algorithm Complexity and Data Structures

Big O notation for algorithm analysis serves as the foundation for measuring algorithm efficiency. When analyzing algorithms, we must consider both time and space complexity to make informed decisions about implementation choices. This notation helps developers predict how an algorithm's performance will scale with increasing input sizes.

Time complexity classifications range from constant time O(1) to factorial time O(n!), with several important categories in between. Logarithmic time O(log n) algorithms, like binary search, scale extremely well because they repeatedly halve the problem space. Linear time O(n) algorithms process each input element exactly once, while quadratic time O(n²) algorithms, common in nested loops, grow with the square of the input size.

Definition: Big O notation describes the upper bound of an algorithm's growth rate, helping us understand how it will perform with large inputs.
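The gap between linear and logarithmic growth can be made concrete by counting steps. The sketch below (plain Python, purely illustrative) counts how many times an input of size n can be halved, which is the worst-case number of probes a binary search makes:

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1 —
    the worst-case number of probes in a binary search over n items."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(1_000_000))  # 19 — versus up to 1,000,000 steps for a linear scan
```

Doubling the input adds only one extra step, which is exactly the scaling property O(log n) describes.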


Search and Sort Algorithm Analysis

Searching algorithms demonstrate varying levels of efficiency based on their approach. Linear search examines each element sequentially with O(n) complexity, while binary search achieves O(log n) by repeatedly dividing the search space in half. Stacks and queues complement these operations: iterative tree and graph searches keep pending items on a stack (depth-first) or in a queue (breadth-first).

Sorting algorithms showcase different efficiency trade-offs. Bubble sort and insertion sort, while simple to implement, have O(n²) worst-case complexity. More sophisticated algorithms like merge sort maintain O(n log n) complexity even in worst-case scenarios, though they require additional space complexity considerations.

Example: Binary search requires a sorted array and repeatedly divides the search space in half, checking if the target value is in the lower or upper portion.
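A minimal sketch of the binary search just described, assuming a sorted Python list (function and variable names are illustrative):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1       # target must be in the upper portion
        else:
            high = mid - 1      # target must be in the lower portion
    return -1

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # 5
```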


Data Structure Implementation and Usage

Stacks implement Last-In-First-Out (LIFO) behavior through push and pop operations, making them ideal for managing program execution and parsing expressions. Their applications include checking balanced parentheses and managing recursive function calls. Queue implementations support First-In-First-Out (FIFO) operations through enqueue and dequeue methods.
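The balanced-parentheses application mentioned above can be sketched with a Python list acting as the stack (the function name is illustrative):

```python
def is_balanced(expression):
    """Use a stack to check that every bracket is closed in LIFO order."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expression:
        if ch in '([{':
            stack.append(ch)                 # push: remember the opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                 # mismatched or missing opener
    return not stack                         # leftover openers mean unbalanced

print(is_balanced('(a[b]{c})'))  # True
print(is_balanced('(]'))         # False
```

The most recently opened bracket must be the first one closed, which is exactly the Last-In-First-Out behavior a stack provides.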

Linear queues face space utilization challenges as elements are removed, while circular queues optimize array usage by wrapping around to reuse empty positions. These structures find practical applications in print spooling, task scheduling, and simulation systems.

Highlight: Circular queues improve efficiency by reusing space, making them particularly valuable in memory-constrained environments.
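A minimal circular queue along the lines described — a fixed-size array whose head index wraps around so freed slots are reused (class and method names are illustrative):

```python
class CircularQueue:
    """Fixed-capacity FIFO queue that wraps around to reuse freed slots."""

    def __init__(self, capacity):
        self.data = [None] * capacity
        self.head = 0       # index of the front element
        self.size = 0

    def enqueue(self, item):
        if self.size == len(self.data):
            raise OverflowError("queue is full")
        tail = (self.head + self.size) % len(self.data)  # wrap around the array
        self.data[tail] = item
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        item = self.data[self.head]
        self.head = (self.head + 1) % len(self.data)     # advance and wrap
        self.size -= 1
        return item

q = CircularQueue(3)
for job in ('doc1', 'doc2', 'doc3'):
    q.enqueue(job)               # a print spooler filling up
print(q.dequeue())               # doc1 — first in, first out
q.enqueue('doc4')                # reuses the slot doc1 freed
print(q.dequeue())               # doc2
```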


Tree Data Structures and Traversal

Understanding binary search trees starts with fundamental tree properties. Trees are connected, acyclic graphs that represent hierarchical relationships. Binary trees restrict each node to at most two children, enabling efficient searching and sorting operations.

Tree traversal algorithms provide different ways to visit all nodes systematically. Pre-order, in-order, and post-order traversals serve different purposes in data processing and analysis. Binary search trees maintain ordered relationships between nodes, supporting O(log n) search operations when balanced.

Vocabulary: A binary search tree maintains the property that all left child values are less than their parent, and all right child values are greater.
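The BST property can be sketched directly in code — insertion keeps smaller values on the left and larger values on the right, so a search can discard half the remaining tree at every node (names below are illustrative):

```python
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(root, value):
    """Insert value, keeping smaller values left and larger values right."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def search(root, value):
    """Descend left or right at each node, halving the search space."""
    while root is not None:
        if value == root.value:
            return True
        root = root.left if value < root.value else root.right
    return False

root = None
for v in (500, 250, 750, 125, 375):
    root = insert(root, v)
print(search(root, 375))  # True
print(search(root, 400))  # False
```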


Understanding Tree Data Structures and Traversal Methods

A binary search tree represents a specialized hierarchical data structure that maintains elements in a sorted order, making search operations highly efficient. Unlike regular binary trees, BSTs follow strict ordering rules where the left subtree contains values smaller than the root node, while the right subtree holds larger values.

Trees serve multiple critical purposes in computer science applications. Compilers extensively use syntax trees to parse and process programming language code. The ordered nature of binary search trees in data structures enables faster searching compared to linear data structures. Additionally, trees provide an elegant way to represent and evaluate Boolean expressions by organizing operators and operands hierarchically.

Tree traversal algorithms determine how we systematically visit each node. Breadth-first traversal processes nodes level by level, starting from the root and moving horizontally before going deeper. This approach is particularly useful when you need to process nodes in order of their distance from the root. In contrast, depth-first traversal explores one complete branch before backtracking to process other branches; its memory use grows with the tree's depth rather than its width, so it is economical on wide, bushy trees.

Definition: A binary search tree is a binary tree where for each node, all elements in the left subtree are smaller than the node's value, and all elements in the right subtree are larger.
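The two traversal strategies can be sketched on a small hypothetical tree, stored here as an adjacency dictionary (the node labels and shape are illustrative assumptions):

```python
from collections import deque

# A -> B, C; B -> D, E (a small example tree)
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': [], 'D': [], 'E': []}

def breadth_first(root):
    """Visit nodes level by level using a FIFO queue."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])     # children join the back of the queue
    return order

def depth_first(root):
    """Explore one complete branch before backtracking (pre-order)."""
    order = [root]
    for child in tree[root]:
        order.extend(depth_first(child))
    return order

print(breadth_first('A'))  # ['A', 'B', 'C', 'D', 'E'] — level by level
print(depth_first('A'))    # ['A', 'B', 'D', 'E', 'C'] — one branch at a time
```

Swapping the queue for a stack is the only structural difference between the two strategies, which is why stacks and queues keep reappearing in traversal code.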


Advanced Sorting Algorithms and Their Applications

Sorting algorithms play a fundamental role in organizing data efficiently. The bubble sort algorithm, while simple to implement, repeatedly compares adjacent elements and swaps them if they're in the wrong order. Though inefficient for large datasets, bubble sort can be suitable for small lists or educational purposes.

Insertion sort builds a sorted portion of the array incrementally by taking elements from the unsorted portion and inserting them in their correct positions. This algorithm performs well on nearly sorted arrays and small datasets. The merge sort algorithm employs a divide-and-conquer strategy by recursively splitting the array into smaller subarrays, sorting them, and then merging them back together.
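A compact sketch of merge sort's divide-and-conquer strategy — split, sort each half, then merge the sorted halves:

```python
def merge_sort(items):
    """Recursively split the list, then merge the sorted halves — O(n log n)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: take the smaller head
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

Note that the merge step builds a new list, which is the extra space cost the text mentions.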

Quick sort, another efficient divide-and-conquer algorithm, works by selecting a pivot element and partitioning the array around it. Elements smaller than the pivot go to one side, while larger elements go to the other. This process continues recursively until the entire array is sorted. Quick sort generally offers better performance than bubble and insertion sort for large datasets.
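A sketch of quick sort's partitioning step; picking the middle element as the pivot is an illustrative assumption, and real implementations usually partition in place:

```python
def quick_sort(items):
    """Partition around a pivot, then sort each side recursively."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]    # everything left of the pivot
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]    # everything right of the pivot
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]
```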

Highlight: While bubble sort has O(n²) complexity, more sophisticated algorithms like quick sort and merge sort achieve O(n log n) average-case complexity, making them significantly more efficient for large datasets.


Linked Lists and Dynamic Data Management

Linked lists provide a flexible way to store and manage sequential data dynamically. Unlike arrays, linked lists don't require contiguous memory allocation, as each node contains both data and a reference (pointer) to the next node in the sequence. This structure allows for efficient insertion and deletion operations at any position in the list.

Traversing a linked list involves following the chain of node references from the head node until reaching the end. When adding new elements, you can either insert at the beginning (constant time operation) or at a specific position based on certain ordering criteria. Deletion requires updating the references of adjacent nodes to maintain list continuity.

The dynamic nature of linked lists makes them particularly useful in scenarios where the size of the data structure needs to change frequently. However, they trade off direct access capability for this flexibility, as accessing elements requires traversing from the beginning of the list.

Example: In a linked list representing a playlist, each song (node) contains the song data and a pointer to the next song. Adding or removing songs simply involves updating these pointers, without needing to shift other elements.
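The playlist example can be sketched directly — each node stores a title and a pointer to the next node, and insertion at the head only rewires one pointer (class and function names are illustrative):

```python
class SongNode:
    """One playlist entry: the song title plus a pointer to the next node."""
    def __init__(self, title, next_node=None):
        self.title, self.next = title, next_node

def insert_front(head, title):
    """O(1) insertion: the new node simply points at the old head."""
    return SongNode(title, head)

def to_list(head):
    """Traverse from the head, following next pointers until the end."""
    titles = []
    while head is not None:
        titles.append(head.title)
        head = head.next
    return titles

playlist = None
for song in ('song C', 'song B', 'song A'):   # each insert lands at the front
    playlist = insert_front(playlist, song)
print(to_list(playlist))  # ['song A', 'song B', 'song C']
```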


Dijkstra's Algorithm and Graph Theory Applications

Dijkstra's algorithm solves the shortest path problem in weighted graphs by systematically exploring paths from a starting node to all other nodes. The algorithm maintains two sets of nodes: visited and unvisited, along with a table tracking the current shortest known distance to each node and the previous node in that path.

The algorithm iteratively selects the unvisited node with the smallest tentative distance, marks it as visited, and updates the distances to its neighboring nodes if a shorter path is found through the current node. This process continues until all nodes have been visited or the destination node is reached.

This powerful algorithm finds applications in various real-world scenarios, from GPS navigation systems to network routing protocols. Its effectiveness lies in guaranteeing the optimal solution for graphs with non-negative edge weights.

Vocabulary: Edge weights represent the cost or distance between connected nodes in a weighted graph, which Dijkstra's algorithm uses to determine the optimal path.
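A sketch of the algorithm as described: a priority queue repeatedly yields the unvisited node with the smallest tentative distance, and each neighbour's distance is updated when a shorter path is found. The five-node graph and its edge weights below are illustrative assumptions:

```python
import heapq

def dijkstra(graph, start):
    """Return the shortest cost and previous node for every vertex."""
    costs = {node: float('inf') for node in graph}
    previous = {node: None for node in graph}
    costs[start] = 0
    frontier = [(0, start)]              # priority queue of (cost, node)
    visited = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)                # smallest tentative cost is now final
        for neighbour, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < costs[neighbour]:      # found a shorter path
                costs[neighbour] = new_cost
                previous[neighbour] = node
                heapq.heappush(frontier, (new_cost, neighbour))
    return costs, previous

# Hypothetical weighted graph (directed edges; weights are assumptions)
graph = {
    'A': {'B': 8, 'C': 5},
    'B': {'D': 1},
    'C': {'D': 6},
    'D': {'E': 2},
    'E': {},
}
costs, previous = dijkstra(graph, 'A')
print(costs)     # {'A': 0, 'B': 8, 'C': 5, 'D': 9, 'E': 11}
print(previous)  # {'A': None, 'B': 'A', 'C': 'A', 'D': 'B', 'E': 'D'}
```

Following the `previous` pointers backwards from any node reconstructs the shortest path itself, not just its cost.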


Understanding Graph Traversal and Shortest Path Algorithms

Graph traversal is a fundamental concept in computer science that involves systematically exploring nodes and edges within a graph structure. When working with weighted graphs, finding the shortest path between nodes becomes a critical operation that has numerous real-world applications.

The process of finding the shortest path begins with initializing two essential lists: a visited list and an unvisited list. The visited list keeps track of nodes we've fully explored, while the unvisited list contains nodes we've discovered but haven't fully processed. Each node maintains three crucial pieces of information: its cost from the start node, its previous node in the path, and its current status.

Definition: A visited list in graph traversal algorithms contains nodes that have been fully explored, meaning we've examined all possible paths through that node.

As we traverse the graph, we continuously update the costs and previous nodes. Starting from node A with a cost of 0, we explore its neighbours and calculate their costs. When we visit node B, its cost becomes 8 (through path A→B), and when we reach node C, its cost is 5 (through path A→C). This systematic exploration ensures we find the most efficient path to each node.


Implementing Dijkstra's Algorithm in Practice

The implementation of Dijkstra's algorithm demonstrates how we can efficiently find shortest paths in weighted graphs. The algorithm maintains a running list of costs and continuously updates them as better paths are discovered.

In our example, we see the progression through steps 5 and 6, where nodes D and E are processed. Node D is added to the visited list with a cost of 9, reached through node B. Subsequently, node E is processed with a final cost of 11, reached through node D. This shows how the algorithm builds the optimal path incrementally.

Example: Consider a network of cities connected by roads of varying distances. Dijkstra's algorithm would help find the shortest route between any two cities, just as our example shows with nodes A through E.

The practical applications of this algorithm extend beyond simple graph problems. It's used in GPS navigation systems, network routing protocols, and social network analysis. The algorithm's efficiency makes it particularly valuable in real-world scenarios where finding optimal paths quickly is crucial.

Highlight: The key to understanding shortest path algorithms is recognizing how they systematically build optimal paths by maintaining and updating cost information for each node while exploring the graph structure.

