31 changes: 23 additions & 8 deletions common-content/en/module/complexity/big-o/index.md
@@ -1,11 +1,11 @@
+++
title = "Big-O"
+time = 30
+emoji = "📈"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 30
-emoji= "📈"
[objectives]
1="Categorise algorithms as O(lg(n)), O(n), O(n^2), O(2^n)"
+++
@@ -29,13 +29,13 @@ line [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]

Complete the coursework [Data Structures and Algorithms: Space and Time Complexity](https://www.wscubetech.com/resources/dsa/time-complexity).

-This is in your backlog and you do not need to do it now, but you might like to open it in a tab.
+This is in your backlog. You do not need to do it right now, but it might help to do so. If not, you might like to open it in a tab.

{{</note>}}

<details><summary>

-☺️ **Constant:** The algorithm takes the same amount of time, regardless of the input size.
+😀 **Constant:** The algorithm takes the same amount of time, regardless of the input size.

</summary>

@@ -48,6 +48,8 @@ y-axis "Computation Time" 0 --> 10
line [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

An example is getting the first character of a string. No matter how long the string is, we know where the first character is, and we can get it.
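
A minimal sketch of this in JavaScript (the function name is ours, and we assume a non-empty string):

```js
// Constant time: one step, however long the input is.
function firstCharacter(text) {
  return text[0]; // Reads a single character by position.
}

firstCharacter("hi"); // "h"
firstCharacter("h".repeat(1000000)); // Still one step: "h"
```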

</details>

<details>
@@ -62,8 +64,9 @@ line [2.3, 3.0, 3.4, 3.7, 3.9, 4.1, 4.2, 4.4, 4.5, 4.6]

<summary>

-😐 **Logarithmic:** The runtime grows proportionally to the [logarithm](https://www.bbc.co.uk/bitesize/guides/zn3ty9q/revision/1) of the input size.</summary>
+☺️ **Logarithmic:** The runtime grows proportionally to the [logarithm](https://www.bbc.co.uk/bitesize/guides/zn3ty9q/revision/1) of the input size.</summary>

An example is finding a value in a sorted list. Each time, we can look at the middle of the remaining entries and keep only the half that could contain our value, halving the number of entries we need to consider at every step.
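
A minimal sketch of this idea in JavaScript (binary search; the names are ours for illustration):

```js
// Logarithmic time: each comparison discards half of the remaining entries.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1; // Keep only the upper half.
    else high = mid - 1; // Keep only the lower half.
  }
  return -1; // Not found.
}

binarySearch(["ant", "bee", "cat", "dog"], "cat"); // 2
```
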
</details>

<details>
@@ -78,7 +81,11 @@ line [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

<summary>

-😨 **Linear:** The runtime grows proportionally to the input size.</summary>
+🙂 **Linear:** The runtime grows proportionally to the input size.</summary>

An example is finding an element by value in an un-sorted list. To be sure we find the element, we may need to look through every element in the list and check if it's the one we're looking for.

If we double the length of the list, we need to check twice as many elements.
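
A minimal sketch in JavaScript (the names are ours for illustration):

```js
// Linear time: in the worst case we check every element once.
function indexOfValue(items, target) {
  for (let i = 0; i < items.length; i++) {
    if (items[i] === target) return i; // Found it.
  }
  return -1; // Checked all n elements without finding it.
}

indexOfValue([8, 3, 5, 1], 1); // 3 - found after checking all four elements
```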

</details>

@@ -108,9 +115,11 @@ line [100, 400, 900, 1600, 2500, 3600, 4900, 6400, 8100, 10000]

What does this mean? It means that the time is the square of the input size: n\*n.

An example is finding which elements in an array are present more than once. For each element, we need to check every other element in the same array to see if they're equal. If we double the number of elements in the array, we _quadruple_ the number of checks we need to do.
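
A minimal sketch in JavaScript (the names are ours; this version really does make n \* n checks):

```js
// Quadratic time: compare every element with every other element.
function findDuplicates(items) {
  const duplicates = [];
  for (let i = 0; i < items.length; i++) {
    for (let j = 0; j < items.length; j++) {
      if (i !== j && items[i] === items[j] && !duplicates.includes(items[i])) {
        duplicates.push(items[i]);
      }
    }
  }
  return duplicates;
}

findDuplicates([1, 2, 1, 3]); // [1]
```
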
> **@SallyMcGrath** (Member, May 1, 2025): TBH I had intended these just as a visual, that's why I rm the examples - but if you want them, I'm ok with this

<summary>

-😰 **Quadratic:** The runtime grows proportionally to the square of the input size.</summary>
+😨 **Quadratic:** The runtime grows proportionally to the square of the input size.</summary>

</details>

@@ -144,6 +153,8 @@ Oh where have we seen this sequence of numbers before? ;)

😱 **Exponential:** The runtime grows exponentially with the input size.</summary>

An example is making a list of every possible _combination_ of the elements in a list (so if we have `[1, 2, 3]`, all the combinations are: `[]`, `[1]`, `[2]`, `[3]`, `[1, 2]`, `[1, 3]`, `[2, 3]`, `[1, 2, 3]`).
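
A minimal sketch in JavaScript (the names are ours for illustration):

```js
// Exponential time: each extra element doubles the number of combinations,
// so n elements produce 2^n combinations.
function combinations(items) {
  let result = [[]];
  for (const item of items) {
    // For every combination so far, also make a copy that includes `item`.
    result = result.concat(result.map((combo) => [...combo, item]));
  }
  return result;
}

combinations([1, 2, 3]).length; // 8, i.e. 2^3
```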

</details>

You will explore this theory in your backlog, but you will find that you already have a basic understanding of this idea. No really! Let's look at these algorithms in real life:
@@ -163,8 +174,12 @@ You will explore this theory in your backlog, but you will find that you already
[LABEL=Quadratic Time]
- Everyone at a party shaking hands with everyone else. If you double the number of people (n), the number of handshakes increases much faster (roughly n \* n). This is like nesting a loop inside a loop.
[LABEL=Exponential Time]
-- Trying every possible combination to unlock a password. Each extra character dramatically increases the possibilities. This is like naive recursion; we'll talk about this more later.
+- Trying every possible combination to unlock a password. Each extra character dramatically increases the possibilities.
{{< /label-items >}}

> [!TIP]
> Big-O notation also describes space complexity (how memory use grows). Sometimes an algorithm's time complexity is different from its space complexity. We have focused on time here, but you'll meet space complexity analysis in the assigned reading.

Big-O notation is focused on the _trend_ of growth, not the exact growth.

Think about strings: one character may take up one byte or four. If we double the length of the string, we don't check which characters are in the string. We just think about the **trend**. The string will take _about_ twice as much space. If the string only has four-byte characters, and we add one-byte characters, the string is _still_ growing linearly, even though it may not take _exactly_ double the space.
4 changes: 2 additions & 2 deletions common-content/en/module/complexity/caching/index.md
@@ -1,11 +1,11 @@
+++
title = "Caching"
+time = 15
+emoji = "🛍️"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 15
-emoji= "🛍️"
[objectives]
1="Identify and explain how web browsers benefit from caching"
2="Demonstrate how caching can trade memory for CPU"
4 changes: 2 additions & 2 deletions common-content/en/module/complexity/invalidation/index.md
@@ -1,11 +1,11 @@
+++
title = "Cache Invalidation"
+time = 15
+emoji = "⛓️‍💥"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 15
-emoji= "⛓️‍💥"
[objectives]
1="Identify and explain staleness risks with caching, and the difficulty of invalidation"
+++
4 changes: 2 additions & 2 deletions common-content/en/module/complexity/memoisation/index.md
@@ -1,11 +1,11 @@
+++
title = "Memoisation"
+time = 15
+emoji = "📝"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 15
-emoji= "📝"
[objectives]
1="Define memoisation"
+++
15 changes: 9 additions & 6 deletions common-content/en/module/complexity/memory-consumption/index.md
@@ -1,12 +1,12 @@
+++
title = "Memory consumption"
description="Memory is finite"
+time = 30
+emoji = "🥪"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 30
-emoji= "🥪"
[objectives]
1="Quantify the memory used by different arrays"
+++
@@ -19,7 +19,7 @@ Think back to Chapter 7 of <cite>How Your Computer Really Works</cite>.

```mermaid
graph LR
-CPU -->|️ Fastest: Smallest| Cache -->|Fast: Small| RAM -->|Slow : Big| Disk -->|Slowest: Vast| Network
+CPU -->|️ Fastest and Smallest| Cache -->|Fast and Small| RAM -->|Slow and Big| Disk -->|Slowest and Vast| Network
```

At each stage there are **limits** to **how fast** you can get the data and **how much** data you can store. Given this constraint, we need to consider how much memory our programs consume.
@@ -34,11 +34,14 @@ const userRoles = ["Admin", "Editor", "Viewer"]; //An array of 3 short strings
const userProfiles = [ {id: 1, name: "Farzaneh", role: "Admin", preferences: {...}}, {id: 2, name: "Cuneyt", role: "Editor", preferences: {...}} ]; // An array of 2 complex objects
```

-Different kinds of data have different memory footprints:
+Different kinds of data have different memory footprints. All data is fundamentally stored as bytes, so we can build an intuition for how much memory a piece of data takes:

-- Numbers or booleans use less memory than objects
+- Numbers are typically stored as 8 bytes. In some languages, you can define numbers which take up less space (but can store a smaller range of values).
+- Each character in an ASCII string takes 1 byte. More complex characters may take more bytes. The biggest characters take up to 4 bytes.
- The longer a string, the more bytes it consumes.
-- Objects and arrays need memory for their internal organisation _as well_ as the data itself.
+- Objects and arrays are stored in different ways in different languages. But they need to store _at least_ the information contained within them.
+- This means an array of 5 elements will use _at least_ as much memory as the 5 elements would on their own.
+- And objects will use _at least_ as much memory as all of the _values_ inside the object (and in some languages, all of the keys as well).
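
A rough sketch of this intuition, assuming Node.js (`Buffer.byteLength` measures the UTF-8 encoding of a string, not any particular engine's internal layout):

```js
// ASCII characters take 1 byte each; the biggest characters take 4 bytes.
const ascii = "hello"; // 5 characters, 1 byte each
const emoji = "👋"; // 1 visible character, 4 bytes in UTF-8

console.log(Buffer.byteLength(ascii, "utf8")); // 5
console.log(Buffer.byteLength(emoji, "utf8")); // 4
```
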
> **Member**: Again, is this a bit too much info? We don't want to overload them.
>
> **Member (Author)**: Maybe... Let's maybe have a chat about it?
>
> I changed it because the previous content is kind of misleading - in most compiled programming languages, arrays and objects take exactly as much space as their contents/members/fields, so I wanted to avoid instilling an intuition that they have overhead where they don't necessarily...
>
> **Member**: Hmm, it's a good point. I was trying to steer away from the python size of debacle I've encountered before with trainees and have oversteered.

More complicated elements or more properties need more memory. It matters what things are made of. All of this data added up is how much _space_ our program takes.

24 changes: 14 additions & 10 deletions common-content/en/module/complexity/n+1/index.md
@@ -1,11 +1,11 @@
+++
title = "N+1 Query Problem"
+time = 60
+emoji = "🎟️"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 60
-emoji= "🎟️"
[objectives]
1="Define the n+1 query problem"
2="List effective strategies to reduce database queries"
@@ -45,7 +45,14 @@ We've already seen that every query adds network delay and processing time. This

The server has to handle each request individually, consuming resources (CPU, memory, connections). If many users trigger this N+1 pattern at once, the database can slow down for everyone, or even fall over entirely.

-This N+1 problem can happen with any database interaction if you loop and query individually. Understanding this helps you write backend code that doesn't accidentally overload the database.
+This N+1 problem can happen with any database interaction if you loop and query individually. Understanding this helps you write code that doesn't accidentally overload the database.
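
A hypothetical sketch of both patterns (`db.query` is an assumed helper with PostgreSQL-style placeholders, not part of Purple Forest):

```js
// The N+1 pattern: 1 query for the list, then N more queries in a loop.
async function loadBloomsSlowly(db) {
  const users = await db.query("SELECT id FROM users WHERE active = true");
  for (const user of users) {
    user.blooms = await db.query(
      "SELECT * FROM blooms WHERE user_id = $1",
      [user.id]
    );
  }
  return users;
}

// The batched alternative: 2 queries in total, however many users there are.
async function loadBloomsBatched(db) {
  const users = await db.query("SELECT id FROM users WHERE active = true");
  const blooms = await db.query(
    "SELECT * FROM blooms WHERE user_id = ANY($1)",
    [users.map((u) => u.id)]
  );
  return { users, blooms };
}
```
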

+{{<
+multiple-choice
+question="What is the N+1 Query Problem?"
+answers="Fetching N items plus 1 extra backup item. | Making 1 query to get a list, then N separate queries to get details for each item in the list. | A query that is N times too complex. | Trying N+1 different network endpoints."
+feedback="No, but flip this and try again? | Right! That's a clear description. | No, this is so vague it describes nothing. | No, it's not about network endpoints."
+correct="1">}}

### 📦 What to do instead

@@ -55,11 +62,8 @@ The real `/home `endpoint avoids these problems by using efficient strategies:
**Caching**: Store results so you don't have to ask the network again. Ask for only new changes in future.
**Pagination**: Ask for only the first page of results. Load more later if the user scrolls or clicks "next".

All these are ways to save the data we need, close to where we need it. But each strategy also has downsides.

-{{<
-multiple-choice
-question="What is the N+1 Query Problem?"
-answers="Fetching N items plus 1 extra backup item. | Making 1 query to get a list, then N separate queries to get details for each item in the list. | A query that is N times too complex. | Trying N+1 different network endpoints."
-feedback="No, but flip this and try again? | Right! That's a clear description. | No, this is so vague it describes nothing. | No, it's not about network endpoints."
-correct="1">}}
+* Batching may reduce our responsiveness. It will probably take longer to fetch three users' blooms than one user's blooms. If we'd just asked for one user's blooms, then the next user's, we probably could've shown the user _some_ results sooner. Batching forces us to wait for _all_ of the results before we show anything.
+* Caching may result in stale results. If we store a user's most recent blooms, and when they re-visit the page we don't ask the database for the most recent blooms, it's possible we'll return old results missing the newest blooms.
+* Pagination means the user doesn't have complete results up-front. If they want to ask a question like "has this user ever bloomed the word cheese", they may need to keep scrolling and searching to find the answer (with each scroll requiring a separate network call and database lookup). A sketch of this strategy follows below.
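
As a sketch of the pagination strategy (`db.query` and the page size are assumptions):

```js
const PAGE_SIZE = 20;

// Fetch just one page of blooms, newest first; load more only if asked.
async function getBloomsPage(db, userId, page) {
  return db.query(
    "SELECT * FROM blooms WHERE user_id = $1 ORDER BY timestamp DESC LIMIT $2 OFFSET $3",
    [userId, PAGE_SIZE, page * PAGE_SIZE]
  );
}
```
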
@@ -1,11 +1,11 @@
+++
title = "Network as a bottleneck"
+time = 15
+emoji = "⏳"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 15
-emoji= "⏳"
[objectives]
1="Explain limitations of needing to make network calls (e.g. from a backend to a database)"
+++
12 changes: 7 additions & 5 deletions common-content/en/module/complexity/operations/index.md
@@ -1,11 +1,11 @@
+++
title = '"Expensive" Operations'
+time = 30
+emoji= "🧮"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 30
-emoji= "🧮"
[objectives]
1="Explain what the significant/expensive operations for a particular algorithm are likely to be"
2="Quantify the number of significant operations taken by a particular algorithm"
@@ -15,7 +15,7 @@ Let's think about Purple Forest from the [Legacy Code](https://github.com/CodeYo

When we build the timeline of blooms on the homepage, we call an endpoint `/home`. This returns an array of objects, blooms, produced by people we follow, plus our own blooms, sorted by timestamp. We stuff this in our state object (and cache _that_ in our local storage).

-There are many different ways we could get and show this information. Some ways are {{<tooltip title="better">}}
+There are many different ways we could get and show this information in our frontend. Some ways are {{<tooltip title="better">}}
Here we are defining better as faster. We might at other times define better as _simpler_, _clearer_, or _safer_.
{{</tooltip>}} than others.

@@ -36,7 +36,7 @@ What if we had tried any of the following strategies:
1. Request our own blooms
1. Merge all the arrays
1. Sort by timestamp
-1. Display blooms!
+1. Display blooms

#### 3. Get ALL Blooms & People, then Loop & Filter

@@ -47,7 +47,7 @@ What if we had tried any of the following strategies:
1. Sort by timestamp
1. Display blooms

-Given what we've just thought about, how efficient are these programs? How could you make them more efficient? Write your ideas down in your notebook.
+Given what we've just thought about, how efficient are these programs? Which is going to be fastest or slowest? Which is going to use the most or least memory? How could you make them more efficient? Write your ideas down in your notebook.

Our end state is always to show the latest blooms that meet our criteria. How we produce that list determines how quickly our user gets their page. This is very very important. After just **three seconds**, half of all your users have given up and left.

@@ -58,3 +58,5 @@ The Purple Forest application does not do most of this work on the front end, bu
1. Number of network calls

This is because some operations are more {{<tooltip title="expensive">}}Expensive operations consume a lot of computational resources like CPU time, memory, or disk I/O.{{</tooltip>}} than others.

The _order_ also matters. In all of the above strategies, we filter the blooms _before_ sorting them. Sorting isn't a constant-time operation, so it takes more time to sort more data. If in the first strategy we had sorted _all_ of the blooms before we filtered down to just the ones we cared about, we would have spent a lot more time sorting blooms we don't care about.
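
A minimal sketch of filter-before-sort (`blooms` and `followedIds` are assumed inputs, with `followedIds` a `Set` of user ids):

```js
// Filter first, so the expensive sort only touches blooms we care about.
function buildTimeline(blooms, followedIds) {
  return blooms
    .filter((bloom) => followedIds.has(bloom.authorId))
    .sort((a, b) => b.timestamp - a.timestamp); // Newest first.
}
```
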
4 changes: 2 additions & 2 deletions common-content/en/module/complexity/pre-computing/index.md
@@ -1,11 +1,11 @@
+++
title = "Pre-computing"
+time = 30
+emoji = "🔮"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 30
-emoji= "🔮"
[objectives]
1="Identify a pre-computation which will improve the complexity of an algorithm"
+++
4 changes: 2 additions & 2 deletions common-content/en/module/complexity/trade-offs/index.md
@@ -1,11 +1,11 @@
+++
title = "Trade-offs"
+time = 15
+emoji = "⚖️"
[build]
render = 'never'
list = 'local'
publishResources = false
-time = 15
-emoji= "⚖️"
[objectives]
1="Give examples of trading off memory for CPU"
2="Give examples of choosing where work is done in system design"