
Commit 596156e

Edits to Complexity sprint 1 (#1428)
* Edits to Complexity sprint 1
* Update common-content/en/module/complexity/big-o/index.md
  Co-authored-by: Sally McGrath <sally@codeyourfuture.io>
* Update common-content/en/module/complexity/memory-consumption/index.md
  Co-authored-by: Sally McGrath <sally@codeyourfuture.io>
* Restate duplicate encoder problem
* Update common-content/en/module/complexity/big-o/index.md

---------

Co-authored-by: Sally McGrath <sally@codeyourfuture.io>
1 parent 12fb160 commit 596156e

13 files changed

Lines changed: 190 additions & 42 deletions


common-content/en/module/complexity/big-o/index.md

Lines changed: 23 additions & 8 deletions
@@ -1,11 +1,11 @@
 +++
 title = "Big-O"
+time = 30
+emoji = "📈"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 30
-emoji= "📈"
 [objectives]
 1="Categorise algorithms as O(lg(n)), O(n), O(n^2), O(2^n)"
 +++
@@ -29,13 +29,13 @@ line [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
 
 Complete the coursework [Data Structures and Algorithms: Space and Time Complexity](https://www.wscubetech.com/resources/dsa/time-complexity).
 
-This is in your backlog and you do not need to do it now, but you might like to open it in a tab.
+This is in your backlog. You do not need to do it right now, but it might help to. If not, you might like to open it in a tab.
 
 {{</note>}}
 
 <details><summary>
 
-☺️ **Constant:** The algorithm takes the same amount of time, regardless of the input size.
+😀 **Constant:** The algorithm takes the same amount of time, regardless of the input size.
 
 </summary>
 
@@ -48,6 +48,8 @@ y-axis "Computation Time" 0 --> 10
 line [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
 ```
 
+An example is getting the first character of a string. No matter how long the string is, we know where the first character is, and we can get it.
+
 </details>
 
 <details>
@@ -62,8 +64,9 @@ line [2.3, 3.0, 3.4, 3.7, 3.9, 4.1, 4.2, 4.4, 4.5, 4.6]
 
 <summary>
 
-😐 **Logarithmic:** The runtime grows proportionally to the [logarithm](https://www.bbc.co.uk/bitesize/guides/zn3ty9q/revision/1) of the input size.</summary>
+☺️ **Logarithmic:** The runtime grows proportionally to the [logarithm](https://www.bbc.co.uk/bitesize/guides/zn3ty9q/revision/1) of the input size.</summary>
 
+An example is finding a string in a sorted list. Each time we can look in the middle of the list, and halve the number of entries we need to consider next time by looking either in the half before or the half after that element.
 </details>
 
 <details>
@@ -78,7 +81,11 @@ line [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
 
 <summary>
 
-😨 **Linear:** The runtime grows proportionally to the input size.</summary>
+🙂 **Linear:** The runtime grows proportionally to the input size.</summary>
+
+An example is finding an element by value in an un-sorted list. To be sure we find the element, we may need to look through every element in the list and check if it's the one we're looking for.
+
+If we double the length of the list, we need to check twice as many elements.
 
 </details>
 
@@ -108,9 +115,11 @@ line [100, 400, 900, 1600, 2500, 3600, 4900, 6400, 8100, 10000]
 
 What does this mean? It means that the time is the square of the input size: n\*n.
 
+An example is finding which elements in an array are present more than once. For each element, we need to check every other element in the same array to see if they're equal. If we double the number of elements in the array, we _quadruple_ the number of checks we need to do.
+
 <summary>
 
-😰 **Quadratic:** The runtime grows proportionally to the square of the input size.</summary>
+😨 **Quadratic:** The runtime grows proportionally to the square of the input size.</summary>
 
 </details>
 
@@ -144,6 +153,8 @@ Oh where have we seen this sequence of numbers before? ;)
 
 😱 **Exponential:** The runtime grows exponentially with the input size.</summary>
 
+An example is making a list of every _combination_ of every element in a list (so if we have `[1, 2, 3]` and want to make all the combinations: `[]`, `[1]`, `[2]`, `[3]`, `[1, 2]`, `[1, 3]`, `[2, 3]`, `[1, 2, 3]`).
+
 </details>
 
 You will explore this theory in your backlog, but you will find that you already have a basic understanding of this idea. No really! Let's look at these algorithms in real life:
@@ -163,8 +174,12 @@ You will explore this theory in your backlog, but you will find that you already
 [LABEL=Quadratic Time]
 - Everyone at a party shaking hands with everyone else. If you double the number of people (n), the number of handshakes increases much faster (roughly n \* n). This is like nesting a loop inside a loop.
 [LABEL=Exponential Time]
-- Trying every possible combination to unlock a password. Each extra character dramatically increases the possibilities. This is like naive recursion; we'll talk about this more later.
+- Trying every possible combination to unlock a password. Each extra character dramatically increases the possibilities.
 {{< /label-items >}}
 
 > [!TIP]
 > Big-O notation also describes space complexity (how memory use grows). Sometimes an algorithm's time complexity is different from its space complexity. We have focused on time here, but you'll meet space complexity analysis in the assigned reading.
+
+Big-O notation is focused on the _trend_ of growth, not the exact growth.
+
+Think about strings: one character may take up one byte or four. If we double the length of the string, we don't check which characters are in the string. We just think about the **trend**. The string will take _about_ twice as much space. If the string only has four-byte characters, and we add one-byte characters, the string is _still_ growing linearly, even though it may not take _exactly_ double the space.
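To make the quadratic example above concrete, here is a hypothetical sketch in JavaScript (not part of this commit; the function names are invented for illustration). The nested-loop version does roughly n \* n comparisons, while the `Set`-based version visits each element once and so grows linearly.

```js
// Hypothetical sketch, not from the course materials: two ways to find which
// elements of an array appear more than once.

// Quadratic: for each element, check every other element — about n * n checks.
function findDuplicatesQuadratic(items) {
  const duplicates = [];
  for (let i = 0; i < items.length; i++) {
    for (let j = 0; j < items.length; j++) {
      if (i !== j && items[i] === items[j] && !duplicates.includes(items[i])) {
        duplicates.push(items[i]);
      }
    }
  }
  return duplicates;
}

// Linear: walk the array once, remembering what we have already seen.
function findDuplicatesLinear(items) {
  const seen = new Set();
  const duplicates = new Set();
  for (const item of items) {
    if (seen.has(item)) {
      duplicates.add(item);
    }
    seen.add(item);
  }
  return [...duplicates];
}

console.log(findDuplicatesQuadratic([1, 2, 2, 3, 3])); // [2, 3]
console.log(findDuplicatesLinear([1, 2, 2, 3, 3])); // [2, 3]
```

Doubling the array length roughly quadruples the work in the first version but only doubles it in the second; that trend is what Big-O describes.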

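The logarithmic example above (finding a string in a sorted list) is binary search. Here is a hypothetical sketch, also not part of this commit: each comparison halves the range still to be searched, so doubling the length of the list only adds roughly one more comparison.

```js
// Hypothetical sketch, not from the course materials: logarithmic search in a
// sorted array.
function binarySearch(sortedItems, target) {
  let low = 0;
  let high = sortedItems.length - 1;
  while (low <= high) {
    const middle = Math.floor((low + high) / 2);
    if (sortedItems[middle] === target) {
      return middle; // found it
    }
    if (sortedItems[middle] < target) {
      low = middle + 1; // look only in the half after the middle
    } else {
      high = middle - 1; // look only in the half before the middle
    }
  }
  return -1; // not present
}

console.log(binarySearch(["ant", "bee", "cat", "dog", "eel"], "dog")); // 3
console.log(binarySearch(["ant", "bee", "cat", "dog", "eel"], "fox")); // -1
```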
common-content/en/module/complexity/caching/index.md

Lines changed: 2 additions & 2 deletions
@@ -1,11 +1,11 @@
 +++
 title = "Caching"
+time = 15
+emoji = "🛍️"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 15
-emoji= "🛍️"
 [objectives]
 1="Identify and explain how web browsers benefit from caching"
 2="Demonstrate how caching can trade memory for CPU"

common-content/en/module/complexity/invalidation/index.md

Lines changed: 2 additions & 2 deletions
@@ -1,11 +1,11 @@
 +++
 title = "Cache Invalidation"
+time = 15
+emoji = "⛓️‍💥"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 15
-emoji= "⛓️‍💥"
 [objectives]
 1="Identify and explain staleness risks with caching, and the difficulty of invalidation"
 +++

common-content/en/module/complexity/memoisation/index.md

Lines changed: 2 additions & 2 deletions
@@ -1,11 +1,11 @@
 +++
 title = "Memoisation"
+time = 15
+emoji = "📝"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 15
-emoji= "📝"
 [objectives]
 1="Define memoisation"
 +++

common-content/en/module/complexity/memory-consumption/index.md

Lines changed: 9 additions & 6 deletions
@@ -1,12 +1,12 @@
 +++
 title = "Memory consumption"
 description="Memory is finite"
+time = 30
+emoji = "🥪"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 30
-emoji= "🥪"
 [objectives]
 1="Quantify the memory used by different arrays"
 +++
@@ -19,7 +19,7 @@ Think back to Chapter 7 of <cite>How Your Computer Really Works</cite>.
 
 ```mermaid
 graph LR
-CPU -->|️ Fastest: Smallest| Cache -->|Fast: Small| RAM -->|Slow : Big| Disk -->|Slowest: Vast| Network
+CPU -->|️ Fastest and Smallest| Cache -->|Fast and Small| RAM -->|Slow and Big| Disk -->|Slowest and Vast| Network
 ```
 
 At each stage there are **limits** to **how fast** you can get the data and **how much** data you can store. Given this constraint, we need to consider how much memory our programs consume.
@@ -34,11 +34,14 @@ const userRoles = ["Admin", "Editor", "Viewer"]; //An array of 3 short strings
 const userProfiles = [ {id: 1, name: "Farzaneh", role: "Admin", preferences: {...}}, {id: 2, name: "Cuneyt", role: "Editor", preferences: {...}} ]; // An array of 2 complex objects
 ```
 
-Different kinds of data have different memory footprints:
+Different kinds of data have different memory footprints. All data is fundamentally stored as bytes. We can form intuition for how much memory a piece of data takes:
 
-- Numbers or booleans use less memory than objects
+- Numbers are typically stored as 8 bytes. In some languages, you can define numbers which take up less space (but can store a smaller range of values).
+- Each character in an ASCII string takes 1 byte. More complex characters may take more bytes. The biggest characters take up to 4 bytes.
 - The longer a string, the more bytes it consumes.
-- Objects and arrays need memory for their internal organisation _as well_ as the data itself.
+- Objects and arrays are stored in different ways in different languages. But they need to store _at least_ the information contained within them.
+- This means an array of 5 elements will use _at least_ as much memory as the 5 elements would on their own.
+- And objects will use _at least_ as much memory as all of the _values_ inside the object (and in some languages, all of the keys as well).
 
 More complicated elements or more properties need more memory. It matters what things are made of. All of this data added up is how much _space_ our program takes.
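As a rough illustration of the bullet points above, here is a hypothetical back-of-envelope calculation (not part of this commit). It only counts the character and number data itself; real JavaScript engines also spend memory on each array's internal organisation, so treat these numbers as lower bounds.

```js
// Hypothetical sketch: estimating a lower bound for the data inside two arrays.
const userRoles = ["Admin", "Editor", "Viewer"];

// Assume 1 byte per ASCII character, as described above.
const roleBytes = userRoles.reduce((total, role) => total + role.length, 0);
console.log(roleBytes); // 17 — "Admin" (5) + "Editor" (6) + "Viewer" (6) bytes of character data

// Numbers are typically stored as 8 bytes each.
const ids = [1, 2, 3, 4, 5];
console.log(ids.length * 8); // 40 — at least 40 bytes for the numbers themselves
```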

common-content/en/module/complexity/n+1/index.md

Lines changed: 14 additions & 10 deletions
@@ -1,11 +1,11 @@
 +++
 title = "N+1 Query Problem"
+time = 60
+emoji = "🎟️"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 60
-emoji= "🎟️"
 [objectives]
 1="Define the n+1 query problem"
 2="List effective strategies to reduce database queries"
@@ -45,7 +45,14 @@ We've already seen that every query adds network delay and processing time. This
 
 The server has to handle each request individually, consuming resources (CPU, memory, connections). If many users trigger this N+1 pattern at once, the database can slow down for everyone, or even fall over entirely.
 
-This N+1 problem can happen with any database interaction if you loop and query individually. Understanding this helps you write backend code that doesn't accidentally overload the database.
+This N+1 problem can happen with any database interaction if you loop and query individually. Understanding this helps you write code that doesn't accidentally overload the database.
+
+{{<
+multiple-choice
+question="What is the N+1 Query Problem?"
+answers="Fetching N items plus 1 extra backup item. | Making 1 query to get a list, then N separate queries to get details for each item in the list. | A query that is N times too complex. | Trying N+1 different network endpoints."
+feedback="No, but flip this and try again? | Right! That's a clear description. | No, this is so vague it describes nothing. | No, it's not about network endpoints."
+correct="1">}}
 
 ### 📦 What to do instead
 
@@ -55,11 +62,8 @@ The real `/home `endpoint avoids these problems by using efficient strategies:
 **Caching**: Store results so you don't have to ask the network again. Ask for only new changes in future.
 **Pagination**: Ask for only the first page of results. Load more later if the user scrolls or clicks "next".
 
-All these are ways to save the data we need, close to where we need it. But each strategy also has downsides.
+All these are ways to save the data we need, close to where we need it. But each strategy also has downsides.
 
-{{<
-multiple-choice
-question="What is the N+1 Query Problem?"
-answers="Fetching N items plus 1 extra backup item. | Making 1 query to get a list, then N separate queries to get details for each item in the list. | A query that is N times too complex. | Trying N+1 different network endpoints."
-feedback="No, but flip this and try again? | Right! That's a clear description. | No, this is so vague it describes nothing. | No, it's not about network endpoints."
-correct="1">}}
+* Batching may reduce our responsiveness. It will probably take longer to fetch three users' blooms than one user's blooms. If we'd just asked for one user's blooms, then the next user's, we probably could've shown the user _some_ results sooner. Batching forces us to wait for _all_ of the results before we show anything.
+* Caching may result in stale results. If we store a user's most recent blooms, and when they re-visit the page we don't ask the database for the most recent blooms, it's possible we'll return old results missing the newest blooms.
+* Pagination means the user doesn't have complete results up-front. If they want to ask a question like "has this user ever bloomed the word cheese", they may need to keep scrolling and searching to find the answer (with each scroll requiring a separate network call and database lookup).
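Here is a hypothetical sketch of the two query patterns (the function and variable names are invented for illustration, not taken from the Purple Forest codebase). The "database" is an in-memory stand-in so the example runs on its own; in real code each call would be a separate network round trip.

```js
// Hypothetical in-memory "database": a map from user id to that user's blooms.
const bloomsByUser = new Map([
  [1, [{ userId: 1, text: "hello", timestamp: 3 }]],
  [2, [{ userId: 2, text: "cheese", timestamp: 1 }, { userId: 2, text: "hi", timestamp: 5 }]],
]);

async function getFollowedUserIds(userId) {
  // Stand-in for one query, e.g. "which users does userId follow?"
  return [...bloomsByUser.keys()];
}

async function getBloomsForUser(id) {
  // Stand-in for one query per user — the "+N" part of N+1.
  return bloomsByUser.get(id) ?? [];
}

async function getBloomsForUsers(ids) {
  // Stand-in for a single batched query, e.g. WHERE user_id IN (...).
  return ids.flatMap((id) => bloomsByUser.get(id) ?? []);
}

// N+1: one query for the list, then one more query per item in the list.
async function homeTimelineNPlusOne(userId) {
  const followedIds = await getFollowedUserIds(userId); // 1 query
  const blooms = [];
  for (const id of followedIds) {
    blooms.push(...(await getBloomsForUser(id))); // N further queries
  }
  return blooms.sort((a, b) => b.timestamp - a.timestamp);
}

// Batched: one query for the list, then a single query for all the blooms.
async function homeTimelineBatched(userId) {
  const followedIds = await getFollowedUserIds(userId); // 1 query
  const blooms = await getBloomsForUsers(followedIds); // 1 query in total
  return blooms.sort((a, b) => b.timestamp - a.timestamp);
}

homeTimelineNPlusOne(1).then((blooms) => console.log(blooms.length)); // 3 blooms, after 1 + 2 queries
homeTimelineBatched(1).then((blooms) => console.log(blooms.map((b) => b.text))); // ["hi", "hello", "cheese"], after 2 queries
```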

common-content/en/module/complexity/network-as-a-bottleneck/index.md

Lines changed: 2 additions & 2 deletions
@@ -1,11 +1,11 @@
 +++
 title = "Network as a bottleneck"
+time = 15
+emoji = ""
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 15
-emoji= ""
 [objectives]
 1="Explain limitations of needing to make network calls (e.g. from a backend to a database)"
 +++

common-content/en/module/complexity/operations/index.md

Lines changed: 7 additions & 5 deletions
@@ -1,11 +1,11 @@
 +++
 title = '"Expensive" Operations'
+time = 30
+emoji= "🧮"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 30
-emoji= "🧮"
 [objectives]
 1="Explain what the significant/expensive operations for a particular algorithm are likely to be"
 2="Quantify the number of significant operations taken by a particular algorithm"
@@ -15,7 +15,7 @@ Let's think about Purple Forest from the [Legacy Code](https://github.com/CodeYo
 
 When we build the timeline of blooms on the homepage, we call an endpoint `/home`. This returns an array of objects, blooms, produced by people we follow, plus our own blooms, sorted by timestamp. We stuff this in our state object (and cache _that_ in our local storage).
 
-There are many different ways we could get and show this information. Some ways are {{<tooltip title="better">}}
+There are many different ways we could get and show this information in our frontend. Some ways are {{<tooltip title="better">}}
 Here we are defining better as faster. We might at other times define better as _simpler_, _clearer_, or _safer_.
 {{</tooltip>}} than others.
 
@@ -36,7 +36,7 @@ What if we had tried any of the following strategies:
 1. Request our own blooms
 1. Merge all the arrays
 1. Sort by timestamp
-1. Display blooms!
+1. Display blooms
 
 #### 3. Get ALL Blooms & People, then Loop & Filter
 
@@ -47,7 +47,7 @@ What if we had tried any of the following strategies:
 1. Sort by timestamp
 1. Display blooms
 
-Given what we've just thought about, how efficient are these programs? How could you make them more efficient? Write your ideas down in your notebook.
+Given what we've just thought about, how efficient are these programs? Which is going to be fastest or slowest? Which is going to use the most or least memory? How could you make them more efficient? Write your ideas down in your notebook.
 
 Our end state is always to show the latest blooms that meet our criteria. How we produce that list determines how quickly our user gets their page. This is very very important. After just **three seconds**, half of all your users have given up and left.
 
@@ -58,3 +58,5 @@ The Purple Forest application does not do most of this work on the front end, bu
 1. Number of network calls
 
 This is because some operations are more {{<tooltip title="expensive">}}Expensive operations consume a lot of computational resources like CPU time, memory, or disk I/O.{{</tooltip>}} than others.
+
+The _order_ also matters. In all of the above strategies, we filter the blooms _before_ sorting them. Sorting isn't a constant-time operation, so it takes more time to sort more data. If in the first strategy we had sorted _all_ of the blooms before we filtered down to just the ones we cared about, we would have spent a lot more time sorting blooms we don't care about.
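A hypothetical sketch of that ordering point (invented names, not Purple Forest code): both functions return the same timeline, but the first sorts only the blooms that survive the filter, while the second pays to sort every bloom, including ones it then throws away.

```js
// Hypothetical sketch: the same timeline built in two different orders.
function timelineFilterThenSort(allBlooms, followedIds) {
  const followed = new Set(followedIds);
  return allBlooms
    .filter((bloom) => followed.has(bloom.userId)) // keep only the blooms we care about
    .sort((a, b) => b.timestamp - a.timestamp); // sort the (much smaller) filtered array
}

function timelineSortThenFilter(allBlooms, followedIds) {
  const followed = new Set(followedIds);
  return [...allBlooms]
    .sort((a, b) => b.timestamp - a.timestamp) // sorts every bloom, even ones we throw away
    .filter((bloom) => followed.has(bloom.userId));
}

const blooms = [
  { userId: 1, timestamp: 2 },
  { userId: 2, timestamp: 9 },
  { userId: 1, timestamp: 5 },
];
console.log(timelineFilterThenSort(blooms, [1])); // [{ userId: 1, timestamp: 5 }, { userId: 1, timestamp: 2 }]
console.log(timelineSortThenFilter(blooms, [1])); // same result, but sorted all 3 blooms first
```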

common-content/en/module/complexity/pre-computing/index.md

Lines changed: 2 additions & 2 deletions
@@ -1,11 +1,11 @@
 +++
 title = "Pre-computing"
+time = 30
+emoji = "🔮"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 30
-emoji= "🔮"
 [objectives]
 1="Identify a pre-computation which will improve the complexity of an algorithm"
 +++

common-content/en/module/complexity/trade-offs/index.md

Lines changed: 2 additions & 2 deletions
@@ -1,11 +1,11 @@
 +++
 title = "Trade-offs"
+time = 15
+emoji = "⚖️"
 [build]
 render = 'never'
 list = 'local'
 publishResources = false
-time = 15
-emoji= "⚖️"
 [objectives]
 1="Give examples of trading off memory for CPU"
 2="Give examples of choosing where work is done in system design"
