Problem 1
There are a couple of ways to approach this: picking c first and then figuring out n₀, or picking n₀ first and then figuring out c.
Picking c first
6n² + 16 ≤ cn² ∀ n ≥ n₀
let c = 7:     6n² + 16 ≤ 7n² ∀ n ≥ n₀
               16 ≤ n² ∀ n ≥ n₀
               4 ≤ n ∀ n ≥ n₀
let n₀ = 4:    4 ≤ n ∀ n ≥ 4
The last statement is trivially true, so it's proven.
Picking n₀ first
6n² + 16 ≤ cn² ∀ n ≥ n₀
let n₀ = 1:    6n² + 16 ≤ cn² ∀ n ≥ 1
let c = 22:    6n² + 16 ≤ 22n² ∀ n ≥ 1
               6n² + 16 ≤ 6n² + 16n² ∀ n ≥ 1
When n is at least 1, each term on the right will be at least as big as the corresponding term on the left, so it's proven. (We chose c to be 22 because it's the sum of 6 and 16, for the specific purpose of leading ourselves to this outcome.)
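As a quick sanity check (not a proof, since it samples only finitely many values of n), we can verify both choices of constants empirically in Python:

```python
# Check that 6n² + 16 <= cn² holds for each (c, n0) pair, sampling n up to 10,000.
for c, n0 in ((7, 4), (22, 1)):
    assert all(6 * n ** 2 + 16 <= c * n ** 2 for n in range(n0, 10_001))

print('Both choices of (c, n0) hold on the sampled range')
```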
Problem 2
- O(n). There are n/2 loop iterations, each of which swaps two list elements by accessing them via their indices. This is a total of n times that we've read a value from a list element via an index, and n times that we've written one. When we perform 2n operations that each take O(1) time, we've done O(n) work total.
- O(1). The way to understand why is to consider what the function needs to store in memory.
  - The value in the variable i. There is one such value at any given time.
  - The range from which the values of i are determined. Notably, ranges take a constant amount of memory, because they don't store all of their values; since they're formulaic, their values are calculated as needed (one at a time), based on the three values start, stop, and step.
  - Tuples that hold the values of exactly two list elements, so they can be swapped via sequence assignment. There is only one such tuple at any given time.

  The list x is not something we'd count, mainly because it exists both before and after the function is called; it's something we already have, not something the function builds. So, ultimately, what we have is a constant amount of additional memory introduced by the function.
- The rewritten function requires the (asymptotically) same amount of time to run, O(n), because we're still looping over the list x (which contains n elements), and because appending an element to the end of a list (which we're doing in each loop iteration) can be thought of as taking constant time (since we're adding an element at a known index, len(x)). However, the rewritten function requires O(n) memory, because it builds an entirely separate list from the one that was passed into it, and that list contains n elements. (A sketch of both versions of the function appears below.)
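Neither function is reproduced in this answer, but here's a sketch of what the descriptions above imply. The names reverse and reverse_copy are placeholders rather than names from the original problem, so treat the details as assumptions.

```python
def reverse(x):
    # In-place version: n/2 iterations, each swapping a pair of elements
    # via sequence assignment, so O(n) time but O(1) additional memory.
    for i in range(len(x) // 2):
        x[i], x[len(x) - 1 - i] = x[len(x) - 1 - i], x[i]


def reverse_copy(x):
    # Rewritten version: builds an entirely separate list by appending,
    # so still O(n) time, but O(n) additional memory.
    result = []

    for i in range(len(x) - 1, -1, -1):
        result.append(x[i])

    return result
```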
Problem 3
- When we perform algorithm analysis, we're generally doing so before any code has been written, which means we lack the information we'd need to perform an analysis with a more thorough result than O-notation provides, anyway. A lot of the time, that's not an impediment, because our goal is most often to determine whether our approach will meet our basic requirements; knowing a growth rate is often sufficient for determining this, because an algorithm will so often be either substantially better or substantially worse than what we need in practice.
- At this point, there are at least a couple of choices available:
- Decide whether your analysis is telling you that they'll both be fast enough for your needs. If so, you can base your decision on other things — which of them appears to be easier to implement, which of them will lead to code that will be more maintainable over time, which of them is already available in a library you trust (e.g., your language's standard library), which of them will use less memory — and rest assured that running time needs are met. "Fastest" is great, but "fast enough" is usually all we really need.
- If the algorithms are relatively straightforward to implement, build proof-of-concept versions of both. You can skip handling erroneous input and esoteric corner cases, focusing on the core behavior — unless those esoteric cases are the ones that will take the longest to run! — and just get them working generally. At that point, you can compare the running times to see whether there's a significant difference, then choose which one you want to finish. (A sketch of one way to time that comparison appears after this list.)
This is certainly not an exhaustive list of the things you could do next. Various kinds of deeper analysis — especially when you know more about the internals of your programming language, your computer's hardware, and so on — are additional possibilities here.
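As an example of the proof-of-concept comparison suggested above, Python's standard-library timeit module makes rough timing straightforward. The two functions below are hypothetical stand-ins for whatever implementations you've actually built.

```python
import timeit

def algorithm_a(data):
    # Hypothetical stand-in for one proof-of-concept implementation.
    return sorted(data)

def algorithm_b(data):
    # Hypothetical stand-in for the other.
    result = list(data)
    result.sort()
    return result

data = list(range(100_000, 0, -1))

for name, algorithm in (('A', algorithm_a), ('B', algorithm_b)):
    elapsed = timeit.timeit(lambda: algorithm(data), number = 10)
    print(f'Algorithm {name}: {elapsed:.3f} seconds for 10 runs')
```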
Problem 4
- In this scenario, we would expect every append to take O(n) time, because the list's internal structure would need to be reallocated for every append operation. If we performed n such operations in a row, we'd expect them to require n * O(n) = O(n²) time in total.
- In this scenario — which is akin to the actual behavior of a Python list — the first append would cause a reallocation of the list's internal structure, raising its capacity from n to 2n. This would require O(n) time. However, the remaining n - 1 append operations could be done directly into spare capacity, meaning they would each take O(1) time, or a total of (n - 1) * O(1) = O(n) time. The total, then, would be O(n) + O(n) = O(n).
So, we see that, overall, the use of spare capacity by Python lists is a tradeoff, which is an example of one of the classic tradeoffs we see in computer science: giving up memory to gain back a significant amount of time.
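If you'd like to see this spare capacity in action, sys.getsizeof offers a quick experiment: a CPython list's underlying allocation grows in occasional jumps rather than on every append. (The exact growth pattern, including whether the capacity doubles, is an implementation detail that varies across Python versions, so the scenario above is a simplification.)

```python
import sys

x = []
last_size = sys.getsizeof(x)

for n in range(1, 65):
    x.append(None)
    size = sys.getsizeof(x)

    if size != last_size:
        # A change in size means the list reallocated its internal
        # structure; most appends land in spare capacity and don't.
        print(f'reallocated at length {n}: {size} bytes')
        last_size = size
```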
Problem 5
The minimum change necessary is to reverse the direction of the comparison used to determine whether we'll continue our search in the left or right half of the array.
elif items[middle] < key:
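For context, here's a sketch of how that line might sit within a standard iterative binary search; only the elif line comes from the original answer, so the surrounding function (and the premise that the array is sorted in descending order) is an assumption.

```python
def binary_search(items, key):
    # Standard iterative binary search, sketched around the changed line.
    # With the comparison written this way, it searches a list sorted in
    # descending order, returning the key's index or -1 if it's absent.
    first, last = 0, len(items) - 1

    while first <= last:
        middle = (first + last) // 2

        if items[middle] == key:
            return middle
        elif items[middle] < key:
            # The middle element is too small, so, in a descending list,
            # the key can only be in the left half.
            last = middle - 1
        else:
            first = middle + 1

    return -1
```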