I find the efficiency trade-offs among sorting algorithms pretty interesting. Each implementation chooses a different balance between space and time.
This chart highlights the advantages and drawbacks of many sorting algorithms well. While graphing the run times of various sorting algorithms in lab 10, I was taken by surprise by Python's built-in list.sort() (a.k.a. Timsort). The other sorts took a variety of approaches, yet all of them lost to Timsort. The lab's short description of it as 'merge sort and insertion sort combined' didn't show me why it would be more efficient, but after reading a little more about Timsort it became clearer. Real-world data tends to be partially sorted here and there, and by splitting the list into already-ordered, easy-to-merge runs and sorting the short ones beforehand, Timsort achieves the speed it does.
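To convince myself of the run-based idea, I sketched a toy version of it. This is only an illustration of the core insight, not the real Timsort: it omits the minimum run length, the insertion-sort pass on short runs, galloping mode, and the merge-stack invariants, and all the function names here are my own.

```python
def find_runs(data):
    """Split data into maximal ascending runs (already-sorted chunks)."""
    runs = []
    i = 0
    while i < len(data):
        j = i + 1
        while j < len(data) and data[j - 1] <= data[j]:
            j += 1
        runs.append(data[i:j])
        i = j
    return runs

def merge(left, right):
    """Standard two-way merge of two sorted lists."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def toy_timsort(data):
    """Merge adjacent runs pairwise until one sorted run remains."""
    runs = find_runs(data)
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

The payoff is visible in find_runs: on partially sorted input the list breaks into only a handful of long runs, so there is far less merging to do than a plain merge sort, which always recurses down to single elements.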
One of the earlier labs, lab 5, had my partner and me write a function 'pythagorean_triples'. That was my first introduction to program efficiency. Our program checked every single combination of integers within the given range even though a majority of them could be filtered out from the start. Did it really have to check whether 5**2 + 5**2 == 5**2? I realised that writing a working function takes one level of understanding, and making it efficient takes another. I will have to keep this in mind while I write other functions and avoid unnecessary comparisons or assignments in future code.
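Here is a sketch of the filtering idea, assuming (since I don't have the lab's exact spec in front of me) that the function takes an upper bound n and returns triples (a, b, c) with a < b < c <= n. Instead of testing every (a, b, c), it requires a < b up front, which rules out cases like 5**2 + 5**2 == 5**2, and computes c from a and b rather than looping over it.

```python
import math

def pythagorean_triples(n):
    """Return all (a, b, c) with a < b < c <= n and a**2 + b**2 == c**2."""
    triples = []
    for a in range(1, n + 1):
        # Starting b at a + 1 skips doomed cases like a == b outright.
        for b in range(a + 1, n + 1):
            # Derive the only candidate c instead of trying every value.
            c = math.isqrt(a * a + b * b)
            if c <= n and c * c == a * a + b * b:
                triples.append((a, b, c))
    return triples
```

The naive version does work in O(n**3) comparisons; deriving c drops the inner loop entirely, which is exactly the kind of filtering our lab solution missed.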