**Moderators:** SecretSquirrel, just brew it!

3 posts
• Page **1** of **1**

So, I was looking over the Java API docs for Collections.sort() on linked lists, and it says it dumps the linked list into an array so that it can run merge sort, because otherwise sorting a linked list in place would lead to a complexity of O(n^2 log n). Can someone explain how this happens (the n^2 log n complexity)?

- ScythedBlade
- Gerbil First Class
**Posts:** 112 **Joined:** Sat Nov 06, 2004 10:43 am

This is off the top of my head, so someone can feel free to correct me:

Let us assume that the sort used requires random access to the elements (i.e. it may need to read any element at any time). For an array-based structure, you get that random access through the element index, which is an O(1) operation. Since each pass touches every element (O(n)), and merge sort performs log n passes, we get O(n log n) complexity.

However, in a linked list, accessing a random element is an O(n) operation (you have to start at one end and work your way toward the other, by definition of a LL). Thus touching each of the n elements costs O(n^2) per pass, and over the log n passes that gives a final runtime complexity of O(n^2 log n).

Since dumping the linked list into an array is O(n), the dump-and-sort method would be O(n) + O(n log n), which is O(n log n) in the end.

I'm not familiar with the Java API, so I don't know whether it gives you a LL back in the end. Assuming it does, the cost goes up a little (another O(n) copy back into the list), but that still keeps it down to O(n log n).
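A rough sketch of that dump-and-sort idea in Java (the class and method names here are mine, just for illustration — this mirrors what the Collections.sort() docs describe: copy out, sort the array, write back through the list's iterator):

```java
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class DumpAndSort {
    // Copy the list into an array (O(n)), sort the array (O(n log n)),
    // then write the sorted values back via the list's own iterator (O(n)).
    // Total: O(n) + O(n log n) + O(n) = O(n log n), even for a LinkedList.
    @SuppressWarnings("unchecked")
    static <T extends Comparable<? super T>> void dumpAndSort(List<T> list) {
        Object[] a = list.toArray();   // O(n) dump
        Arrays.sort(a);                // O(n log n): the array gives O(1) access
        ListIterator<T> it = list.listIterator();
        for (Object e : a) {           // O(n) sequential write-back
            it.next();
            it.set((T) e);
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>(Arrays.asList(5, 3, 4, 1, 2));
        dumpAndSort(list);
        System.out.println(list); // [1, 2, 3, 4, 5]
    }
}
```

Note the write-back walks the list sequentially with a ListIterator rather than indexing into it, which is why that step stays O(n) on a linked list.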

Intel i7 860, Asus P7P55D Pro, 4x2GB Corsair XMS3 1600 (CMX4GX3M2A1600C9), EVGA GTX 560 Ti Superclocked

Seagate 7200.7 160GB, WD Caviar Black 640GB, WD Caviar Green 1TB, WD Caviar Green 2TB

Dell 2408WFP and Dell 2407WFP-HC for dual-24" goodness

- emorgoch
- Gerbil Elite
**Posts:** 686 **Joined:** Tue Mar 27, 2007 11:26 am **Location:** Toronto, ON

It would otherwise be O(n^2 log n) because a LinkedList is sequential-access. It avoids this by dumping everything into an array, which has constant-time access.

So here is how it works: merge sort is O(n log n) by itself, but a LinkedList adds an access time of O(n) to reach any given element. So (if I am doing this correctly; I don't have a data structures text handy at the moment) you multiply the access time by the sort time, which gives you O(n) * O(n log n), which equals O(n^2 log n).

By giving it constant-time access in an array, multiplying access time by sort time gives you O(1) * O(n log n), which remains O(n log n), exactly the same as merge sort by itself.
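The access-cost difference is easy to see in code. This is just an illustrative sketch (names are mine, not from the thread): both loops compute the same sum, but the indexed loop is O(n^2) on a LinkedList while the iterator loop is O(n).

```java
import java.util.LinkedList;
import java.util.List;

public class AccessCost {
    // Indexed traversal: each get(i) walks the list from one end,
    // so the whole loop is O(n^2) on a LinkedList.
    static long sumByIndex(List<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i); // O(n) per call on a LinkedList
        }
        return sum;
    }

    // Iterator traversal: one sequential pass, O(n) total.
    static long sumByIterator(List<Integer> list) {
        long sum = 0;
        for (int x : list) {
            sum += x;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>();
        for (int i = 1; i <= 1000; i++) list.add(i);
        // Same answer either way; only the cost differs.
        System.out.println(sumByIndex(list) == sumByIterator(list)); // true
    }
}
```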

Hope that helps.

edit: it appears that emorgoch beat me to the punch

2600K @ 4.8GHz; XSPC Rasa/RX240/RX120 Phobya Xtreme 200; Asus P8Z68-V Pro; 16GB Corsair Vengeance 1333 C9; 2x7970 OC w/ Razor 7970; Force GT 120GB; 3x F3 1TB; Corsair HX750; X-Fi Titanium; Corsair Obsidian 650D; Dell 2408WFP Rev. A01; 2x Dell U2412m

- mortifiedPenguin
- Gerbil Elite
**Posts:** 812 **Joined:** Mon Oct 08, 2007 7:46 pm
