A simpler explanation of algorithm analysis and Big O?
by Sean Brady · in General Discussion · 06/26/2011 (5:52 pm) · 3 replies
Anyone have a simple explanation of the analysis of algorithms/Big O?
Just can't get it into my head. Can I implement them? Yes. Understand them well enough to know how efficient they are? Eehh... no.
Am I the only person to feel stupid when looking at this particular topic? It melts the brain.
Cheers in advance.
About the author
Professional mouth! Getting projects complete is the only problem.
#2
06/26/2011 (10:38 pm)
I haven't taken any formal classes - and from Willbkool's description of it, I'm somewhat grateful for that fact. But here's my take on it, as I've gathered from experience and self-directed study. Big-O notation describes an algorithm's performance as a function of the number of inputs. Some examples:
When an algorithm takes the same time to execute regardless of the number of inputs, it's said to execute in "constant time," or O(1). Finding the address of an object in an array, for example. It's always the same calculation: multiply the index by the size (in bytes) of an array element, then add the base address. The time required to do that is the same whether there's one element in the array or one million.
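To make that concrete, here's a minimal sketch in Python (the names are mine, not from any particular library) showing why the address calculation is the same amount of work at any array size:

```python
def element_address(base_address, index, element_size):
    """Compute where an array element lives in memory: O(1).

    Two operations - one multiply, one add - regardless of how
    many elements the array holds.
    """
    return base_address + index * element_size


# Same two operations whether the array has 4 elements or a million:
small = element_address(1000, 3, 4)        # 1000 + 3 * 4 = 1012
large = element_address(1000, 999_999, 4)  # still just one multiply and one add
```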
A "linear time," or O(n), algorithm's performance is in direct 1:1 proportion with the number of inputs. Incrementing the value of every item in an array, for example.
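The array-increment example looks like this (again my own sketch, in Python): one visit per element, so doubling the input doubles the work.

```python
def increment_all(items):
    """Add 1 to every element: O(n).

    The loop body runs exactly once per element, so the running time
    grows in direct 1:1 proportion with the length of the list.
    """
    for i in range(len(items)):
        items[i] += 1
    return items


increment_all([1, 2, 3])  # visits 3 elements; a list of 3 million visits 3 million
```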
Beyond that, it starts getting complicated - to say the least. Consider a "flocking" algorithm where each of a collection of "birds" needs to find the direction and distance from itself to all of the others. For two "birds," that's two sets of calculations. For three "birds," it's six, for four "birds," twelve. The relationship between the number of "birds" and the number of calculations being done is no longer linear - it's a curve.
You'll also hear the expression "premature optimization is the root of all evil" from time to time. This is the kind of thing that it's referring to. Some folks, faced with the above flocking situation, will begin to optimize the direction & distance calculation, in extreme cases even resorting to doing the math in assembler. A better approach is to improve the algorithm, perhaps by taking advantage of the symmetry in the calculations. If "bird" A has calculated the distance from itself to "bird" B, then "bird" B need not repeat that; it can simply re-use those results. So, two "birds," one calculation. Three "birds," three. Four "birds," six. The relationship is still a curve, but it's a flatter curve.
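The two flocking variants can be checked by just counting the pairs. This sketch (Python, my own function names) reproduces the counts above - 2 birds: 2 vs. 1, 3 birds: 6 vs. 3, 4 birds: 12 vs. 6:

```python
from itertools import combinations


def pairwise_naive(n):
    """Every bird computes direction/distance to every other bird.

    Each of the n birds does n - 1 calculations: n * (n - 1) total.
    """
    return sum(1 for a in range(n) for b in range(n) if a != b)


def pairwise_symmetric(n):
    """Each unordered pair is computed once and the result is reused.

    That's one calculation per pair: n * (n - 1) / 2 total.
    """
    return sum(1 for _ in combinations(range(n), 2))


for birds in (2, 3, 4):
    print(birds, pairwise_naive(birds), pairwise_symmetric(birds))
```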
Unfortunately, this is where my lack of formal training starts showing. I don't have the math background to demonstrate (much less prove!) whether either version of the "flocking" was O(log n), O(n^2), O(n*2), or whatever else. I have a gut-level, almost instinctive feel for why the second variation is an improvement over the first, but I lack the language with which to express it properly.
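For anyone else wondering: counting the pairs gives the answer. The naive version does n(n-1) calculations and the symmetric one n(n-1)/2 - the ratio between them stays fixed at 2, and Big O drops constant factors, so both are O(n^2). (Halving the work is still a real win in practice, of course.) A quick check:

```python
def naive_count(n):
    # every ordered pair of distinct birds: n * (n - 1)
    return n * (n - 1)


def symmetric_count(n):
    # every unordered pair, computed once: n * (n - 1) / 2
    return n * (n - 1) // 2


for n in (10, 100, 1000):
    # The ratio never changes as n grows: same O(n^2) growth rate,
    # just a smaller constant factor for the symmetric version.
    assert naive_count(n) == 2 * symmetric_count(n)
```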
#3
06/27/2011 (3:01 am)
Cheers for the explanations - gears in the head kind of turning now. Very slowly, but it has started. Better than being at a halt... I will have to do a lot more study to achieve my own 'C'. Cheers for the points of info also.
The explanations have helped out a lot... Hopefully it will click enough to add to the skills needed to get a programming job. I am sure understanding and analyzing algorithms, along with discrete mathematics, is a good thing for any entry-level programmer.
Torque Owner Willbkool
Gamebox Creations
What I remember from that class was: avoid nested if statements, and don't use inheritance unless needed. I could look at my notes, but I would rather forget that class. The professor knew the stuff, though, and was always doing proofs. I don't think you are alone in this. Although it might be a necessary evil, it really is hard to wrap your head around this subject.