DonBrown,

the distance from G to the line AD (let's call it GE, which is the shortest distance and perpendicular to AD), and the values for AB, BC, and CD,

...if there are indeed infinite solutions, then the fact that you were able to compute these GE, AB, BC, and CD from (GA, GB, GC, and GD) must imply that somehow you have implicitly assumed a trajectory slope already before the program was run...?

No, I just ignore the slope (direction) of the trajectory - the diagram & geometry/trig are exactly the same whatever the direction.

Call AB=BC=CD=d and call DE=kd (some, not necessarily integer, multiple of d)

then old Pythagoras says GD^2=GE^2 + (kd)^2

GC^2=GE^2 + (d+kd)^2

GB^2=GE^2 + (2d+kd)^2

GA^2=GE^2 + (3d+kd)^2

Then we can combine pairs by subtracting (the GE^2 terms cancel):

GC^2 - GD^2 = d^2 + 2kd^2 = d^2(1+2k)

GB^2 - GC^2 = 3d^2 + 2kd^2 = d^2(3+2k)

GA^2 - GB^2 = 5d^2 + 2kd^2 = d^2(5+2k)

Since the LHS of each is known, I can express d as a function of k,

e.g. d = sqrt( (GC^2 - GD^2)/(1+2k) )
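As a sketch in Python of those three rearranged equations (the GA..GD values below are made-up measurements chosen so that all three agree exactly at k = 1.5, just to illustrate):

```python
import math

def d_from_k(k, ga, gb, gc, gd):
    """Return the three estimates of d implied by a trial value of k,
    one from each pair-difference equation."""
    d1 = math.sqrt((gc**2 - gd**2) / (1 + 2*k))
    d2 = math.sqrt((gb**2 - gc**2) / (3 + 2*k))
    d3 = math.sqrt((ga**2 - gb**2) / (5 + 2*k))
    return d1, d2, d3

# Synthetic data built from GE = 4, d = 2, k = 1.5:
# GD^2 = 16 + 3^2 = 25, GC^2 = 16 + 5^2 = 41,
# GB^2 = 16 + 7^2 = 65, GA^2 = 16 + 9^2 = 97.
print(d_from_k(1.5, 97**0.5, 65**0.5, 41**0.5, 5.0))  # all three ~2.0
```

With the correct k all three estimates coincide; with a wrong k they spread apart, which is what the search below exploits.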

I assumed (because GA, GB, GC, GD were all decreasing) that k >= 0 (i.e. E is on the opposite side of D to A),

then chose an arbitrary range of values for k from 0 to 10 and calculated the value of d from each equation.

When I looked at the results, the three equations gave different values of d (as you would expect when using the wrong value for k), so I looked for the set of results that were closest together.

Then I chose a new range of values for k, say 3 to 5 using smaller increments, looked for the closest results here and narrowed my range and increments again, etc.

Since I'm using a spreadsheet, I need only specify a start and an increment.

To help spot the closest set of results, I also compute the average d and the sum of the squares of their differences from the mean. With perfect data I would expect to find (at the correct value for k) that all three equations gave exactly the correct value for d and my error function would reduce to 0.
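The sweep-and-narrow procedure described above can be sketched in Python (this is my reading of the method, not the actual spreadsheet; the function and range names are mine, and the sample GA..GD values are the same synthetic ones as before):

```python
import math

def d_estimates(k, ga, gb, gc, gd):
    """The three d estimates implied by a trial k."""
    return (math.sqrt((gc**2 - gd**2) / (1 + 2*k)),
            math.sqrt((gb**2 - gc**2) / (3 + 2*k)),
            math.sqrt((ga**2 - gb**2) / (5 + 2*k)))

def error(k, ga, gb, gc, gd):
    """Sum of squared deviations of the three estimates from their mean --
    zero when all three equations agree on d."""
    ds = d_estimates(k, ga, gb, gc, gd)
    mean = sum(ds) / 3
    return sum((d - mean)**2 for d in ds)

def narrow(ga, gb, gc, gd, lo=0.0, hi=10.0, steps=100, rounds=5):
    """Sweep k over [lo, hi], find the grid point with the smallest error,
    then zoom in around it and repeat -- the manual spreadsheet process."""
    best = lo
    for _ in range(rounds):
        step = (hi - lo) / steps
        ks = [lo + i * step for i in range(steps + 1)]
        best = min(ks, key=lambda k: error(k, ga, gb, gc, gd))
        lo, hi = max(0.0, best - step), best + step
    return best

k = narrow(97**0.5, 65**0.5, 41**0.5, 5.0)
print(k)  # converges near the true value 1.5 for this synthetic data
```

With perfect data the error drops to (numerically) zero at the true k; with measured data it bottoms out at a small positive minimum, as described above.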

Because the data were not exact (I drew a diagram and measured the values) I never achieved a zero error, but the error function did exhibit a single minimum.

I did this by manually adjusting the trial value of k, but I would have thought one could use the goal-seeking tool. Unfortunately, when I tried this, I got 'no solution found' and the best attempts it offered were way outside the possible range. I don't understand why this happens. When I asked it to achieve a zero error function, OK, that was obviously never going to happen. But even after I had found a near-minimum error value and asked it to achieve something not quite as good, it still failed dismally. I plotted graphs of the error results and they appear to be continuous with a single minimum. When I get a chance I'll try it out in MS Excel and see if it can do any better (than OOCalc).