Naked Science Forum

Life Sciences => Physiology & Medicine => Topic started by: Tommo on 09/02/2010 21:47:10

Title: How do we hear in 3D when our ears listen in stereo
Post by: Tommo on 09/02/2010 21:47:10
With ears on each side of our head it is perfectly understandable that we can distinguish when a sound comes from the left or from the right. But how can we tell when a sound is above or below us, or in front or behind?

A simple experiment shows the effect: close your eyes and move your hand around your head whilst clicking your fingers. Your brain knows where the sound comes from. The same works if someone else does the clicking.

How do we do it??????????
Title: How do we hear in 3D when our ears listen in stereo
Post by: Geezer on 09/02/2010 22:02:56
Our ears don't hear a single sound. They hear multiple "copies" of the sound but each "copy" has a slightly different phase relationship. The geometry of our ears influences these subtle phase effects and allows us to determine the direction from which the sound came.

(I'm hoping this is about right. - Let's see if someone knows the real answer!)
Title: How do we hear in 3D when our ears listen in stereo
Post by: yor_on on 10/02/2010 00:51:02
You have two points of input placed some distance from each other. If you now snap your fingers, no matter from what direction, above or off to one side, the sound waves will take a different time to reach those two points. You will also have reflected sound waves from walls etc. that add up to your whole experience.

I'm not sure it would be as easy to tell where a sound comes from without those reflections. As an example, imagine hearing someone call you from outside your room. You will immediately know where it came from.

Then you have "light treble sounds (over 1 kHz), the wavelength plays an essential role for the brain in determining the direction. These sounds all have a limited wavelength of less than 30 centimetres.

When a person hears sounds of such short wavelengths, the head functions as a screen. If the sound comes from a place to the right of the face, the head will prevent the sound waves from reaching the left ear. Deep bass sounds, on the other hand, have a longer wavelength, and the head does not prevent the sound waves from reaching both ears.

If a sound does not come from the sides, but rather from above, below or immediately in front of the face, there is no time lag between the ears. In situations such as this, the outer ear is important as it helps determine the tone of the sound.

Experience has taught us that the tone can help determine the source of the sound. People riding a motorbike wearing a helmet, for example, often find it difficult to hear where an ambulance is coming from, as the helmet reduces the outer ear's ability to determine the tone of the sound."
 
So: time lag, wavelength, reflections and tone.
I think that is what Geezer meant too.
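A quick back-of-the-envelope sketch of the time-lag and wavelength part (my own toy model, not from the quote, assuming a simple two-point head about 0.18 m across and a speed of sound of about 343 m/s):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, roughly, in air at room temperature
HEAD_WIDTH = 0.18        # m, assumed distance between the ears

def interaural_time_difference(angle_deg):
    """Extra travel time to the far ear for a distant source at
    angle_deg from straight ahead (0 = front, 90 = fully to one side),
    using the crude two-point model: extra path = d * sin(angle)."""
    extra_path = HEAD_WIDTH * math.sin(math.radians(angle_deg))
    return extra_path / SPEED_OF_SOUND

for angle in (0, 30, 60, 90):
    print(f"{angle:3d} deg -> {interaural_time_difference(angle)*1e6:6.0f} microseconds")

# Wavelength check for the 'head as a screen' point: the head starts to
# shadow the far ear once the wavelength gets down near head size.
for freq in (100, 1000, 4000):
    print(f"{freq:5d} Hz -> wavelength {SPEED_OF_SOUND/freq:5.2f} m")
```

That gives a maximum lag of only about half a millisecond, yet the brain resolves it easily; and at 1 kHz the wavelength is about 34 cm, already comparable to the size of the head, which is roughly where the shadowing effect starts to matter.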
Title: How do we hear in 3D when our ears listen in stereo
Post by: Geezer on 10/02/2010 05:34:29
Ah! Yes, Yoron. I think what you say is very true, but I think each ear is capable, on its own, of extracting information about the direction of the sound source.

If it were based simply on the time difference between the two ears, wouldn't there be many possible positions for a sound source? Would we be able to tell whether the sound was coming from in front of us, behind us, or even above us? We seem to be pretty good at that, so I think our individual ears must have some degree of directionality.

My dogs have highly directional ears. They can even swivel them around independently to determine the source of a sound. Humans don't have the swivelling feature (at least, not many of them do) but I think we too have some amount of direction-detection capability built into each ear. I suspect the subtle structures of our external and internal ears have a lot to do with this, so that a sound approaching from any direction in three dimensions sounds detectably different to us, even though we are not conscious of what that difference actually is. Of course, I could quite possibly be talking baloney!

Where's Neil? He's an audiophile. He should know this stuff.
Title: How do we hear in 3D when our ears listen in stereo
Post by: yor_on on 10/02/2010 06:11:12
He bade me inform you that he was looking up what more submersibles there were, offering world tours, but he said he would be back in just a jiffy.
Title: How do we hear in 3D when our ears listen in stereo
Post by: Tommo on 10/02/2010 14:01:32
Many thanks to you all for your inputs. I was talking to a colleague of mine on the same subject and it appears that owls solve the problem of acoustic direction finding by having evolved one ear set forward and higher than the other. This gives them the ability to precisely locate prey rustling around in the undergrowth by sound alone. How they do it, though, I am at a loss to say.
Title: How do we hear in 3D when our ears listen in stereo
Post by: yor_on on 11/02/2010 04:11:39
Tommo, are you planning on becoming a neurosurgeon :)

---Quote--

Konishi undertook a series of experiments on owls in 1977 to identify networks of neurons that could distinguish sounds coming from different locations. He used a technique pioneered by vision researchers, probing the brains of anesthetized owls with fine electrodes. With the electrodes in place, a remote-controlled sound speaker was moved to different locations around the owl’s head along an imaginary sphere. As the speaker moved, imitating sounds the owl would hear in the wild, the investigators recorded the firing of neurons in the vicinity of the electrodes. Over the course of several months, Konishi and Knudsen were able to identify an area in the midbrain of the birds containing cells called space-specific neurons—about 10,000 in all—which would fire only when sounds were presented in a particular location.

Astonishingly, the cells were organized in a precise topographic array, similar to maps of cells in the visual cortex of the brain. Aggregates of space-specific neurons, corresponding to the precise vertical and horizontal coordinates of the speaker, fired when a tone was played at that location. “Regardless of the level of the sound or the content of the sound, these cells always responded to the sources at the same place in space. Each group of cells across the circuit was sensitive to sound coming from a different place in space, so when the sound moved, the pattern of firing shifted across the map of cells,” Knudsen recalls. The discovery of auditory brain cells that could identify the location of sounds in space quickly produced a new mystery. “In the auditory system, only the frequency of sound waves is mapped on the receptor layer, and the auditory nerve fibers project this map of frequency to the brain. How can the brain create a map of auditory space, based only on frequency cues?”

The answer, Konishi believes, may shed light on how the brain and the auditory system process all sounds. To enable the brain to process efficiently the rapid stream of impulses emanating from the hair cells in the ear, the auditory system must first filter out simple, discrete aspects of complex sounds. Information about how high- or low-pitched a sound is, how loud it is, and how often it is heard is then channeled along separate nerve pathways to higher-order processing centers in the brain, where millions of auditory neurons can compute the raw data into a recognizable sound pattern. This filtering process begins with the hair cells, which respond to different frequencies at different locations along the basilar membrane. Hair cells at the bottom of the basilar membrane respond more readily when they detect high-frequency sound waves, while those at the top are more sensitive to low-frequency sounds.

David Corey compares the arrangement to the strings of a grand piano, with the high notes at the base of the cochlea, where the basilar membrane is narrow and stiff, and the bass notes at the apex, where the membrane is wider and more flexible. Hair cells also convey basic information about the intensity and duration of sounds. The louder a sound is at any particular frequency, the more vigorously hair cells tuned to that frequency respond, while their signaling pattern provides information about the timing and rhythm of a sound. Konishi hypothesized that such timing and intensity information was vital for sound localization. So he placed microphones in the ears of owls to measure precisely what they were hearing as the portable loudspeaker rotated around their head. He then recorded the differences in time and intensity as sounds reached each of the owl’s ears. The differences are very slight. A sound that originates at the extreme left of the animal will arrive at the left ear about 200 microseconds (millionths of a second) before it reaches the right ear. (In humans, whose sound localization abilities are keen but not on a par with those of owls, the difference between a similar sound’s time of arrival in each ear would be about three times greater.)

As the sound source was moved toward the center of the owl’s head, these interaural time differences diminished, Konishi observed. Differences in the intensity of sounds entering the two ears occurred as the speaker was moved up and down, mostly because the owl’s ears are asymmetrical—the left ear is higher than eye level and points downward, while the right ear is lower and points upward.

Based on his findings, Konishi delivered signals separated by various time intervals and volume differences through tiny earphones inserted into the owls’ ear canals. Then he observed how the animals responded. Because owls’ eyes are fixed in their sockets and cannot rotate, the animals turn quickly in the direction of a sound, a characteristic movement. By electronically monitoring these head-turning movements, Konishi and his assistants showed that the owls would turn toward a precise location in space corresponding to the interaural time and intensity differences in the signals. This suggested that owls fuse the two sounds that are delivered to their two ears into an image of a single source—in this case, a phantom source. “When the sound in one ear preceded that in the other ear, the head turned in the direction of the leading ear. The longer we delayed delivering the sound to the second ear, the further the head turned,” Konishi recalls.

Next, Konishi tried the same experiment on anesthetized owls to learn how their brains carry out binaural fusion. Years earlier, he and Knudsen had identified space-specific neurons in the auditory area of the owl’s midbrain that fire only in response to sounds coming from specific areas in space. Now Konishi and his associates found that these space-specific neurons react to specific combinations of signals, corresponding to the exact direction in which the animal turned its head when phantom sounds were played. “Each neuron was set to a particular combination of interaural time and intensity difference,” Konishi recalls. Konishi then decided to trace the pathways of neurons that carry successively more refined information about the timing and intensity of sounds to the owl’s midbrain. Such information is first processed in the cochlear nuclei, two bundles of neurons projecting from the inner ear. Working with Terry Takahashi, who is now at the University of Oregon, Konishi showed that one of the nuclei in this first way station signals only the timing of each frequency band, while the other records intensity. The signals are then transmitted to two higher-order processing stations before reaching the space-specific neurons in the owl’s midbrain.

One more experiment proved conclusively that the timing and intensity of sounds are processed along separate pathways. When the researchers injected a minute amount of local anesthetic into one of the cochlear nuclei (the magnocellular nucleus), the space-specific neurons higher in the brain stopped responding to differences in interaural time, though their response to differences in intensity was unchanged. The converse occurred when neurons carrying intensity information were blocked. “I think we are dealing with basic principles of how an auditory stimulus is processed and analyzed in the brain. Different features are processed along parallel, almost independent pathways to higher stations, which create more and more refined neural codes for the stimulus,” says Konishi. “Our knowledge is not complete, but we know a great deal. We are very lucky. The problem with taking a top-down approach is that often you find nothing.” Konishi has been able to express the mechanical principles of the owl’s sound localization process as a step-by-step sequence. He has collaborated with computer scientists at Caltech in developing an “owl chip” that harnesses the speed and accuracy of the owl’s neural networks for possible use in computers.

---End of quote----
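Just to make that 'map of space' idea concrete for myself, here is a toy sketch (entirely my own, with made-up tuning numbers, nothing from the article): pretend each 'space-specific neuron' is tuned to one combination of interaural time difference (which varies mostly with azimuth) and interaural intensity difference (which, for the owl's asymmetric ears, varies mostly with elevation), and let the cell whose tuning best matches the measurement 'fire'.

```python
import math

# Toy grid of "space-specific neurons": each cell is tuned to one
# (ITD, ILD) pair. The mapping below is invented for illustration:
# ITD grows with azimuth, ILD grows with elevation.
MAX_ITD_US = 200.0   # microseconds, the figure quoted for the owl
MAX_ILD_DB = 10.0    # decibels, assumed for this sketch

def tuned_cues(azimuth_deg, elevation_deg):
    itd = MAX_ITD_US * math.sin(math.radians(azimuth_deg))
    ild = MAX_ILD_DB * math.sin(math.radians(elevation_deg))
    return itd, ild

CELLS = [(az, el, *tuned_cues(az, el))
         for az in range(-80, 81, 10)
         for el in range(-60, 61, 10)]

def best_matching_cell(measured_itd_us, measured_ild_db):
    """Return the (azimuth, elevation) of the cell whose tuning is
    closest (normalised nearest neighbour) to the measured cue pair."""
    def mismatch(cell):
        az, el, itd, ild = cell
        return ((itd - measured_itd_us) / MAX_ITD_US) ** 2 + \
               ((ild - measured_ild_db) / MAX_ILD_DB) ** 2
    az, el, _, _ = min(CELLS, key=mismatch)
    return az, el

# A sound 30 degrees to the right and 20 degrees up would, in this toy model, give:
itd, ild = tuned_cues(30, 20)
print(best_matching_cell(itd, ild))   # -> (30, 20)
```

The real circuit of course computes the two cues along separate timing and intensity pathways, as the quote describes; this just shows why two one-dimensional cues are enough to pin down a two-dimensional direction.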
Title: How do we hear in 3D when our ears listen in stereo
Post by: Tommo on 11/02/2010 14:31:32
Yoron, thanks for the post. Ohooooooo... to be a neurosurgeon would be great. Unfortunately too much water has passed under the bridge to contemplate that, but time passes very quickly during those long boring night shifts.

I am an avionics engineer and I work with, and understand, the principles of monopulse radar, where range and bearing are derived from a single transmitted pulse. At the antenna the beam is split into two beams, and the phase of the return signal in one beam is compared with that in the other to give a Direction of Arrival (DOA); range is calculated from the time taken for the pulse to arrive back at the antenna. If our audio system works in a similar way to the phase comparison at the radar antenna, I fail to see how our head can get elevation from this information. We do have radar antennas which can get elevation data from a single pulse, but these rely not on a twin beam but on a beam split into four beams. The two left beams are compared with the two right beams to give azimuth, and the two upper beams are compared with the two lower beams to give elevation. Again, range is calculated from the arrival time of all four beams. If our heads had four receivers we would be able to calculate elevation, but as we only have two it defeats me how we do it. Still, the tricks that five million years of evolution has given us never fail to amaze me.
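For what it's worth, here is a minimal sketch of that four-beam (amplitude-comparison) idea, nothing like real avionics code, just to show the arithmetic: the left/right and up/down differences, normalised by the sum, give the azimuth and elevation offsets, and the echo delay gives range. The slope constant is an assumption standing in for the beamwidth-dependent calibration a real set would have.

```python
SPEED_OF_LIGHT = 3.0e8   # m/s
SLOPE = 40.0             # degrees per unit (difference/sum) ratio, assumed

def monopulse_estimate(a_ul, a_ur, a_ll, a_lr, echo_delay_s):
    """Toy amplitude-comparison monopulse from four beam amplitudes
    (upper-left, upper-right, lower-left, lower-right) plus echo delay."""
    total = a_ul + a_ur + a_ll + a_lr
    azimuth_deg = SLOPE * ((a_ur + a_lr) - (a_ul + a_ll)) / total   # right minus left
    elevation_deg = SLOPE * ((a_ul + a_ur) - (a_ll + a_lr)) / total # up minus down
    range_m = SPEED_OF_LIGHT * echo_delay_s / 2.0                   # out and back
    return azimuth_deg, elevation_deg, range_m

# A target slightly right of and below boresight, 150 km out:
print(monopulse_estimate(0.9, 1.1, 1.0, 1.2, 1.0e-3))
```

The ears only give us the left/right comparison; the point of the thread, I suppose, is that the outer ear's frequency shaping somehow stands in for the missing up/down pair.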
Title: How do we hear in 3D when our ears listen in stereo
Post by: yor_on on 11/02/2010 18:48:32
Yeah, 'above' was the one that confounded me too. And as I tried it, snapping my fingers, I realised that my other senses were informing me too, so how could I be sure? ::))

I'm guessing that the tone (the pitch and timbre) has a lot to do with it, and then you could say you have, let's say, three levels: below the nose, nose level, above the nose :). And as long as it's not a deep bass sound you will still have different distances to your two ears. Ah, thinking of it, that came out strange :) I meant that those bass sounds don't seem to care about 'obstacles', but you will still have the distance differences of course. Wonder what they meant there, by the way. A bass sound won't get distorted by the skull then? Or?? Awhh :) One should avoid those kinds of questions ::)) But maybe the brain creates 'matrices' from experience, sorting them in as 'archetypes' that it then uses to compare with the sound it hears. That should introduce a time lag of course?
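If you wanted to formalise that 'archetype' guess, it might look something like this toy sketch (completely invented numbers, just to show the idea): keep a stored spectral 'template' for a few elevations and pick whichever one matches the incoming spectrum best.

```python
# Toy template matching for elevation from spectral shape.
# The templates below are invented; a real head-related transfer
# function has far more structure than three numbers per direction.
TEMPLATES = {
    "below nose": [1.0, 0.8, 0.3],   # relative energy in low/mid/high bands
    "nose level": [1.0, 0.6, 0.6],
    "above nose": [1.0, 0.4, 0.9],
}

def match_elevation(spectrum):
    """Return the stored 'archetype' whose band energies are closest
    (smallest summed squared difference) to the heard spectrum."""
    def distance(name):
        template = TEMPLATES[name]
        return sum((s - t) ** 2 for s, t in zip(spectrum, template))
    return min(TEMPLATES, key=distance)

print(match_elevation([1.0, 0.45, 0.85]))   # -> "above nose"
```

And yes, comparing against stored templates would cost a little time, which fits the time-lag hunch.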

Aha, planning for the next generation of monopulse radar are we :)



Title: How do we hear in 3D when our ears listen in stereo
Post by: yor_on on 11/02/2010 19:13:07
As you like radars. Did you know that we had the best and coolest radar in the world years before NATO, and that we have now exchanged our datalink for NATO's inferior system? So phucking sad; our system was so much better, greater bandwidth etc. etc.



1. The PS-05/A can operate in passive mode, as a sensitive receiver with high directional accuracy (due to its large antenna). Two PS-05/As can exchange information by datalink and locate the target by triangulation.

2. The datalink results in better tracking. Usually, three plots need to track a target in track-while-scan mode. The datalink allows the radars to share plots, not just tracks, so even if none of the aircraft in a formation gets enough plots on its own to track the target, they may do so collectively.

3. Each radar plot includes Doppler velocity, which provides the individual aircraft with range-rate data. However, this data on its own does not yield the velocity of the target. Using the TIDLS, two fighters can take simultaneous range-rate readings and thereby determine the target's track instantly, reducing the need for radar transmission.

4. In ECM applications, one fighter can search, while the wingman simultaneously focuses jamming on the same target, using the radar. This makes it very difficult for the target to intercept or jam the radar that is tracking him. Another anti-jamming technique is for all four radars to illuminate the same target simultaneously at different frequencies.

The Swedish AF is the pioneer of the fighter-to-fighter data-link, and the JAS-39 is the first fighter with the NG fighter-to-fighter data-link. However, almost every NG fighter in the world (F/A-22, F/A-18E/F, F-35, EF-2000, Rafale, Su-30MKK/MKI, Su-27SM, Su-35/37, MIG-31) has since been equipped, or will soon be equipped, with the same class of NG fighter-to-fighter data-link. The Gripen was the first fighter with this kind of revolutionary innovation, but it is not unique now.

Will the NG fighter-to-fighter data-link help fighters like the JAS-39 catch a stealthy target at longer distance??? I think the answer is yes, since even a stealthy fighter can't make its RCS in every direction as small as its frontal RCS. If you combine the data from different fighters, AWACS, ground-based air-defence radar and so on in different locations with the help of the NG data-link, you may catch the stealthy target earlier than by just using your own fighter's radar. As an old saying goes: unity is strength.

And a few words from the Hungarians, on how they experienced the exercise Spring Flag in Italy in 2007.

"The Gripens flew as part of the hostile ‘Red Force’, largely conducting beyond visual range air battles with the ‘Blue Force’. Colonel Kilian recalls, We flew 24 sorties over the two-week exercise, and we launched every day with our two planned Gripen Ds. We were the only participants to have a 100% operational record with the scheduled aircraft.

In Hungary we just don’t have large numbers of aircraft to train with, but in Spring Flag we faced COMAO (combined air operations) packages of 20, 25 or 30 aircraft. The training value for us was to work with that many aircraft on our radar – and even with our limited experience we could see that the Gripen radar is fantastic. We would see the others at long ranges, we could discriminate all the individual aircraft even in tight formations and using extended modes. The jamming had almost no effect on us – and that surprised a lot of people.

Other aircraft couldn’t see us – not on radar, not visually – and we had no jammers of our own with us. We got one Fox 2 kill on an F-16 who turned in between our two jets but never saw the second guy, and it was a perfect shot.

Our weapons and tactics were limited by Red Force rules, and in an exercise like this the Red Force is always supposed to die, but even without our AMRAAMs and data links we got eight or 10 kills, including a Typhoon. Often we had no AWACS or radar support of any kind, just our regular onboard sensors – but flying like that, ‘free hunting’, we got three kills in one afternoon. It was a pretty good experience for our first time out."

Views from South Africa..

"Gripen is pretty much as agile it can get. G onset rate at least 6 G/s (1-9 G in 1.2 s), the Gripen platform is designed with tactics in mind. Gripen fight not only with missiles and bullets but with information, superior situation awareness is the key in modern warfare..

Gripen's flight computer is outstanding and can make some world-class calculations. Gripen's FADEC is highly impressive; it even has a backup mechanical calculation system, something only a handful of companies can manage. The aircraft also incorporates a very low radar profile, making it hard to find. And it has a superior data link. And in real tests against other aircraft the radar has been found very hard to jam by other systems, meaning that it will work in practice, not only in theory. And those countries using it have found it working in all weathers.

The radar is capable of detecting, locating, identifying and automatically tracking multiple targets in the upper and lower spheres, on the ground and sea or in the air, in all weather conditions. It can guide four air to air missiles (AMRAAM, MBDA Meteor) simultaneously at four different targets. "

The Czech Air Force had this to say after testing the first-generation Gripen in 2005.

"Sweden required hard discretion related to ALL Gripen abilities information, but rumors say Gripen pilots used to call fox 3 (AMRAAM engagement) farther away than viper guys. When reporters asked guys from AFB Caslav to compare our new birds with another, they answered our fighters (model C block2) are the best HW currently available on the word market."

And also

"Since 1 May we have flown over 570 missions in total [figures as of mid-October] and since 1 July when were went operational on the QRA mission we have flown over 300 missions. We are very busy and we’re flying every day. Every aircraft flies at least twice, each day. We have eight pilots at the moment and sometimes we have all eight flying – and it’s not unusual to have all 12 aircraft operational and available on the line. We have never lost a single operational mission due to a technical snag with the aircraft and every single QRA mission has gone ahead as planned."

--------------------End of quotes--------

You need to be rather 'nationalistically thickheaded' not to understand what a nice package the new Gripen NG will be. One of the things people love to bring up is this macho 'battle testing'. Well, if you test yourself against 'third world' technology I don't call it 'battle testing'; maybe 'endurance testing', if it comes to that. Both are important aspects of a real war against a technological peer, but remember, the Americans still haven't 'invaded' any true technological peer, and, may I add, hopefully never will, as that, as I see it, would be the end of what we all trust in: democracy and a free society.

---
---Quotes-----------------

Ericsson’s future airborne radar is Not Only a Radar, NORA, but also a complete electronic warfare system including jamming and data communication. The new radar will use an Active Electronically Scanned Array, AESA, built up with approximately 1000 individual transmit/receive modules. The antenna, mounted on a single-axis platform, will give well over 200˚ coverage in azimuth. NORA will offer superior performance by virtue of a number of core capabilities at Ericsson – beam agility, beam widening, multi-channel processing, target-specific waveforms and low radar cross-section.....

It's planned to scan ±60 deg electronically and 60 deg mechanically in azimuth, permitting scanning over a 240 deg arc, and electronically ±60 deg up and downwards. ...

Fully programmable signal and data processors enable the radar to handle these air defence, attack and reconnaissance missions. This also gives the radar a very high growth potential to meet future requirements. The radar's flexible waveforms make it possible to avoid ambiguities and allow performance characteristics to be optimized for all operating modes. The radar also matches the data link requirements for advanced medium range missiles... Ericsson has started development work for upgrading the PS05/A multimode radar. Some of the upgrades have been possible to incorporate, since new, faster and more powerful processors and components have become available on the market. An essential part of these upgrades is a new data processor which will replace the D80 processor in the Systems Computer in Swedish Air Force Gripens. It is a Modular Airborne Computer System (MACS) with higher capacity. A significant upgrade of the signal processor is also included which will dramatically enhance functions in both air-to-air and air-to-ground missions....

Ericsson AESA (Active Electronically Scanned Array) is a new airborne radar project currently in development at Ericsson Microwave Systems. The AESA technology will improve the radar's overall performance drastically, especially its target detection and tracking capability. Beam direction can be changed instantaneously, detection range will be considerably increased, and jamming suppression further improved. The AESA radar will feature multibeam capability with all beams individually and simultaneously controlled. It can also operate simultaneously as a fire control and obstacle warning radar, and be used in both intercept and ground attack missions. The multibeam concept also allows for radar operation, data linking, radar warning and jamming simultaneously. As a consequence of the very large number of transmitter and receiver modules, the radar will have a high system availability through graceful degradation...."

----------End Quotes-------------------


But since we got into NATO we have stopped using it. Politicians, the greatest idiots there are, and their counterparts, those military imbeciles that lick *** with NATO. Why the fuc* should we downgrade our capabilities just to play in the big sandbox? The Americans refused to give us their 'codes' if we didn't use their inferior system... And we? We agreed.

So the last time I checked we were in between: phasing out our own datalink while still waiting on their half-assed system. Yeah, almost one year later and I'm still mad, both at Swedish and American idiocy. Who else except us Swedes would accept that? Do you think France would have done it? No way. The USA itself? Don't make me smile... It hurts.

"TIDLS and Link-16 are two completely different systems! TIDLS can be used to fuse raw radar data and makes it possible for two or more Gripen to track an enemy in situations where one Gripen cant. AFAIK Link 16 does not have that capability. The only reason for having Link 16 in Gripen is NATO compatibility."

--Quoting myself :)
Our DataLink A has capabilities no other system has yet, and as I stated before, it has been tested out by us through fifteen years of use. In our link system we have a true 'Internet' with true 'peer to peer' properties. If you look up why the Internet is constructed as it is, it was once designed to be able to withstand an atomic war, a long time ago. That's what we have in the air; neither the Americans nor NATO have this.
---End-
The-Radar-Game-Understanding-Stealth-and-Aircraft-Survivability (http://www.scribd.com/doc/2460745/The-Radar-Game-Understanding-Stealth-and-Aircraft-Survivability)

==
Gripen is one of the sweetest machines out there; it was well built from the beginning.

“There was one interesting problem,” Colonel Eldh concludes with a smile. “Gripen is supersonic at all altitudes and can cruise supersonically with an external load including fuel tank, four AMRAAM and two Sidewinder missiles, without the need to engage the afterburner.

In the early days of operations, we found some pilots inadvertently flying supersonic over populated areas. The problem was one of habit, as these pilots had their throttle settings as high as on the older-generation fighters that Gripen replaced.

“It is fair to say there were a few startled people on the ground, as their day-to-day work, or perhaps sleep, was disturbed by unexpected sonic booms! It was, of course, a simple task to solve the problem – the throttles were re-set and everyone was happy.”



Title: How do we hear in 3D when our ears listen in stereo
Post by: yor_on on 11/02/2010 19:58:34
Link 16 is nowhere near our TIDLS, according to what I've heard.

Btw: It is well known now – but was once a highly classified national secret – that Saab's J 35 Draken was fielded with one of the world's first operational datalink systems. Central to the Gripen's warfighting capabilities is its unique Communication and Datalink 39 (CDL39), which is the best in the world.


TIDLS (datalink)

One Gripen can provide radar sensing for four of its colleagues, allowing a single fighter to track a target, while the others use the data for a stealthy attack. TIDLS also permits multiple fighters to quickly and accurately lock onto a target's track through triangulation from several radars; or allows one fighter to jam a target while another tracks it; or allows multiple fighters to use different radar frequencies collaboratively to "burn through" jamming transmissions. TIDLS also gives the Gripen transparent access to the SAAB-Ericsson 340B Erieye "mini-AWACs" aircraft, as well as the overall ground command and control system. This system provides Sweden with an impressive defensive capability at a cost that, though still high, is less than that of comparable systems elsewhere.

TIDLS can connect up to four aircraft in a full-time two-way link. It has a range of 500 km and is highly resistant to jamming; almost the only way to jam the system is to position a jammer aircraft directly between the two communicating Gripens. Its basic modes include the ability to display the position, bearing, and speed of all four aircraft in a formation, including basic status information such as fuel and weapons state. The TIDLS is fundamentally different from broadcast-style links like Link 16. It serves fewer users but links them more closely together, exchanging much more data, and operating much closer to real time.

TIDLS information, along with radar, EW, and mapping data, appears on the central MFD. The display reflects complete sensor fusion: a target that is being tracked by multiple sources is one target on the screen. Detailed symbols distinguish between friendlies, hostiles, and unidentified targets and show who has targeted whom.

Today, Sweden is the only country that is flying with a link of this kind.
The Flygvapnet has already proven some of the tactical advantages of the link, including the ability to spread the formation over a much wider area. Visual contact between the fighters is no longer necessary, because the datalink shows the position of each aircraft. Leader and wingman roles are different: the pilot in the best position makes the attack, and the fact that he has targeted the enemy is immediately communicated to the three other aircraft.

A basic use of the datalink is "silent attack." An adversary may be aware that he is being tracked by a fighter radar that is outside missile range. He may not be aware that another, closer fighter is receiving that tracking data and is preparing for a missile launch without using its own radar. After launch, the shooter can break and escape, while the other fighter continues to pass tracking data to the missile. In tests, Gripen pilots have learned that this makes it possible to delay using the AMRAAM's active seeker until it is too late for the target to respond.

But the use of the link goes beyond this, towards what the Swedish Air Force calls "samverkan," or close-cooperation. One example is the use of the Ericsson PS-05/A radar with TIDLS. An Ericsson paper compares its application, with identical sensors and precise knowledge of the location of both platforms, to human twins: "Communication is possible without explaining everything."

"Radar-samverkan," the Ericsson paper suggests, equips the formation with a super-radar of extraordinary capabilities. The PS-05/A can operate in passive mode, as a sensitive receiver with high directional accuracy (due to its large antenna). Two PS-05/As can exchange information by datalink and locate the target by triangulation. The target's signals will often identify it as well.

The datalink results in better tracking. Usually, three plots (echoes) are needed to track a target in track-while-scan mode. The datalink allows the radars to share plots, not just tracks, so even if none of the aircraft in a formation gets enough plots on its own to track the target, they may do so collectively.

Each radar plot includes Doppler velocity, which provides the individual aircraft with range-rate data. However, this data on its own does not yield the velocity of the target. Using the TIDLS, two fighters can take simultaneous range-rate readings and thereby determine the target's track instantly, reducing the need for radar transmission.
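And the range-rate point works the same way: one Doppler reading only pins down the component of the target's velocity along your own line of sight, but two readings from different directions give two equations for the two unknowns. A toy sketch of my own, ignoring the fighters' own motion, which in reality would have to be compensated first:

```python
import math

def unit_vector(frm, to):
    dx, dy = to[0] - frm[0], to[1] - frm[1]
    r = math.hypot(dx, dy)
    return dx / r, dy / r

def target_velocity(p1, rdot1, p2, rdot2, target_pos):
    """Recover the 2-D target velocity (vx, vy) from two simultaneous
    range-rate measurements taken from known positions p1 and p2, given
    the target position (e.g. from the triangulation sketch above).
    Each measurement only constrains u_i . v = rdot_i."""
    u1 = unit_vector(p1, target_pos)
    u2 = unit_vector(p2, target_pos)
    det = u1[0] * u2[1] - u1[1] * u2[0]   # ~0 means the lines of sight are nearly parallel
    vx = (rdot1 * u2[1] - rdot2 * u1[1]) / det
    vy = (u1[0] * rdot2 - u2[0] * rdot1) / det
    return vx, vy

# Target at (100, 60) km moving at (-0.2, 0.05) km/s; each fighter alone
# sees only the projection of that velocity onto its own line of sight.
p1, p2, tgt = (0.0, 0.0), (40.0, 0.0), (100.0, 60.0)
vx, vy = -0.2, 0.05
u1, u2 = unit_vector(p1, tgt), unit_vector(p2, tgt)
rdot1 = u1[0] * vx + u1[1] * vy
rdot2 = u2[0] * vx + u2[1] * vy
print(target_velocity(p1, rdot1, p2, rdot2, tgt))   # -> (-0.2, 0.05)
```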

In ECM applications, one fighter can search, while the wingman simultaneously focuses jamming on the same target, using the radar. This makes it very difficult for the target to intercept or jam the radar that is tracking him. Another anti-jamming technique is for all four radars to illuminate the same target simultaneously at different frequencies.

Our Swedish data-link updates every second (or faster :), compared with Link 16's every twelfth second. This makes it possible for us to fly 'radar silent' and even shoot missiles without using our own radar, and the data-link is able to steer you in, in every detail (close control), through its data commands. Which means that Gripen will be very operational even with its radio totally jammed. The NATO variant, Link 16, can, if I'm correct, open up to four(?) 'timeslots/channels', and if you place them correctly in time, that gives you an update every third second. (But we can also do that kind of thing, and as our systems each update every second by themselves (or faster :), you might wonder how much info we would be able to transmit doing it that 'NATO' way, opening new 'timeslots'. Not that I know, of course, just guessing here :)
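Just to spell out the timeslot arithmetic I'm guessing at above (my figures, not official ones): if a member gets n evenly spaced slots in a 12-second frame, its update interval is simply 12/n seconds.

```python
FRAME_SECONDS = 12.0   # assumed frame length, from the guess above

def update_interval(slots_per_frame):
    """Seconds between updates for a member holding that many
    evenly spaced slots in each frame."""
    return FRAME_SECONDS / slots_per_frame

for slots in (1, 2, 4, 12):
    print(f"{slots:2d} slot(s) per frame -> update every {update_interval(slots):4.1f} s")
```

So four well-placed slots gets you to every three seconds, which is still a long way from a once-a-second (or faster) link.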

Our system has the ability to use AWACS and satellites and 'peer2peer'. It seems to me that Link 16 is first and foremost a 'centralised' system, now also trying to incorporate some of the Swedish 'ideas'. As for what is best in a battle situation? I prefer the one with the most options myself, and that's not Link 16. And it's not only Gripen using our system, it's used in all types of military vehicles; that's why it is so redundant. And that's why we will still have a 3-D sphere of information even when all the AWACS are down.

And this was the direction we were heading in, as I see it. And maybe had even reached; it's still secret, so the info I have here is what the net can offer :) Remember, our system was peer to peer.

A real radar.. (http://www.physorg.com/news151078629.html)


Btw: I have an example of what it might cost to rely on others' 'benevolence' for your nation's defence.

"The author was involved in the import of many software based
weapon systems while in Armscor in his youth. In each case
South African software engineers controlled the software and
made sure of the algorithm integrity. In each and every case
the imported software was found to be ”incompetent”, but
because of DoD policy in those days, the problem and above
equation was recognised and South Africa kept control of the
system software.

In the middle 90’s the author’s company sold a radar warning
simulator to a European NATO member for their F16s.
During the tests it was conclusively established that the F16
electronic warfare system was blind over half its intended
frequency spectrum. It was operationally useless. When
these aircraft were required for use in Bosnia, a US approved
operation, the USAF issued “combat software” with the “latest
upgrades”. After the deployment, the upgraded software was
erased. This NATO country could then only use its aircraft in
US approved operations and had in fact, no sovereignty."

From South African Army Journal 2008 issue 2.

-------End----------------------

TADIL J is a secure, high-capacity, jam-resistant, nodeless data link which uses the Joint Tactical Information Distribution System (JTIDS) transmission characteristics and the protocols, conventions, and fixed-length message formats defined by MIL-STD-6016.

NATO’s equivalent is Link16



"The US system uses a near-real time transmission method whereby data is collected into packets, known as Demand Assigned Multiple Access and it operates via UHF Satcom. This compares with the UK Satellite Tactical Data Link (STDL) which uses real rime Time Division Multiple Access (TDMA) as for Link 16 and transmits at Super High Frequency (SHF) Satcom." with a "Nodeless multi-netting support for up to 127 nets (but practical limit is stated to be 20)."

"With the deployment of S-TADIL J , operational units will have three possible data link paths that can be used to support multi-ship data link coordinated operations. S-TADIL J supports the same levels of surveillance and weapon coordination data exchange provided by Link-11 and Link-16. The TADIL J message standard is implemented on S-TADIL J to provide for the same level of information content as Link-16."

And "Utilising time division architecture, Link 16 JUs have pre-assigned sets of multiple time slots in which to transmit their data and to receive data from other units. The time slots of a net can be parcelled out to one or more Network Participation Group (NPG), which are defined by operational function and by the types of messages that will be transmitted in it"

'It's probably time division multiplexing (TDM), not token ring – which depends more on the physical topology/architecture.' I haven't been able to find any transmission speed, though, so there I will trust what I heard from our own tests; according to those, the transmission rate and the ability to handle connections are still second to our Swedish solution. Never mind, we will throw it away anyway, it seems, just so we too can play Cowboys and Indians with those 'big boys' in NATO.

But we need to distinguish between them. Link 16, Link 11 etc. come under TADIL J, which is an acronym for a 'high speed' data-linking net, but it's definitely not our Swedish data link. So in a way we are very much comparing apples with oranges. That our system is perfectly adapted to our needs, and is our equivalent of a centrally steered AWACS defence, doesn't seem to stop our military and political geniuses from exchanging our fifteen-year-'new' system for Link 16 with its, from my perspective, more limited possibilities, as it's adapted to a more centralised fighting mode.

What that will have to do with defending my Sweden in case of a sudden attack beats me.
The best trained forces will be outside of Sweden.
The code keys will be in the States ::))
Fook**g brilliant.

Let me guess. A voice from the other side perhaps?
Whispering "No war in our time" ??



...        And here we have one 'version' of it       ...
(according to its admirers, but the only one I could find?)         

"Tactical Digital Information Link-J/NATO LINK-16 (TADIL-J)

Primary Purpose: Using the Joint Tactical Information Distribution System (JTIDS) equipment provides real-time exchange of tactical digital information between ...major command and control systems... for the United States, North Atlantic Treaty Organization (NATO), and allies. Pseudo random frequency hopping on 51 frequencies, encrypted. Frequency hopping rate is one pulse per 13 microseconds. Sub-Functions: Joint Tactical Information Distribution System (JTIDS).

Equipment Requirements:

Three groups of terminals: JTIDS Class 1 - First generation, single network; JTIDS Class 2 - Second generation, multiple network capability (AN/VRC-107 V 5-7); Multifunction Information Distribution System Low Volume Terminals (MIDS-LVT); Command and Control Processor (C2P)- Navy.

Connectivity Requirements: JTIDS frequency hopped/spread spectrum system requires at least 150 MHz bandwidth (data rate-28.8 Kbps to 238.1 Kbps). UHF (L Band).

Crypto Requirements: KOI-18 KGV-8B, Secure Data Units AN/CYZ-10, Data Transfer Device

Normal Location: Major command and control facilities; surveillance platforms; fighter and intercept aircraft; and air defense units.

Information Managed: Common Operating Picture (COP) and Common Tactical Picture (CTP).

Products Created: Interim JTIDS Message Specifications (IJMS); J Series messages.

Lead Service/Contractor: USAF.

Current Fielding Status: Major theater assets have or are being fielded. Established as the joint standard for future system and platform development. Other TADILs being consolidated into the TADIL-J format. TADIL-J being incorporated into the GCCS COP.

Known Problems: Dispersed (beyond 300 NM) theater-wide operations requires relay capability for extended line of sight (LOS) maneuvers. Program managers developing satellite based TADIL-J capabilities to eliminate this limitation. Additionally, limited distribution of the assets to conduct these operations.

DIICOE Compliance Rating: Not rated."
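Just to get a feel for the numbers in that spec (a toy illustration only; the real hop pattern is crypto-derived, nothing like a plain random number generator): 51 frequencies and one pulse every 13 microseconds works out to roughly 77,000 hops per second.

```python
import random

NUM_CHANNELS = 51          # "pseudo random frequency hopping on 51 frequencies"
PULSE_INTERVAL_S = 13e-6   # "one pulse per 13 microseconds"

hops_per_second = 1.0 / PULSE_INTERVAL_S
print(f"about {hops_per_second:,.0f} hops per second")   # ~76,923

# A stand-in hop sequence for one millisecond of transmission
# (purely illustrative; the real sequence comes from the crypto keys):
rng = random.Random(0)
sequence = [rng.randrange(NUM_CHANNELS) for _ in range(int(1e-3 / PULSE_INTERVAL_S))]
print(len(sequence), "pulses in 1 ms, first ten channels:", sequence[:10])
```

Hopping that fast across that many channels is part of why the link is described as jam-resistant.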
Title: How do we hear in 3D when our ears listen in stereo
Post by: Tommo on 13/02/2010 15:38:38
Yoron, thanks for the info. You seem to have a lot of info on military data links. The radar you refer to sounds like a smart piece of kit. Does Sweden export it?

Title: How do we hear in 3D when our ears listen in stereo
Post by: yor_on on 13/02/2010 17:27:25
He**! Why not? We won't use it ourselves, will we :)

Instead, in their infinite wisdom, those that 'know' have decided that we will trust in NATO's ideas of 'global battle', a.k.a. the Death Star, with our very own Vader(s) looking out from the space fortress, commandeering and coordinating us mindless drones. Ever seen that theme before :)

To me it's a disgrace, but some really needed to feel their 'importance' grow I guess.

It has a nasty way of lifting its head when people start to think that 'this is the way it is and always will be'. The radar is incredibly sweet, and combined with smart aircraft like our Gripen, able to land almost anywhere and hide, as well as with all land forces of importance, you will have a movable radar cover even without AWACS. And tracking can be done by all; then you just need to have placed the Gripen intelligently. They go up in 'silent mode', using 'samverkan' with the other radars through their datalink in a 'silent attack'. Never using their own radar, they should be able to triangulate the target(s), then flash it at some optimal time after releasing their missiles. Can you see what I mean :) They can release them before 'illuminating'.

Those are just guesses of course.

Well, I expect them to be buyable, especially if we're not 'allowed' to use it ourselves. All technology has a 'best before' date, sort of. But probably it is smartest to buy a 'package' with those vehicles one needs; after all, we have tested it for a long time and, as far as I know, we don't deliver half of the 'capacity' anyway.