The HSCA Acoustical Evidence: Proof of a Second Gunman in the JFK Assassination


Offline Joe Elliott

  • Hero Member
  • *****
  • Posts: 1845

This won't work. The sound waves would have been coming from high above McClain and at a lateral angle from his right. At most, some of the waves might have encountered a portion of the top part of his right shoulder, resulting in minimal distortion, if any.

It appears it will work. McLain’s torso should block the direct path to the microphone.


Basic assumptions:
•   Microphone is on level with the handle bars, to be easy to reach.
•   Microphone is just forward of where the hands hold the handlebars, about 18 to 24 inches in front of the torso.

First, let’s talk about the horizontal angles.

From previous work, I made a program that calculated the angles and distances for the limousine. At the time the fourth of five “shots” was fired, the motorcycle was in the fourth circle. This works out to be about the position of JFK at Zapruder frame z174. At that moment, the horizontal angle back to the TSBD sniper’s nest was 20 degrees. The microphone would be, I think, about 18 to 24 inches in front of his torso. Coming in at a 20-degree angle, the center of the “acoustic shadow” would be displaced to the left by:

Between:
          18 * tan (20) = 6.6 inches
 And:
          24 * tan (20) = 8.7 inches
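The horizontal-offset arithmetic above can be sketched in a few lines. The 20-degree angle and the 18-to-24-inch distances are this post's assumptions, not measured values:

```python
import math

# Sound arrives 20 degrees off the motorcycle's axis (assumed);
# microphone sits 18 to 24 inches in front of the torso (assumed).
angle_deg = 20
for dist in (18, 24):
    offset = dist * math.tan(math.radians(angle_deg))
    print(f"{dist} in forward -> shadow center shifted {offset:.1f} in to the left")
```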

Where was the microphone? In the center behind the windshield? No.

http://mcadams.posc.mu.edu/russ/jfkinfo2/jfk5/hscashot.htm

Quote
Mr. ASCHKENASY - However, they will not look the same, because at the locations where they were picked up the motorcycle was in different orientation relative to the sound source, and as was discussed earlier, the windshield has an effect, the position of the microphone, which we suspect was on the left side of the motorcycle, those all would affect the quality, if I can call it that you know, the shape of the received muzzle blast.

So the acoustic data, as Weiss and Aschkenasy read it, seemed to indicate the motorcycle microphone was on the left side of the motorcycle.

So, about a 6 to 9 inch offset of the acoustic shadow. Officer McLain was pretty big, I understand, so let’s say his torso was 22 inches wide, a bit of a conservative guess. With his “acoustic shadow” offset about 8 inches, instead of covering everything between -11 and +11 inches (0 being the center), it would cover more like -18 to +4 inches. I doubt the microphone would be outside his hands; the farthest left it would be is around -15 inches. So I think that, horizontally, the microphone would very likely be in that “acoustic shadow”.


Of course, one could get around this by saying Weiss and Aschkenasy were mistaken, and the microphone was on the right. But this would discredit them. They couldn’t tell from the acoustic data if the microphone was near the left side of the motorcycle or the right?


Well, what about the vertical angle? At z174 the angle relative to the street would have been about 23 degrees. With the microphone at the same level as his hands, the sound wave would have to come down about 15 inches to reach it. How far down would it come?

Between:
          18 * tan (23) = 7.6 inches
 And:
          24 * tan (23) = 10.2 inches

The line grazing the top of Officer McLain’s torso, 18 to 24 inches in front of him, would only come down about 7 to 10 inches, while the microphone sits about 15 inches below the grazing point. The direct line from the sniper’s nest would therefore pass at least 5 inches above the microphone, leaving it in the shadow. And this is only taking into account the torso, not the head and helmet.
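A quick sketch of this vertical geometry, using the post's assumed figures (23-degree descent, microphone 18 to 24 inches forward of the torso and roughly 15 inches below the grazing point):

```python
import math

# All figures are the post's assumptions, not measured values.
angle_deg = 23   # descent angle from the sniper's nest at z174
mic_drop = 15    # inches: microphone below the line grazing the torso top
for dist in (18, 24):
    line_drop = dist * math.tan(math.radians(angle_deg))
    clearance = mic_drop - line_drop
    print(f"at {dist} in: grazing line drops {line_drop:.1f} in; "
          f"mic sits {clearance:.1f} in below it (shadowed)")
```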



I think it likely that the microphone would be in the acoustic shadow of Officer McLain’s torso.



But the HSCA acoustical experts tested for windshield distortion and found that it significantly weakened the recorded sounds, by 3-6 decibels, when the windshield was between the rifle and the microphone (8 HSCA 31). They also found that shots received from the sides and rear were not affected by the windshield:

Just the fact that the first three shots all contain windshield distortion is an amazing correlation. The fact that the fourth shot shows no such distortion is equally remarkable. The four correlations combined constitute powerful evidence that the dictabelt tape recorded assassination gunfire.

Not that remarkable, since it is likely they should have found distortion in the fifth shot, from the TSBD, due to Officer McLain’s torso. But let’s say they are right. How remarkable is this? One out of five.

Let’s not forget their other scores. They said the shots came from the TSBD, TSBD, TSBD, KNOLL, TSBD. Of the 15 impulse pairs with a strong correlation, how many fit this scenario? First, we have to subtract 4, because the determination of whether each shot came from the TSBD or the KNOLL was based on whichever correlation was strongest, so those 4 are going to be automatically “right”. Of the other 11 weaker correlations, only 7 of 11 were correct. Not too far from 50-50. Not too impressive.
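As a rough way to gauge how unimpressive 7 of 11 is, one can compute the chance of doing at least that well by pure coin-flipping. This framing is mine, not BBN's, and treats each of the 11 calls as an independent 50-50 guess:

```python
from math import comb

# Tail probability of getting at least k of n correct under a fair-coin model.
n, k = 11, 7
p_at_least_k = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(>= {k} of {n} correct by chance) = {p_at_least_k:.3f}")
```

About a 27 percent chance, i.e., nothing a guesser couldn't manage roughly one time in four.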

For the target: comparing where the target should have been, based on the timing, with where the acoustic evidence put it, they were right 4 out of 15 times, about 1 in 4, exactly what one would expect from random results with only 4 targets.

Location of the limousine? Harder to judge, because I can’t find a good map showing the microphones for the second group ( 2 ( 1 ) through 2 ( 12 ) ). The fit is fairly good but far from perfect. Yes, yes, I know, the correlation with the microphone locations is a “remarkable” match for a motorcycle moving steadily along at 11 mph. A remarkable match, provided one first tosses out all the “false alarms” and cherry-picks the best microphone locations from the 15 correlations found.

Of course, if I cherry-pick the microphone locations from the same data, the results are not so remarkably good.

Shot 1: Motorcycle at 2 (5 )
Shot 2: Motorcycle at 3 ( 5 ), motorcycle has shot forward at high speed and is near where it should be to record the shot at z313
Shot 3: Motorcycle at 2 ( 11 ), motorcycle has reversed course and headed back toward Houston Street
Shot 4: Motorcycle at 3 ( 8 ), motorcycle has reversed course again and is back to following the limousine at high speed.
Shot 5: Motorcycle at 3 ( 5 ), motorcycle has reversed course again and is heading back toward Houston Street a second time.

You see, the remarkable correlation of the data with the expected motorcycle speed depends heavily on which correlations are chosen as “good” and which are chosen as “false alarms”.


And we don’t know, but this may have been assisted by a partial search for matches: not searching for matches where they couldn’t be, like a match for an early shot way down Elm Street, or a match for a later shot back on Houston Street. We still have no definitive statement from BBN that all 2,592 possible combinations were hand-checked in the 10 days available.


Well, hold on now. Let's be fair and accurate about this. Dr. Chambers was correct in saying that the HSCA experts found windshield distortions in the shots that should have contained them and did not find such distortions in the shot that should not have contained them. That, after all, is the main point. He simply misidentified which shot does not contain windshield distortion. It's an error, to be sure, but it does not affect the main point. So, it's not like this is some horrendous gaffe.

But, yes, even very good experts with exceptional qualifications make a mistake every now and then.

Dr. Chambers' chapter on the acoustical evidence is far superior to Sturdivan's errant and misleading chapter on the subject.

No. Dr. Chambers’ gaffe is pretty serious.

He wrote a book about the assassination. He wrote a chapter about the acoustics. He gave us the impression that he had looked independently into the evidence and found that Weiss and Aschkenasy had analyzed it correctly. But we know he did not do this.

•   He did not look at the waveforms for the “grassy knoll” shot and find that there was, indeed, no distorted waveform there. We know that because Weiss and Aschkenasy did find this waveform to be distorted.

•   He did not check the BBN map, as I did, to make certain that Weiss and Aschkenasy were correct that the motorcycle position would not cause the windshield to distort the sound from the grassy knoll shot.

•   He didn’t even check the HSCA reports to confirm that it was, indeed, the grassy knoll shot that would not cause windshield distortion, according to Weiss and Aschkenasy.

Oops.

Instead, it looks like Dr. Chambers just went with his impression that Weiss and Aschkenasy said the grassy knoll shot should not contain windshield distortion, and indeed did not, and rubber-stamped the claim he thought they had made: “yes, they got it right, alright”. That is what it looks like to me. Not impressive.

Offline Michael T. Griffith

  • Hero Member
  • *****
  • Posts: 1529
    • JFK Assassination Website
In order to fully understand and appreciate the HSCA acoustical evidence, one must understand the phases and nature of the acoustical analysis.

* When BBN received the Dallas police dictabelt recording, they understood that “if gunfire had been recorded on Channel 1, the analysis of that tape could be expected to reveal patterns of transient waveforms that would be generally characteristic of the shock wave produced by the bullet, of the loud and impulsive noise of the muzzle blast, and of echoes of each” (8 HSCA 42). They also knew that “it could further be expected that the major components of the shock wave would appear in the 1-kHz to 3.2-kHz frequency band” (8 HSCA 42).

Thus, the BBN experts knew that, if they were dealing with impulse patterns caused by gunfire, the sounds would come in a specific order: the shock wave (or N-wave) first, followed by the muzzle blast, followed by echoes of the shock wave and the muzzle blast. They also knew that the presence of these patterns would depend on whether the microphone was in position to record them.

* In order to determine if gunshot waveform patterns were present on the police tape, the BBN acoustical scientists first had to apply two filters to the tape to filter out sounds that were outside the 1.0 to 3.2-kHz frequency band and to filter out background noise, especially the engine noise, that could mask the presence of gunshot-like impulse patterns.

* BBN applied the two filters to the entire 5-minute police tape (8 HSCA 42).
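The band-pass step described here can be sketched with a simple FFT filter that keeps only the 1.0-3.2 kHz band where the shock-wave energy was expected. The sample rate and test signal below are invented for illustration; this is not BBN's actual procedure, just the general idea:

```python
import numpy as np

# Assumed sample rate and a made-up signal: one in-band tone (2 kHz)
# plus low-frequency engine-like rumble (200 Hz).
fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
sig = np.sin(2 * np.pi * 2000 * t) + np.sin(2 * np.pi * 200 * t)

# Brick-wall band-pass: zero every frequency bin outside 1.0-3.2 kHz.
spectrum = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(len(sig), 1 / fs)
spectrum[(freqs < 1000) | (freqs > 3200)] = 0
filtered = np.fft.irfft(spectrum, n=len(sig))

# The 200 Hz rumble is removed; the 2 kHz component survives.
print(f"RMS before: {np.sqrt(np.mean(sig**2)):.2f}, "
      f"after: {np.sqrt(np.mean(filtered**2)):.2f}")
```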

* BBN then took the recorded outputs from both filters for the entire 5-minute tape and converted the sounds into images, such as oscillograms and spectrograms, so that they could analyze the impulse patterns in minute detail. [example oscillogram image in original post]

* The BBN scientists then analyzed the peaks and lines of all the impulses on the dictabelt recording. They did not just examine this or that part of the graphically displayed tape, but the entire tape.

* Also, at this point, the BBN scientists did not know if any of the impulse patterns would pass the initial five gunfire screening tests, and they had no idea which motorcycle may or may not have recorded gunfire in Dealey Plaza. They were simply looking for any and all patterns that resembled waveforms with the general characteristics of gunshots.

Keep in mind that they had no test-firing data from Dealey Plaza yet, because the test firing had not yet been conducted. Also keep in mind that they had not yet even subjected any of the impulse patterns to the five screening tests. 

* When the BBN scientists analyzed the graphical representations of all of the dictabelt tape’s impulse patterns, they found six patterns that they believed should be subjected to the five screening tests.

* BBN then applied the five screening tests to the six impulse patterns. If a pattern failed even one of the screening tests, it was disqualified from further analysis. Five of the six impulse patterns passed the screening tests.

The one pattern that did not pass failed three of the five screening tests: it failed the test for duration, the test for amplitude, and the test for shape. The test for shape examined whether the impulses within the pattern resembled the general characteristics of a shock wave and a muzzle blast.

Until today, I believed that the failed impulse pattern—the 4-second pattern discussed in previous replies--failed two of the five screening tests, but a closer reading of the BBN report clearly shows that it failed three of them. The BBN report notes that the 4-second impulse pattern was not only too short and of insufficient amplitude, but that it lacked “shapes similar to the expected characteristics of a shock wave and of a muzzle blast,” whereas the five other impulse patterns had those characteristics (8 HSCA 43).

This was the end of the preliminary analysis.

* At this point in time, the BBN scientists had no information on any motorcycle’s location in Dealey Plaza; they had no microphone-location data from the test firing to determine any possible motorcycle positions; and they did not even know if any of the five passing impulse patterns would pass the far more rigorous analyses based on the test-firing data.

* After studying the results of the screening tests in the preliminary analysis, the BBN scientists contacted the HSCA and advised the committee that they needed to conduct a test firing in Dealey Plaza before they could do further analysis of the dictabelt recording. The HSCA then asked two acoustical experts from Queens College, Mark Weiss and Ernest Aschkenasy, to review the BBN analysis and the BBN test-firing plan. Weiss and Aschkenasy found the BBN analysis to be accurate and the reconstruction plan to be sound.

* The Dealey Plaza test firing was then conducted. Shots were fired from two locations: the sixth-floor Texas School Book Depository window that Oswald allegedly used and a spot on the grassy knoll. 36 microphones were placed on Houston Street and on Elm Street to record the sounds of the shots.

* The impulse patterns from the test-firing shots were then compared with the dictabelt impulse patterns that passed the gunfire screening tests. All five of the dictabelt impulse patterns were found to match patterns of some of the test-firing shots. The matches were based on the echo patterns of the respective impulse patterns. The odds that these correlations were due to chance are astronomically low.

We can suppose that one or perhaps two random noise patterns might, by rather remarkable chance, match the echo patterns of shots fired in Dealey Plaza within a 10-second period. But the idea that random noise patterns would match the echo patterns of five shots fired in Dealey Plaza within a 10-second period seems impossibly far-fetched.

And note that the acoustical scientists had not yet attempted to determine if the matches occurred in the correct topographic (locational) order in relation to the movement of the motorcycle. So the suggestion that the BBN scientists "only checked for matches where they anticipated where the motorcycle might be" is not only erroneous but displays a misunderstanding of the nature and timeline of the acoustical analysis.

* When the HSCA acoustical scientists studied the correlations further, they made two surprising discoveries. They found that when the locations of the microphones that recorded the matches were plotted on a graph showing time and distance, the microphones were grouped around a line on the graph that matched the known average speed of JFK’s limousine on Elm Street.

More important, they found that the matches occurred in the correct topographic (locational) order. The first dictabelt gunshot impulse pattern matched a test shot recorded on a microphone on Houston Street, close to the intersection with Elm Street. The next dictabelt gunshot impulse pattern matched a test shot recorded at the next microphone farther north on Houston Street. The third dictabelt gunshot impulse pattern matched a test shot recorded on a microphone in the intersection of Houston and Elm. The fourth dictabelt gunshot impulse pattern matched a test shot recorded on a microphone farther down on Elm Street. And the fifth dictabelt gunshot impulse pattern matched a test shot recorded on the next microphone on Elm Street.

The odds that these stunning topographic correlations could be coincidence are 125 to 1 against. Why? Because there are 125 ways that any five events can be sequenced, e.g., 5-2-4-1-2, 2-1-4-5-3, 4-5-3-1-2, 3-5-4-2-1, 3-2-4-1-5, etc., etc. Only one of those 125 ways is 1-2-3-4-5. In other words, the odds that these locational correlations are the result of chance are 124 out of 125, or 99.20%, against.

* The HSCA acoustical experts found four more impressive correlations.

They found that each dictabelt gunshot impulse pattern that was recorded when the motorcycle was in position to record the shot’s N-wave does in fact include an N-wave.

They found that in each shot with an N-wave, the N-wave comes in the correct order and interval in relation to the muzzle blast that comes behind it. When a rifle shot is fired, the first sound to reach a properly placed microphone will be the N-wave, followed by the muzzle blast, followed by the echoes of the N-wave and of the muzzle blast. N-waves come 10-30 milliseconds before the muzzle blast, depending on the rifle’s muzzle velocity and other factors. The N-wave in the dictabelt grassy knoll shot comes 24 milliseconds before the muzzle blast.

They found that each shot with an N-wave includes echoes of the N-wave and of the muzzle blast, as it should, and that the echoes come in the correct order and interval in relation to the N-wave and the muzzle blast.

Finally, the HSCA experts found that windshield distortion occurs in the dictabelt gunshots where it should occur, and does not occur in the one gunshot where it could not have occurred. The HSCA acoustical experts tested for windshield distortion because they knew it could substantially affect how the sounds of gunshots were recorded. They found that the dictabelt impulse patterns that contain windshield-distortion characteristics were recorded when the motorcycle’s windshield was between the shooter and the microphone. Conversely, they found no signs of windshield distortion in the one dictabelt impulse pattern that was recorded when the windshield was not between the shooter and the microphone. These are stunning correlations.

The HSCA report’s comments on the windshield-distortion correlations deserve another reading:

Quote
Weiss and Aschkenasy also considered the distortion that a windshield might cause to the sound impulses received by a motorcycle microphone. They reasoned that the noise from the initial muzzle blast of a shot would be somewhat muted on the tape if it traveled through the windshield to the microphone. Test firings conducted under the auspices of the New York City Police Department confirmed this hypothesis. Further, an examination of the dispatch tape reflected similar distortions on shots one, two, and three, when the indicated positions of the motorcycle would have placed the windshield between the shooter and the microphone. On shot four, Weiss and Aschkenasy found no such distortion.(55) The analysts' ability to predict the effect of the windshield on the impulses found on the dispatch tape, and having their predictions confirmed by the tape, indicated further that the microphone was mounted on a motorcycle in Dealey Plaza and that it had transmitted the sounds of the shots fired during the assassination. (HSCA report, pp. 74-75)
« Last Edit: October 01, 2020, 02:31:00 AM by Michael T. Griffith »

Offline Joe Elliott

  • Hero Member
  • *****
  • Posts: 1845

More important, they found that the matches occurred in the correct topographic (locational) order. The first dictabelt gunshot impulse pattern matched a test shot recorded on a microphone on Houston Street, close to the intersection with Elm Street. The next dictabelt gunshot impulse pattern matched a test shot recorded at the next microphone farther north on Houston Street. The third dictabelt gunshot impulse pattern matched a test shot recorded on a microphone in the intersection of Houston and Elm. The fourth dictabelt gunshot impulse pattern matched a test shot recorded on a microphone farther down on Elm Street. And the fifth dictabelt gunshot impulse pattern matched a test shot recorded on the next microphone on Elm Street.

The odds that these stunning topographic correlations could be coincidence are 125 to 1 against. Why? Because there are 125 ways that any five events can be sequenced, e.g., 5-2-4-1-2, 2-1-4-5-3, 4-5-3-1-2, 3-5-4-2-1, 3-2-4-1-5, etc., etc. Only one of those 125 ways is 1-2-3-4-5. In other words, the odds that these locational correlations are not the result of chance are 124 out of 125, or 99.20%.

Mr. Griffith corrected me on proper English, on when to use “it’s” and “its”. Something I should have learned in high school. Let me correct him on a little math on something he should have learned in high school.

What are the odds of five things ending up in the correct order? One out of 125? How did he get that? Five to the third power?

No, the number of permutations of a set with “n” members is n-factorial. This is:  n * (n-1) * (n-2) * . . . 2 * 1. So, the number of ways to order a set with 5 members is 120.
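The counting is easy to verify, both by the formula and by brute-force enumeration:

```python
import math
from itertools import permutations

# Number of orderings of 5 items is 5! = 120, not 125.
print(math.factorial(5))
# Counted directly by enumerating every ordering:
print(sum(1 for _ in permutations(range(5))))
```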


Now, what about the remarkable order of the apparent positions of the motorcycle over time, how it appears to move up Houston and then down Elm Street at a steady pace? Well, it turns out this depends heavily on cherry-picking the data, as BBN did. For instance, BBN found correlations for the first “shot” at microphones 2 ( 5 ), 2 ( 5 ), 2 ( 6 ) and 2 ( 6 ). Quite good. For the second “shot”, at microphones 2 ( 6 ), 2 ( 6 ), 2 ( 10 ) and 3 ( 5 ). Wildly bad: correlations spread along a stretch of about 84 feet of Houston and Elm. The rest were not as bad.

If you make careful selections of which correlations are considered “good”, then for the 5 shots (including Dr. Thomas’s fifth shot) you get:

2 ( 5 ), 2 ( 6 ), 2 ( 11 ), 3 ( 4 ) and 3 ( 5 ). If the first microphone at 1 ( 1 ) is considered at distance 0, and we assume a distance of 12 feet between each microphone, we get distances along this track of:

168, 180, 204, 252, 264 feet.

Note: These distances are very rough. The second section of microphones, 2 ( 1 ) through 2 ( 12 ), covers a much shorter stretch than the other two, because these microphones were not arranged in a straight line but bunched up over a short distance around the bend from Houston to Elm, where the street was widest.

Ok. This is quite good. We get a nice steady pace of around 11 mph. Sounds quite plausible.


But what if we make a different selection of which correlations are considered good? In that case we can get:

2 ( 5 ), 3 ( 5 ), 2 ( 11 ), 3 ( 8 ), 3 ( 5 ). This gives us the distances along this track for the 5 shots of:

168, 264, 204, 300, 264 feet.

This gives us a much more erratic pattern. The motorcycle would initially have sped forward at tremendous speed between shots 1 and 2, then reversed direction back toward Houston to record the third shot, then reversed again and ridden down Elm Street to record the fourth shot, before reversing a third time and heading back toward Houston to record the fifth shot.

You see, a lot depends on which correlations you decide you like. And which ones you decide to reject as “false alarms”.
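The two selections above can be compared directly. The distances are the rough figures from this post, and the check simply counts how many shot-to-shot moves go backward:

```python
# Distances (ft) along the Houston-Elm microphone track, per this post.
cherry_picked = [168, 180, 204, 252, 264]   # the "good" correlations
alternative   = [168, 264, 204, 300, 264]   # a different, equally available pick

for name, track in (("cherry-picked", cherry_picked), ("alternative", alternative)):
    deltas = [b - a for a, b in zip(track, track[1:])]
    reversals = sum(1 for d in deltas if d < 0)
    print(f"{name}: deltas {deltas} ft, reversals: {reversals}")
```

The first selection moves steadily forward; the second implies two reversals of direction, which no one believes the motorcycle made.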


Ok, let’s do it one more time, this time not trying to make the data look good or bad. Whatever correlations we get, we accept their positions and take the average of all the correlations for each shot. In that case, we get distances of:

174, 207, 204, 280, 280 feet.

Well, not too bad. A couple of hiccups here and there: an apparent reversal between the second and third shots, and the motorcycle appears stopped between the fourth and fifth shots. But still, not too bad.


Well, why didn’t they do the obvious thing and base their position estimate of the motorcycle on the average of their data? Because their data seemed in many ways to be random.

Four different widely spaced targets were used. Of the 15 found correlations, only 4 corresponded with the location of the limousine at that time: a success rate of about 1 in 4, exactly what one would expect from random results with 4 targets.

For the source of the gunfire, the correlations were not consistent. Of the four shots with multiple correlations, three gave two different locations for the source of the fire, the TSBD and the Grassy Knoll. Only for the last shot, with three correlations, did all three agree on the same source, the TSBD. Of the 12 test shots, 8 (67 percent) came from the TSBD. Of the 15 found correlations, 12 (80 percent) came from the TSBD. Again, the results are close to what one would expect of random data.

Only with the location of the limousine could a case be made for good, non-random data, and only provided the data was cherry-picked. If that was done, then it could be argued, as BBN did, that the data supports a plausible, fairly constant 11 mph speed for the motorcycle with the stuck microphone.


But how did they make their case? Did they propose using the cherry-picked data, while also presenting the alternative of using all the data and taking the average location for the motorcycle’s position, and then argue for why the cherry-picked method is better? No. They never made any argument for why cherry-picked data is superior to the averages. They simply presented their cherry-picked data, in the form of a map of Dealey Plaza with four circles, nicely spaced, supporting a near-constant 11 mph speed for the motorcycle.



One can still argue: maybe there is not a 1 in 120 chance of getting such a correlation with a steady 11 mph progress, but the data is at least consistent with a motorcycle moving up Houston and down Elm in generally the right direction, which suggests the data is not random. But there is one more thing to consider, beyond cherry-picking which correlations to use, that could skew the data: deciding which parts of the raw data to search for comparisons in the first place. It would be natural to assume they searched all 2,592 possible combinations of the:

•   432 recordings from the 1978 tests, 36 microphones, each recording 12 test shots.
with:
•   The six suspect impulses from the 1963 Dictabelt recording.

But they had only 10 days to do all these comparisons, and the associated calculations, to find the strongest correlations. So it is plausible they only searched where a valid correlation could be. After finding one shot, likely the second, they could skip the hundreds of possible comparisons of early shots with the microphones on Elm Street, where they couldn’t be, and skip the comparisons of later shots with microphones along Houston Street and around the bend, where they couldn’t be either. This approach may have been dictated by the time constraint: 10 days between the shooting tests and the phone call to the HSCA reporting the 15 correlations.
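The arithmetic behind the 2,592 figure, with a purely hypothetical time budget attached. The 5-minutes-per-comparison rate is my assumption for illustration, not anything from the record:

```python
# 36 microphones x 12 test shots = 432 recordings from the 1978 tests;
# each compared against the 6 suspect impulses from the 1963 Dictabelt.
mics, test_shots, suspect_impulses = 36, 12, 6
recordings = mics * test_shots
combos = recordings * suspect_impulses
print(combos)

# Hypothetical throughput: how long would an exhaustive search take?
minutes_per_comparison = 5   # pure assumption
hours_per_day = 6            # the "hard-working day" figure used below
days = combos * minutes_per_comparison / 60 / hours_per_day
print(f"at {minutes_per_comparison} min each: {days:.0f} working days for one analyst")
```

On that (assumed) rate, one analyst alone could not finish in 10 days, which is why the question of how many comparisons were actually run matters.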

And the search may have been a good deal more focused than that. After finding one shot, say the second, they might search only the section consistent with 11 mph for the first shot. Upon finding random correlations that convinced them the motorcycle was indeed moving at about the same speed as the limousine, they could do the same with the third, fourth and fifth shots, producing data that matches an 11-mph motorcycle fairly well, even though the data is random noise.


BBN did all they could to make the case that their data was good. They cherry-picked their data and drew a map with 4 circles showing how consistent this cherry-picked data was with an 11-mph motorcycle. If there was more that could have been done, they would have done it.

They could have said: “It takes about ‘x’ minutes to make one comparison and the associated calculations. That rate can be maintained for 6 hours in a hard-working day. We had ‘y’ men on the job. After a couple of days, we could process ‘z’ combinations per day. By the eighth day, we had completed all 2,592 combinations.” That would help nail down the somewhat good correlation between the data and the location of the motorcycle. But there is no statement to this effect, no statement that unequivocally says all 2,592 comparisons were actually done and completed.

Combined with the randomness of the data, except with regard to the apparent speed of the motorcycle, I believe only a subset of the 2,592 comparisons was actually made.

Offline Jerry Freeman

  • Hero Member
  • *****
  • Posts: 3723
Yes, but my question stands...
That is [I guess] agreement that it was another 'remarkable coincidence'. To quote your previous statement...."What are the odds"?
Quote
And these communication breakdowns, a stuck microphone key, was hardly an unknown event and was always plaguing the police department.
  Really? I would like to see some documentation of this.
I believe this acoustical jive is a waste of web space so if you don't reply...no loss of skin.

Offline Joe Elliott

  • Hero Member
  • *****
  • Posts: 1845

That is [I guess] agreement that it was another 'remarkable coincidence'. To quote your previous statement...."What are the odds"?   Really? I would like to see some documentation of this.
I believe this acoustical jive is a waste of web space so if you don't reply...no loss of skin.

If there are no compelling reasons for the conspirators to shut down Channel 1 for about 5.5 minutes, and only about the first 3 minutes after the assassination, then, yes, I think this should be looked at as a coincidence.

Offline Joe Elliott

  • Hero Member
  • *****
  • Posts: 1845

Remember, BBN’s claim that they found gunshots on the recording does not rest on finding sound impulses; there are sound impulses throughout the recording. The claim rests solely on the correlations found between their 1978 firing tests and the 1963 Dictabelt recording, which means it rests on the strength of Exhibit F-367, a table of all found correlations. Do these correlations seem good? Do they contradict themselves? Do they make sense?



How to Analyze Data. How to recognize random data.


Let’s say BBN analyzed all the found correlations and produced a table that ended up something like this:

********** False Table used only to make a point **********

Shot 1:          from-TSBD          fired-at-Target-1          correlation:9
Shot 1:          from-KNOLL          fired-at-Target-3          correlation:6
Shot 1:          from-TSBD          fired-at-Target-3          correlation:7

Shot 2:          from-TSBD          fired-at-Target-1          correlation:7
Shot 2:          from-KNOLL          fired-at-Target-2          correlation:9

We have correlations that contradict each other: correlations that match the first shot with a test shot from the TSBD and with one from the KNOLL. Does this mean the data is bad? That we may be looking at random data?

Not necessarily. It may mean we have set our correlation threshold too low, allowing us to find some correlations that are real, but also others that are mere flukes. Setting the threshold too low collects the good correlations but also the false ones. So the next thing to do is select a higher threshold, like 9, and see what that gives us:

********** False Table used only to make a point **********

Shot 1:          from-TSBD          fired-at-Target-1          correlation:9

Shot 2:          from-KNOLL          fired-at-Target-2          correlation:9

This is much better. We don’t have any correlations that contradict each other. If we had such a table for 4 shots, and they were consistent with BOTH the location of the microphone and the location of the target, then we may have good data. It would meet the minimum qualifications.
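The thresholding logic described above can be sketched in a few lines of code. This is purely an illustration using the hypothetical table from this post, not anything BBN actually ran:

```python
# Illustrative sketch (not BBN's code): filter candidate matches by a
# correlation threshold and flag shots matched to contradictory shooter
# locations. The data is the hypothetical table above (correlations x10).
matches = [
    ("Shot 1", "TSBD",  "Target 1", 9),
    ("Shot 1", "KNOLL", "Target 3", 6),
    ("Shot 1", "TSBD",  "Target 3", 7),
    ("Shot 2", "TSBD",  "Target 1", 7),
    ("Shot 2", "KNOLL", "Target 2", 9),
]

def survivors(matches, threshold):
    """Keep only matches at or above the correlation threshold."""
    return [m for m in matches if m[3] >= threshold]

def contradictions(matches):
    """Shots matched to more than one shooter location contradict themselves."""
    shooters = {}
    for shot, rifle, target, corr in matches:
        shooters.setdefault(shot, set()).add(rifle)
    return [shot for shot, locs in shooters.items() if len(locs) > 1]

print(contradictions(survivors(matches, 6)))  # ['Shot 1', 'Shot 2'] -- threshold too low
print(contradictions(survivors(matches, 9)))  # [] -- contradictions eliminated
```

Note that at the low threshold even the hypothetical second shot is contradictory (TSBD at 7, KNOLL at 9); only the higher threshold produces a clean table.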


For the real BBN data from Exhibit F-367, we have correlations that contradict each other. For 3 of the 5 shots, correlations are found both for a shot from the TSBD and for a shot from the Grassy Knoll. Clearly, we have the correlation threshold set too low. The only way to eliminate the contradictions is to raise the threshold from 0.6 to 0.8. If this is done, we get:

********** The real data, with the correlation threshold set at 0.8 **********

Test  Time of First   Zap. Frame  Zap. Frame  Mic. Array  Rifle     Target  Corr.
ID    Impulse (sec)   (BBN)       (Thomas)    (Channel)   Location          Coeff.
B     137.70          168         176         2 (5)       TSBD      *1      0.8     Strong
D     137.70          168         176         2 (6)       TSBD      3       0.8     Strong, Fluke
G     139.27          196         205         2 (6)       TSBD      *3      0.8     Strong
L     145.15          304         313         3 (4)       KNOLL     3       0.8     Strong
O     145.61          313         321         3 (5)       TSBD      3       0.8     Strong, Fluke
P     145.61          313         321         3 (6)       TSBD      4       0.8     Strong

First of all, it was necessary to eliminate the Dr. Thomas shot. The problem was not that it only had one correlation to support it; that was actually a good thing. The problem was that its correlation was too low, at 0.6. It should have had a stronger correlation of 0.8, like all our remaining strong candidates.

But even these ‘strong’ candidates have problems.

For the first shot, we have a contradiction. We found correlations for shots at both Target 1 and Target 3, targets that are over one hundred feet apart. And only a result at Target 1 makes sense for such an early shot. In any case, a correlation of 0.8 is still not high enough to eliminate all the random correlations.

Also, for the first shot, if one argues the waveform would be similar for both Target 1 and Target 3, why did the BBN test fire at different targets? Why not just use the same target if the target location makes little or no difference? Clearly, they thought it would make a difference. And why didn’t they get a strong correlation for Target 2 when it was tried? Strong correlations for Targets 1 and 3, but not for Target 2, which was in between? It looks like random results.


For the second shot, we have no contradictions. We found one correlation, which is good. Ideal, really. But the correlation found is for Target 3. It should have been found only for Target 1 or 2, or both, since the limousine would have been between those targets, not at Target 3. Why would they get a strong correlation for Target 3 but not for Target 1 or 2? It looks like we got another random match. A correlation of 0.8 is still not high enough to eliminate all the random correlations.

For the third shot, no problem. Only one correlation found, which is ideal. A shot at Target 3, which is good, right where the limousine should be. If only the other shots had no clearly random results.

For the fourth shot, we have contradictions. We found correlations with both Target 3 and Target 4, which are 240 feet apart. We should only find a correlation for Target 3. Again, a correlation of 0.8 is still not high enough to eliminate all the random correlations.


At this stage we should try setting the correlation threshold higher. But we can’t. The highest correlation in the data is 0.8. If we set it any higher, we have no more correlations.


This data looks like random data, particularly with the correlation threshold set at 0.6.

The Target locations seem random and do not track the real limousine location. Only 4 of the 15 correlations give good Target locations, which is no better than random luck.
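For scale on the "random luck" point: if each correlation’s target were drawn at random from the four test targets, about a quarter of them would land on the right target anyway. A quick check (an illustration under that equal-likelihood assumption, not a BBN calculation):

```python
# If each of the 15 correlations picked one of the 4 test targets at random,
# how many "correct" target hits would we expect by pure chance?
n_correlations = 15
n_targets = 4
expected_by_chance = n_correlations / n_targets
print(expected_by_chance)  # 3.75 -- so observing 4 good targets of 15 is about what chance gives
```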

The shooter locations contradict themselves, with 3 of the 5 shots matching test shots from both the TSBD and the Grassy Knoll. Of the 12 test shots fired in 1978, 67 percent came from the TSBD, and a similar 80 percent of the correlations were with TSBD shots. This again looks like it could be random data.

Only the motorcycle location seems good, if the data is cherry-picked. But this may be an artifact of the BBN only checking the areas where the motorcycle could be if travelling at a steady 11 mph. Naturally, any correlation found would then match a motorcycle moving at a steady 11 mph.

If the data looks random, that is a fatal flaw. Writing up a bunch of words like the BBN did in their report is like putting lipstick on a pig. The BBN case stands or falls with F-367. And it falls.


Mr. Griffith will try to draw the reader’s attention away from BBN’s Exhibit F-367. Or, if he does deal with it, he will only want to discuss the motorcycle-position correlation, which may be a result of which data the BBN decided to analyze. He won’t want to talk about the correlations dealing with the shooter location or the Target location, nor discuss why a correlation of only 0.6, the Dr. Thomas shot, should be taken seriously.

Let’s see if he can use Table F-367 to defend Table F-367.

Offline Michael T. Griffith

  • Hero Member
  • *****
  • Posts: 1529
    • JFK Assassination Website
In my previous reply, I explained the timeline and nature of the HSCA acoustical analysis. However, I purposely did not go into much detail on the grassy knoll shot, because that would have made the reply about 50% longer, and it was already long enough. When you understand how Weiss and Aschkenasy (WA) confirmed the grassy knoll shot, you more fully understand the powerful nature of the acoustical evidence.

* The grassy knoll shot is the 145.15 shot, the third of the four shots that the HSCA acknowledged on the dictabelt recording. The BBN scientists noted that this shot was aimed at JFK when the limousine was “near the limousine position seen in frame 313” (8 HSCA 6).

* The main reason that WA were asked to review the BBN analysis was that BBN said the grassy knoll shot had a certainty factor of only 50%. Everyone recognized that if the 145.15 shot was confirmed to be a shot from the grassy knoll, this would automatically prove that more than one gunman fired at JFK, since no one doubted that at least one shot was fired from behind, and since the sixth-floor gunman could not have fired a shot from the grassy knoll.

BBN said the three other shots had much higher certainty factors:

1st shot: 88% (based on three matches)
2nd shot: 88% (based on three matches)
4th shot: 75% (based on two matches)

* The third shot had a 50% certainty factor because it matched one test-firing shot from the grassy knoll but matched two test-firing shots from the TSBD. The grassy knoll match had a correlation coefficient of 0.8, a very high coefficient, whereas the two TSBD matches had a correlation coefficient of 0.7. Only five other matches among the 15 (really correlations) had a coefficient of 0.8. Based on this fact and on other factors, the BBN scientists concluded that the third shot came from the grassy knoll, but they knew that a more-refined analysis was needed to confirm this and to prove that the two TSBD matches were false matches.

The BBN scientists knew that the locations of the microphones in the test-firing caused false alarms/false matches because they did not know the exact location of the motorcycle in a given 18-foot interval:

Quote
The correlation detector produced several false alarms that could be identified as such. These false alarms are spurious matches caused by uncertainty of the exact motorcycle position with respect to the known positions of microphones used in the reconstruction test. . . . (8 HSCA 7)

The BBN scientists suspected that if they had used more microphones so that the microphones had been closer to each other, the two TSBD matches on the third shot would not have occurred.

* WA realized that the problem was that the microphones in the test firing were spaced 18 feet apart. The 18-foot spacing was the reason that BBN applied a 6-millisecond acceptance window when determining matches, since, as BBN explained, they could not be certain where the motorcycle was in a given 18-foot interval:

Quote
Because of the spacing of the microphones and lack of knowledge of the precise position of the motorcycle within the motorcade, it was judged that the motorcycle would, in the worst case, have been no more than 18 ft away from a microphone location. The most likely separations were accounted for . . . by the establishing of a ±6-msec acceptance window for matching echo and impulse patterns. (8 HSCA 97)

* As WA explained in their testimony, they did not need to do another test firing in Dealey Plaza to solve the microphone-spacing problem. They knew they could do a computerized sonar analysis that would duplicate the conditions of closer microphone spacing--1 foot apart instead of 18 feet apart--and the resulting echo patterns. They wrote a sonar analysis program that simulated an echo pattern for 180 locations surrounding the location of the test microphone that gave the best match for the third dictabelt impulse pattern, i.e., the grassy knoll shot.

WA had written similar sonar analysis programs for the U.S. Navy—that was one of the reasons the Acoustical Society of America recommended them to the HSCA.

* Significantly, the sonar analysis enabled WA to reduce the acceptance window for a match from 6 milliseconds down to 1 millisecond, a sixfold reduction that vastly reduced the possibility of a false match. One millisecond is one one-thousandth of a second. To be counted as a match, a dictabelt impulse and a test-shot echo had to correspond to each other within that incredibly short timeframe.

WA also applied a noise threshold to further distinguish between non-gunfire noise and gunfire impulses.

* When WA conducted the sonar analysis, they found that the dictabelt grassy knoll shot was a nearly perfect match for a test shot from the grassy knoll, which was fired from a position 8 feet west of the corner of the picket fence on the grassy knoll.

In the first sonar analysis comparison, done without the noise threshold, WA found that when the muzzle blast of the test shot was aligned with the first large impulse of the 145.15 shot—the grassy knoll shot—all 26 echoes of the test shot occurred within 1 millisecond of a corresponding impulse of the 145.15 shot.

In the second sonar analysis comparison, WA found that when they applied the noise threshold, the grassy knoll shot had 14 large impulses compared to the 12 large impulses of the test-shot pattern. Crucially, 10 of the 12 impulses in the test shot matched impulses in the grassy knoll shot to within 1 millisecond. This is an astounding correlation. Dr. Weiss explained:

Quote
Now we didn't want to include anything that might be noise in this comparison; we wanted to deal only with things of which we could be reasonably certain. So we excluded from the consideration anything which was at the noise level itself. If we knew it was below that level, then it was more probably noise than anything else, we excluded it. We wanted to know: do those things that exceeded this noise level match? Well, if so, how many are there, how many do we expect to find, and how many are matched?

The answer to those three points is that there are a total of some 14 of these greater-than-noise-level peaks observed; there are a total of 10 of them that, in fact, correspond very closely to echo paths that we have been able to predict [in the simulated echo patterns].

Now our predictions also show that we should have had 12 larger-than-noise-level peaks present; but if you take these numbers and put it in an equation or formula known as the binary correlation formula, you get a number, known as a binary correlation coefficient, of .77, which says, in effect, that this pattern matches, is matched by a corresponding pattern of strong echoes with a coefficient of .77.

If you take that now and you say, well, what is the probability that this is noise, that it is just an accident that these impulses happened to fall into this sequence of spacings, the answer that you get then is that the probability that this is noise is less than 5 percent. (5 HSCA 570)
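Weiss's .77 figure can be reproduced with the standard binary correlation formula, taking the number of matched peaks over the geometric mean of the observed and predicted peak counts. The exact form of the formula is my assumption, but it does reproduce the number given in testimony:

```python
import math

# Binary correlation coefficient: matched peaks (10) over the geometric mean
# of the observed (14) and predicted (12) greater-than-noise peak counts.
matched, observed, predicted = 10, 14, 12
coefficient = matched / math.sqrt(observed * predicted)
print(round(coefficient, 2))  # 0.77, matching the figure in Weiss's testimony
```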

* Dr. Barger explained the importance of the WA analysis:

Quote
Mr. WOLF . In your testimony on September 11, addressing particularly the third impulse in the Dallas Police dispatch tape, you stated that the probability of this being a shot from the grassy knoll was 50-50. Professor Weiss and Mr. Aschkenasy, today, whose testimony you heard, stated that the probability of this being a shot from the grassy knoll was 95 percent or better. You have reviewed the work of Professor Weiss and Mr. Aschkenasy. Do you agree with their assessment?

Dr. BARGER. Yes; once we checked their procedures, their parameters and their echo-producing objects, we received from them the results of their match. Drs. Kalikow, Rhyne, and Mr. Schmidt and I, at Bolt, Beranek, and Newman, reviewed their results, and we concluded that they had successfully achieved a match having a correlation coefficient of .77, and you remember that was the number I was using of goodness of match. We also found that they had done this with only a plus or minus one one-thousandth of a second  error for each match, whereas we had used a plus or minus six one-thousandths of a second error window, if you will, or acceptance window as Professor Weiss called it, in order to achieve our matches.

Now, the reason that we used the large acceptance window of six one-thousandths of a second was because we didn't know, as I said, exactly where the motorcycle was. The reason they were able to lower theirs to one one-thousandth of a second was because they found exactly where it was by the procedure they described this morning. The effect of reducing this acceptance window is to greatly reduce the likelihood that noise bursts that occur could mimic the fingerprint of a shot from any place and received at that microphone. It reduces it very substantially. In other words, in the terminology that I used last time, their ability to achieve this match within plus or minus one one-thousandth of 1 second reduces the false alarm rate substantially. In other words, we had a large false alarm rate because we had a large acceptance window because we didn't know exactly where the motorcycle was. That gave us a large false alarm rate. They corrected that problem by lowering the acceptance window.

There is another feature of that score besides the acceptance window that is important. That is the value of the correlation coefficient achieved. As I said, we would not accept as a potential match any correlation coefficient that was less than one-half. But we didn't require it to be one, either, which is what it would be if there was no noise. Noise is the thing that causes the correlation coefficient to be less than one. Noise is on the Dallas Police recording.

Professors Weiss and Aschkenasy did nothing to reduce the noise, so I would not have expected they would have increased the correlation coefficient. In fact, they accepted more noise than we did, and that could have affected the correlation coefficient, which should have gone down. So their correlation coefficient, while high, was not unity. On the other hand, the false alarm rate one would expect from their match, which was so tight, this would make the likelihood of random noise bursts to fit all 10 of those to within plus or minus one one-thousandth very small.

Mr. WOLF. Your ability to state with 95-percent certainty, now, what was only a 50-50-percent probability in September was, in essence, due to the narrowing of the match time from six one-thousandths of a second to one one-thousandth of a second. Is that, in essence, correct?

Dr. BARGER. Yes, sir. After looking at what they had done, and the fact they had maintained a high correlation coefficient while reducing the acceptance window, resulted in our independent calculation of the expectancy that they could have achieved the match they got only 5 percent of the time by random if it had just been noise on the tape and not a gunshot from that place. That is why we stated independently, although their number was quite similar to ours, that we felt that the likelihood of there having been a gun shot from that knoll and received at that point now to be about 95 percent or possibly better. (5 HSCA 673-674)

* Actually, because the two groups of HSCA acoustical experts worked separately, a math error arose in the calculation of the odds relating to the grassy knoll shot. The probability that the grassy knoll shot was the result of random noise was computed to be less than 5%, or less than 1 in 20, based mainly on a miscalculation of the value of p in the formula. The actual odds are even lower than WA calculated. Dr. Donald Thomas has demonstrated that they are actually only 1 in 100,000, or 100,000 to 1 against (http://jfklancer.com/pdf/Thomas.pdf). Put into percentage terms, the probability that chance produced the grassy knoll shot is 0.001%. To put it another way, the probability that the grassy knoll shot is a gunshot is 99.999%.

Interestingly, in their report, WA pointed out that they had been conservative in calculating the odds that chance had produced the matches between the grassy knoll shot's impulses and the test shot's echo patterns, and that the odds that chance had caused such a high degree of correlation were "considerably less than 5 percent":

Quote
The high degree of correlation between the impulse and echo sequences does not preclude the possibility that the impulses were not the sounds of a gunshot. It is conceivable that a sequence of impulse sounds, derived from non-gunshot sources, was generated with time spacings that, by chance, corresponded within one one-thousandth of a second to those of echoes of a gunshot fired from the grassy knoll. However, the probability of such a chance occurrence is about 5 percent. This calculation represents a highly conservative point of view, since it assumes that impulses can occur only in the two intervals in which echoes were observed to occur, these being the echo-delay range from 0 to 85 milliseconds and the range from 275 to 370 milliseconds. However, if the impulses in the DPD recording were not the echoes of a gunshot, they could also have occurred in the 190-millisecond timespan that separated these two intervals. Taking this timespan into account, the probability becomes considerably less than 5 percent that the match between the recorded impulses and the predicted echoes occurred by chance. (8 HSCA 32)

* Revealingly, the NRC panel recognized that WA had assigned the wrong value for p in their calculations; however, the panel not only failed to tell their readers that WA had overestimated the odds that the grassy knoll shot was random noise, but they used erroneous assumptions in their own calculations to make it seem like there was a 22% chance that the grassy knoll shot was random noise.

Yes, in so doing, the NRC panel was admitting there was a 78% chance that the grassy knoll shot was a gunshot, but 78% is quite a bit lower than 95%+, and far lower than 99.999%.

« Last Edit: October 02, 2020, 01:52:17 PM by Michael T. Griffith »