Page 7 of 9
Results 61 to 70 of 86

Thread: Aimpoint T-1 POI shift?? (Green Eye Tactical Bans T1 RDS)

  1. #61
    Join Date
    Sep 2013
    Location
    Dallas, Texas
    Posts
    140
    Feedback Score
    0
    Quote Originally Posted by Naphtali View Post
    This is a non-issue. I own or have owned multiple EoTech models, as well as multiple Aimpoint T2s and Comp M4s. ALL OF THEM DO THIS ABOUT THE SAME.
    First- this statement is incorrect. I'm not sure that you actually read the report that I linked previously- but here you go:
    https://www.greeneyetactical.com/201...ight-parallax/

    The report was 84 pages or so- as a result I used a simple converter to html to get it on the website- some of the formatting or images may look funky on some browsers. If that's an issue, you can view it in PDF here:
    https://www.dropbox.com/s/5zgsq2kq6j...rt%20.pdf?dl=0

    You'll find that the data does not support the conclusion you stated above. However- I encourage you or anyone who thinks that the test results are incorrect to replicate it on their own and post their results to disprove it. You can find the test procedures, forms and a downloadable calibration target here:
    https://www.dropbox.com/sh/obpot49wn...XjxXthzba?dl=0

    Quote Originally Posted by Naphtali View Post
    As someone said previously - if you ban the T1, you have to ban every RDS on the market now for doing the same thing to basically the same degree.
    So, I'm feeling like a broken record here because this has been previously discussed and explained. Based on the above- this is also a no. This was one optic for one course, based on extensive observed performance- which the test data supports.
    Eric
    Owner/Instructor
    Green Eye Tactical
    www.greeneyetactical.com

  2. #62
    Join Date
    Nov 2014
    Posts
    150
    Feedback Score
    0
    Quote Originally Posted by Green Eye Tactical View Post
    First- this statement is incorrect. I'm not sure that you actually read the report that I linked previously- but here you go:
    https://www.greeneyetactical.com/201...ight-parallax/

    The report was 84 pages or so- as a result I used a simple converter to html to get it on the website- some of the formatting or images may look funky on some browsers. If that's an issue, you can view it in PDF here:
    https://www.dropbox.com/s/5zgsq2kq6j...rt%20.pdf?dl=0

    You'll find that the data does not support the conclusion you stated above. However- I encourage you or anyone who thinks that the test results are incorrect to replicate it on their own and post their results to disprove it. You can find the test procedures, forms and a downloadable calibration target here:
    https://www.dropbox.com/sh/obpot49wn...XjxXthzba?dl=0



    So, I'm feeling like a broken record here because this has been previously discussed and explained. Based on the above- this is also a no. This was one optic for one course, based on extensive observed performance- which the test data supports.
    Eric,

    Your study appears to be excellent at showing the POA shift that occurs at the extremes of horizontal and vertical deviation only. I'd say the greatest utility would be to measure at less than that - say at 25%, 50%, and 75% deviation. I doubt I've ever accidentally shot with the red dot at the outer edge of the optic. A much more difficult study to perform, to be sure - you'd have to photograph it and use computer analysis to guarantee you're actually measuring what you intend to measure - but it would avoid making a major assumption in your study that might be false. And I think this is the real study you wanted to do.

    The assumption of your study is that if sight X has more POA shift at the outer edge (extreme deviation of the red dot) compared to sight Y, then X must have more POA shift at less-than-maximum off-center shifts of the red dot. That could easily be false. It's possible a sight may optically have a terrible outer 10% deviation if the red dot is that far to the side, but within the central 90%, it may be nearly perfect. Or the central 50% may be near perfect.

    Example - this is made up data just to make my point:

    % variation of the red dot for two sights

    Eotech XPS 3.0
    0% shift (red dot is dead center) = 0 MOA (POA is true)
    20% shift = 1.0 MOA shift
    40% shift = 1.4 MOA shift
    60% shift = 1.6 MOA shift
    80% shift = 1.7 MOA shift
    100% shift (red dot is at the outer edge) = 1.9 MOA shift

    Aimpoint T-1
    0% shift (red dot is dead center) = 0 MOA (POA is true)
    20% shift = 0.3 MOA shift
    40% shift = 0.5 MOA shift
    60% shift = 0.7 MOA shift
    80% shift = 1.7 MOA shift
    100% shift (red dot is at the outer edge) = 9.0 MOA shift


    If the above data were true - and for all your study knows, it could be true, because you did not analyze red dot shifts under 100% - then your study would conclude that the EoTech is vastly superior. However, the T-1 would be clearly superior overall, because you're going to be <80% from center most of the time. Again, made-up data by me, but certainly possible, and your study does not disprove (or even address) this possibility. So the limited scope of your study would lead you to a dramatically incorrect conclusion about the overall quality / utility of one sight vs the other.
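    To make the arithmetic concrete, the trade-off in the made-up tables above can be sketched in a few lines of Python. The usage weights here are just as hypothetical as the shift numbers - the point is only that an expected-value comparison can flip the ranking the edge-only numbers suggest.

```python
# Expected POA shift for the made-up tables above, weighted by how often the
# dot actually sits at each offset. Every number here is hypothetical.

# MOA shift at each % offset of the dot from center (the tables above)
shifts = {
    "EoTech XPS 3.0": {0: 0.0, 20: 1.0, 40: 1.4, 60: 1.6, 80: 1.7, 100: 1.9},
    "Aimpoint T-1":   {0: 0.0, 20: 0.3, 40: 0.5, 60: 0.7, 80: 1.7, 100: 9.0},
}

# Assumed usage distribution: most shots taken well inside the window,
# almost none at the outer edge (purely illustrative weights, summing to 1)
usage = {0: 0.30, 20: 0.30, 40: 0.20, 60: 0.15, 80: 0.04, 100: 0.01}

for sight, curve in shifts.items():
    expected = sum(usage[pct] * curve[pct] for pct in usage)
    print(f"{sight}: expected shift ~ {expected:.2f} MOA")
```

    With these illustrative weights, the T-1 comes out around 0.45 MOA of expected shift versus about 0.91 MOA for the EoTech, despite the T-1's far worse edge figure - which is exactly the possibility an edge-only measurement can't rule out.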

    I strongly doubt the POA deviation is linear as you go out from center. I would bet it's more of an exponential curve, which means most of the deviation happens near the outer edge.
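    That intuition - deviation bunched up near the edge - can be illustrated with a toy model. Both curves below are invented, normalized to reach the same 2.0 MOA at the window edge; only their shapes differ.

```python
import math

# Toy linear vs. exponential deviation curves (illustrative only, not
# measured data). Both reach the same 2.0 MOA at the edge of the window.

def linear(pct, edge_moa=2.0):
    # deviation grows in direct proportion to offset from center
    return edge_moa * pct / 100

def exponential(pct, edge_moa=2.0, k=4.0):
    # normalized so deviation is 0 at center and edge_moa at the edge
    return edge_moa * (math.exp(k * pct / 100) - 1) / (math.exp(k) - 1)

for pct in (25, 50, 75, 100):
    print(f"{pct:3d}%: linear {linear(pct):.2f} MOA, exponential {exponential(pct):.2f} MOA")
```

    Under the exponential shape, the dot can sit halfway out and still show only about 0.24 MOA of the 2.0 MOA total - which is why an edge-only measurement says so little about where most shots actually happen.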

    Since you use your study to assess the utility of the sights, and the sight utility really cannot be judged without the graph of % deviation of the red dot vs POA shift, I don't think your study establishes what you're really concluding overall. Your study establishes only how good these sights are if the red dot is at the outer edge. It establishes nothing about the quality of these sights if a shot is taken with the red dot anywhere inside that outer edge. And you're shooting at these other locations for 99% of shots (I would bet). You really, really need the curves of the POA shift that occurs as the red dot moves progressively out from center in order to make a claim about overall quality / utility.

    Don't get me wrong - it's a great study for answering the one question it asked (other than human measurement error, but you'd need serious technology, which I doubt you have access to, to have done it any other way) - you just need funding, manpower, and cameras / computer analysis to do the much more massive complete study above.
    Last edited by Naphtali; 09-07-17 at 21:47.

  3. #63
    Join Date
    Sep 2013
    Location
    Dallas, Texas
    Posts
    140
    Feedback Score
    0
    Quote Originally Posted by Naphtali View Post
    Eric,

    Your study appears to be excellent at showing the POA shift that occurs at the extremes of horizontal and vertical deviation only. (snip)
    That's a long post and very detailed- but it is based on a lot of assumptions.

    If you look through it again- the measurement is not the extreme measurement only. It is the range of movement from center to the edge. The tester then draws a diagram. I received ZERO diagrams that were linear.

    Your math also fails to account for the possible degree of deviation that the two optics you cited are capable of. The extreme viewing angle of the EoTech is far more off center than that of the T-1.

    What isn't shown in the data is how much of that movement occurs in the center 50% of the AVAILABLE window of view. The T-1 starts to move right away, while the MRO, T-2, EoTech and LCO don't really start moving until later in deflection.

    This test served the original purposes that I laid out in the testing protocol peer review thread on TOS. If you'd like to submit a study with findings that prove your theory- I would very much look forward to it.

    Also- I did put out a quick video that shows how nonlinear the range of movement is. You can also see some optics starting to move to a greater degree in the first 25% off center. It also shows the restriction of the angle of view on the tubed optics vs the windowed ones.

    Last edited by Green Eye Tactical; 09-07-17 at 21:57.
    Eric
    Owner/Instructor
    Green Eye Tactical
    www.greeneyetactical.com

  4. #64
    Join Date
    Nov 2014
    Posts
    150
    Feedback Score
    0
    Quote Originally Posted by Green Eye Tactical View Post

    What isn't shown in the data is how much of that movement occurs in the center 50% of the AVAILABLE window of view. The T-1 starts to move right away, while the MRO, T-2, EoTech and LCO don't really start moving until later in deflection.
    That's the data you'd need to provide in order to generate the graphs from which overall utility can be judged. I would absolutely love to see those graphs if you can generate them from your data. And I would imagine many others would have a great interest. 0% to 75% is probably where virtually all shots take place.

    I was not saying your original study was wrong - just that you could not know whether it was correct without that additional sub-100% deviation data. Your agreement that the relationship is not linear (as observed subjectively by your study in the video) supports the need for that objective data. I would really like to see how my T-2s and Comp M4s compare to each other and to other manufacturers at those lesser deviations.

    No one's ever published that type of study to my knowledge, and it would be extremely valuable to consumers.
    Last edited by Naphtali; 09-07-17 at 22:23.

  5. #65
    Join Date
    Sep 2013
    Location
    Dallas, Texas
    Posts
    140
    Feedback Score
    0
    Quote Originally Posted by Naphtali View Post
    That's the data you'd need to provide in order to generate the graphs from which overall utility can be judged. I would absolutely love to see those graphs if you can generate them from your data. And I would imagine many others would have a great interest. 0% to 75% is probably where virtually all shots take place.

    I was not saying your original study was wrong - just that you could not know whether it was correct without that additional sub-100% deviation data. Your agreement that the relationship is not linear (as observed subjectively by your study in the video) supports the need for that objective data. I would really like to see how my T-2s and Comp M4s compare to each other and to other manufacturers at those lesser deviations.

    No one's ever published that type of study to my knowledge, and it would be extremely valuable to consumers.
    So, I do agree with you here. I went back and forth about how to do this test, and at first I was going to use a camera on a mount behind the optic that allowed for measured x/y movement. I went a different direction, since the impetus for this test was user-induced error- so eliminating the user didn't make sense. I also wanted to make the test easily repeatable for anyone in the community, in the hopes that others would try to replicate it. This was more of a first step, and it confirmed a few things: that none of these optics are "parallax free", distance didn't affect parallax as was generally thought, all brands/models were not even remotely the same with regard to this error, and the majority of the optics did not display regular parallax movement. The next logical step would be to test to what extent and degree this effect occurs with these optics. I'll look forward to seeing anyone's results from a more refined test. It actually sounds like you have some interest here- are you looking to take that on?
    Eric
    Owner/Instructor
    Green Eye Tactical
    www.greeneyetactical.com

  6. #66
    Join Date
    Nov 2014
    Posts
    150
    Feedback Score
    0
    Quote Originally Posted by Green Eye Tactical View Post
    So, I do agree with you here. I went back and forth about how to do this test, and at first I was going to use a camera on a mount behind the optic that allowed for measured x/y movement. I went a different direction, since the impetus for this test was user-induced error- so eliminating the user didn't make sense. I also wanted to make the test easily repeatable for anyone in the community, in the hopes that others would try to replicate it. This was more of a first step, and it confirmed a few things: that none of these optics are "parallax free", distance didn't affect parallax as was generally thought, all brands/models were not even remotely the same with regard to this error, and the majority of the optics did not display regular parallax movement. The next logical step would be to test to what extent and degree this effect occurs with these optics. I'll look forward to seeing anyone's results from a more refined test. It actually sounds like you have some interest here- are you looking to take that on?
    I don't have the technology or time (or the variety of sights any more) to perform that study. Drawing on how medical studies are designed, I could advise on the methods to minimize error / bias, if someone were prepared to take on a project of that magnitude. I can't stress enough how valuable it would be.

  7. #67
    Join Date
    Sep 2013
    Location
    Dallas, Texas
    Posts
    140
    Feedback Score
    0
    Quote Originally Posted by Naphtali View Post
    I don't have the technology or time (or the variety of sights any more) to perform that study. Drawing on how medical studies are designed, I could advise on the methods to minimize error / bias, if someone were prepared to take on a project of that magnitude. I can't stress enough how valuable it would be.
    It would be very valuable. From my experience of how time- and effort-intensive even the simple test I did was- I think the best model going forward is a crowd-sourced type of test. The trick is making the controls simple enough that anyone can do them. Obviously the quality of the input data in that model is a concern, however I am a huge proponent of encouraging the community to get more involved in data collection. It wouldn't be perfect in the beginning, but I think it could have a "nudge" effect to get the industry to slowly start shifting to that kind of data-driven evaluation mindset. This was a secondary goal of the testing model I chose. It mirrors what you see in the medical/scientific community with regard to the scientific method and peer review: submit your test, protocols/procedures, results and data to the community- then get them to attempt to replicate it to disprove it. It also helps increase the available pool of optics. I'm not a huge fan of contacting manufacturers to get them to send you T&E models, due to the risk of having them "hand pick" them.
    Eric
    Owner/Instructor
    Green Eye Tactical
    www.greeneyetactical.com

  8. #68
    Join Date
    Jun 2006
    Location
    CONUS
    Posts
    4,208
    Feedback Score
    6 (100%)
    As a consumer, I don't think that another study with sub-100% data is needed. To me, the original study done by Eric et al. is enough to bring awareness to a potential issue that can arise with the use of an RDS, and it also bounds the worst-case effects of parallax based on off-center sighting. The individual can use Eric's test protocol to perform their own evaluation of their optic for anything in between.

    The main points I got from the study were:
    • RDS optics advertised as parallax-free are most likely not
    • There is variation among the same model of RDS
    • There is variation between testers looking through the same RDS
    • The magnitude of movement is not linear, and the direction can vary


    Based on that study, I can use the data to determine whether the movements are going to affect me when I shoot. Depending on the usage, maybe, maybe not. So, a person deciding which RDS to choose can use Eric's data as a point of consideration (besides battery life, personal preference, weight, etc.). Then it should be up to the individual to perform their own simple test to see what movement their particular model and unit exhibits over the window of view that they're concerned with.

    The value that I see Eric's study bringing to the shooting community is, again, awareness. The other is a well-described testing protocol that an individual (or company, etc.) can use to get their own data, which I hope would lead to some kind of industry standard for spec'ing parallax. I was going to use flashlight brightness expressed in lumens as an example, but that's another can of worms.

    What I hope to eventually see is manufacturers acknowledging that RDS optics are not parallax-free, and including, as part of their technical specifications, the expected deviations in MOA based on distance from the center of the window (probably as a percentage of the horizontal/vertical distance from center to edge). With that information, consumers would be able to adjust their shooting style as needed, and know what kind of deviations to expect under certain circumstances. Kind of like taking sight-over-bore offset into consideration and aiming high at 7 yds.
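    The sight-over-bore analogy lends itself to the same kind of simple arithmetic. Here's a rough sketch, assuming a straight-line bullet path (drop is negligible this close); the 2.6 in sight height and 50 yd zero are typical-AR assumptions, not numbers from the study.

```python
# Simplified sight-over-bore holdover: model the bullet path as a straight
# line from the bore that meets the line of sight at the zero distance.
# Ignores bullet drop, which is negligible at very short range.

def holdover_inches(sight_height_in, zero_yd, range_yd):
    """Approximate inches of low impact at range_yd for a sight zeroed at zero_yd."""
    return sight_height_in * (1 - range_yd / zero_yd)

# Typical AR red-dot setup: ~2.6 in sight-over-bore, 50 yd zero
print(f"{holdover_inches(2.6, 50, 7):.2f} in low at 7 yd")
```

    That works out to roughly 2.2 in low at 7 yd - the same "know your offset and hold accordingly" habit that a published parallax spec would enable.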

  9. #69
    Join Date
    May 2009
    Posts
    1,797
    Feedback Score
    7 (100%)
    I was under the impression they are advertised "parallax-free at xx yards".


    Sent from my iPhone using Tapatalk

  10. #70
    Join Date
    Sep 2013
    Location
    Dallas, Texas
    Posts
    140
    Feedback Score
    0
    Quote Originally Posted by tylerw02 View Post
    I was under the impression they are advertised "parallax-free at xx yards".


    Sent from my iPhone using Tapatalk
    There is a section in the report where I quote each manufacturer's claims and reference it to the results.
    Eric
    Owner/Instructor
    Green Eye Tactical
    www.greeneyetactical.com
