
Update dep version requirements: research-app 0.7, reproject 0.8 #313

Merged: 7 commits merged from bump-app into WorldWideTelescope:master on Sep 24, 2021

Conversation

@pkgw (Contributor) commented Sep 21, 2021

Overview

In preparation for a new release, bump some dependency version requirements.

Checklist

N/A

…ually passes

This will make it easier to isolate image failures in the artifacts.
The current macOS runs have some outputs suggesting that we need even longer timeouts, but I'd like
to have some more data about the timing before trying to bump things up.
@pkgw (Contributor Author) commented Sep 22, 2021

@Carifio24 Here's a mystery for you. The Windows and macOS builds are now failing with, among other things, Qt tests where the imagery seems to be shifted upwards from where it should be. (There are other failures, but they seem to be more reasonable.) I can't reproduce locally with a build on my Windows partition. Any ideas about why we might be getting this failure mode? I've been trying to see whether using true OpenGL vs. software rasterization makes a difference (that's one thing that distinguishes the Windows and macOS Azure Pipelines builds), but that's its own can of worms ...

@pkgw (Contributor Author) commented Sep 22, 2021

If I'm analyzing this correctly, the strange vertical offsets don't happen when using the 0.6 version of the research app, and they don't happen when using Qt 5.15 on Windows (vs. Conda's 5.12), and they don't happen when using real GL as opposed to the software renderer.

pkgw force-pushed the bump-app branch 2 times, most recently from 966a90d to 773ef0f on September 22, 2021 at 20:07
@pkgw (Contributor Author) commented Sep 22, 2021

OK, I tried changing the CI scripts to use Qt 5.15 (by installing with pip), and on macOS the vertical offset is still there, so that's not helpful. (On Windows, other issues cropped up.)

@pkgw (Contributor Author) commented Sep 22, 2021

When I temporarily changed the package to use version 0.6 of the research app, the problem was fixed. (That's Pipelines build 20210922.2).

@pkgw (Contributor Author) commented Sep 22, 2021

On my Windows machine, the offset stays the same no matter how I resize the physical WWT window, so it seems like the view center is genuinely shifted, rather than the output bitmap getting cropped oddly or something.

@pkgw (Contributor Author) commented Sep 22, 2021

In the image_layer_equ test, the angular offset is about 2.8 degrees, compared to the image height of 10 degrees.

update: In image_layer_gal, it's about 3.7 degrees. Here the viewport height is set to 14 degrees, as opposed to ~18 degrees for image_layer_equ. Hmm.
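
A quick back-of-the-envelope comparison of those numbers (scratch arithmetic only, not part of the test suite; the 18-degree figure is the image_layer_equ viewport height quoted above):

```python
# Offsets and view sizes quoted above, all in degrees.
equ_offset, equ_image_height, equ_viewport = 2.8, 10.0, 18.0
gal_offset, gal_viewport = 3.7, 14.0

print(f"image_layer_equ: offset / image height = {equ_offset / equ_image_height:.2f}")  # ~0.28
print(f"image_layer_equ: offset / viewport     = {equ_offset / equ_viewport:.2f}")      # ~0.16
print(f"image_layer_gal: offset / viewport     = {gal_offset / gal_viewport:.2f}")      # ~0.26
```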

@pkgw (Contributor Author) commented Sep 23, 2021

I got it to the point where I could bisect on different versions of the engine on my Windows partition. Not too surprisingly, the culprit seems to be somewhere in WorldWideTelescope/wwt-webgl-engine#115, the tiled FITS pull request.

@pkgw (Contributor Author) commented Sep 23, 2021

@imbasimba This pull request upgrades pywwt to use the latest version of the WWT research app and engine, which turns out to cause new failures in the CI test suite.

I believe that I've traced this down to the WebGL 1 fallback code: e.g., the pywwt test image_layer_equ passes with the latest engine when WebGL 2 is enabled, but fails when only WebGL 1 is used, with the image being drawn too far up (i.e., at too high a declination on the sky). The failures in the CI suite here are on the platforms where we have to use software GL rasterization, which apparently triggers the WebGL 1 fallback.

Would you be able to look into this and see if you can confirm my analysis? This version of pywwt isn't using the tiled FITS functionality, so the problem would be something in the WebGL 1 approach to SkyImage FITS images.

update: OK, yeah, if I hack my pywwt to use a local copy of the research-app/engine, and hack that local copy to always set UseGlVersion2 to False, I can reproduce the offset positioning on my Linux machine.

@imbasimba (Member) commented:

Hmm... I'll look into this

@imbasimba (Member) commented:

Here is the little thing I messed up in the SkyImage/TangentTile class merge.
WorldWideTelescope/wwt-webgl-engine#147

@pkgw (Contributor Author) commented Sep 24, 2021

Thanks for investigating! My local testing seems to confirm that you've found the problem.

This pulls in the WebGL 1 FITS SkyImage positioning fix, which should set
us up for a passing CI. We'll still need to update the reference image
corpus, but hopefully this solves the egregious failures.
The new WebGL2 FITS rendering code and imageset zoom setup alter some
of our expected outputs.

Note in particular that in the `image_layer_gal` test, the WebGL1 and
WebGL2 renderers produce fairly different opacity effects. As best I can
tell, they are both behaving as intended; it's just that the shaders
(TileShader and FitsShader) approach opacity in different ways. It might
be possible to harmonize them, but it doesn't seem worth the effort
right now.
And document that whole setup in the README.
@pkgw (Contributor Author) commented Sep 24, 2021

@imbasimba This pull request now adds some documentation in the README with a bit more info about how the whole image-test subsystem works.

@imbasimba (Member) commented:

I found the reason for a lot of the confusion and conflicting results, and it's no wonder this issue was so difficult to diagnose and fix! There were two separate problems interfering with my investigation: on top of the real bug, a later commit (WorldWideTelescope/wwt-webgl-engine@aeb5dc0) slightly changes the zoom level, which caused the tests to fail when I was working on master.

This explains so much of my conflicting results! I'm glad I'm not crazy 😃

@imbasimba (Member) commented:

Great, thanks for the added documentation!

@pkgw (Contributor Author) commented Sep 24, 2021

Yeah, sorry that this is all such a mess :-/ From a certain standpoint, these image tests should be maintained in the wwt-webgl-engine repo to keep them "closer" to the engine changes.

Side note: as mentioned in one of the commit messages in this PR, I noticed that the WebGL1 and WebGL2 renderers give fairly different results when opacities are used, as in the image_layer_gal test. I was wondering if maybe the WebGL1 path was doubly-applying the opacity (its 50% opacity looks much fainter than WebGL2), but as far as I can tell they just handle the opacity in different ways that lead to different "response curves".

pkgw merged commit ab72466 into WorldWideTelescope:master on Sep 24, 2021
pkgw deleted the bump-app branch on September 24, 2021 at 15:35
@imbasimba (Member) commented:

From a quick look it seems like the main difference is that opacity scales rgba in TileShader while in FitsShader opacity only affects the alpha channel. I can do a quick test before our next meeting to see if that's the only reason for the discrepancy.

@pkgw (Contributor Author) commented Sep 24, 2021

Yeah. I guess if opacity is scaling RGB and A, that would be incorrectly doubling its effect, right? Assuming that those colors aren't supposed to be premultiplied alpha.

But overall, it's not something to prioritize, I'd say.

@imbasimba (Member) commented:

Indeed, the RGB is scaled twice in all shaders except the FitsShader. Basically, this blend function makes the rgb * alpha in the shader unnecessary:
https://github.com/imbasimba/wwt-webgl-engine/blob/7a010aaf8d56d25a69ce05871d9a68a489b62d75/engine/wwtlib/Graphics/Shaders.cs#L1677:L1678

Of course one could argue that there is no single correct answer when it comes to merging two images, but this seems like an honest mistake.
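
For the record, here is a minimal numpy sketch of the effect being discussed, assuming a standard source-alpha blend (src * alpha + dst * (1 - alpha)); this is scratch code, not the engine's actual shader or blend-state setup:

```python
import numpy as np

def blend(src_rgb, src_a, dst_rgb):
    # Blend stage analogous to (SRC_ALPHA, ONE_MINUS_SRC_ALPHA):
    # the source color gets multiplied by its alpha here.
    return src_rgb * src_a + dst_rgb * (1.0 - src_a)

layer_rgb = np.array([1.0, 1.0, 1.0])   # a white layer pixel
background = np.array([0.0, 0.0, 0.0])  # black sky behind it
opacity = 0.5

# FitsShader-style: opacity only scales the alpha channel.
alpha_only = blend(layer_rgb, opacity, background)

# TileShader-style: opacity scales the RGB *and* the alpha, so the
# blend stage multiplies by the opacity a second time.
rgb_and_alpha = blend(layer_rgb * opacity, opacity, background)

print(alpha_only)     # [0.5 0.5 0.5]   -> 50% contribution
print(rgb_and_alpha)  # [0.25 0.25 0.25] -> effectively opacity squared
```

At 50% opacity the doubly-scaled path contributes only 25% of the layer color, which lines up with the WebGL 1 output looking much fainter than WebGL 2 in the image_layer_gal comparison above.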
