Messages - EgonOlsen

#76
Support / Re: Lighting on merged Object3Ds
December 13, 2021, 06:04:51 PM
Honestly, I'm not sure what I'm supposed to see in these images!? The last one (the one with the animation) doesn't work for me on the desktop. I can view it on my phone, though, but all I'm seeing are two bouncing planes... I'm not sure what to look out for; they look fine (albeit highly compressed) to me.

Anyway, the lighting calculation done by the default shaders doesn't depend on the number of texture stages. It just multiplies the calculated light color/intensity with the pixel after all texture stages have been blended together. If one texture is very dark or the blending mode is an unusual one, the results might not be what one expects, but I don't think that this is the case here.

Looking at your code, you:


  • create the plane/object
  • compile it (not needed if you merge them anyway)
  • strip it (I'm not sure it's a good idea to do this before calling build(), but if you merge later, it's not needed anyway)
  • build it (this can be done here, but it doesn't actually have to be)

I suggest doing this instead:


  • create the plane
  • merge
  • build()
  • compile (if needed; I actually don't see why it should be)
  • strip()

I don't think that this will change anything, but it's actually the way it's intended to be done, and maybe your order has some effect on something that I don't see ATM.
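In code, that order could look something like this, assuming two planes created via Primitives.getPlane() and a World instance called world (adjust to your own setup):


Object3D planeA = Primitives.getPlane(4, 10);
Object3D planeB = Primitives.getPlane(4, 10);
planeB.translate(0, -20, 0);

Object3D merged = Object3D.mergeObjects(planeA, planeB); // merge first
merged.build();  // then build the merged object
// merged.compile(); // only if you really need it
merged.strip();  // strip last
world.addObject(merged);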
#77
Support / Re: Lighting on merged Object3Ds
December 11, 2021, 07:40:22 PM
Actually, if the normals have already been calculated (or loaded from file with Config.useNormalsFromOBJ=true) before merging, they won't be recalculated after the merge when you call build() on the merged object. I'm not sure if your issue is related to merging at all, though. I rather think it's caused by normal vectors that aren't calculated in a way that fits your scene. Does the file from which these objects come include normals? In that case, you could try the config switch mentioned above.
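For example (file names are placeholders; the switch has to be set before loading):


Config.useNormalsFromOBJ = true; // use the normals stored in the OBJ file
Object3D[] parts = Loader.loadOBJ("objects.obj", "objects.mtl", 1f);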
#78
Support / Re: jPCT(-AE) origins
December 10, 2021, 03:35:20 PM
I'll post a lengthy reply once I find the time...
#79
Support / Re: Lighting on merged Object3Ds
December 10, 2021, 03:34:44 PM
Each object will be lit by itself, no matter how the objects overlap with each other. Regarding the lighting, they are completely independent. Maybe it's just caused by the way in which the normals are created? A vertex normal is calculated by taking the face normals of the adjacent polygons and averaging them. If you, for example, stack boxes on top of one another, the normals on the lower corners will face something like 45° down while the ones on the upper corners will face 45° up. That will cause uneven lighting on something like a wall that consists of these boxes.
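As a minimal sketch of that averaging (not jPCT's actual internal code; adjacentFaceNormals is a placeholder for the face normals of the polygons that share the vertex):


SimpleVector vertexNormal = new SimpleVector();
for (SimpleVector faceNormal : adjacentFaceNormals) {
    vertexNormal.add(faceNormal); // sum up the face normals...
}
vertexNormal = vertexNormal.normalize(); // ...and normalize the result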

#80
Support / Re: ShadowHelper and special Textures
December 07, 2021, 10:43:46 AM
Yes, the shadow is an additional texture as mentioned above. I wasn't aware that you are adding 2 layers yourself, so that explains it.
#81
Support / Re: ShadowHelper and special Textures
December 07, 2021, 08:37:02 AM
Can you please set the Logger to LL_DEBUG and post what gets printed out when adding the sphere as a receiver?
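For reference, that's just:


Logger.setLogLevel(Logger.LL_DEBUG); // log debug messages from now on
sh.addReceiver(sphere); // 'sh' being your ShadowHelper instance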
#82
Support / Re: ShadowHelper and special Textures
December 06, 2021, 08:57:20 PM
Quote from: Windmiller on December 04, 2021, 01:49:29 PM
Is these two later methods overwriting the shadow map? What is going on here?

Yes, that's the problem here. Shadow mapping is a two-pass operation. First, the scene is rendered as seen from the light source and the depth of each (caster) pixel is stored in the depth map. Then, the scene is rendered normally, except that the shadow map additionally gets projected into the scene and a check is done that compares the distance from each pixel to the light source with the value stored in the depth map. If it's larger, the pixel is shadowed. Otherwise, it's not. For that to work, the shadow map has to be part of each object receiving the shadow.

In your case, you are removing the shadow map from the object with that code, hence no shadows. If you call your method before setting up the shadow helper, it should actually work.
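In a sketch (applyTextureLayers() is a stand-in for your own method that adds the two layers):


applyTextureLayers(sphere); // add your own texture layers first...

sh.addCaster(caster);   // ...then set up the ShadowHelper ('sh'), so that
sh.addReceiver(sphere); // the shadow map ends up as the additional layer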
#83
Support / Re: Lighting on merged Object3Ds
December 03, 2021, 07:14:31 AM
Merging does nothing to the normals, and neither does an overlap, at least not by itself. Are these meshes sharing coordinates? You might want to try calling build() on the objects before merging them instead of afterwards to see if that helps.
#84
Yes, that's because the object gets textured based on its vertices' positions in space, not on its previous texture coordinates. Assuming that the sphere itself has proper coordinates, you should get them via https://www.jpct.net/jpct-ae/doc/com/threed/jpct/PolygonManager.html#getTextureUV-int-int- and then do something like


TextureInfo ti = new TextureInfo(tid, u0, v0, u1, v1, u2, v2);


instead of


TextureInfo ti = new TextureInfo(tid, v0.x / dx, v0.z / dz, v1.x / dx, v1.z / dz, v2.x / dx, v2.z / dz);


For splatting, just multiply these coordinates by some value and assign them to another layer.
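Put together, a rough sketch of that approach (texture name and object are placeholders):


PolygonManager pm = sphere.getPolygonManager();
int tid = TextureManager.getInstance().getTextureID("texture");
for (int p = 0; p < pm.getMaxPolygonID(); p++) {
    // read back the UVs that the object already has...
    SimpleVector uv0 = pm.getTextureUV(p, 0);
    SimpleVector uv1 = pm.getTextureUV(p, 1);
    SimpleVector uv2 = pm.getTextureUV(p, 2);
    // ...and feed them into a new TextureInfo
    TextureInfo ti = new TextureInfo(tid, uv0.x, uv0.y, uv1.x, uv1.y, uv2.x, uv2.y);
    pm.setPolygonTexture(p, ti);
}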

#85
Quote from: AeroShark333 on November 24, 2021, 02:10:10 PM
I thought the renderhook solution should have worked as well, but apparently not..?
It should have. Especially in combination with the sort offset, but I guess something interfered with the sorting in some way so that the overlay didn't render at the end, as I thought it would. That can happen; the sorting is just a heuristic anyway.
#86
This is the code of setDepth():


public void setDepth(float depth) {
    if (depth < Config.nearPlane) {
        depth = Config.nearPlane + 1f;
    }
    this.depth = depth;
}


As you can see, setting it to -infinity has no effect other than setting the depth to Config.nearPlane + 1f.

I'm puzzled as to why none of my suggestions have any effect. At least the sorting should have helped. Maybe the issue is something other than what I think it is...

Another idea: split scene and overlay into two different World instances and render them one after the other with a z-buffer clear in between...
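A minimal sketch of that idea ('sceneWorld' and 'overlayWorld' being the two World instances):


buffer.clear(Color.BLACK);

sceneWorld.renderScene(buffer);
sceneWorld.draw(buffer);

buffer.clearZBufferOnly(); // keep the colors, drop the depth values

overlayWorld.renderScene(buffer);
overlayWorld.draw(buffer);

buffer.update();
// ...then display the buffer as usual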
#88
If you set Config.glIgnoreNearPlane, the value set for the near plane won't be used; the default (1) will be used instead. If you combine that with your overlay and the fact that you've set the near plane to 0 (which is usually too close, btw), you will end up with a near plane of 1 for the scene and a depth of (0+1) for the overlay as well. I'm surprised that it shows up at all.

Try to set Config.glIgnoreNearPlane to false and see if that changes anything. And, as said, 0 as the near plane is too close. Try some higher value instead.
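For reference (1 is jPCT's default near plane, so this mainly makes the intent explicit):


Config.glIgnoreNearPlane = false; // actually use the value in Config.nearPlane
Config.nearPlane = 1f;            // 0 is too close; start with something like 1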

That said, if the overlay is transparent, it won't write into the depth buffer, which can lead to objects drawn later being painted over it. If you set it to opaque, the depth buffer will handle the overdraw and in turn avoid the problem. You can also try to change the sort order of the overlay to make it render after everything else (see the sketch after this list):


  • use Overlay.getObject3D() to get the internal Object3D from the overlay
  • call setZSortOffset(-xxxx) on that Object3D, with xxxx being a large value

That should, in theory, change the overlay's sorting so that it will be rendered after everything else.
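A minimal sketch of those two steps (the offset value is arbitrary, just make it large):


Object3D overlayObj = myOverlay.getObject3D(); // 'myOverlay' being your Overlay instance
overlayObj.setZSortOffset(-100000f); // sorts the overlay as if it were much closer, so it renders last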
#89
Quote from: AeroShark333 on November 18, 2021, 03:08:15 PM
I tried this but it didn't seem to make a difference unfortunately... Any other suggestions?
I believe Overlay#setDepth(-9E30f) sort of works best for now but still not perfectly

That's a bit strange. If the overlay is rendered last (which should be the case) and you are clearing the depth buffer before rendering it, it has to paint over the entire scene no matter what. Have you checked that your implementation actually gets called? The overlay is transparent, I assume? Is there any chance that the object interfering with it is as well?

About the depth thing... that value can't do anything useful, to be honest. A negative value would move the overlay behind the clipping plane, which makes no sense mathematically. Therefore, the method checks if the value given to it is smaller than the value in Config.nearPlane, and if it is, it auto-adjusts it to Config.nearPlane + 1. So you can go smaller than Config.nearPlane + 1, but you can't go smaller than the actual value in Config.nearPlane. Have you tried playing around with Config.nearPlane itself?

If nothing helps, can you create a test case that shows the problem?

Quote from: AeroShark333 on November 18, 2021, 03:08:15 PM
About the enabling/disabling, did you mean Object3D#setVisibility(boolean), World.removeObject(Object3D) or something else? (I assume these are basically the same anyway...)
Does this need to be actively done for every frame (or camera update)? ??? I thought jPCT took care of this automatically based on what's visible inside the camera's view...
Depends on your implementation, I guess. Yes, jPCT takes care of objects that aren't visible, but it's still faster to disable them in the first place if you are sure that they aren't visible in the current scene. For that, setVisibility() is usually the better option than adding/removing them.

In Naroth, it's a bit different. The maps are stored as simple ASCII maps. At load time, I parse the maps and convert the ASCII data into matching beans that contain information about their position, orientation and appearance (a wall, a door, a chest...). When the player moves, I iterate through these objects to see which ones are potentially visible (the check is rather rough, but good enough). For each one that is visible, I assign an actual Object3D to it that gets positioned in the world so that it matches the data in the bean. So basically, I have a pool of Object3D instances which dynamically form the scene. If visualized, this would look like a scene out of a sci-fi movie where the world builds itself as you walk along while it collapses behind you.
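As a rough sketch of that pooling idea (WallBean, potentiallyVisible() and getPosition() are placeholders for my beans and the rough visibility check):


for (Object3D wall : wallPool) {
    wall.setVisibility(false); // hide the whole pool first
}
int i = 0;
for (WallBean bean : potentiallyVisible(playerPosition)) {
    Object3D wall = wallPool.get(i++);
    wall.clearTranslation();
    wall.translate(bean.getPosition()); // place it where the bean says
    wall.setVisibility(true);
}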
#90
Quote from: AeroShark333 on November 18, 2021, 12:56:35 PM
Since I assume you don't want to do the second option... Here's what I'd suggest still:
I think you should be able to retrieve UV coordinates from an Object3D: https://www.jpct.net/jpct-ae/doc/com/threed/jpct/PolygonManager.html#getTextureUV-int-int-
I am missing a setTextureUV() method here though...
There is one, just not in the form in which you expect it. What you have to do is grab the information you need from the various methods in the PolygonManager, create a TextureInfo instance from it and assign it to the polygon by using setPolygonTexture(). This code from the wiki has an example of that in its setTexture() method, where I'm tiling a terrain texture: https://www.jpct.net/wiki/index.php?title=Texture_splatting_on_a_terrain

Quote
In the end, I believe https://www.jpct.net/jpct-ae/doc/com/threed/jpct/Object3D.html#setTextureMatrix-com.threed.jpct.Matrix- might be your best bet for scaling/offsetting UV coordinates
That should work as well, I think...
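For the texture matrix variant, something like this should do for simple scaling ('obj' and the factor are placeholders):


Matrix texMat = new Matrix();
texMat.set(0, 0, 4f); // scale u by 4
texMat.set(1, 1, 4f); // scale v by 4
obj.setTextureMatrix(texMat);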