Show Posts



Messages - majestik_m00se

Pages: [1]
1
Support / Re: nvidia quadro JRE crashes?
« on: April 05, 2012, 04:48:30 pm »
Turning off VBO has no effect, other than that the stack trace is slightly different (see below).
I just tried running standalone outside of Eclipse; I thought it was working at first, but eventually it crashed in the same manner.

I've got multiple Object3Ds, and I found that I have to disable compilation on most of them to avoid the crash.  Interestingly (or maybe not), I have a bunch of Object3Ds created via the Loader loadMD2()/load3DS() functions that seem to be OK if I leave them compiled.

-Ed

Code:
[error occurred during error reporting (printing native stack), id 0xc0000005]

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  org.lwjgl.opengl.GL11.nglDrawElements(IIIJJ)V+0
j  org.lwjgl.opengl.GL11.glDrawElements(ILjava/nio/IntBuffer;)V+34
j  com.threed.jpct.CompiledInstance.compileToDL()V+294
J  com.threed.jpct.GLBase.compileDLOrVBO()V
J  com.threed.jpct.AWTGLRenderer.drawVertexArray(Lcom/threed/jpct/AWTDisplayList;I)V
J  com.threed.jpct.AWTJPCTCanvas.paintGL()V
j  org.lwjgl.opengl.AWTGLCanvas.paint(Ljava/awt/Graphics;)V+172
J  sun.awt.RepaintArea.paint(Ljava/lang/Object;Z)V
j  sun.awt.windows.WComponentPeer.handleEvent(Ljava/awt/AWTEvent;)V+107
J  java.awt.Component.dispatchEventImpl(Ljava/awt/AWTEvent;)V
J  java.awt.Component.dispatchEvent(Ljava/awt/AWTEvent;)V
j  java.awt.EventQueue.dispatchEvent(Ljava/awt/AWTEvent;)V+46
J  java.awt.EventDispatchThread.pumpOneEventForFilters(I)Z
j  java.awt.EventDispatchThread.pumpEventsForFilter(ILjava/awt/Conditional;Ljava/awt/EventFilter;)V+30
j  java.awt.EventDispatchThread.pumpEventsForHierarchy(ILjava/awt/Conditional;Ljava/awt/Component;)V+11
j  java.awt.EventDispatchThread.pumpEvents(ILjava/awt/Conditional;)V+4
j  java.awt.EventDispatchThread.pumpEvents(Ljava/awt/Conditional;)V+3
j  java.awt.EventDispatchThread.run()V+9
v  ~StubRoutines::call_stub

2
Support / Re: nvidia quadro JRE crashes?
« on: April 04, 2012, 10:02:45 pm »
I have been running on an older machine with one of these nVidia Quadro FX cards and have had similar crashes (see the stack trace below). I think I may have found what is causing this, at least in my code.

stack:
Code:
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  org.lwjgl.opengl.GL11.nglDrawElementsBO(IIIJJ)V+0
j  org.lwjgl.opengl.GL11.glDrawElements(IIIJ)V+28
j  com.threed.jpct.CompiledInstance.renderVBO(ZLcom/threed/jpct/IRenderHook;)V+287
j  com.threed.jpct.CompiledInstance.render(ILcom/threed/jpct/GLBase;Ljava/nio/FloatBuffer;[FZLcom/threed/jpct/Camera;[[FZ[Ljava/lang/Object;)V+2300
j  com.threed.jpct.AWTGLRenderer.drawVertexArray(Lcom/threed/jpct/AWTDisplayList;I)V+1909
J  com.threed.jpct.AWTJPCTCanvas.paintGL()V
j  org.lwjgl.opengl.AWTGLCanvas.paint(Ljava/awt/Graphics;)V+172
j  org.lwjgl.opengl.AWTGLCanvas.update(Ljava/awt/Graphics;)V+2
j  sun.awt.RepaintArea.updateComponent(Ljava/awt/Component;Ljava/awt/Graphics;)V+6
j  sun.awt.RepaintArea.paint(Ljava/lang/Object;Z)V+263
j  sun.awt.windows.WComponentPeer.handleEvent(Ljava/awt/AWTEvent;)V+107
j  java.awt.Component.dispatchEventImpl(Ljava/awt/AWTEvent;)V+849
j  java.awt.Component.dispatchEvent(Ljava/awt/AWTEvent;)V+2
j  java.awt.EventQueue.dispatchEvent(Ljava/awt/AWTEvent;)V+46
j  java.awt.EventDispatchThread.pumpOneEventForFilters(I)Z+204
j  java.awt.EventDispatchThread.pumpEventsForFilter(ILjava/awt/Conditional;Ljava/awt/EventFilter;)V+30
j  java.awt.EventDispatchThread.pumpEventsForHierarchy(ILjava/awt/Conditional;Ljava/awt/Component;)V+11
j  java.awt.EventDispatchThread.pumpEvents(ILjava/awt/Conditional;)V+4
j  java.awt.EventDispatchThread.pumpEvents(Ljava/awt/Conditional;)V+3
j  java.awt.EventDispatchThread.run()V+9
v  ~StubRoutines::call_stub

After much experimenting, I narrowed it down to some code of mine that adds a sphere primitive, wraps a spherical texture map around it, and then compiles the object:

Code:
a = (Object3D) Primitives.getSphere(50, astro.diameter / 10 + SKYDOME_DISTANCE);
a.setTexture("bluesky");
a.setEnvmapped(false);

a.setTransparency(current_atm.sky_transparency);

a.setSortOffset(10001);
a.invertCulling(true);
a.build();
a.compile();
a.setVisibility(true);
world.addObject(a);

With this code as-is, I get the crash in my draw code when I call buffer.displayGLOnly().
When I comment out a.compile(), things run fine.  I'm not sure which of the Object3D calls above is needed to trigger the crash; I didn't narrow it down beyond this.  The same code runs fine on the Mac, so for me it's only a Quadro FX issue.

Maybe not something worth looking into, but here are some breadcrumbs in case anyone else hits a similar issue.
Ed

3
Support / avoiding depth sort /transparency issues on a merged object
« on: March 09, 2012, 08:50:21 pm »
Hi,

First of all, thanks for your help previously with my earlier navigating-a-sphere questions.  I got it to work, the help I got from Egon and others on this board was very useful!

So now I am playing around with .3ds models, and I have loaded some low-poly trees into my world. They consist of a half-dozen or so quads with an alpha 'leaf' texture applied (so only the leafy part shows), plus a trunk, which is another low-poly mesh with a standard, non-alpha texture.

Up to this point I have been using a variant of the model loader code from the jPCT wiki, which loads the sub-objects of the .3ds file and merges them into one Object3D.  The problem I've been having is that the leaves on the tree have z-ordering issues: they pop in front of and behind the tree trunks as the camera perspective changes.  I have tried various values for sort offset and transparency, but since both the leaves and trunk are merged into a single object, I don't think that really solves the problem.

I suspect I might need to keep the leaves and trunk as separate objects and apply different sort offsets and transparency settings to each, but I wanted to see what people thought.  Doing that would double the number of tree-related objects, and I don't know whether that has a performance impact when there are a large number of trees.  I assume I'd make one the child of the other, or something similar, so rotations/translations move everything together?  (Sorry, I'm a bit of a newbie to 3D engine mechanics.)

I'm also assuming that I can't get away with setting sort offset and transparency on the sub-objects before I merge them (i.e., an Object3D can only have one value for each of these attributes).  Along those lines, if there is something else I should experiment with on the merged object to get my leaves to settle down, I'm interested! :-)
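For what it's worth, the separate-objects idea sketched above might look something like this. This is an untested sketch against the jPCT API (the method names match ones used elsewhere in this thread, but the sub-object indices, "tree.3ds", and the world variable are hypothetical placeholders):

```java
// Hypothetical sketch: instead of merging everything, keep the
// alpha-textured leaves as a separate Object3D parented to the trunk,
// so each part gets its own transparency and sort offset while
// transforms applied to the trunk move the whole tree.
Object3D[] parts = Loader.load3DS("tree.3ds", 1.0f);  // placeholder file name
Object3D trunk  = parts[0];              // assumed: opaque trunk mesh
Object3D leaves = parts[1];              // assumed: alpha-textured leaf quads

leaves.setTransparency(10);              // only the leaves are transparent
leaves.setSortOffset(100);               // nudge leaves later in the depth sort
trunk.addChild(leaves);                  // rotations/translations on trunk move both

trunk.build();
leaves.build();
world.addObject(trunk);
world.addObject(leaves);                 // children still need to be in the world
```

The parent/child link only shares transforms, so both objects still need to be added to the world individually; whether two objects per tree hurts performance at scale is something you'd have to measure.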

Thanks!


4
Support / Re: controlling camera orientation on a sphere surface?
« on: January 26, 2012, 08:02:43 pm »
Yeah, I keep getting back to eliminating the sphere from the calculations.  After all, in theory it should be possible to adjust the camera so its XZ plane is normal to a ray from the origin, whether the sphere is there or not.

Since the sphere is at world 0,0,0 (at least currently), isn't the camera position essentially that vector? If so, would I be able to use that calcCross() function to get the tangent vector?

Also, even if this works, how do I combine the result with the other orientation information for the camera's x and y axes that I'm getting from the mouse-look routine?

Thanks again for helping me with this; I'm learning a lot about vector geometry, if nothing else!
Ed

5
Support / Re: controlling camera orientation on a sphere surface?
« on: January 26, 2012, 04:34:03 pm »
Well, I don't know how to get the particular sphere polygon below the camera; maybe there is a way, seeing as I constructed it. I build the sphere by iterating through spherical coordinates at a given r and creating vertices/triangles by converting the coordinates back to Cartesian:
Code:
for (float alt = -(float) Math.PI; alt < Math.PI; alt = alt + span) {
    for (float az = -(float) Math.PI; az < Math.PI; az = az + span) {

        surface.addTriangle(p2c(alt, az, varHt(alt, az, r)),
                            p2c(alt + span, az, r),
                            p2c(alt, az + span, r));  // was missing a closing paren

        surface.addTriangle(p2c(alt, az + span, r),
                            p2c(alt + span, az, r),
                            p2c(alt + span, az + span, r));
    }
}

protected SimpleVector p2c(float a, float z, float r) {
    SimpleVector v = new SimpleVector();
    v.x = (float) (r * Math.cos(a) * Math.sin(z));
    v.y = (float) (r * Math.sin(a) * Math.sin(z));
    v.z = (float) (r * Math.cos(z));
    return v;
}

There is probably a way to access the triangles in the mesh later, but I'm pretty new to jPCT/OpenGL programming, so I'm not sure how.
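As a quick sanity check of the conversion in p2c() above, here is a standalone plain-Java rewrite (same formulas; the class and method names are mine, not jPCT). Every converted point should land exactly on the sphere of radius r:

```java
// Standalone check of the spherical-to-cartesian conversion used in
// p2c() above (same convention; helper names are mine, not jPCT).
public class P2CCheck {

    public static double[] p2c(double a, double z, double r) {
        return new double[] {
            r * Math.cos(a) * Math.sin(z),
            r * Math.sin(a) * Math.sin(z),
            r * Math.cos(z)
        };
    }

    public static void main(String[] args) {
        // |p2c(a, z, r)| should equal r for any angles, since
        // cos^2 + sin^2 = 1 twice over.
        for (double a = -Math.PI; a < Math.PI; a += 0.7) {
            for (double z = -Math.PI; z < Math.PI; z += 0.7) {
                double[] v = p2c(a, z, 50.0);
                double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
                System.out.printf("%.3f ", len);  // always ~50.000
            }
        }
    }
}
```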

The tangent on the sphere below the camera should, in theory, be easier to get.  The tangent plane at that point on the sphere should be orthogonal (normal?) to the camera's current position vector; then, to select a particular tangent vector in that plane, you would somehow take the camera's current direction into account?  There's some sort of vector math there, but that's where I am fuzzy.

I assume, though, that the sphere itself doesn't enter into the calculations for the camera orientation; it can be derived solely from the camera's position and current orientation.  I was thinking that maybe what I want is to set the camera's XZ plane to something orthogonal to its position, and then modify the camera's x and y orientation based on the mouse-look deltas, but leave the z orientation as-is.  No idea how to do that, though...
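The "take the camera's current direction into account" step is just a projection: subtract the radial component of the current direction, and what remains lies in the tangent plane. A minimal self-contained sketch (the class and helper names are mine, not jPCT API; the sample position and direction are made up):

```java
// Sketch: on a sphere centered at the origin, "up" is the normalized
// camera position (the outward radial). Projecting the current
// mouse-look direction onto the tangent plane gives a level "forward".
public class TangentBasis {

    public static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    public static double[] normalize(double[] v) {
        double n = Math.sqrt(dot(v, v));
        return new double[] { v[0] / n, v[1] / n, v[2] / n };
    }

    // Remove the radial component of dir, leaving a unit vector
    // parallel to the sphere's surface at the camera position.
    public static double[] tangentForward(double[] dir, double[] camPos) {
        double[] up = normalize(camPos);
        double d = dot(dir, up);
        return normalize(new double[] {
            dir[0] - d * up[0], dir[1] - d * up[1], dir[2] - d * up[2] });
    }

    public static void main(String[] args) {
        double[] pos = { 30, 40, 0 };      // camera somewhere on the sphere
        double[] dir = { 0.2, 0.3, 0.9 };  // current mouse-look direction
        double[] fwd = tangentForward(dir, pos);
        // fwd is orthogonal to the radial "up", so the view stays level
        System.out.printf("dot(fwd, up) = %.6f%n", dot(fwd, normalize(pos)));
    }
}
```

Those two unit vectors (forward and up) are exactly the pair a direction/up-style camera call wants; the third axis, if needed, is their cross product.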


So, a summary of what I have and what I need:

have:
camera's current position (== a vector normal to the polygon on the sphere below the current position, I think)
camera's current direction

need:
a way to set the camera's Z orientation based on the above, leaving the camera's x and y orientation as-is.

Thanks for replying, Egon; I appreciate the help.  This board seems pretty active and well run!
Ed


6
Support / controlling camera orientation on a sphere surface?
« on: January 25, 2012, 11:43:12 pm »
Hi,
I'm trying to get the camera orientation correct such that the player can walk on (or fly over) the surface of a sphere (for simplicity, assume the sphere is centered at JPCT world center 0,0,0). 

For the x and y camera axes, I'd like to just use "mouse look" values to set camera X and Y rotation to change the camera orientation, so that the player can move left/right on the surface of the sphere and look up or down. I'd also like to use a normal vector to the sphere surface to move the camera down until it collides with the sphere for simulating gravity. I've adapted code I've used on flat worlds and tested it out on the sphere, and it seems like it should be fine.

The problem I've been having is adjusting the camera's Z-axis orientation so the view is always parallel to the surface.  At the moment I don't adjust the Z axis, so it's only correct when traveling along a major axis of the sphere; at other locations/orientations the camera's Z rotation is wrong and the view looks tilted.

I've tried various things, such as taking the current camera position, converting it to polar (alt, az, radius) coordinates, adding 1 to the radius (to get a point directly above the camera), and converting it back to Cartesian x,y,z.

I've tried using that vector, and also the normalized version of it, as the "up" vector for the camera's setOrientation (something like camera.setOrientation(currentCamDirection, newUp)).  But I get bizarre/incorrect results: the camera doesn't point anywhere useful, and mouse control of the camera's x and y rotation gets messed up.
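One thing worth noting about the polar round trip described above: converting to polar, adding 1 to the radius, and converting back is just a uniform scaling of the position vector, so the candidate "up" direction it produces reduces to normalize(cameraPosition). A small self-contained demonstration (class and method names are mine, not jPCT; the sample position is made up):

```java
// Sketch: scaling a point from radius r to r + 1 and subtracting the
// original point yields exactly the unit radial vector, i.e. the same
// "up" you would get from normalize(pos) directly.
public class RadialUp {

    public static double[] scaleToRadius(double[] pos, double newR) {
        double r = Math.sqrt(pos[0] * pos[0] + pos[1] * pos[1] + pos[2] * pos[2]);
        double s = newR / r;
        return new double[] { pos[0] * s, pos[1] * s, pos[2] * s };
    }

    public static void main(String[] args) {
        double[] pos = { 3, 4, 12 };                  // |pos| = 13
        double[] above = scaleToRadius(pos, 13 + 1);  // the "radius + 1" point
        // (above - pos) is the unit radial, i.e. normalize(pos)
        for (int i = 0; i < 3; i++) {
            System.out.printf("%.6f ", above[i] - pos[i]);
        }
    }
}
```

So if the polar round trip is producing a bad up vector, the bug is more likely in the polar/Cartesian conversion conventions than in the idea itself; normalize(pos) gives the same direction with no conversion at all.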

I'm a bit of a 3D vector geometry dummy, so I'm sure there's a simpler approach; suggestions/ideas welcome.

Thanks!
Ed



