
Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Wojtek

#1
Support / Active lights in GLSLShader
August 23, 2011, 08:57:29 PM
Hello,

Is it possible to determine how many lights are assigned to a particular Object3D?

What I want is to determine the active light count in my fragment shader so that I don't have to iterate through all 8 lights.
There is an additional reason why I want to do that: if I previously specified 5 lights and then disabled 3 of them, my shader still receives the values of the 3 old lights.

Is there any method in jPCT to determine the actual number of active lights associated with the world, or does GLSLShader pass any uniform value that I could use in my shader?
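
As a workaround, this is roughly what I have in mind: keep track of the enabled light count in my own code and hand it to the shader as a custom uniform, so the fragment loop can exit early. Just a sketch (the uniform name, the count variable and the shader sources are my own, not anything jPCT provides):

// vertexSrc/fragmentSrc are loaded elsewhere
GLSLShader shader = new GLSLShader(vertexSrc, fragmentSrc);

// activeLights is maintained by my own light management code
int activeLights = 2;
shader.setUniform("activeLights", activeLights);

object.setRenderHook(shader);

In the fragment shader I would then break out of the 8-light loop once the index reaches activeLights - but that feels redundant if jPCT already knows the number of active lights.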

Thanks,
Wojtek
#2
Hello,

I have found a problem with envmap textures on compiled objects.
Basically, once an object is compiled, it is not possible to change the envmap texture using the PolygonManager.setPolygonTexture() method.

Steps to recreate:
1. set Config.glForceEnvMapToSecondStage = true;
2. create a sphere using the Primitives class
3. call calcTextureWrapSpherical(), build(), setEnvmapped(true)
4. assign two textures (a base texture and an envmap one)
5. call compile()
6. try to change the textures to different ones

Expected results:
Both textures are changed and displayed correctly

Actual results:
Only the first texture is changed. It is not possible to change the envmapped texture. If a shader is used, it also does not get the new texture.
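
In code, the steps above look roughly like this (just a sketch; the texture names are placeholders and the usual world/framebuffer setup is left out):

Config.glForceEnvMapToSecondStage = true;

Object3D sphere = Primitives.getSphere(20f);
sphere.calcTextureWrapSpherical();
sphere.build();
sphere.setEnvmapped(true);

// two stages: base texture plus envmap, added together
TextureManager tm = TextureManager.getInstance();
TextureInfo ti = new TextureInfo(tm.getTextureID("base"));
ti.add(tm.getTextureID("env"), TextureInfo.MODE_ADD);
sphere.setTexture(ti);

sphere.compile();
world.addObject(sphere);

// later, trying to swap both textures via the PolygonManager:
PolygonManager pm = sphere.getPolygonManager();
for (int i = 0; i < pm.getMaxPolygonID(); i++)
{
    TextureInfo newTi = new TextureInfo(tm.getTextureID("base2"));
    newTi.add(tm.getTextureID("env2"), TextureInfo.MODE_ADD);
    pm.setPolygonTexture(i, newTi);
}
sphere.touch(); // only the first stage picks up the new texture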


Optional steps using ObjectsEditor:
1. Download the editor from http://www.wojtech.virthost.pl/oe/oe_envmap.zip and run it using run.bat or run.sh
2. Create sphere from primitives
3. select expo_bamboo.png as Texture1 and set its TextureMode to Multiply
4. select _env.png as EnvmapTexture and set its TextureMode to Add [Both textures are visible]
5. change EnvmapTexture to _env2.png [Texture has been updated]
6. click Compiled checkbox [Object has been compiled correctly, textures are working ok]
7. change Texture1 and EnvmapTexture to other values

Actual result:
Texture1 has been changed
EnvmapTexture has not been changed

Thanks,
Wojtek

PS: One additional update: calling setEnvmapped() also has no effect on a compiled object.
#3
Support / Usage of shared GLSLShader
August 20, 2011, 10:39:26 PM
Hello,

I am wondering how to use one instance of GLSLShader with multiple objects.

I am referring to the situation where my shader uses an additional uniform value, and each object the shader is applied to has a different value for that uniform.

What I have read in the docs:
* I can call setUniform() to pass a value at runtime
* I can call Object3D.setRenderHook() to assign the shader to an object
* I can override the beforeRendering() method to pass my uniforms

My question is: how do I detect which Object3D my shader is currently rendering?
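
To make this concrete, here is a sketch of what I am attempting with one shared shader instance (the commented-out line is exactly the part I do not know how to fill in; vertexSrc/fragmentSrc, objectA and objectB are placeholders):

GLSLShader shader = new GLSLShader(vertexSrc, fragmentSrc)
{
    @Override
    public void beforeRendering(int polyID)
    {
        // Here I would like to pass a per-object value, but at this point
        // I do not know which Object3D is currently being rendered:
        // setUniform("myValue", valueForCurrentObject);
        super.beforeRendering(polyID);
    }
};

objectA.setRenderHook(shader);
objectB.setRenderHook(shader);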

Thanks for help,
Wojtek
#4
Projects / ObjectsEditor
August 09, 2011, 11:51:57 PM
Hello,

Recently I was playing a little bit with jPCT and decided to create a visual editor to see how property changes affect the rendered object.

As the editor is quite helpful for me, I thought it might also be helpful to other people.
It does not have rich functionality yet; however, it allows you to:
* modify environment properties like ambient light, background, etc.
* add multiple lights to the environment
* add multiple primitive elements (from the Primitives class) to the scene
* add rectangular billboard objects to the scene
* import 3DS objects and add them to the scene
* save/load environment properties
* save/load the composed object
* define basic object properties (textures, color, lighting, geometry, shader, etc.)

The editor always works on compiled objects, so I have not added any properties that cannot be applied to a compiled object.

Regarding current limitations:
* it is not possible to save an object in a format that could later be imported into another application - currently it is only possible to view the effect and replay it by applying the same changes in another application
* I have a problem with removing textures from an Object3D - I have not found any method in jPCT to reset an object's texture information
* I do not know how to reset texture information in a shader, so if one object has 2 textures assigned and a second one has only one, the second object will also be rendered with 2 textures
* I am not sure how to properly handle resizing of the GLCanvas. When the window is resized, the framebuffer and canvas are recreated and all existing objects disappear...
* shaders do not link properly on some machines
* the GUI is very, very simple...

OK, and here are some screenshots:



If anybody is interested, here is a link to the compressed program: http://www.wojtech.virthost.pl/oe/ObjectsEditor.zip


Thanks,
Wojtek
#5
Support / Hardware only mode optimizations hints
May 08, 2010, 03:11:58 PM
Hello,

I am wondering if there are any settings that can be applied to decrease memory usage and/or increase performance if the application uses only the hardware rendering mode.

For example, there is the Config.maxPolysVisible setting, which I had to set to a big value to be able to display my scenes. I have noticed (please correct me if I am wrong) that when I started using compiled objects, this setting could be set to a much lower value and the scene is still displayed correctly.

So my question is whether there are any options that could be turned off (or adjusted) if the hardware mode is used:
a) with compiled objects
b) without compiled objects
and no software mode compatibility is needed?
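
To give a concrete example of the kind of adjustment I mean (just a sketch; the numbers are arbitrary):

// without compiled objects I needed a very large value here;
// with compiled objects a much smaller one still renders the scene correctly:
Config.maxPolysVisible = 4096;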

Thanks,
Wojtek
#6
Support / Compiled dynamic objects
April 21, 2010, 10:54:44 PM
Hey,

I am curious how to create compiled dynamic objects with a custom vertex controller. I haven't found any example about it on the forum or in the wiki, but the documentation says that the compile(true) method must be used to be able to modify vertices, and that touch() needs to be called every time they are changed.

I have created the object in the following way:

public Ray()
{
    super(4);

    for (int i = 0; i < VERTEXES.length; ++i)
        vertexes[i] = new SimpleVector(VERTEXES[i]);

    addTriangle(VERTEXES[HF1], 1, 1, VERTEXES[HT1], 1, 0, VERTEXES[HT2], 0, 0);
    addTriangle(VERTEXES[HT2], 0, 0, VERTEXES[HF2], 0, 1, VERTEXES[HF1], 1, 1);
    addTriangle(VERTEXES[VF1], 1, 1, VERTEXES[VT1], 1, 0, VERTEXES[VT2], 0, 0);
    addTriangle(VERTEXES[VT2], 0, 0, VERTEXES[VF2], 0, 1, VERTEXES[VF1], 1, 1);

    setCulling(Object3D.CULLING_DISABLED);
    setLighting(Object3D.LIGHTING_NO_LIGHTS);
    setTransparency(0);
    build();
    getMesh().setVertexController(new RayVertexController(),
        IVertexController.PRESERVE_SOURCE_MESH);
    compile(true);
}


and periodically run the following method to change its shape:


public void initialize(SimpleVector from, SimpleVector to, double scale, Color color,
        int transparencyMode)
{
    setVertexes(from, to, (float) scale);
    setAdditionalColor(color);
    setTransparencyMode(transparencyMode);
    getMesh().applyVertexController();
    touch();
    setCenter(SimpleVector.ORIGIN);
    setOrigin(SimpleVector.ORIGIN);
    getRotationMatrix().setIdentity();
    setRotationPivot(SimpleVector.ORIGIN);
}


Unfortunately, I do not see any change on screen, i.e. the object is always displayed at its initial place (i.e. the first position that was set).

Is there anything more that needs to be set?

Thanks,
Wojtek
#7
Hello,

Recently I have found some strange behavior.
I have a class that allows me to create simple rays. It uses a vertex controller to change the ray's dimensions, which lets me reuse objects. I use it for laser or ship exhaust effects.

Now I have other objects displayed as billboards - planets, stars etc.

I have noticed recently that a scaled billboard is sometimes displayed over the ray, despite the fact that the ray is much closer to the camera.

Here are screenshots of a simple app where the ray goes from [-20,0,-3] to [0,-20,-3] and a box of size (2,2) is located at [-150,-150,-40]:

The billboard does not have scale applied.

A scale is applied ( box.setScale(6); ).


Here is an example application:
//test.java

import java.awt.BorderLayout;
import java.awt.Canvas;
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JFrame;
import com.threed.jpct.Camera;
import com.threed.jpct.FrameBuffer;
import com.threed.jpct.IRenderer;
import com.threed.jpct.Object3D;
import com.threed.jpct.Primitives;
import com.threed.jpct.SimpleVector;
import com.threed.jpct.World;

public class Test extends JFrame implements Runnable
{
    private static final long serialVersionUID = 811111147457394977L;
    private World world;
    private FrameBuffer buffer;
    private Canvas myCanvas;
    private boolean alive = true;
    private boolean initialized = false;
    private Ray ray;
    private Object3D box;

    public static void main(String[] args)
    {
new Test();
    }

    public Test()
    {
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
init();
pack();
setVisible(true);
    }

    public void init()
    {
world = new World();
ray = new Ray();

ray.initialize(new SimpleVector(-20, 0, -3), new SimpleVector(0, -20,
-3), 0.5, Color.YELLOW, Object3D.TRANSPARENCY_MODE_DEFAULT);
box = Primitives.getBox(2, 2);

box.setBillboarding(true);
box.setAdditionalColor(Color.WHITE);
box.setLighting(Object3D.LIGHTING_NO_LIGHTS);
box.setTransparency(0);
box.setTransparencyMode(Object3D.TRANSPARENCY_MODE_ADD);
box.setTransparency(255);
box.setScale(6);
box.build();
box.compile();
box.translate(-150, -150, -40);

world.addObject(ray);
world.addObject(box);

Camera camera = world.getCamera();
camera.setPosition(0, 0, 0);
camera.lookAt(box.getTransformedCenter());
world.setAmbientLight(100, 100, 100);

buffer = new FrameBuffer(640, 480, FrameBuffer.SAMPLINGMODE_GL_AA_4X);
buffer.disableRenderer(IRenderer.RENDERER_SOFTWARE);

myCanvas = buffer.enableGLCanvasRenderer();
add(myCanvas, BorderLayout.CENTER);
initialized = true;
new Thread(this).start();
    }

    @Override
    public void run()
    {
while (alive)
{
    this.repaint();
    try
    {
Thread.sleep(1000);
    }
    catch (InterruptedException e)
    {
    }
}
alive = false;
    }

    @Override
    public void paint(Graphics g)
    {
if (!initialized)
    return;
buffer.clear();
world.renderScene(buffer);
world.draw(buffer);
buffer.update();
buffer.displayGLOnly();
myCanvas.repaint();

    }
}


//ray.java

public class Ray extends Object3D
{
    private static final long serialVersionUID = -1684665440334869826L;
    private static final SimpleVector YVEC = new SimpleVector(0, 1, 0);
    private static final SimpleVector ZVEC = new SimpleVector(0, 0, 1);
    private static int HF1 = 0;
    private static int HF2 = 1;
    private static int HT1 = 2;
    private static int HT2 = 3;
    private static int VF1 = 4;
    private static int VF2 = 5;
    private static int VT1 = 6;
    private static int VT2 = 7;
    private static SimpleVector[] VERTEXES = new SimpleVector[] {
    new SimpleVector(-1.5, -1.5, -1), new SimpleVector(-0.5, -0.5, -1),
    new SimpleVector(1.5, 1.5, -1), new SimpleVector(0.5, 0.5, -1),
    new SimpleVector(-1, -1, -1.5), new SimpleVector(-0.5, -0.5, -0.5),
    new SimpleVector(1.5, 1.5, 1.5), new SimpleVector(0.5, 0.5, 0.5) };
    private SimpleVector[] vertexes = new SimpleVector[VERTEXES.length];

    public Ray()
    {
super(4);

for (int i = 0; i < VERTEXES.length; ++i)
    vertexes[i] = new SimpleVector(VERTEXES[i]);

addTriangle(VERTEXES[HF1], 1, 1, VERTEXES[HT1], 1, 0, VERTEXES[HT2], 0,
    0);
addTriangle(VERTEXES[HT2], 0, 0, VERTEXES[HF2], 0, 1, VERTEXES[HF1], 1,
    1);
addTriangle(VERTEXES[VF1], 1, 1, VERTEXES[VT1], 1, 0, VERTEXES[VT2], 0,
    0);
addTriangle(VERTEXES[VT2], 0, 0, VERTEXES[VF2], 0, 1, VERTEXES[VF1], 1,
    1);

setCulling(Object3D.CULLING_DISABLED);
setLighting(Object3D.LIGHTING_NO_LIGHTS);
setTransparency(0);
build();
getMesh().setVertexController(new RayVertexController(),
    IVertexController.PRESERVE_SOURCE_MESH);
    }

    public void initialize(SimpleVector from, SimpleVector to, double scale, Color color,
    int transparencyMode)
    {
setVertexes(from, to, (float) scale);
setAdditionalColor(color);
setTransparencyMode(transparencyMode);
getMesh().applyVertexController();
setCenter(SimpleVector.ORIGIN);
setOrigin(SimpleVector.ORIGIN);
getRotationMatrix().setIdentity();
setRotationPivot(SimpleVector.ORIGIN);
    }

    private void setVertexes(SimpleVector from, SimpleVector to, float scale)
    {
scale /= 2;

SimpleVector diff = to.calcSub(from);
SimpleVector ya = SimpleVectorUtil.mul(
    diff.calcCross(YVEC).normalize(), scale);
SimpleVector za = SimpleVectorUtil.mul(
    diff.calcCross(ZVEC).normalize(), scale);

vertexes[HF1].set(from.x - ya.x, from.y - ya.y, from.z - ya.z);
vertexes[HF2].set(from.x + ya.x, from.y + ya.y, from.z + ya.z);
vertexes[HT1].set(to.x - ya.x, to.y - ya.y, to.z - ya.z);
vertexes[HT2].set(to.x + ya.x, to.y + ya.y, to.z + ya.z);
vertexes[VF1].set(from.x - za.x, from.y - za.y, from.z - za.z);
vertexes[VF2].set(from.x + za.x, from.y + za.y, from.z + za.z);
vertexes[VT1].set(to.x - za.x, to.y - za.y, to.z - za.z);
vertexes[VT2].set(to.x + za.x, to.y + za.y, to.z + za.z);
    }

    public SimpleVector convertVertex(SimpleVector vertex)
    {
for (int i = 0; i < VERTEXES.length; ++i)
    if (VERTEXES[i].equals(vertex))
return vertexes[i];
return vertex;
    }

    class RayVertexController extends GenericVertexController
    {
private static final long serialVersionUID = -3596694596175935772L;

@Override
public void apply()
{
    SimpleVector[] src = getSourceMesh();
    SimpleVector[] dst = getDestinationMesh();
    for (int i = 0; i < src.length; ++i)
    {
dst[i].set(convertVertex(src[i]));
    }
}
    }
}


//simplevectorutil.java

public class SimpleVectorUtil
{

    public static SimpleVector mul(SimpleVector vec, float mul)
    {
vec.x *= mul;
vec.y *= mul;
vec.z *= mul;
return vec;
    }

    public static SimpleVector calcMul(SimpleVector vec, float mul)
    {
return mul(new SimpleVector(vec), mul);
    }
}


Is it a bug, or do I perhaps have to set some additional parameters to make this work correctly?

Thanks,
Wojtek
#8
Support / FrameBuffer initialization
March 15, 2010, 08:21:53 PM
Hello,

I would like to ask some questions related to the FrameBuffer hardware-mode initialization process.

My code for that is:

buffer = new FrameBuffer(currentWidth, currentHeight, FrameBuffer.SAMPLINGMODE_GL_AA_4X);
canvas = buffer.enableGLCanvasRenderer();
buffer.disableRenderer(IRenderer.RENDERER_SOFTWARE);

where currentWidth and currentHeight are the dimensions of the panel where the canvas is displayed.

Well, first of all, I am getting the following errors in the logs:

Java version is: 1.6.0_17
-> support for BufferedImage
Version helper for 1.5+ initialized!
-> using BufferedImage
Software renderer (OpenGL mode) initialized
Using LWJGL's AWTGLCanvas
Can't find desired videomode (1232 x 555 x 32) - searching for alternatives
Can't find alternative videomode (1232 x 555 x 32) - trying something else
[ Mon Mar 15 19:58:05 CET 2010 ] - ERROR: Can't find any suitable videomode!
[ Mon Mar 15 19:58:05 CET 2010 ] - ERROR: Can't set videomode - try different settings!
Software renderer disposed

I have read in the docs that the width and height must match one of the available VideoModes for hardware mode; however, when I ignore this, the game runs fine and the canvas content is displayed correctly - it fills the panel.
My question is how this works in that case and what the consequences of ignoring those error messages are.

The other thing is that I do not see any difference between the SAMPLINGMODE_NORMAL, SAMPLINGMODE_GL_AA_2X and SAMPLINGMODE_GL_AA_4X modes. Is there anything else I have to do to enable anti-aliasing?
The log lines about the graphics card are:

Driver is: vga/6.0.6001.18000 on NVIDIA Corporation / GeForce 9650M GT/PCI/SSE2
GL_ARB_texture_env_combine supported and used!
FBO supported and used!
OpenGL renderer initialized (using 4 texture stages)


Here is a screenshot of my ship; I am assuming AA is not working because I can see sharp, aliased edges on the ship and the station:



Thanks,
Wojtek
#9
Feedback / Survey
February 28, 2010, 11:53:58 AM
Hello,

My friend is writing a PhD dissertation at the Faculty of Management and Social Communication of the Jagiellonian University in Krakow, Poland. He is looking for programmers working in different countries and cultures, and he would be very thankful if you could help him with his research by filling in the online questionnaire: http://www.survey.k-informatix.com/survey.php?code=ad0234 and/or sharing this post with your colleagues.

The research is on organizational culture and its connection to burnout syndrome and should not take more than 15-20 minutes.

Thanks,
Wojtek
#10
Support / Helper class for camera operations
February 09, 2010, 01:15:14 AM
Hello,

I am trying to create a helper class that will attach the camera to an object and allow the following things:
1. show the object from different angles (orbit the camera around the object)
2. follow the object (i.e. when the object moves, the camera moves after it)
3. zoom in / zoom out
4. ignore object rotations (i.e. when the object is rotating, the camera rotates with it, so it always looks at the same point).

With your help (especially paulscode - thanks a lot, btw), I have created a simple class that does most of that functionality (except the 4th point).
Recently I started reworking it to make the 1st point work better and to implement the missing one.

Well, I have got the first thing working OK, but because of my lack of 3D math knowledge I got stuck on the last task.

I do not know how to get the horizontal rotation angle (Y) and the vertical rotation angle (Z) by which the Object3D is rotated. I need those angles to be able to rotate the camera.
Can anybody help me with it?
I am asking about angles because matrix operations are still a little bit too difficult for me to understand, so I have made my class work in the following way:

1. During initialization, I position the camera behind the object:
SimpleVector vec = new SimpleVector(0, -getDistance(), 0);
vec.add(object.getTransformedCenter());
cam.setPosition(vec);

and set the camera orientation with:
cam.setOrientation(new SimpleVector(0, 1, 0), new SimpleVector(0, 0, -1));
2. During the camera position update (just before rendering), I set the camera to the same point as the object and move it out by getDistance():
cam.setPosition(target.getTransformedCenter());    
cam.moveCamera(Camera.CAMERA_MOVEOUT, getDistance());

3. During camera rotation (the orbiting effect) I:
* move the camera in by getDistance(), save its position, and set its position to [0,0,0]
* rotate X and Y back by the angles the camera was rotated by in the previous step
* increase the angles by the delta values
* rotate Y and X by the new angles
* restore the camera's last position by moving the camera out and adding the saved position

private void restoreCameraPosition(Camera cam, SimpleVector shift)
{
    cam.moveCamera(Camera.CAMERA_MOVEOUT, getDistance());
    shift.add(cam.getPosition());
    cam.setPosition(shift);
}

private SimpleVector moveCameraToZeroPosition(Camera cam)
{
    cam.moveCamera(Camera.CAMERA_MOVEIN, getDistance());
    SimpleVector shift = cam.getPosition();
    cam.setPosition(new SimpleVector(0, 0, 0));
    return shift;
}

@Override
public void rotate(double deltaX, double deltaY)
{
    Camera cam = getCamera();
    SimpleVector shift = moveCameraToZeroPosition(cam);
    cam.rotateX((float) -vAngle);
    cam.rotateY((float) -hAngle);
    vAngle = limitVerticalAngle(MathUtil.normalizeAngle(vAngle + deltaY / 200));
    hAngle = MathUtil.normalizeAngle(hAngle - deltaX / 200);
    cam.rotateY((float) hAngle);
    cam.rotateX((float) vAngle);
    restoreCameraPosition(cam, shift);
}



Now, what I am planning to do is to modify the method that updates the camera position, get (somehow) the rotation angles from the target object, and apply them to the camera in the same way as I do during camera rotation (i.e. I will add them to the stored hAngle and vAngle). Probably I will have to store both kinds of angles separately to distinguish the angle related to changing the camera's orbit position from the one related to the object's rotation.

When I finish that work I will post the full code for the class, hoping that it may be useful for others too...

Thanks,
Wojtek
#11
Support / How does LensFlare hiding work?
February 07, 2010, 01:50:48 AM
Hi,

I was playing a little bit with the LensFlare class. It is a very nice feature :)
I see in the documentation that it is possible to enable hiding of the effect when there is something in the way between the camera and the light source (in fact, it is enabled by default). While playing with the effect I was not able to get this to happen, and I even see the same behavior in the terrain demo...
The first image shows the sun and lens flare effect:

When I hide behind a mountain, the effect is still applied:

Is this correct behavior? And if so, how does the hiding feature work?

Thanks,
Wojtek
#12
Support / Question about transparency
February 07, 2010, 01:40:56 AM
Hello,

I have a problem with setting the correct transparency value for a billboard image.
I have a spherical object that displays the galaxy background (stars etc.) and a flat object with the billboard flag (a moon) that is closer to the camera than the galaxy background.
The moon has its transparency set to 255, and the texture image has fully transparent corners (to make the moon round) and a non-transparent center (the moon globe). The problem is that the stars from the background image are rendered over the moon's surface (which should not happen, because the moon should fully cover the background). Please see the screenshot.

Do you have any idea what could be wrong? I cannot set the transparency to -1 for the moon object because then the texture corners would be displayed :(
I can add that I am rendering with the hardware renderer.
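
For reference, this is roughly how the moon object is set up (just a sketch; the texture file name is a placeholder):

// moon texture: PNG with an alpha channel (transparent corners)
Texture moonTex = new Texture("moon.png", true);
TextureManager.getInstance().addTexture("moon", moonTex);

Object3D moon = Primitives.getPlane(1, 10);
moon.setTexture("moon");
moon.setBillboarding(true);
moon.setTransparency(255); // as opaque as the transparency system allows
moon.build();
world.addObject(moon);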

Thanks,
Wojtek
#13
Support / Normal maps
December 09, 2009, 09:41:47 PM
Hello,

Recently, I read in a book on Blender that normal maps can be quite interesting and useful for making low-poly objects look better. These maps use RGB values to encode the normal direction of each pixel in the XYZ dimensions.

I wonder how useful this is in practice and whether it is possible to use it in the jPCT framework? If not, are there any plans to add that functionality? I would also add that I am mostly interested in the hardware rendering mode.

The reason I ask is that I think it could be useful for making low-poly spherical shapes look smoother.

Thanks,
Wojtek
#14
Support / How to add texture to textured object
November 16, 2009, 04:43:13 PM
Hello,

Is it possible to add an additional texture to a loaded 3DS object that already has some textures?

I see that there is a public void setTexture(TextureInfo tInf) method, but it replaces all existing textures associated with the object, and I do not see any method that would return the list of textures already assigned to the object.

What I want to do is add an additional texture that will be used as an envmap (just like in the Viper example http://www.jpct.net/forum2/index.php/topic,988.0.html).
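
The only approach I can think of so far is to go through the PolygonManager, read each polygon's current texture and rebuild a two-stage TextureInfo with the envmap added. A sketch (the texture name and the model variable are placeholders, and I am not sure this is the intended way - hence the question):

int envId = TextureManager.getInstance().getTextureID("envmap");
PolygonManager pm = model.getPolygonManager();
for (int i = 0; i < pm.getMaxPolygonID(); i++)
{
    // keep the polygon's existing texture as stage 0, add the envmap as stage 1
    TextureInfo ti = new TextureInfo(pm.getPolygonTexture(i));
    ti.add(envId, TextureInfo.MODE_ADD);
    pm.setPolygonTexture(i, ti);
}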

Thanks,
Wojtek
#15
Hello,

Recently I tried to use full-screen mode in my game, but I found that the GLCanvases I use did not show up after the change.

Here is example code that shows my problem:

package Test;

import java.awt.BorderLayout;
import java.awt.Canvas;
import java.awt.Color;
import java.awt.DisplayMode;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.BoxLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.Timer;

import com.threed.jpct.FrameBuffer;
import com.threed.jpct.IRenderer;
import com.threed.jpct.Object3D;
import com.threed.jpct.Primitives;
import com.threed.jpct.SimpleVector;
import com.threed.jpct.World;

public class FullScreenTest extends JFrame
{
    private GraphicsDevice device;
    private FrameBuffer buffer;
    private Canvas canvas;
    private World world;
    private Object3D object;
    private Timer timer;

    public static void main(String[] args)
    {
FullScreenTest frame = new FullScreenTest();
frame.setVisible(true);
    }

    public FullScreenTest()
    {
initResolution();
initComponents();
initCanvas();
initWorld();
startRendering();
    }

    private void startRendering()
    {
timer = new Timer(50, new ActionListener()
{

    @Override
    public void actionPerformed(ActionEvent e)
    {
renderScene();
    }
});
    }

    protected void renderScene()
    {
object.rotateX(0.1f);
object.rotateY(0.2f);

buffer.clear(java.awt.Color.BLACK);

world.renderScene(buffer);
world.draw(buffer);
buffer.update();
buffer.displayGLOnly();
canvas.repaint();
    }

    private void initWorld()
    {
world = new World();
world.setAmbientLight(10, 10, 10);
world.addLight(new SimpleVector(0, 0, -150), 40, 40, 40);

world.getCamera().setPosition(0, 0, -10);
world.getCamera().lookAt(new SimpleVector(0, 0, 0));

object = Primitives.getCube(2);
object.setAdditionalColor(Color.RED);
object.build();
object.setCenter(new SimpleVector(0, 0, 0));
world.addObject(object);
    }

    private void initCanvas()
    {
buffer = new FrameBuffer(400, 400, FrameBuffer.SAMPLINGMODE_NORMAL);
buffer.disableRenderer(IRenderer.RENDERER_SOFTWARE);

canvas = buffer.enableGLCanvasRenderer();
add(canvas, BorderLayout.CENTER);
    }

    private void initComponents()
    {
setDefaultCloseOperation(EXIT_ON_CLOSE);
JPanel buttonBox = new JPanel();
buttonBox.setLayout(new BoxLayout(buttonBox, BoxLayout.LINE_AXIS));
getContentPane().add(buttonBox, BorderLayout.PAGE_END);
JButton button = new JButton("Start");
buttonBox.add(button);
button.addActionListener(new ActionListener()
{
    @Override
    public void actionPerformed(ActionEvent arg0)
    {
timer.start();
    }
});
button = new JButton("Exit");
buttonBox.add(button);
button.addActionListener(new ActionListener()
{
    @Override
    public void actionPerformed(ActionEvent arg0)
    {
timer.stop();
device.setFullScreenWindow(null);
setVisible(false);
dispose();
    }
});
    }

    private void initResolution()
    {
setUndecorated(true);
setResizable(false);

device = GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices()[0];
device.setFullScreenWindow(this);
DisplayMode mode = device.getDisplayMode();
setSize(mode.getWidth(), mode.getHeight());

    }
}


If I comment out the device.setFullScreenWindow(this) line, it works.
Does anyone know what I have to change to make it work in full-screen mode?
I use jPCT 1.18 and Java 1.6.

Thanks,
Wojtek
#16
Support / Camera UP vector
August 11, 2009, 10:47:20 PM
Hello,

I would like to allow the player to move the camera around the player's ship to see it from different perspectives.
The following image shows what I am planning to do:

First I tried to use the Camera.lookAt() method; however, when I run the game it does not work as I wanted.
It is difficult for me to explain why, so I attached 3 screenshots showing how it behaves with the lookAt() method:



What I want is not the effect of a rotating ship, but the effect of the player 'flying' around the ship.


After that I started playing with the Camera.setOrientation() method. I am using the following code to calculate the direction vector:

SimpleVector direction = object.getTransformedCenter();
direction = direction.calcSub(camera.getPosition());


It seems to work for me; however, I have no idea how to calculate the up vector. If I understand it correctly, its purpose is to indicate the direction of the 'sky', so it is supposed to be a vector perpendicular to the direction vector (I have shown it as a small blue line in the first picture). I have been trying to get it to work for two days now without result :( Can anyone help me and give a hint on how to calculate it?
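
One idea I am experimenting with (just a sketch, and I am not sure it is correct; it also breaks when the direction is parallel to the Y axis):

SimpleVector direction = object.getTransformedCenter().calcSub(camera.getPosition());
direction = direction.normalize();

// build a vector perpendicular to the view direction with two cross products,
// starting from the world Y axis (which points down in jPCT):
SimpleVector down = new SimpleVector(0, 1, 0);
SimpleVector right = direction.calcCross(down).normalize();
SimpleVector up = direction.calcCross(right).normalize();

camera.setOrientation(direction, up);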

Thanks,
Wojtek
#17
Hello,

I am a newbie in 3D rendering and the jPCT engine, so excuse my naive questions, but I am wondering how to specify which part of an object (or which objects) should glow/shine.
I have created a simple gray spaceship with a blue cockpit window and a light-blue engine. The ship does not have any texture - I have just specified colors for its parts in Blender.

I wanted to make the ship's engine glow, so I added the BloomGLProcessor to the FrameBuffer as a postprocessor. When I ran the application, I noticed that the cockpit window is also glowing (in fact, it glows more than the engine).
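
For reference, this is how I add the effect (a sketch of my setup):

import com.threed.jpct.procs.BloomGLProcessor;
...
// bloom applied to the whole frame as a postprocessing step
buffer.addPostProcessor(new BloomGLProcessor());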

Can you please give me some hints on what I should do to avoid situations where unwanted parts of objects glow?

Here is a link to my test application: http://www.megaupload.com/?d=1FO7A72O

Thanks,
Wojtek