Search Results

Search found 13494 results on 540 pages for 'board game'.


  • 2-component color model

    - by Cyan
    RGB is the natural color model for OpenGL, but a lot of other color models exist: for example, CMY(K) for printers, YUV for JPEG, its little cousins YCbCr and YCoCg, HSL and HSV from the 70's, and so on. All these models share a common property: they are based on 3 components. So my question is: does a 2-component color model exist? I'm surprised not to find any; I was expecting something along the lines of hue + light. I guess it cannot be as "complete" as a true 3-component color model, but a fine-enough approximation would be good for my use case. The end objective is to store the 2 components in a single BC5 texture (GL_COMPRESSED_RED_GREEN_RGTC2 in OpenGL); a 3rd component would require a fetch into a second texture, which hurts performance.
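
    A minimal sketch of the hue + light idea in Java (class and method names invented; an illustration, not an established color model): the encoder keeps only hue and lightness, and the decoder rebuilds RGB with an assumed, fixed saturation.

        // Hue + lightness only; saturation is discarded on encode and assumed on decode.
        final class HueLight {
            static float[] encode(float r, float g, float b) {
                float max = Math.max(r, Math.max(g, b));
                float min = Math.min(r, Math.min(g, b));
                float l = (max + min) * 0.5f;
                float d = max - min;
                float h = 0f;
                if (d > 0f) {
                    if (max == r)      h = ((g - b) / d + (g < b ? 6f : 0f)) / 6f;
                    else if (max == g) h = ((b - r) / d + 2f) / 6f;
                    else               h = ((r - g) / d + 4f) / 6f;
                }
                return new float[] { h, l };
            }

            // assumedS is the saturation that was thrown away (ASSUMPTION, e.g. 1.0f).
            static float[] decode(float h, float l, float assumedS) {
                float q = l < 0.5f ? l * (1f + assumedS) : l + assumedS - l * assumedS;
                float p = 2f * l - q;
                return new float[] { channel(p, q, h + 1f / 3f), channel(p, q, h), channel(p, q, h - 1f / 3f) };
            }

            private static float channel(float p, float q, float t) {
                if (t < 0f) t += 1f;
                if (t > 1f) t -= 1f;
                if (t < 1f / 6f) return p + (q - p) * 6f * t;
                if (t < 1f / 2f) return q;
                if (t < 2f / 3f) return p + (q - p) * (2f / 3f - t) * 6f;
                return p;
            }
        }

    With assumedS = 1 every grey comes back fully saturated, which is the kind of loss any 2-component scheme has to accept.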

    Read the article

  • How to configure a background image to sit behind the OpenGL view on Android

    - by Maxim Shoustin
    I have class that draws white line: public class Line { //private FloatBuffer vertexBuffer; private FloatBuffer frameVertices; ByteBuffer diagIndices; float[] vertices = { -0.5f, -0.5f, 0.0f, -0.5f, 0.5f, 0.0f }; public Line(GL10 gl) { // a float has 4 bytes so we allocate for each coordinate 4 bytes ByteBuffer vertexByteBuffer = ByteBuffer.allocateDirect(vertices.length * 4); vertexByteBuffer.order(ByteOrder.nativeOrder()); // allocates the memory from the byte buffer frameVertices = vertexByteBuffer.asFloatBuffer(); // fill the vertexBuffer with the vertices frameVertices.put(vertices); // set the cursor position to the beginning of the buffer frameVertices.position(0); } /** The draw method for the triangle with the GL context */ public void draw(GL10 gl) { gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(2, GL10.GL_FLOAT, 0, frameVertices); gl.glColor4f(1.0f, 1.0f, 1.0f, 1f); gl.glDrawArrays(GL10.GL_LINE_LOOP , 0, vertices.length / 3); gl.glLineWidth(5.0f); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } It works fine. The problem is: When I add BG image, I don't see the line glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0); glView.setRenderer(new mainRenderer(this)); // Use a custom renderer glView.setBackgroundResource(R.drawable.bg_day); // <- BG glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); glView.getHolder().setFormat(PixelFormat.TRANSLUCENT); How to get rid of that?
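
    A sketch of one common workaround (untested against the code above; mainRenderer and R.drawable.bg_day are the names from the question): keep the background out of the GLSurfaceView entirely and layer a plain ImageView behind a translucent GL surface.

        // Inside Activity.onCreate(); needs android.widget.FrameLayout/ImageView,
        // android.opengl.GLSurfaceView and android.graphics.PixelFormat.
        FrameLayout layout = new FrameLayout(this);

        ImageView background = new ImageView(this);
        background.setImageResource(R.drawable.bg_day);        // background lives in a normal View
        background.setScaleType(ImageView.ScaleType.FIT_XY);

        GLSurfaceView glView = new GLSurfaceView(this);
        glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);          // alpha-capable EGL config
        glView.setZOrderOnTop(true);                            // composite the GL surface above the window
        glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        glView.setRenderer(new mainRenderer(this));

        layout.addView(background);
        layout.addView(glView);
        setContentView(layout);

        // In the renderer, clear with a transparent color so the ImageView shows through:
        // gl.glClearColor(0f, 0f, 0f, 0f);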

    Read the article

  • AndEngine player, background and camera

    - by valdemar593
    I'm developing a 2D shooter using AndEngine. At the moment I'm trying to make the camera follow the player. As I understand it, the common approach is to use a SmoothCamera, zooming it and setting the chased entity. The problem is that the camera follows the player WITH the background (a RepeatingSpriteBackground) moving along too, so it looks like the player doesn't move at all even though the actual position changes. So I don't really get how to make the camera follow the player while the background stays put. Thanks in advance.

    Read the article

  • How to texture voxel terrain without triplanar texturing?

    - by Thelvyn
    How can a voxel terrain (marching cubes) be textured without triplanar mapping? The goal is to have more artistic freedom. I think I could unwrap the mesh while extracting the isosurface and then use projective painting, but I do not know how to handle terrain modifications without breaking the texture. I also guess that virtual texturing could help here. Links on these topics would be appreciated.

    Read the article

  • How to attach an object to a rotating circle in box2d cocos2d?

    - by armands
    I am trying to attach an object to a rotating circle at the collision point, with the joint anchored at a fixed point on the player. For example, the player is moving back and forth, and when the user touches the screen the player jumps up; what I need is that when the player collides with the circle, it attaches its legs to it and continues rotating with the circle. So I wanted to know: how can I make this kind of collision joint in cocos2d with Box2D?
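
    A sketch of the usual pattern, written with JBox2D class names to stay in Java (the C++ calls b2WeldJointDef / b2World::CreateJoint have the same shape); playerBody, pendingCircle and contactPoint are placeholders. The one hard rule is that the joint must be created after the current world step, not inside the contact callback.

        import org.jbox2d.common.Vec2;
        import org.jbox2d.dynamics.Body;
        import org.jbox2d.dynamics.World;
        import org.jbox2d.dynamics.joints.WeldJointDef;

        class StickOnContact {
            // Filled in by the contact listener when the player touches the circle
            // (Box2D forbids creating joints while the world is stepping).
            Body pendingCircle;   // the rotating circle's body
            Vec2 contactPoint;    // world-space collision point

            void afterStep(World world, Body playerBody) {
                if (pendingCircle == null) return;
                WeldJointDef jd = new WeldJointDef();
                jd.initialize(playerBody, pendingCircle, contactPoint); // anchor at the hit point
                world.createJoint(jd);  // the player is now rigidly attached and spins with the circle
                pendingCircle = null;
            }
        }

    A RevoluteJointDef initialized the same way would let the player swing around the anchor instead of being locked to it.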

    Read the article

  • Matrix loading problems with jbullet and lwjgl

    - by Quintin
    The following code does not load the matrix correctly from jbullet. //box is a RigidBody Transform trans = new Transform(); trans = box.getMotionState().getWorldTransform(trans); float[] matrix = new float[16]; trans.getOpenGLMatrix(matrix); // pass that matrix to OpenGL and render the cube FloatBuffer buffer = ByteBuffer.allocateDirect(4*16).asFloatBuffer().put(matrix); buffer.rewind(); glPushMatrix(); glMultMatrix(buffer); glBegin(GL_POINTS); glVertex3f(0,0,0); glEnd(); glPopMatrix(); the jbullet is configured as so: CollisionConfiguration = new DefaultCollisionConfiguration(); dispatcher = new CollisionDispatcher(collisionConfiguration); Vector3f worldAabbMin = new Vector3f(-10000,-10000,-10000); Vector3f worldAabbMax = new Vector3f(10000,10000,10000); AxisSweep3 overlappingPairCache = new AxisSweep3(worldAabbMin, worldAabbMax); SequentialImpulseConstraintSolver solver = new SequentialImpulseConstraintSolver(); dynamicWorld = new DiscreteDynamicsWorld(dispatcher, overlappingPairCache, solver, collisionConfiguration); dynamicWorld.setGravity(new Vector3f(0,-10,0)); dynamicWorld.getDispatchInfo().allowedCcdPenetration = 0f; CollisionShape groundShape = new BoxShape(new Vector3f(1000.f, 50.f, 1000.f)); Transform groundTransform = new Transform(); groundTransform.setIdentity(); groundTransform.origin.set(new Vector3f(0.f, -60.f, 0.f)); float mass = 0f; Vector3f localInertia = new Vector3f(0, 0, 0); DefaultMotionState myMotionState = new DefaultMotionState(groundTransform); RigidBodyConstructionInfo rbInfo = new RigidBodyConstructionInfo(mass, myMotionState, groundShape, localInertia); RigidBody body = new RigidBody(rbInfo); dynamicWorld.addRigidBody(body); dynamicWorld.clearForces(); Nothing is rendered on the screen. What am I doing wrong?
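
    One thing worth ruling out first in the snippet above (a guess from reading it, not a verified diagnosis): the direct ByteBuffer is created without setting the native byte order, so the 16 floats reach OpenGL with the wrong endianness and the matrix comes out as garbage. A sketch of the corrected buffer setup:

        // Needs java.nio.ByteBuffer, ByteOrder and FloatBuffer; `matrix` is the
        // float[16] filled by trans.getOpenGLMatrix(matrix).
        static FloatBuffer toNativeOrderBuffer(float[] matrix) {
            FloatBuffer buffer = ByteBuffer.allocateDirect(matrix.length * 4)
                    .order(ByteOrder.nativeOrder())   // must happen before asFloatBuffer()
                    .asFloatBuffer();
            buffer.put(matrix);
            buffer.rewind();
            return buffer;                            // pass the result to glMultMatrix(...)
        }
        // LWJGL's BufferUtils.createFloatBuffer(16) allocates the same kind of buffer for you.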

    Read the article

  • XNA Octree with batching

    - by Alex
    I'm integrating batching into my engine. However, I'm using an octree which is auto-generated around my scene. Now, batching renders a whole group at once, while an octree sorts out which objects should be rendered within the camera frustum, therefore dividing the group. Batching and an octree don't go along very well, right? Problem: the way I see it I have two options: either create batch groups based on objects that are close to one another within the octree, or rebuild the batching matrix buffer for the instances visible each frame. Which approach should I go with, or is there another solution?

    Read the article

  • What are some ways of making manageable complex AI?

    - by Tetrad
    In the past I've used simple systems like finite state machines (FSMs) or hierarchical FSMs to control AI behavior. For any complex system, this pattern falls apart very quickly. I've heard about behavior trees and it seems like that's the next obvious step, but haven't seen a working implementation or really tried going down that route yet. Are there any other patterns to making manageable yet complex AI behaviors?
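
    A minimal sketch of the behavior-tree idea in Java (all names invented for illustration): nodes return a status, and a selector/sequence pair already expresses "try A, otherwise B" and "do A, then B" without the state explosion of a large FSM.

        enum Status { SUCCESS, FAILURE, RUNNING }

        interface Node { Status tick(); }

        // Runs children in order until one succeeds: a prioritized "or".
        class Selector implements Node {
            private final Node[] children;
            Selector(Node... children) { this.children = children; }
            public Status tick() {
                for (Node c : children) {
                    Status s = c.tick();
                    if (s != Status.FAILURE) return s;   // SUCCESS or RUNNING stops the scan
                }
                return Status.FAILURE;
            }
        }

        // Runs children in order until one fails: an "and".
        class Sequence implements Node {
            private final Node[] children;
            Sequence(Node... children) { this.children = children; }
            public Status tick() {
                for (Node c : children) {
                    Status s = c.tick();
                    if (s != Status.SUCCESS) return s;
                }
                return Status.SUCCESS;
            }
        }

        // Example tree: flee when hurt, otherwise patrol.
        // Node enemyAi = new Selector(new Sequence(isHealthLow, fleeToCover), patrol);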

    Read the article

  • What is the correct way to reset and load new data into GL_ARRAY_BUFFER?

    - by Geto
    I am using an array buffer for color data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing: glBindVertexArray(vao); glBindBuffer(GL_ARRAY_BUFFER, colorBuffer); glBufferData(GL_ARRAY_BUFFER, SIZE, colorsData, GL_STATIC_DRAW); glEnableVertexAttribArray(shader->attrib("color")); glVertexAttribPointer(shader->attrib("color"), 3, GL_FLOAT, GL_TRUE, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, 0); It works, but I am not sure if this is a good and efficient way to do it. What happens to the previous data? Does it write on top of it? Do I need to call glDeleteBuffers(1, colorBuffer); glGenBuffers(1, colorBuffer); before transferring the new data into the buffer?
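
    For what it's worth, calling glBufferData on an existing buffer object just gives it a fresh data store and the driver releases the old one (the usual "orphaning" pattern), so no glDeleteBuffers/glGenBuffers round trip is needed. A sketch of the per-update path with GL_DYNAMIC_DRAW, written against LWJGL-style Java bindings, though the calls map one-to-one to the C API:

        // `newColors` is a FloatBuffer holding the fresh data; colorBuffer is the existing VBO id.
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        // Either re-specify the whole store (orphans the previous one)...
        glBufferData(GL_ARRAY_BUFFER, newColors, GL_DYNAMIC_DRAW);
        // ...or keep the allocation and overwrite the contents in place:
        // glBufferSubData(GL_ARRAY_BUFFER, 0, newColors);
        glBindBuffer(GL_ARRAY_BUFFER, 0);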

    Read the article

  • Constrained A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to Constrained a little bit. Basically : I use an A* to find the shortest path between 2 randomly placed room in 3D space, and then build a corridor between them. The problem I found is that sometimes it makes chimney like corridors that are not ideal, so I constrict the A* so that if the last movement was up or down, you go sideways. Everything is fine, but in some corner cases, it fails to find a path (when there is obviously one). Like here between the blue and red dot : (i'm in unity btw, but i don't think it matters) Here is the code of the actual A* (a bit long, and some redundency) while(current != goal) { //add stair up / stair down foreach(Node<GridUnit> test in current.Neighbors) { if(!test.Data.empty && test != goal) continue; //bug at arrival; if(test == goal && penul !=null) { Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(currentDiff.y,0)) { //wanna drop on the last if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,to.Data.bounds.center)) { continue; } else { if(Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) && Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z)) { continue; } } } } if(current.Data.parentUnit != null) { Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center; Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(previousDiff.y,0)) { if(!Mathf.Approximately(currentDiff.y,0)) { //you wanna drop now : continue; } if(current.Data.parentUnit.parentUnit != null) { if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,current.Data.parentUnit.parentUnit.bounds.center)) { continue; }else { if(Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) && Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z)) { continue; } } } } } g = current.Data.g + HEURISTIC(current.Data,test.Data); h = HEURISTIC(test.Data,goal.Data); f = g + h; if(open.Contains(test) || closed.Contains(test)) { if(test.Data.f > f) { //found a shorter path going passing through that point test.Data.f = f; test.Data.g = g; test.Data.h = h; test.Data.parentUnit = current.Data; } } else { //jamais rencontré test.Data.f = f; test.Data.h = h; test.Data.g = g; test.Data.parentUnit = current.Data; open.Add(test); } } closed.Add (current); if(open.Count == 0) { Debug.Log("nothingfound"); //nothing more to test no path found, stay to from; List<GridUnit> r = new List<GridUnit>(); r.Add(from.Data); return r; } //sort open from small to biggest travel cost open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) { return (int)(x.Data.f-y.Data.f); }); //get the smallest travel cost node; Node<GridUnit> smallest = open[0]; current = smallest; open.RemoveAt(0); } //build the path going backward; List<GridUnit> ret = new List<GridUnit>(); if(penul != null) { ret.Insert(0,to.Data); } GridUnit cur = goal.Data; ret.Insert(0,cur); do{ cur = cur.parentUnit; ret.Insert(0,cur); } while(cur != from.Data); return ret; You see at the start of the foreach i constrict the A* like i said. If you have any insight it would be cool. Thanks
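
    One way to keep the constraint without losing reachable goals (a sketch, not tied to the GridUnit types above): make the search state "cell plus the direction it was entered from", so the same cell reached sideways and reached vertically are two different A* nodes, and closing one does not block the other.

        import java.util.Objects;

        // Node identity for the open/closed sets: position plus how the cell was entered.
        final class SearchState {
            final int x, y, z;
            final boolean enteredVertically;   // was the move into this cell up or down?

            SearchState(int x, int y, int z, boolean enteredVertically) {
                this.x = x; this.y = y; this.z = z;
                this.enteredVertically = enteredVertically;
            }

            @Override public boolean equals(Object o) {
                if (!(o instanceof SearchState)) return false;
                SearchState s = (SearchState) o;
                return x == s.x && y == s.y && z == s.z && enteredVertically == s.enteredVertically;
            }

            @Override public int hashCode() {
                return Objects.hash(x, y, z, enteredVertically);
            }
        }

    Neighbor expansion then forbids a vertical move only when enteredVertically is true: the same rule as in the code above, but applied per state instead of per cell.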

    Read the article

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels, and it will be driven by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display function. I can pass pixel information using a memory-mapped block, so is there any way to do that with pygame instead of sending it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial

    Read the article

  • AS3 - At exactly 23 empty alpha channels, images below stop drawing

    - by user46851
    I noticed, while trying to draw large numbers of circles, that occasionally, there would be some kind of visual bug where some circles wouldn't draw properly. Well, I narrowed it down, and have noticed that if there is 23 or more objects with 00 for an alpha value on the same spot, then the objects below don't draw. It appears to be on a pixel-by-pixel basis, since parts of the image still draw. Originally, this problem was noticed with a class that inherited Sprite. It was confirmed to also be a problem with Sprites, and now Bitmaps, too. If anyone can find a lower-level class than Bitmap which doesn't have this problem, please speak up so we can try to find the origin of the problem. I prepared a small test class that demonstrates what I mean. You can change the integer value at line 20 in order to see the three tests I came up with to clearly show the problem. Is there any kind of workaround, or is this just a limit that I have to work with? Has anyone experienced this before? Is it possible I'm doing something wrong, despite the bare-bones implementation? package { import flash.display.Sprite; import flash.events.Event; import flash.display.Bitmap; import flash.display.BitmapData; public class Main extends Sprite { public function Main():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event = null):void { removeEventListener(Event.ADDED_TO_STAGE, init); // entry point Test(3); } private function Test(testInt:int):void { if(testInt==1){ addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000))); for (var i:int = 0; i < 22; i++) { addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000))); } } if(testInt==2){ addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000))); for (var j:int = 0; j < 23; j++) { addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000))); } } if(testInt==3){ addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000))); for (var k:int = 0; k < 22; k++) { addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000))); } var M:Bitmap = new Bitmap(new BitmapData(100, 100, true, 0x00000000)); M.x += 50; M.y += 50; addChild(M); } } } }

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept I'm currently looking in to how voxel engines work with a view to possibly making one myself. I see a lot of stuff like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to go about reducing the draw calls. What I can't seem to understand is how it actually saves draw call counts on the basis of the logic being something like this ... Without chunks foreach voxel in myvoxels DrawIfVisible() With Chunks foreach chunk in mychunks DrawIfVisible() which then does ... foreach voxel in myvoxels DrawIfVisible() So surely you saved nothing ?!?! You still make a draw call for each visible voxel do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk will be able to determine if a large number of voxels are visible or not effectively saving a bit of "is this chunk visible" cpu time. But it's the draw calls that interest me ... The fewer of those, the faster the application. EDIT: In case it makes any difference I will probably be using XNA (DX not OpenGL) for my engine so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.

    Read the article

  • In 3D camera math, calculate at what Z depth pixels are 1:1 ("pixel unity") for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum to where at a specific z depth pixels drawn are 1 to 1 with my source textures. So 1 pixel in my texture is 1 pixel on the screen. For 2d games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example if my sprite is a quad 32x32 pixels. The quad size is 3.2 units wide and tall. And the texcoords are 32 / the size of the texture wide and tall. Then the frustum is: matrixFrustum(-(float)backingWidth/frustumScale,(float)backingWidth/frustumScale, -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale, 40, 1000, mProjection); Where frustumScale is 800 for a retina screen. Then at a distance of 800 from camera the sprite is pixel for pixel the same as photoshop. For 3d games sometimes I still want to be able to do this. But depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using: matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection); With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity. And at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspecitve code: void matrixPerspective(float angle, float near, float far, float aspect, mat4 m) { //float size = near * tanf(angle / 360.0 * M_PI); float size = near * tanf(degreesToRadians(angle) / 2.0); float left = -size, right = size, bottom = -size / aspect, top = size / aspect; // Unused values in perspective formula. m[1] = m[2] = m[3] = m[4] = 0; m[6] = m[7] = m[12] = m[13] = m[15] = 0; // Perspective formula. m[0] = 2 * near / (right - left); m[5] = 2 * near / (top - bottom); m[8] = (right + left) / (right - left); m[9] = (top + bottom) / (top - bottom); m[10] = -(far + near) / (far - near); m[11] = -1; m[14] = -(2 * far * near) / (far - near); } And my mView is set using: lookAtMatrix(cameraPos, camLookAt, camUpVector, mView); * UPDATE * I'm going to leave this here in case anyone has a different solution, can explain how they do it, or why this works. This is what I figured out. In my system I use a 10th scale unit to pixels on non-retina displays and a 20th scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width I divide by 20 to get the OpenGL unit width. Then divide that by 2 to get the left and right unit position. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not I have an excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary thing I made up to do .1 units = 1 non-retina pixel or 2 retina pixels. I could have made it .01 units = 2 pixels and someday I might switch to that. But for now it's the other. So the width of the screen in units is 32.0, and that means the left most pixel is at -16.0 and the right most is at 16.0. After messing a bit I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16 I get the depth required to get 1:1 pixels. I don't know why. I don't know if the 16 is related to the screen size or just a lucky guess. 
But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
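
    The trial-and-error values above fit a closed form. With the matrixPerspective shown (where the angle spans the screen width, since top/bottom are divided by aspect), the visible width at depth Z is 2 * Z * tan(fov / 2) world units, so pixels are 1:1 exactly when that width equals the screen width in units. A sketch; the 640 and 20 are the retina numbers from the question:

        final class PixelUnity {
            // Depth at which 1 texel maps to 1 screen pixel for the projection above.
            static float depthFor(float fovDegrees, float screenWidthPixels, float pixelsPerUnit) {
                float halfWidthUnits = screenWidthPixels / pixelsPerUnit * 0.5f;  // 640 / 20 / 2 = 16
                return halfWidthUnits / (float) Math.tan(Math.toRadians(fovDegrees) * 0.5);
            }
        }
        // PixelUnity.depthFor(45f, 640f, 20f) ~= 38.6 and depthFor(30f, 640f, 20f) ~= 59.7,
        // matching the 38.5 and 59.5 found by hand. The "m[0] * 16" observation is the same
        // formula, because m[0] of this projection equals 1 / tan(fov / 2).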

    Read the article

  • Sprite rotation

    - by Kipras
    I'm using OpenGL, and people suggest using glRotate for sprite rotation, but I find that strange. My problem with it is that it rotates the whole matrix, which sort of screws up all my collision detection. Imagine I have a sprite at position (100, 100), an obstacle at (100, 200), and the sprite facing it. I rotate the sprite away from the obstacle, and when I move upwards along my y axis, even though on screen it looks like it's moving away from the obstacle, the sprite will intersect it. So I don't see another way of rotating a sprite without screwing up collision detection other than doing mathematical operations on the image itself. Am I right, or am I missing something?
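
    A small Java sketch of the usual pattern (field and method names invented): keep the facing angle as game state and derive the movement vector from it, so collision uses the same positions the renderer does and glRotate stays a draw-time-only concern.

        class Sprite2D {
            float x, y;
            float angleDegrees;   // 0 = facing +Y in this sketch; pick your own convention

            void moveForward(float speed, float dt) {
                double rad = Math.toRadians(angleDegrees);
                x += (float) (-Math.sin(rad) * speed * dt);
                y += (float) ( Math.cos(rad) * speed * dt);
                // Collision runs against (x, y), which already reflect the facing,
                // so turning away from an obstacle really does move away from it.
            }
        }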

    Read the article

  • Having trouble understanding the NIF model file format

    - by NoobScratcher
    I'm attempting to develop a 3rd-party application to make it easy to import 3D model parts into my mod for Skyrim. The plan was to have a file viewer and a preview window for the NIF model, but since I don't know what the NIF file format actually is, where to get the vertex data from it, or the whole nine yards of parsing such a file in detail, I'm at a loss what to do. I'm very good at C++ but not at these super over-complicated file formats; I'd much prefer .obj over the NIF file format specification here -- http://niftools.sourceforge.net/doc/nif/index.html If someone could help me understand the file format in a natural and simple way, along with the exact parsing needed to create the 3D model in the frustum and an explanation of how you figured that out, I would be happy to know. I use Cygwin, Notepad++, win32 7.

    Read the article

  • What's wrong with this OpenGL model picking code?

    - by openglNewbie
    I am making simple model viewer using OpenGL. When I want to pick an object OpenGL returns nothing or an object that is in another place. This is my code: GLuint buff[1024] = {0}; GLint hits,view[4]; glSelectBuffer(1024,buff); glGetIntegerv(GL_VIEWPORT, view); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); gluPickMatrix(x,y,1.0,1.0,view); gluPerspective(45,(float)view[2]/(float)view[4],1.0,1500.0); glMatrixMode(GL_MODELVIEW); glRenderMode(GL_SELECT); glLoadIdentity(); //I make the same transformations for normal render glTranslatef(0, 0, -zoom); glMultMatrixf(transform.M); glInitNames(); glPushName(-1); for(int j=0;j<allNodes.size();j++) { glLoadName(allNodes.at(j)->id); allNodes.at(j)->Draw(textures); } glPopName(); glMatrixMode(GL_PROJECTION); glPopMatrix(); hits = glRenderMode(GL_RENDER);
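
    One detail worth checking in the snippet above (an observation, not a confirmed fix): the viewport array is {x, y, width, height}, so the aspect ratio should use view[3]; view[4] reads past the array. Depending on where x and y come from, the mouse Y may also need flipping into GL's bottom-left origin. A sketch of that part in LWJGL-flavored Java, since the calls are the same; mouseX and mouseY are placeholders:

        IntBuffer view = BufferUtils.createIntBuffer(16);
        glGetInteger(GL_VIEWPORT, view);               // {x, y, width, height}

        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        gluPickMatrix(mouseX, view.get(3) - mouseY, 1.0f, 1.0f, view);
        gluPerspective(45.0f, (float) view.get(2) / (float) view.get(3), 1.0f, 1500.0f);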

    Read the article

  • glTexImage2D not loading my data

    - by Clyde
    Can anyone suggest why this code doesn't work? When I draw using this texture all I get is black. If I use GLUtils.texImage2D() to load a png file, it works correctly. ByteBuffer bb = ByteBuffer.allocateDirect(128*128*4).order(ByteOrder.nativeOrder()); bb.position(0); for(int row = 0; row != 128; row++) { for(int i = 0 ; i != 128 ; i++) { bb.put((byte)0x80); bb.put((byte)0xFF); bb.put((byte)0xFF); bb.put((byte)i); } } int[] handle = new int[1]; GLES20.glEnable(GLES20.GL_TEXTURE_2D); GLES20.glGenTextures(1, handle, 0); DrawAdapter.checkGlError("Gen textures"); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]); DrawAdapter.checkGlError("Bind textures"); bb.position(0); GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb); DrawAdapter.checkGlError("glTexImage2D"); return handle[0];
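
    One common cause of an all-black texture in ES 2.0, worth ruling out here: the default minification filter is GL_NEAREST_MIPMAP_LINEAR, so a texture without mipmaps is incomplete and samples as black. A sketch of the parameters to set right after glBindTexture:

        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        // glEnable(GL_TEXTURE_2D) is fixed-function state and has no effect in ES 2.0 shaders.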

    Read the article

  • How can I render multiple windows with DirectX 9 in C++?

    - by Friso1990
    I'm trying to render multiple windows, using DirectX 9 and swap chains, but even though I create 2 windows, I only see the first one that I've created. My RendererDX9 header is this: #include <d3d9.h> #include <Windows.h> #include <vector> #include "RAT_Renderer.h" namespace RAT_ENGINE { class RAT_RendererDX9 : public RAT_Renderer { public: RAT_RendererDX9(); ~RAT_RendererDX9(); void Init(RAT_WindowManager* argWMan); void CleanUp(); void ShowWin(); private: LPDIRECT3D9 renderInterface; // Used to create the D3DDevice LPDIRECT3DDEVICE9 renderDevice; // Our rendering device LPDIRECT3DSWAPCHAIN9* swapChain; // Swapchain to make multi-window rendering possible WNDCLASSEX wc; std::vector<HWND> hwindows; void Render(int argI); }; } And my .cpp file is this: #include "RAT_RendererDX9.h" static LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ); namespace RAT_ENGINE { RAT_RendererDX9::RAT_RendererDX9() : renderInterface(NULL), renderDevice(NULL) { } RAT_RendererDX9::~RAT_RendererDX9() { } void RAT_RendererDX9::Init(RAT_WindowManager* argWMan) { wMan = argWMan; // Register the window class WNDCLASSEX windowClass = { sizeof( WNDCLASSEX ), CS_CLASSDC, MsgProc, 0, 0, GetModuleHandle( NULL ), NULL, NULL, NULL, NULL, "foo", NULL }; wc = windowClass; RegisterClassEx( &wc ); for (int i = 0; i< wMan->getWindows().size(); ++i) { HWND hWnd = CreateWindow( "foo", argWMan->getWindow(i)->getName().c_str(), WS_OVERLAPPEDWINDOW, argWMan->getWindow(i)->getX(), argWMan->getWindow(i)->getY(), argWMan->getWindow(i)->getWidth(), argWMan->getWindow(i)->getHeight(), NULL, NULL, wc.hInstance, NULL ); hwindows.push_back(hWnd); } // Create the D3D object, which is needed to create the D3DDevice. renderInterface = (LPDIRECT3D9)Direct3DCreate9( D3D_SDK_VERSION ); // Set up the structure used to create the D3DDevice. Most parameters are // zeroed out. We set Windowed to TRUE, since we want to do D3D in a // window, and then set the SwapEffect to "discard", which is the most // efficient method of presenting the back buffer to the display. And // we request a back buffer format that matches the current desktop display // format. D3DPRESENT_PARAMETERS deviceConfig; ZeroMemory( &deviceConfig, sizeof( deviceConfig ) ); deviceConfig.Windowed = TRUE; deviceConfig.SwapEffect = D3DSWAPEFFECT_DISCARD; deviceConfig.BackBufferFormat = D3DFMT_UNKNOWN; deviceConfig.BackBufferHeight = 1024; deviceConfig.BackBufferWidth = 768; deviceConfig.EnableAutoDepthStencil = TRUE; deviceConfig.AutoDepthStencilFormat = D3DFMT_D16; // Create the Direct3D device. Here we are using the default adapter (most // systems only have one, unless they have multiple graphics hardware cards // installed) and requesting the HAL (which is saying we want the hardware // device rather than a software one). Software vertex processing is // specified since we know it will work on all cards. On cards that support // hardware vertex processing, though, we would see a big performance gain // by specifying hardware vertex processing. 
renderInterface->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwindows[0], D3DCREATE_SOFTWARE_VERTEXPROCESSING, &deviceConfig, &renderDevice ); this->swapChain = new LPDIRECT3DSWAPCHAIN9[wMan->getWindows().size()]; this->renderDevice->GetSwapChain(0, &swapChain[0]); for (int i = 0; i < wMan->getWindows().size(); ++i) { renderDevice->CreateAdditionalSwapChain(&deviceConfig, &swapChain[i]); } renderDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW); // Set cullmode to counterclockwise culling to save resources renderDevice->SetRenderState(D3DRS_AMBIENT, 0xffffffff); // Turn on ambient lighting renderDevice->SetRenderState(D3DRS_ZENABLE, TRUE); // Turn on the zbuffer } void RAT_RendererDX9::CleanUp() { renderDevice->Release(); renderInterface->Release(); } void RAT_RendererDX9::Render(int argI) { // Clear the backbuffer to a blue color renderDevice->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 255 ), 1.0f, 0 ); LPDIRECT3DSURFACE9 backBuffer = NULL; // Set draw target this->swapChain[argI]->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer); this->renderDevice->SetRenderTarget(0, backBuffer); // Begin the scene renderDevice->BeginScene(); // End the scene renderDevice->EndScene(); swapChain[argI]->Present(NULL, NULL, hwindows[argI], NULL, 0); } void RAT_RendererDX9::ShowWin() { for (int i = 0; i < wMan->getWindows().size(); ++i) { ShowWindow( hwindows[i], SW_SHOWDEFAULT ); UpdateWindow( hwindows[i] ); // Enter the message loop MSG msg; while( GetMessage( &msg, NULL, 0, 0 ) ) { if (PeekMessage( &msg, NULL, 0U, 0U, PM_REMOVE ) ) { TranslateMessage( &msg ); DispatchMessage( &msg ); } else { Render(i); } } } } } LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ) { switch( msg ) { case WM_DESTROY: //CleanUp(); PostQuitMessage( 0 ); return 0; case WM_PAINT: //Render(); ValidateRect( hWnd, NULL ); return 0; } return DefWindowProc( hWnd, msg, wParam, lParam ); } I've made a sample function to make multiple windows: void RunSample1() { //Create the window manager. RAT_ENGINE::RAT_WindowManager* wMan = new RAT_ENGINE::RAT_WindowManager(); //Create the render manager. RAT_ENGINE::RAT_RenderManager* rMan = new RAT_ENGINE::RAT_RenderManager(); //Create a window. //This is currently needed to initialize the render manager and create a renderer. wMan->CreateRATWindow("Sample 1 - 1", 10, 20, 640, 480); wMan->CreateRATWindow("Sample 1 - 2", 150, 100, 480, 640); //Initialize the render manager. rMan->Init(wMan); //Show the window. rMan->getRenderer()->ShowWin(); } How do I get the multiple windows to work?

    Read the article

  • When attaching AI to a vehicle should I define all steps or try Line of Sight?

    - by ThorDivDev
    This problem is related to an intersection simulation I am building for university. I will try to make it as general as possible. I am trying to assign AI to a vehicle using the JMonkeyEngine platform. AIGama_JmonkeyEngine explains that if you wish to create a car that follows a path you can define the path in steps. If there was no physics attached whatsoever then all you need to do is define the x,y,z values of where you want the object to appear in all subsequent steps. I am attaching the vehicleControl that implements jBullet. In this case the author mentions that I would need to define the steering and accelerating behaviors at each step. I was trying to use ghost controls that represented waypoints and when on colliding the car would decide what to do next like stopping at a red light. This didn't work so well. Car doesn't face right. public void update(float tpf) { Vector3f currentPos = aiVehicle.getPhysicsLocation(); Vector3f baseforwardVector = currentPos.clone(); Vector3f forwardVector; Vector3f subsVector; if (currentState == ObjectState.Running) { aiVehicle.accelerate(-800); } else if (currentState == ObjectState.Seeking) { baseforwardVector = baseforwardVector.normalize(); forwardVector = aiVehicle.getForwardVector(baseforwardVector); subsVector = pointToSeek.subtract(currentPos.clone()); System.out.printf("baseforwardVector: %f, %f, %f\n", baseforwardVector.x, baseforwardVector.y, baseforwardVector.z); System.out.printf("subsVector: %f, %f, %f\n", subsVector.x, subsVector.y, subsVector.z); System.out.printf("ForwardVector: %f, %f, %f\n", forwardVector.x, forwardVector.y, forwardVector.z); if (pointToSeek != null && pointToSeek.x + 3 >= currentPos.x && pointToSeek.x - 3 <= currentPos.x) { aiVehicle.steer(0.0f); aiVehicle.accelerate(-40); } else if (pointToSeek != null && pointToSeek.x > currentPos.x) { aiVehicle.steer(-0.5f); aiVehicle.accelerate(-40); } else if (pointToSeek != null && pointToSeek.x < currentPos.x) { aiVehicle.steer(0.5f); aiVehicle.accelerate(-40); } } else if (currentState == ObjectState.Stopped) { aiVehicle.accelerate(0); aiVehicle.brake(40); } }
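
    A sketch of steering from the signed angle between the vehicle's forward vector and the direction to the waypoint, rather than comparing raw X coordinates (jMonkeyEngine math types; the steering sign and the 0.5f cap are assumptions that may need adjusting for a particular vehicle setup):

        // Y-up world, steering on the XZ plane toward pointToSeek; runs in the Seeking branch.
        Vector3f toTarget = pointToSeek.subtract(currentPos);
        toTarget.y = 0f;
        Vector3f forward = aiVehicle.getForwardVector(null);
        forward.y = 0f;

        float angle = forward.normalizeLocal().angleBetween(toTarget.normalizeLocal());
        float side  = Math.signum(forward.cross(toTarget).y);  // left or right of the forward axis
        float steer = side * Math.min(angle, 0.5f);            // cap at the steering limit

        aiVehicle.steer(steer);        // flip the sign if the vehicle turns the wrong way
        aiVehicle.accelerate(-40f);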

    Read the article

  • Circular movement - eliminating speed ups near Y = 0

    - by Fibericon
    I have a basic algorithm to rotate an enemy around a 200 unit radius circle with center 0. This is how I'm achieving that: if (position.Y <= 0 && position.X > -200) { position.X -= 2; position.Y = 0 - (float)Math.Sqrt((200 * 200) - (position.X * position.X)); } else { position.X += 2; position.Y = (float)Math.Sqrt((200 * 200) - (position.X * position.X)); } It does work, and I've ensured that at no point does either X or Y equal NaN. However, when Y approaches 0, it seems to go significantly faster. This surprises me, because the Y values are locked to the X, which is being incremented by a steady amount. What can I do to smooth the speed?
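
    The speed-up comes from stepping X by a constant amount: near Y = 0 the circle is nearly vertical, so equal X steps cover much longer arcs. Stepping an angle instead gives constant speed everywhere. A Java-flavored sketch (in the XNA code above this maps to position.X/.Y and Math.Cos/Math.Sin; `angle` is a new float field that replaces the X-stepping logic):

        angle += 0.01f;                            // radians per update, roughly 2 units of arc at radius 200
        position.x = 200f * (float) Math.cos(angle);
        position.y = 200f * (float) Math.sin(angle);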

    Read the article

  • Why are my sprite sheet's frames not visible in Cocos Builder?

    - by Ramy Al Zuhouri
    I have created a sprite sheet with zwoptex. Then I just dragged the .plist and .png files to my Cocos Builder project. After this I wanted to take a sprite frame and set it to a sprite: But the sprite image is empty! At first I thought it was just empty in Cocos Builder, and that it must have been working correctly when imported to a project. But if I try to run that scene with CCBReader I still see an empty sprite. Why?

    Read the article

  • Enemies don't shoot. What is wrong? [closed]

    - by Bryan
    I want that every enemy shoots independently bullets. If an enemy’s bullet left the screen, the enemy can shoot a new bullet. Not earlier. But for the moment, the enemies don't shoot. Not a single bullet. I guess their is something wrong with my Enemy class, but I can't find a bug and I get no error message. What is wrong? public class Map { Texture2D myEnemy, myBullet ; Player Player; List<Enemy> enemieslist = new List<Enemy>(); List<Bullet> bulletslist = new List<Bullet>(); float fNextEnemy = 0.0f; float fEnemyFreq = 3.0f; int fMaxEnemy = 3 ; Vector2 Startposition = new Vector2(200, 200); GraphicsDeviceManager graphicsDevice; public Map(GraphicsDeviceManager device) { graphicsDevice = device; } public void Load(ContentManager content) { myEnemy = content.Load<Texture2D>("enemy"); myBullet = content.Load<Texture2D>("bullet"); Player = new Player(graphicsDevice); Player.Load(content); } public void Update(GameTime gameTime) { Player.Update(gameTime); float delta = (float)gameTime.ElapsedGameTime.TotalSeconds; for(int i = enemieslist.Count - 1; i >= 0; i--) { // Update Enemy Enemy enemy = enemieslist[i]; enemy.Update(gameTime, this.graphicsDevice, Player.playershape.Position, delta); // Try to remove an enemy if (enemy.Remove == true) { enemieslist.Remove(enemy); enemy.Remove = false; } } this.fNextEnemy += delta; //New enemy if (fMaxEnemy > 0) { if ((this.fNextEnemy >= fEnemyFreq) && (enemieslist.Count < 3)) { Vector2 enemyDirection = Vector2.Normalize(Player.playershape.Position - Startposition) * 100f; enemieslist.Add(new Enemy(Startposition, enemyDirection, Player.playershape.Position)); fMaxEnemy -= 1; fNextEnemy -= fEnemyFreq; } } } public void Draw(SpriteBatch batch) { Player.Draw(batch); foreach (Enemy enemies in enemieslist) { enemies.Draw(batch, myEnemy); } foreach (Bullet bullets in bulletslist) { bullets.Draw(batch, myBullet); } } } public class Enemy { List<Bullet> bulletslist = new List<Bullet>(); private float nextShot = 0; private float shotFrequency = 2.0f; Vector2 vPos; Vector2 vMove; Vector2 vPlayer; public bool Remove; public bool Shot; public Enemy(Vector2 Pos, Vector2 Move, Vector2 Player) { this.vPos = Pos; this.vMove = Move; this.vPlayer = Player; this.Remove = false; this.Shot = false; } public void Update(GameTime gameTime, GraphicsDeviceManager graphics, Vector2 PlayerPos, float delta) { nextShot += delta; for (int i = bulletslist.Count - 1; i >= 0; i--) { // Update Bullet Bullet bullets = bulletslist[i]; bullets.Update(gameTime, graphics, delta); // Try to remove a bullet... Collision, hit, or outside screen. if (bullets.Remove == true) { bulletslist.Remove(bullets); bullets.Remove = false; } } if (nextShot >= shotFrequency) { this.Shot = true; nextShot -= shotFrequency; } // Does the enemy shot? 
if ((Shot == true) && (bulletslist.Count < 1)) // New bullet { Vector2 bulletDirection = Vector2.Normalize(PlayerPos - this.vPos) * 200f; bulletslist.Add(new Bullet(this.vPos, bulletDirection, PlayerPos)); Shot = false; } if (!Remove) { this.vMove = Vector2.Normalize(PlayerPos - this.vPos) * 100f; this.vPos += this.vMove * delta; if (this.vPos.X > graphics.PreferredBackBufferWidth + 1) { this.Remove = true; } else if (this.vPos.X < -20) { this.Remove = true; } if (this.vPos.Y > graphics.PreferredBackBufferHeight + 1) { this.Remove = true; } else if (this.vPos.Y < -20) { this.Remove = true; } } } public void Draw(SpriteBatch batch, Texture2D myTexture) { if (!Remove) { batch.Draw(myTexture, this.vPos, Color.White); } } } public class Bullet { Vector2 vPos; Vector2 vMove; Vector2 vPlayer; public bool Remove; public Bullet(Vector2 Pos, Vector2 Move, Vector2 Player) { this.Remove = false; this.vPos = Pos; this.vMove = Move; this.vPlayer = Player; } public void Update(GameTime gameTime, GraphicsDeviceManager graphics, float delta) { if (!Remove) { this.vPos += this.vMove * delta; if (this.vPos.X > graphics.PreferredBackBufferWidth +1) { this.Remove = true; } else if (this.vPos.X < -20) { this.Remove = true; } if (this.vPos.Y > graphics.PreferredBackBufferHeight +1) { this.Remove = true; } else if (this.vPos.Y < -20) { this.Remove = true; } } } public void Draw(SpriteBatch spriteBatch, Texture2D myTexture) { if (!Remove) { spriteBatch.Draw(myTexture, this.vPos, Color.White); } } }

    Read the article

  • Cannot find the Cocos2d templates

    - by PeterK
    I am about to upgrade to the latest version of Cocos2d and would like to uninstall my current Cocos2d templates before installing the new one, but cannot find the templates to delete. I have looked at a number of web comments on this, such as Uninstall Cocos2D and another uninstall example, but to no avail. However, I still see Cocos2d in my Xcode (4.5) framework. I have been searching my directories but cannot find it. Is there anyone out there who can give me a hint where to find it so I can delete it?

    Read the article

  • Vertex data split into separate buffers or in one structure?

    - by kiba2
    Is it better to have all vertex data in one structure like this: class MyVertex { int x,y,z; int u,v; int normalx, normaly, normalz; } Or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because they'd always be the same for each instance of a shared vertex, and that seems to be true for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One case where this doesn't seem to work is other kinds of meshes, like a cube, where each face's texture coordinates may be the same but shared vertices then need different texture coordinates. Does everybody normally keep them separate? Won't this make them less space efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) vs non-indexed buffers in the same VBO?
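
    On the last question: a draw call has a single index list, and an index always selects a complete vertex (position, normal and UV together), so a cube corner that needs different UVs per face is simply duplicated into two or three vertices; there is no per-attribute indexing in core OpenGL. Interleaving is then just a stride/offset choice. A sketch with invented attribute locations and buffer ids:

        // One interleaved VBO: 3 floats position, 3 floats normal, 2 floats UV per vertex.
        int stride = (3 + 3 + 2) * 4;                        // bytes per vertex
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(positionLoc, 3, GL_FLOAT, false, stride, 0L);
        glVertexAttribPointer(normalLoc,   3, GL_FLOAT, false, stride, 3L * 4);
        glVertexAttribPointer(uvLoc,       2, GL_FLOAT, false, stride, 6L * 4);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0L);  // indices pick whole vertices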

    Read the article
