Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • Things to do to port a game made for iOS in Unity to Android?

    - by 2600th
    I have just made my first game for iOS and submitted it to the App Store. I was thinking of porting my game to Android as well. I would like to know the things one needs to do or remember when porting a game made for iOS in Unity to Android: how to handle different screen resolutions and pixel densities, what optimizations are required, etc. Any other suggestions and important things you think I should know? EDIT: Also, should I handle builds according to device resolutions or by pixel density?
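
    A minimal sketch of one way to compensate for different pixel densities in Unity, assuming a 2D game with an orthographic camera; the component name, the fallback DPI and the world-units-per-inch value are illustrative assumptions, not anything from the original question:

        using UnityEngine;

        // Sizes the orthographic camera from the physical screen height so that
        // objects appear roughly the same physical size on every device.
        [RequireComponent(typeof(Camera))]
        public class DensityIndependentCamera : MonoBehaviour
        {
            public float worldUnitsPerInch = 1f; // assumed tuning value
            public float fallbackDpi = 160f;     // Screen.dpi can report 0 on some devices

            void Start()
            {
                float dpi = Screen.dpi > 0f ? Screen.dpi : fallbackDpi;
                float screenHeightInches = Screen.height / dpi;
                // orthographicSize is half the visible height in world units.
                GetComponent<Camera>().orthographicSize = screenHeightInches * worldUnitsPerInch * 0.5f;
            }
        }

    One usage note: Screen.dpi can legitimately return 0 (for example in the editor), so any density-based branching needs a fallback like the one above.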

  • Creating a WARP device in managed DirectX

    - by arex
    I have a very old graphics card that only supports shader model 2, but I need shader model 3 or higher for the app I am developing. I tried to use a reference device but it runs very slowly; then I found some samples in C++ that allow me to switch to a WARP device, and the performance is good. I am using C# and I don't know how to create that type of device. So the question is: how do I create a WARP device in C#? Thanks in advance.
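
    For reference, legacy Managed DirectX wraps Direct3D 9, which has no WARP driver; WARP is exposed through Direct3D 10.1/11, so a managed D3D11 wrapper is needed. A minimal sketch using the SharpDX bindings (an assumption on my part; SlimDX is similar), not the poster's existing setup:

        using SharpDX.Direct3D;
        using SharpDX.Direct3D11;

        class Program
        {
            static void Main()
            {
                // DriverType.Warp selects the WARP software rasterizer instead of the GPU driver.
                using (var device = new Device(DriverType.Warp, DeviceCreationFlags.BgraSupport))
                {
                    // WARP reports feature level 10.1 or higher, well beyond shader model 2.
                    System.Console.WriteLine(device.FeatureLevel);
                }
            }
        }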

  • What's wrong with this OpenGL ES 2.0 shader?

    - by Project Dumbo Dev
    I just can't understand this. The code works perfectly on the emulator (which is supposed to cause more problems than real phones), but when I try it on an LG-E610 it doesn't compile the vertex shader. This is my log error (which contained the shader code as well). EDITED - here is the shader:

        uniform mat4 u_Matrix;
        uniform int u_XSpritePos;
        uniform int u_YSpritePos;
        uniform float u_XDisplacement;
        uniform float u_YDisplacement;
        attribute vec4 a_Position;
        attribute vec2 a_TextureCoordinates;
        varying vec2 v_TextureCoordinates;

        void main(){
            v_TextureCoordinates.x = (a_TextureCoordinates.x + u_XSpritePos) * u_XDisplacement;
            v_TextureCoordinates.y = (a_TextureCoordinates.y + u_YSpritePos) * u_YDisplacement;
            gl_Position = u_Matrix * a_Position;
        }

    The log reports this before loading/compiling the shader:

        11-05 18:46:25.579: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x51984000 size:5570560 offset:4956160 fd:46
        11-05 18:46:25.629: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x5218d000 size:5836800 offset:5570560 fd:49

    Maybe it has something to do with that memalloc? The phone also gives a constant error while plugged in: ERROR FBIOGET_ESDCHECKLOOP fail, from msm7627a.gralloc. Edited: "InfoLog:" refers to glGetShaderInfoLog, and it is returning nothing. Since I removed the log in a previous edit, I will just say I'm looking for feedback on compiling shaders. Solution + more questions: the problem seems to be that either ints are not working (generally speaking) or that you can't mix floats with ints. That raises the question: why on earth is glGetShaderInfoLog returning nothing? Shouldn't it tell me something is wrong on those lines? It certainly does when I misspell something. I solved it by turning everything into floats, but if someone can shed some light on this, it would be appreciated. Thanks.
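
    On why the all-float version works: GLSL ES 1.00 has no implicit int-to-float conversion, so a strict mobile compiler rejects the float-plus-int arithmetic above while the lenient emulator accepts it. On getting compile feedback, the usual pattern is to query the compile status explicitly and treat the info log as optional extra detail. A hedged sketch of that check using the desktop OpenTK bindings in C# rather than the Android GLES20 API the poster is using:

        using OpenTK.Graphics.OpenGL;

        static class ShaderUtil
        {
            // Compiles a shader and throws with whatever diagnostics the driver
            // provides if compilation fails, instead of relying on the log alone.
            public static int Compile(ShaderType type, string source)
            {
                int shader = GL.CreateShader(type);
                GL.ShaderSource(shader, source);
                GL.CompileShader(shader);

                GL.GetShader(shader, ShaderParameter.CompileStatus, out int status);
                if (status == 0)
                {
                    // Some drivers return an empty log even on failure, so report
                    // the failing shader type as well, not just the log text.
                    string log = GL.GetShaderInfoLog(shader);
                    throw new System.Exception(type + " compilation failed: '" + log + "'");
                }
                return shader;
            }
        }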

  • Boolean operations on meshes

    - by lathomas64
    Given a set of vertices and triangles for each mesh, does anyone know of an algorithm, or a place to start looking (I tried Google first but haven't found a good starting point), to perform boolean operations on said meshes and get a set of vertices and triangles for the resulting mesh? Of particular interest are subtraction and union. Example pictures: http://www.rhino3d.com/4/help/Commands/Booleans.htm

  • How do I make the camera move at the same speed when rotating and moving forward?

    - by dez
    I made a camera in DX9. To move forward I press the Up arrow; to rotate on the Y axis I use the mouse. When I perform these movements on their own, the camera moves at the speed I want. However, if I hold down Up and move the mouse at the same time, the camera moves a lot faster than it should. I want it to move at the same speed as it does when only the Up arrow is pressed. I think I need to normalize something somewhere, but I'm not sure what and not sure where. I have tried various combinations without success, so if anyone can point me in the right direction that would be great. Thanks. I've posted the code below.

        #define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0)

        LRESULT WINAPI MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
        {
            if( KEY_DOWN(VK_UP))
                MovePlayer(D3DXVECTOR3(0, 0, -1.0f));
            if( KEY_DOWN(VK_DOWN))
                MovePlayer(D3DXVECTOR3(0, 0, 1.0f));
            switch( msg )
            {
            case WM_MOUSEMOVE:
                ProcessMouseInput();
            }
        }

        void MovePlayer( D3DXVECTOR3 in_vec )
        {
            D3DXMATRIX CameraRot;
            D3DXMatrixRotationY(&CameraRot, D3DXToRadian(AngleY));
            D3DXVECTOR3 CameraRotTarget;
            D3DXVec3TransformNormal(&CameraRotTarget, &in_vec, &CameraRot);
            CameraPos += (m_timeElapsed * CameraRotTarget);
        }

        void ProcessMouseInput()
        {
            GetCursorPos( &CurrentMouseState );
            if ((CurrentMouseState.x != GameMouseState.x) || (CurrentMouseState.y != GameMouseState.y))
            {
                int dx = CurrentMouseState.x - GameMouseState.x;
                int dy = CurrentMouseState.y - GameMouseState.y;
                AngleY += m_timeElapsed * dx * 7.0f;
            }
            GameMouseState = CurrentMouseState; // Set back to window center in Render function
        }

        VOID UpdateCamera()
        {
            D3DXVECTOR3 CameraOrigTarget(0, 0, -1);
            D3DXVECTOR3 CameraOrigUp(0, 1, 0);
            D3DXMATRIX CameraRot;
            D3DXMATRIX CameraRotX;
            D3DXMatrixRotationX(&CameraRotX, D3DXToRadian(AngleX));
            D3DXMATRIX CameraRotY;
            D3DXMatrixRotationY(&CameraRotY, D3DXToRadian(AngleY));
            CameraRot = CameraRotX * CameraRotY;
            D3DXVECTOR3 CameraRotTarget;
            D3DXVec3TransformNormal(&CameraRotTarget, &CameraOrigTarget, &CameraRot);
            D3DXVECTOR3 CameraTarget;
            CameraTarget = CameraPos + CameraRotTarget;
            D3DXVECTOR3 vUpVec( 0.0f, 1.0f, 0.0f );
            D3DXMatrixLookAtLH( &matView, &CameraPos, &CameraTarget, &vUpVec );
            g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );
            D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f );
            g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
        }
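
    One frequent cause of this symptom is that movement is applied inside the window message handler, so every extra WM_MOUSEMOVE message pumps another MovePlayer call; sampling input once per frame and scaling both rotation and translation by the same elapsed time keeps the speed constant. A hedged sketch of that structure, written in C#/XNA terms rather than the poster's Win32/D3D9 code (all field names and constants are illustrative):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;

        // Input is sampled exactly once per Update, so the camera advances by the
        // same amount no matter how many input events the OS delivered that frame.
        public class CameraGame : Game
        {
            Vector3 cameraPos = new Vector3(0, 0, 10);
            float angleY;
            const float MoveSpeed = 5f;           // world units per second (assumed)
            const float MouseSensitivity = 0.2f;  // radians per pixel per second (assumed)
            int centerX = 400, centerY = 300;

            protected override void Update(GameTime gameTime)
            {
                float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

                // Rotation: one mouse read per frame, scaled by dt.
                MouseState mouse = Mouse.GetState();
                angleY += (mouse.X - centerX) * MouseSensitivity * dt;
                Mouse.SetPosition(centerX, centerY);

                // Translation: rotated forward vector, scaled by the same dt.
                if (Keyboard.GetState().IsKeyDown(Keys.Up))
                {
                    Vector3 forward = Vector3.Transform(Vector3.Forward, Matrix.CreateRotationY(angleY));
                    cameraPos += forward * MoveSpeed * dt;
                }

                base.Update(gameTime);
            }
        }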

  • What are some ways of making manageable complex AI?

    - by Tetrad
    In the past I've used simple systems like finite state machines (FSMs) or hierarchical FSMs to control AI behavior. For any complex system, this pattern falls apart very quickly. I've heard about behavior trees, and they seem like the next obvious step, but I haven't seen a working implementation or really tried going down that route yet. Are there any other patterns for making manageable yet complex AI behaviors?
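
    For context, a minimal sketch of the core of a behavior tree in C# (names and structure are illustrative, not taken from any particular engine): every node returns a status, and composite nodes such as sequences and selectors decide which child to tick next, which is what keeps large behaviors manageable.

        using System.Collections.Generic;

        public enum Status { Success, Failure, Running }

        public abstract class Node
        {
            public abstract Status Tick();
        }

        // Runs children in order; fails or yields as soon as one child does.
        public class Sequence : Node
        {
            readonly List<Node> children;
            public Sequence(params Node[] nodes) { children = new List<Node>(nodes); }

            public override Status Tick()
            {
                foreach (Node child in children)
                {
                    Status s = child.Tick();
                    if (s != Status.Success) return s;   // Failure or Running bubbles up
                }
                return Status.Success;
            }
        }

        // Tries children in order until one succeeds or is still running.
        public class Selector : Node
        {
            readonly List<Node> children;
            public Selector(params Node[] nodes) { children = new List<Node>(nodes); }

            public override Status Tick()
            {
                foreach (Node child in children)
                {
                    Status s = child.Tick();
                    if (s != Status.Failure) return s;   // Success or Running bubbles up
                }
                return Status.Failure;
            }
        }

    Leaf nodes (conditions and actions) subclass Node directly, and the whole tree is ticked once per AI update; decorators such as inverters or repeaters follow the same pattern.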

  • How do I add a start menu page to my Java game?

    - by user2149407
    I have a rather cool Space Invaders game that my friend and I have been working on for a while, and we have decided it needs an opening page with "Start" options, "Quit" options and so forth. I have looked at several methods online, but can't seem to get any of them to work! Does anybody have any ideas? P.S. I'm using a JFrame to draw the main frame. I'm just looking to do this within Java, so just a panel that appears at a state change (GAME, MENU). I'd like it to contain a few buttons to start the game and to quit. Later I will add achievements, but I'm after something really basic for now. Thanks for the suggestions!

  • XNA stopped compiling my model .x files

    - by HuseyinUslu
    So I have a 3D game project I'm working on, and I'm using two model files (SkyBlock.x and AimedBlock.x). Until now everything was fine: my model files compiled and I was able to use them within my game. With the latest changes (and I don't know what caused it, really), XNA stopped compiling my model files and instead only outputs these files:

        AimedBlock.xnb - 1 KB
        SkyDome.xnb - 1 KB
        SkyDomeTexture.xnb - 1389 KB
        SkyDomeTexture_0.xnb - 419 KB

    So I created a test XNA game project, moved all my assets to the new solution's content project and tried compiling them, and saw that they are all good:

        AimedBlock.xnb - 2 KB
        SkyDome.xnb - 13 KB
        SkyDomeTexture.xnb - 4097 KB
        SkyDomeTexture_0.xnb - 683 KB

    So I guess my main project is at fault, but I couldn't come up with a solution. I even tried overwriting my game's content project with the new game's content project (which was all okay), but it didn't work. Has anybody had similar issues?

  • Why is the MaskBit maxed out

    - by CStreel
    Hi there, for some reason the maskBits of my b2FixtureDef is being maxed out and I'm not sure why. Here is the declaration of the items that are used in the game:

        enum PhysicBits
        {
            PB_NONE     = 0x0000,
            PB_PLAYER   = 0x0001,
            PB_PLATFORM = 0x0002
        };

    Basically what I want is for the player to run along a surface and not slow down (I set platform & player friction to 0.0f). I then set up my contact listener to print out the connections (currently I only have one platform and one player).

    Player fixture def:

        b2FixtureDef fixtureDef;
        fixtureDef.shape = &groundBox;
        fixtureDef.density = 1.0f;
        fixtureDef.friction = 0.0f;
        fixtureDef.filter.categoryBits = PB_PLAYER;
        fixtureDef.filter.maskBits = PB_PLATFORM;

    Platform fixture def:

        b2FixtureDef fixtureDef;
        fixtureDef.shape = &groundBox;
        fixtureDef.density = 1.0f;
        fixtureDef.friction = 0.0f;
        fixtureDef.filter.categoryBits = PB_PLATFORM;
        fixtureDef.filter.maskBits = PB_PLAYER;

    Now correct me if I'm wrong, but these say the following: the player collides with the platform, and the platform collides with the player. Here is the printout of the fixtures colliding with each other:

        ******** <-- Indicates new contact
        Platform ContactA: 2 MaskA: 1
        ------
        Player   ContactB: 1 MaskB: 2
        ******** <-- Indicates new contact
        Platform ContactA: 2 MaskA: 1
        ------
        Player   ContactB: 1 MaskB: 65535
        ******** <-- Indicates new contact
        Platform ContactA: 1 MaskA: 65535
        ------
        Player   ContactB: 1 MaskB: 65535

    Here is where I am confused: on the second and third contacts the player maskBits is set to 65535 when it should be 2, and there are three contacts when I am sure there should be at most two. I've been trying to figure this out for hours and I can't understand why it is doing this. I would be very grateful if someone could shine some light on this for me. UPDATE: I printed out the class of the contacting objects. For some reason it seems to do the following. First contact: correct result. Second contact: the player b2Fixture obtains a new maskBits value. Third contact: the platform b2Fixture appears to be set to the same as the player b2Fixture. It would seem I have a memory race condition, I think.

  • Relative Positions Of Player And Enemy Are Different In XNA 3D Game

    - by CoOlDud3
    I am having a problem in my 3D jet fighter game using XNA. I have a player jet and a few enemy drones built from a separate class. The problem is that when I set the player's position and a drone's position to a height of 10f in the y direction, they aren't at the same height. But if I move the drone's position up 500f in the y direction, then it is pretty much level with the player. They are supposedly at the same height, but with different position values. Can anyone help, please?

  • Which game engine for HTML5 + Node.js

    - by Chrene
    I want to create a realtime multiplayer game using HTML5. I want to use node.js as the server, and I only need to be able to render images to a canvas, play some sounds, and do some basic animations. The game loop should run on the server, and the client should call back via sockets to render the canvas. I am not going to spend any money on the engine, and I don't want to use cocos2d-javascript.

  • How do I retain previously drawn graphics?

    - by Cromanium
    I've created a simple program that draws lines from a fixed point to a random point each frame. I wanted to keep each line on the screen; however, it always seems to be cleared each time it draws on the spriteBatch, even without GraphicsDevice.Clear(color) being called. What seems to be the problem?

        protected override void Draw(GameTime gameTime)
        {
            spriteBatch.Begin();
            DrawLine(spriteBatch);
            spriteBatch.End();
            base.Draw(gameTime);
        }

        private void DrawLine(SpriteBatch spriteBatch)
        {
            Random r = new Random();
            Vector2 a = new Vector2(50, 100);
            Vector2 b = new Vector2(r.Next(0, 640), r.Next(0, 480));
            Texture2D filler = new Texture2D(GraphicsDevice, 1, 1, false, SurfaceFormat.Color);
            filler.SetData(new[] { Color.Black });
            float length = Vector2.Distance(a, b);
            float angle = (float)Math.Atan2(b.Y - a.Y, b.X - a.X);
            spriteBatch.Draw(filler, a, null, Color.Black, angle, Vector2.Zero,
                new Vector2(length, 10.0f), SpriteEffects.None, 0f);
        }

    What am I doing wrong?
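
    One common way to get this effect in XNA is to accumulate the lines in a RenderTarget2D created with PreserveContents, instead of relying on the back buffer surviving between frames (on most platforms it does not, even without an explicit Clear). A hedged sketch that slots into the Game class from the question; the 640x480 size is an assumption matching the random range used above:

        // Fields, created once in LoadContent; PreserveContents keeps the
        // target's pixels alive between frames, which the default usage does not.
        RenderTarget2D canvas;
        bool canvasCleared;

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            canvas = new RenderTarget2D(GraphicsDevice, 640, 480, false, SurfaceFormat.Color,
                                        DepthFormat.None, 0, RenderTargetUsage.PreserveContents);
        }

        protected override void Draw(GameTime gameTime)
        {
            // 1. Add this frame's line to the persistent canvas.
            GraphicsDevice.SetRenderTarget(canvas);
            if (!canvasCleared) { GraphicsDevice.Clear(Color.White); canvasCleared = true; }
            spriteBatch.Begin();
            DrawLine(spriteBatch);                 // the existing method from the question
            spriteBatch.End();
            GraphicsDevice.SetRenderTarget(null);

            // 2. Show the accumulated canvas on screen.
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            spriteBatch.Draw(canvas, Vector2.Zero, Color.White);
            spriteBatch.End();
            base.Draw(gameTime);
        }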

  • What does "kTriangles/s" mean in hardware graphics benchmark reports?

    - by swquinn
    I've looked around and found several sites offering benchmark statistics for mobile platforms, and I've seen the unit of measure "kTriangles/s". Originally I misread this, missing the 'k'; does it translate to "thousands of triangles per second", e.g. 8902 kTriangles/s = 8,902,000 triangles/s? (I'm pretty sure that my interpretation is correct, but I hope someone can confirm this for me.) Thanks!

  • How to implement a multi-part snake with smooth movement? [closed]

    - by Jamie
    Sorry that I couldn't answer on my previous post, but it got closed; I couldn't respond because I had to prepare for my finals. As there were problems with understanding what I'm trying to achieve, I'm going to describe it a little more in depth. I'm creating a game in which you steer a snake. I assume everybody knows how that works. But in my case the snake isn't just propagating through an array element by element. Imagine a 2D grid on which the snake moves; it is 10x10 tiles, and let's say one tile is 4x4 meters. The snake's head spawns in the middle of the (3,2) tile (beginning with (0,0)), so its position is (4*3+2, 4*2+2) (the 2's are there so that the snake is in the middle of the 4x4 tile). And here's where the fun begins: when the snake moves, it doesn't jump to the next tile. Instead it moves a fraction of the way there. So let's say the snake is heading to tile (4,2). After it moves once, its position is (4*3+2+0.1, 4*2+2), where 0.1 is the fraction of the way it moved. This is done to achieve smooth movement. Now I'm adding the rest of the body. The rest is supposed to move along the exact same path as the head did. I implemented it so that each part of the body has its own position and direction, and then I apply this algorithm: 1. Move each part in its direction. 2. If a part is in the middle of a tile (which implies all of them are), change each part's direction to the direction of the part preceding it. As I said before, I could make this work, but I can't stop thinking that I'm overlooking a much easier and cleaner solution. So this is my question: is there any easier/better/faster way to do this?
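
    One commonly used alternative (my suggestion, not something from the original thread) is to record the head's past positions in a trail and place each body segment a fixed distance back along that trail; this removes the per-segment direction bookkeeping entirely. A minimal C# sketch, with the spacing value being an assumed tuning parameter:

        using System.Collections.Generic;

        // Keeps a trail of head positions; each body segment is placed
        // 'spacing' samples behind the segment in front of it.
        public class SnakeTrail
        {
            readonly List<(float X, float Y)> trail = new List<(float X, float Y)>();
            readonly int spacing;
            readonly int segmentCount;

            public SnakeTrail(int segmentCount, int spacing)
            {
                this.segmentCount = segmentCount;
                this.spacing = spacing;
            }

            // Call once per movement step with the head's new position.
            public void RecordHead(float x, float y)
            {
                trail.Insert(0, (x, y));
                int needed = segmentCount * spacing + 1;
                if (trail.Count > needed) trail.RemoveAt(trail.Count - 1);
            }

            // Position of segment i (0 = head).
            public (float X, float Y) GetSegment(int i)
            {
                int index = System.Math.Min(i * spacing, trail.Count - 1);
                return trail[index];
            }
        }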

  • Can I have a workspace that is both a git workspace and a svn workspace?

    - by Troy
    I have checked out a local working copy of a codebase that lives in an svn repo. It's a big Java project that I use Eclipse to develop in. Eclipse of course builds everything on the fly, in its own way, with all the binaries ending up in [project root]/bin. That's perfectly fine with me for development, but when the build runs on the build server it looks quite a lot different (Maven build, binaries end up in a different directory structure, etc.). Sometimes I need to recreate the build server environment on my local development system to debug the build or what have you, so I usually end up downloading an entirely new working copy into a new workspace and running the build from there (this prevents cluttering my development workspace with all the build artifacts and dirtying up the working copy). Of course sometimes I'm interested in running the full build on code that I don't want to check in yet, so I will manually copy the "development" workspace over the "build" workspace. Besides taking a lot of extra time copying a lot of files that I don't actually need (just overlaying the new over the old), this also screws up my svn metadata, meaning that I can't check in changes from that "build workspace" working copy, and I often end up having to re-download the code to get it back into a known state. So I'm thinking I make my svn working copy a local git repo, then "check out" the in-development code from the svn working copy/git master into the local build workspace. Then I can build, revert my changes, and have all the advantages of a version-controlled working copy in the build workspace. If I need to make changes to the build, I push those back into the git master (which is also an svn working copy), then check them into the main svn repo.

        |-------------|          |---------------------|          |--------------------|
        |main svn repo| <------- | svn working copy    | <------- | non-svn-versioned  |
        |-------------|          | (svn dev workspace/ |          |  build workspace   |
                                 |  git master)        |          | (git working copy) |
                                 |---------------------|          |--------------------|

    Just switching everything to git would obviously be better, but: big company, too many people using svn, too costly to change everything, etc. We're stuck with svn as the main repo for now. BTW, I know there is a Maven plugin for Eclipse and everything; I'm mainly interested to know if there is a way to maintain a workspace that is both a git working copy and an svn working copy. Actually, any distributed version control system would probably work (hg possibly?). Advice? How does everybody else handle this situation of having to manage both a "development" build process and a "production" build process?

  • Higher Performance With Spritesheets Than With Rotating Using C# and XNA 4.0?

    - by Manuel Maier
    I would like to know what the performance difference is between using multiple sprites in one file (a sprite sheet) to draw a game character able to face in 4 directions, and using one sprite per file but rotating that character to my needs. I am aware that the sprite-sheet method restricts the character to only being able to look in predefined directions, whereas the rotation method would give the character the freedom of "looking everywhere". Here's an example of what I am doing.

    Single sprite method - assuming I have a 64x64 texture that points north, I do the following if I want it to point east:

        spriteBatch.Draw(
            _sampleTexture,
            new Rectangle(200, 100, 64, 64),
            null,
            Color.White,
            (float)(Math.PI / 2),
            Vector2.Zero,
            SpriteEffects.None,
            0);

    Multiple sprite method - now I have a sprite sheet (128x128) where the top-left 64x64 section contains a sprite pointing north, the top-right 64x64 section points east, and so forth. To make it point east, I do the following:

        spriteBatch.Draw(
            _sampleSpritesheet,
            new Rectangle(400, 100, 64, 64),
            new Rectangle(64, 0, 64, 64),
            Color.White);

    So which of these methods uses less CPU time, and what are the pros and cons? Is .NET/XNA optimizing this in any way (e.g. noticing that the same call was made last frame and then just reusing an already rendered/rotated image that's still in memory)?

  • Do 3d assets cost a lot more than 2d?

    - by Balls
    I'm planning to create a game on my own and will most likely hire an artist in the future. I just want to know if making a game in 2D will be a lot cheaper than making it in 3D. Here's my plan. If it will be a 2D game, I'll probably make a platform game, with more or less a Braid level of graphics. If it will be a 3D game, the closest level of graphics I'll ask for will be Far Cry 1 or, if possible, Oblivion. So, any thoughts? I'm funding all of it on my own. It will be my first game, but I may use an existing engine if it's a 3D game; if 2D, I have my own engine lying around here. Thank you, Balls

  • Efficient finding of long-range spotted targets

    - by nihohit
    I'm creating a top-down 2D strategy game with a square grid map. So far, I've used Bresenham's line drawing algorithm in a circle to determine what's in LOS of each unit, and then targeted one of the targets in that circle. Now I find that this limits my units to shooting only at targets that they can see. I want to extend my targeting algorithm to target any other unit in range of my weapon, even if it's out of sight range of the given unit, as long as it is "spotted" by another friendly unit. In other words, I want to enable the use of weapons with ranges longer than sight range. Is there a better way than iterating over all sighted units and computing range and LOS to each of them?
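
    A hedged sketch of the shared-spotting idea in C# (all names are illustrative): each friendly unit contributes the enemies it can see to one set per team, built once per tick, and a shooter then only filters that set by weapon range instead of recomputing sight per shooter.

        using System.Collections.Generic;
        using System.Linq;

        public class Unit
        {
            public float X, Y;
            public float SightRange;
            public float WeaponRange;
        }

        public static class Targeting
        {
            // Built once per turn/tick: every enemy seen by at least one friendly unit.
            public static HashSet<Unit> BuildSpottedSet(IEnumerable<Unit> friendlies, IEnumerable<Unit> enemies)
            {
                var spotted = new HashSet<Unit>();
                foreach (var f in friendlies)
                    foreach (var e in enemies)
                        if (Distance(f, e) <= f.SightRange)
                            spotted.Add(e);   // a real game would also run the Bresenham LOS test here
                return spotted;
            }

            // Per shooter: any spotted enemy within weapon range is a valid target.
            public static IEnumerable<Unit> TargetsFor(Unit shooter, HashSet<Unit> spotted)
            {
                return spotted.Where(e => Distance(shooter, e) <= shooter.WeaponRange);
            }

            static float Distance(Unit a, Unit b)
            {
                float dx = a.X - b.X, dy = a.Y - b.Y;
                return (float)System.Math.Sqrt(dx * dx + dy * dy);
            }
        }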

  • Game Design - When to separate out pieces into static libraries?

    - by Jason
    I am developing a game that has a lot of platform-generic pieces. I want to separate out various pieces into static libraries, and I would like to know what other devs do. I am considering targeting other platforms and I want to maintain as much platform neutrality as I can. I have a lot of generic level data in C++ classes; I'm thinking all of the level data could go into a single static library. I also have a lot of generic OpenGL code that I think could go into a single static library. I am already using CMake for some of it and Xcode 4.5 for the Apple-specific pieces. What do other devs do to stay platform neutral? Does anyone use Eclipse instead of Xcode and Visual Studio on Windows?

  • Handling commands or events that wait for an action to be completed afterwards

    - by virulent
    Say you have two events: Action1 and Action2. When you receive Action1, you want to store some arbitrary data to be used the next time Action2 rolls around. Optimally, Action1 is normally a command, but it can also be another event; the idea is still the same. The current way I am implementing this is by storing state and then simply checking, when Action2 is called, whether that specific state is there. This is obviously a bit messy and leads to a lot of redundant code. Here is an example of how I am doing it, in pseudocode form (and broken down quite a bit, obviously):

        void onAction1(event) {
            Player = event.getPlayer()
            Player.addState("my_action_to_do")
        }

        void onAction2(event) {
            Player = event.getPlayer()
            if not Player.hasState("my_action_to_do") {
                return
            }
            // Do something
        }

    When doing this for a lot of other actions it gets somewhat ugly, and I wanted to know if there is something I can do to improve upon it. I was thinking of something like the following, which wouldn't require passing data around, but is this also not the right direction?

        void onAction1(event) {
            Player = event.getPlayer()
            Player.onAction2(new Runnable() {
                public void run() {
                    // Do something
                }
            })
        }

    If one wanted to take it even further, could you not simply do this?

        void onPlayerEnter(event) {
            // When they join the server
            Player = event.getPlayer()
            Player.onAction1(new Runnable() {
                public void run() {
                    // Now wait for action 2
                    Player.onAction2(new Runnable() {
                        // Do something
                    })
                }
            }, true) // TRUE would be to repeat the event,
                     // not remove it after it is called.
        }

    Any input would be wonderful.
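
    The second idea in the question is essentially a queue of one-shot continuations keyed by the follow-up event. A minimal C# sketch of that pattern (class and method names are illustrative, not taken from any particular framework):

        using System;
        using System.Collections.Generic;

        public class Player
        {
            // Pending callbacks, keyed by the event name that should trigger them.
            readonly Dictionary<string, Queue<Action>> pending = new Dictionary<string, Queue<Action>>();

            // Register something to run the next time 'eventName' fires for this player.
            public void Once(string eventName, Action callback)
            {
                if (!pending.TryGetValue(eventName, out var queue))
                    pending[eventName] = queue = new Queue<Action>();
                queue.Enqueue(callback);
            }

            // Called by the event dispatcher when 'eventName' fires.
            public void Fire(string eventName)
            {
                if (!pending.TryGetValue(eventName, out var queue)) return;
                while (queue.Count > 0)
                    queue.Dequeue()();     // each callback runs once and is discarded
            }
        }

        // Usage, mirroring the Action1/Action2 example:
        //   onAction1: player.Once("Action2", () => { /* do something */ });
        //   onAction2: player.Fire("Action2");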

  • Android Bitmap: Collision Detection

    - by Aekasitt Guruvanich
    I am writing an Android game right now and I need some help with collision between the pawns on screen. I figured I could run a for loop in the Player class over all Pawn objects on the screen, checking whether their width*height rectangles intersect with each other, but is there a more efficient way to do this? And if you do it this way, many of the transparent pixels inside the rectangular area will also be counted as a collision. Is there a way to check for collision between Bitmaps on a Canvas that disregards transparent pixels? The class for the player is below; the Pawn class uses the same method of display.

        class Player {
            private Resources res;   // Used for referencing Bitmaps from a predefined location
            private Bounds bounds;   // Class that holds the boundary of the screen
            private Bitmap image;
            private float x, y;
            private Matrix position;
            private int width, height;
            private float velocity_x, velocity_y;

            public Player(Resources resources, Bounds boundary) {
                res = resources;
                bounds = boundary;
                image = BitmapFactory.decodeResource(res, R.drawable.player);
                width = image.getWidth();
                height = image.getHeight();
                position = new Matrix();
                x = bounds.xMax / 2;   // Initially puts the Player in the middle of the screen
                y = bounds.yMax / 2;
                position.preTranslate(x, y);
            }

            public void draw(Canvas canvas) {
                canvas.drawBitmap(image, position, null);
            }
        }
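
    The usual refinement is a two-phase test: a cheap rectangle-overlap check first, then an alpha test over only the overlapping pixels. A hedged, framework-free C# sketch of that idea (pixel data is assumed to be row-major ARGB ints, as Bitmap.getPixels would give you on Android); this is an illustration of the algorithm, not the poster's code:

        using System;

        static class PixelCollision
        {
            // Returns true if any overlapping pixel is non-transparent in both sprites.
            // Each sprite: top-left position, size, and ARGB pixel array (row-major).
            public static bool Collide(
                int ax, int ay, int aw, int ah, int[] aPixels,
                int bx, int by, int bw, int bh, int[] bPixels)
            {
                // Phase 1: cheap bounding-rectangle rejection.
                int left   = Math.Max(ax, bx);
                int top    = Math.Max(ay, by);
                int right  = Math.Min(ax + aw, bx + bw);
                int bottom = Math.Min(ay + ah, by + bh);
                if (left >= right || top >= bottom) return false;

                // Phase 2: alpha test, but only over the overlapping region.
                for (int y = top; y < bottom; y++)
                {
                    for (int x = left; x < right; x++)
                    {
                        int alphaA = (aPixels[(y - ay) * aw + (x - ax)] >> 24) & 0xFF;
                        int alphaB = (bPixels[(y - by) * bw + (x - bx)] >> 24) & 0xFF;
                        if (alphaA != 0 && alphaB != 0) return true;
                    }
                }
                return false;
            }
        }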

  • 3D Box Collision Data Import

    - by cboe
    I'm trying to implement a collision system using oriented bounding boxes, representing each box with a center, its extents as a 3D vector, and a rotation matrix, which is all stuff I picked up online and seems to be somewhat the standard. Detecting the center is no problem, so I'm going to leave that out here. My problem, however, is importing the data from a 3D file. Say I've placed a box with 2 units length on each side, aligned to the world axes. The logical result here is extents of (1,1,1), and I use an identity matrix for rotation - easy. However, I'm stuck when I rotate the box in the 3D program, say 30 degrees on each axis. How would I parse the box? I only have the 8 vertices as information, and I guess what I would need to do is to find the rotation of said box, apply it to the vertices so they are aligned to the world axes, and then calculate the extents from that. How do I get the rotation of the box when I only have the vertex information of the box available?
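
    A hedged sketch of the usual derivation, in C# with System.Numerics types for illustration: for a box exported as 8 corners, the three edges meeting at one corner give the box's local axes, so normalizing them yields the rotation and half the edge lengths are the extents. This assumes the exporter keeps a consistent corner ordering; here corners[1], corners[2] and corners[4] are assumed to be the neighbours of corners[0], and corners[7] the opposite corner, which is an assumption about the file layout rather than a guarantee:

        using System.Numerics;

        static class ObbFromCorners
        {
            // corners: the 8 vertices of the box, with the ordering assumed above.
            public static void Build(Vector3[] corners,
                                     out Vector3 center, out Vector3 extents,
                                     out Vector3 axisX, out Vector3 axisY, out Vector3 axisZ)
            {
                Vector3 ex = corners[1] - corners[0];   // local X edge
                Vector3 ey = corners[2] - corners[0];   // local Y edge
                Vector3 ez = corners[4] - corners[0];   // local Z edge

                // Normalized edges are the box's local axes (the rotation).
                axisX = Vector3.Normalize(ex);
                axisY = Vector3.Normalize(ey);
                axisZ = Vector3.Normalize(ez);

                // Half the edge lengths are the extents.
                extents = new Vector3(ex.Length(), ey.Length(), ez.Length()) * 0.5f;

                // Center is the midpoint of a main diagonal.
                center = (corners[0] + corners[7]) * 0.5f;
            }
        }

    If the corner ordering is not known, a fallback is to pick any corner, find its three nearest neighbours, and treat those edges as the local axes.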

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0?

    - by NoobScratcher
    I've been having issues trying to snap a 2D quad to the mouse cursor position. I am able 1.) to get values into posX, posY, posZ and 2.) to translate with the values from those three variables. But I am not able to position the quad correctly so that it sits near the mouse cursor using the values of those three variables (posX, posY, posZ); I need the mouse cursor to be in the center of the 2D quad. I'm hoping someone can help me achieve this. I've tried searching around to no avail. Here is the function that is meant to do the snapping, but instead it creates weird flicker or shows nothing at all; only the 3D models show up:

        void display()
        {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            for(std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I)
            {
                glCallList(*I);
            }

            if(DrawArea == true)
            {
                glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
                cerr << winZ << endl;

                glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
                glGetDoublev(GL_PROJECTION_MATRIX, projection);
                glGetIntegerv(GL_VIEWPORT, viewport);
                gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);

                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h,
                             0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels);

                glEnable(GL_TEXTURE_2D);
                glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);

                glTranslatef(posX, posY, posZ);

                glBegin(GL_QUADS);
                    glTexCoord2f(0.0, 0.0); glVertex3f(0.5, 0.5, 0);
                    glTexCoord2f(1.0, 0.0); glVertex3f(0, 0.5, 0);
                    glTexCoord2f(1.0, 1.0); glVertex3f(0, 0, 0);
                    glTexCoord2f(0.0, 1.0); glVertex3f(0.5, 0, 0);
                glEnd();
            }

            SwapBuffers(hDC);
        }

    I'm using OpenGL 3.0, the Win32 API, C++ and GLSL. If you really want the full source, here it is - http://pastebin.com/1Ncm9HNf - it's pretty messy.

  • LWJGL camera causing movement to be mirrored

    - by pangaea
    I'm having a problem in that everything is rendered and the movement is fine; however, everything seems to be mirrored, in the sense that the TriangleMob should move towards me but instead it mirrors my actions. I move forward, and the TriangleMob moves backwards; I move left, it moves right; I move backwards, it moves forward. The code works if I do this:

        glPushMatrix();
        glTranslatef(-position.x, -position.y, -position.z);
        glCallList(objectDisplayList);
        glPopMatrix();

    However, I'm scared this will cause a problem later on. I suppose the code works, but shouldn't the call be:

        glPushMatrix();
        glTranslatef(position.x, position.y, position.z);
        glCallList(objectDisplayList);
        glPopMatrix();

    I think the problem could be caused by how I'm doing the camera, which is this:

        glLoadIdentity();
        glRotatef(player.getRotation().x, 1.0f, 0.0f, 0.0f);
        glRotatef(player.getRotation().y, 0.0f, 1.0f, 0.0f);
        glRotatef(player.getRotation().z, 0.0f, 0.0f, 1.0f);
        glTranslatef(player.getPosition().x, player.getPosition().y, player.getPosition().z);

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader which uses several textures to render the final pixel color. The textures are not really textures; they are in fact "input data" to be used in the formula that generates the final color. The problem I've got is that the textures are getting bilinearly filtered, and therefore so is the input data. This results in many unwanted side effects, especially when the final rendered texture is "zoomed" compared to the original resolution. Removing the side effects is a complex task and only results in "average" rendering. I was thinking: well, all my problems seem to come from the "default" bilinear filtering on these input data. I can't move to GL_NEAREST either, since that would create "blocky" rendering. So I guess the better way to proceed is to be fully in charge of the interpolation. For this to work, I would need the input data at their "natural" resolution (so that means 4 samples) and the relative position between the sampled points. Is that possible, and if yes, how? [EDIT] Since I started this question, I found this internet entry, which seems to (mostly) answer my needs: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me, though: the dimensions of the texture must be provided as an argument. It seems there is no way to "find this information transparently". Adding an argument to the rendering pipeline is unwelcome, though, since it's not under my responsibility, and it translates into adding complexity for others.
