Search Results

Search found 14551 results on 583 pages for 'game history'.


  • How to do proper Alpha in XNA?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds, really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further, here is my example code:

        public void Draw(SpriteBatch batch, Camera camera, float alpha)
        {
            int tileMapWidth = Width;
            int tileMapHeight = Height;
            batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
                SamplerState.PointWrap, DepthStencilState.Default,
                RasterizerState.CullNone, null, camera.TransformMatrix);
            for (int x = 0; x < tileMapWidth; x++)
            {
                for (int y = 0; y < tileMapHeight; y++)
                {
                    int tileIndex = _map[y, x];
                    if (tileIndex != -1)
                    {
                        Texture2D texture = _tileTextures[tileIndex];
                        batch.Draw(
                            texture,
                            new Rectangle(
                                (x * Engine.TileWidth),
                                (y * Engine.TileHeight),
                                Engine.TileWidth,
                                Engine.TileHeight),
                            new Color(new Vector4(1f, 1f, 1f, alpha)));
                    }
                }
            }
            batch.End();
        }

    As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0, but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not a desired effect: I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA
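
    A note on what may be going on, offered as an assumption rather than a confirmed diagnosis: XNA 4.0's BlendState.AlphaBlend expects premultiplied alpha, so a straight-alpha tint of (1, 1, 1, 0) still contributes full RGB and washes out what is underneath. Two common fixes, sketched in C# (destination stands in for the Rectangle computed in the loop above):

        // Option 1: keep BlendState.AlphaBlend and premultiply the tint;
        // Color.White * alpha scales R, G, B and A together.
        batch.Draw(texture, destination, Color.White * alpha);

        // Option 2: switch the batch to non-premultiplied blending and keep
        // the straight-alpha tint from the original code.
        batch.Begin(SpriteSortMode.Texture, BlendState.NonPremultiplied,
            SamplerState.PointWrap, DepthStencilState.Default,
            RasterizerState.CullNone, null, camera.TransformMatrix);
        batch.Draw(texture, destination, new Color(1f, 1f, 1f, alpha));

    At alpha = 0 either form leaves the pixels beneath untouched instead of discoloring them.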


  • What is the correct way to implement hit detection with non-rectangular sprites?

    - by hogni89
    What is the correct way to implement hit or touch detection for non-rectangular sprites in Cocos2d? I am working on a jigsaw puzzle, so our sprites have some strange forms (jigsaw puzzle pieces). As of now, we have implemented the "detection" this way:

        - (void)selectSpriteForTouch:(CGPoint)touchLocation
        {
            CCSprite *newSprite = nil;
            // Loop array of sprites
            for (CCSprite *sprite in movableSprites)
            {
                // Check if sprite is hit.
                // TODO: Swap if with something better.
                if (CGRectContainsPoint(sprite.boundingBox, touchLocation))
                {
                    newSprite = sprite;
                    break;
                }
            }
            if (newSprite != selSprite)
            {
                // Move along, nothing to see here
                // Not the problem
            }
        }

        - (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
        {
            CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
            [self selectSpriteForTouch:touchLocation];
            return TRUE;
        }

    I know that the problem is in the keyword "sprite.boundingBox". Is there a better way of implementing this, or is it a limitation when using sprites based on PNGs? If so, how should I proceed?
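
    A common approach for jigsaw-shaped sprites is a two-phase test: keep the cheap bounding-box check as an early reject, then sample the alpha of the sprite's texture at the touch point. Cocos2d doesn't ship this out of the box, so the sketch below is generic C# with hypothetical helpers (WorldToLocal, GetPixel) standing in for whatever your framework provides:

        // Hypothetical sketch: bounding box first, then per-pixel alpha.
        bool PixelHit(Sprite sprite, Vector2 touch)
        {
            if (!sprite.BoundingBox.Contains(touch))
                return false;                            // cheap reject
            Vector2 local = sprite.WorldToLocal(touch);  // into texture space
            Color pixel = sprite.Texture.GetPixel((int)local.X, (int)local.Y);
            return pixel.A > 0;                          // hit only on opaque pixels
        }

    Reading texture pixels at runtime can be slow, so a prebaked boolean mask per piece, built once from the PNG's alpha channel, is the usual optimization.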


  • Circular movement - eliminating speed ups near Y = 0

    - by Fibericon
    I have a basic algorithm to rotate an enemy around a circle of radius 200 centered on the origin. This is how I'm achieving that:

        if (position.Y <= 0 && position.X > -200)
        {
            position.X -= 2;
            position.Y = 0 - (float)Math.Sqrt((200 * 200) - (position.X * position.X));
        }
        else
        {
            position.X += 2;
            position.Y = (float)Math.Sqrt((200 * 200) - (position.X * position.X));
        }

    It does work, and I've ensured that at no point does either X or Y equal NaN. However, when Y approaches 0, it seems to go significantly faster. This surprises me, because the Y values are locked to the X, which is being incremented by a steady amount. What can I do to smooth the speed?
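
    The speed-up has a geometric cause: with Y = ±√(r² − X²), each step in X changes Y by roughly |X| / √(r² − X²) times as much, and that factor grows without bound as the point nears Y = 0 (where |X| ≈ 200). A constant step in X is therefore a huge step along the circle near the sides. A sketch of the usual fix, parameterizing by angle so the arc length per update is constant:

        // Advance an angle at a fixed rate and derive the position from it.
        angle += angularSpeed;                       // radians per update
        position.X = 200f * (float)Math.Cos(angle);
        position.Y = 200f * (float)Math.Sin(angle);

    angularSpeed here is an assumed field; choosing it as linearSpeed / radius (2 / 200 = 0.01 rad) matches the old speed of 2 units per update.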


  • Following a set of points?

    - by user1010005
    Let's assume that I have a set of path points that an entity should follow:

        const int Paths = 2;
        Vector2D<float> Path[Paths] = { Vector2D(100, 0), Vector2D(100, 50) };

    Now I define my entity's position in a 2D vector as follows:

        Vector2D<float> FollowerPosition(0, 0);

    And now I would like to move the "follower" to the path at index 1:

        int PathPosition = 0; // Start with path 1

    Currently I do this:

        Vector2D<float>& Target = Path[PathPosition];
        bool Changed = false;
        if (FollowerPosition.X < Target.X) { FollowerPosition.X += Vel; Changed = true; }
        if (FollowerPosition.X > Target.X) { FollowerPosition.X -= Vel; Changed = true; }
        if (FollowerPosition.Y < Target.Y) { FollowerPosition.Y += Vel; Changed = true; }
        if (FollowerPosition.Y > Target.Y) { FollowerPosition.Y -= Vel; Changed = true; }
        if (!Changed)
        {
            PathPosition = PathPosition + 1;
            if (PathPosition >= Paths)
                PathPosition = 0;
        }

    This works except for one little detail: the movement is not smooth! So I would like to ask if anyone sees anything wrong with my code. Thanks, and sorry for my English.
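
    One likely source of the jerkiness, noted as an observation on the code above: X and Y are corrected independently by a full Vel each update, so the follower moves diagonally until one axis lines up and then straight, and it can oscillate around the target instead of landing on it. A common alternative, sketched in C# (the vector math translates directly to the Vector2D class above):

        // Move along the normalized direction to the target; snap and advance
        // the waypoint index once within one step of it.
        Vector2 toTarget = target - followerPosition;
        if (toTarget.Length() <= vel)
        {
            followerPosition = target;              // arrived, no overshoot
            pathIndex = (pathIndex + 1) % paths;    // wrap to the first waypoint
        }
        else
        {
            followerPosition += Vector2.Normalize(toTarget) * vel;
        }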


  • In 3D camera math, calculate what Z depth is pixel unity for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific z depth, drawn pixels are 1:1 with my source textures, so 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a quad 32x32 pixels, the quad size is 3.2 units wide and tall, and the texcoords are 32 / the size of the texture wide and tall. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera, the sprite is pixel for pixel the same as in Photoshop.

    For 3D games I sometimes still want to be able to do this, but depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
        {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    UPDATE: I'm going to leave this here in case anyone has a different solution, can explain how they do it, or can explain why this works. This is what I figured out. In my system I use a 10th-scale of units to pixels on non-retina displays and a 20th scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width, then divide that by 2 to get the left and right unit positions. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary convention I made up: 0.1 units = 1 non-retina pixel or 2 retina pixels. I could have made it 0.01 units = 2 pixels, and someday I might switch to that, but for now it's the other.

    So the width of the screen in units is 32.0, which means the leftmost pixel is at -16.0 and the rightmost at +16.0. After messing around a bit, I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16, I get the depth required for 1:1 pixels. I don't know why, and I don't know if the 16 is related to the screen size or just a lucky guess. But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
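
    There is a closed form hiding in the matrixPerspective code above: right − left works out to 2·near·tan(fov/2), so m[0] = 1 / tan(fov/2), and the frustum spans z·tan(fov/2) units on each side of center at depth z. Pixel unity therefore happens where z·tan(fov/2) equals the half-width of the screen in units, 16 in this setup, which is exactly why multiplying m[0] by 16 works. A small sketch of the calculation (C#; halfWidthUnits is 16 here):

        // Depth at which the frustum is exactly one screen wide in units.
        static float PixelUnityDepth(float fovDegrees, float halfWidthUnits)
        {
            return halfWidthUnits / (float)Math.Tan(fovDegrees * Math.PI / 360.0);
        }
        // PixelUnityDepth(45f, 16f) ≈ 38.6 and PixelUnityDepth(30f, 16f) ≈ 59.7,
        // matching the 38.5 and 59.5 found by experiment.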


  • How to texture voxel terrain without triplanar texturing?

    - by Thelvyn
    How can a voxel terrain (marching cubes) be textured without triplanar mapping? The goal is to have more artistic freedom. I think I could unwrap the mesh while extracting the isosurface, then use projective painting. But I do not know how to handle terrain modifications without breaking the texture. I also suspect that virtual texturing could help here. Links on these matters would be appreciated.


  • How to configure background image to be at the bottom OpenGL Android

    - by Maxim Shoustin
    I have a class that draws a white line:

        public class Line {
            //private FloatBuffer vertexBuffer;
            private FloatBuffer frameVertices;
            ByteBuffer diagIndices;

            float[] vertices = {
                -0.5f, -0.5f, 0.0f,
                -0.5f,  0.5f, 0.0f
            };

            public Line(GL10 gl) {
                // a float has 4 bytes, so we allocate 4 bytes for each coordinate
                ByteBuffer vertexByteBuffer = ByteBuffer.allocateDirect(vertices.length * 4);
                vertexByteBuffer.order(ByteOrder.nativeOrder());
                // allocates the memory from the byte buffer
                frameVertices = vertexByteBuffer.asFloatBuffer();
                // fill the vertexBuffer with the vertices
                frameVertices.put(vertices);
                // set the cursor position to the beginning of the buffer
                frameVertices.position(0);
            }

            /** The draw method for the line with the GL context */
            public void draw(GL10 gl) {
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glVertexPointer(2, GL10.GL_FLOAT, 0, frameVertices);
                gl.glColor4f(1.0f, 1.0f, 1.0f, 1f);
                gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, vertices.length / 3);
                gl.glLineWidth(5.0f);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            }
        }

    It works fine. The problem is that when I add a background image, I don't see the line:

        glView = new GLSurfaceView(this); // Allocate a GLSurfaceView
        glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        glView.setRenderer(new mainRenderer(this)); // Use a custom renderer
        glView.setBackgroundResource(R.drawable.bg_day); // <- BG
        glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);

    How do I get rid of that?


  • Particles are not moving correctly [closed]

    - by cr33p
    I want to make a particle explosion after something gets destroyed, but somehow only one line of mixed colors shows up on the screen. Here's the header: http://pastebin.com/JW5bPLj2 Here's the source: http://pastebin.com/KHmFqytD I don't get what's wrong, as it's nearly the same as in "Programming Linux Games". Can somebody help me fix that? PS: "Uint32 delta" is needed to update the pixels based on time. PPS: Maybe I should add that it's programmed in C and uses SDL. EDIT: Found the problem. It was the "drawParticles" function. The problem was that I passed a double to "offset" (as particles[i].x etc. are all doubles), so I ended up with values like ~MAX_INT because I didn't properly cast the doubles to ints.


  • What is the correct way to reset and load new data into GL_ARRAY_BUFFER?

    - by Geto
    I am using an array buffer for color data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing:

        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        glBufferData(GL_ARRAY_BUFFER, SIZE, colorsData, GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader->attrib("color"));
        glVertexAttribPointer(shader->attrib("color"), 3, GL_FLOAT, GL_TRUE, 0, NULL);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

    It works, but I am not sure if this is a good and efficient way to do it. What happens to the previous data? Does it write on top of it? Do I need to call:

        glDeleteBuffers(1, &colorBuffer);
        glGenBuffers(1, &colorBuffer);

    before transferring the new data into the buffer?


  • Sprite rotation

    - by Kipras
    I'm using OpenGL, and people suggest using glRotate for sprite rotation, but I find that strange. My problem with it is that it rotates the whole matrix, which sort of screws up all my collision detection, and so on and so forth. Imagine I had a sprite at position (100, 100), an obstacle at position (100, 200), and the sprite facing it. If I rotate the sprite away from the obstacle and then move up along my y axis, even though the projection makes it look like the sprite is moving away from the obstacle, the sprite will intersect it. So I don't see another way of rotating a sprite without screwing up all collision detection, other than doing mathematical operations on the image itself. Am I right, or am I missing something?
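
    The usual resolution is to treat rotation as part of the game state rather than something that happens only in the projection: glRotate changes how the quad is drawn, so the movement and collision math must use the same angle. A small C# sketch of moving "forward" along the sprite's facing (convention assumed: angle 0 points up, increasing clockwise; adjust to yours):

        // Derive the forward vector from the sprite's angle instead of
        // hard-coding movement along the y axis.
        Vector2 forward = new Vector2((float)Math.Sin(angle), (float)Math.Cos(angle));
        position += forward * speed * elapsed;

    Collision detection then runs against the rotated footprint, e.g. a circle or an oriented bounding box around position, rather than against the unrotated image.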


  • AndEngine player, background and camera

    - by valdemar593
    I'm developing a 2D shooter using AndEngine. At the moment I'm trying to make the camera follow the player. As I understand, the common approach is to use the SmoothCamera, zooming it and setting the chased entity. The problem is that the camera follows the player WITH the background (RepeatingSpriteBackground) moving along, so it looks like the player doesn't move at all even though the actual position changes. So I don't really get how to make the camera follow the player while the background stays put. Thanks in advance.


  • What are some ways of making manageable complex AI?

    - by Tetrad
    In the past I've used simple systems like finite state machines (FSMs) or hierarchical FSMs to control AI behavior. For any complex system, this pattern falls apart very quickly. I've heard about behavior trees, and they seem like the next obvious step, but I haven't seen a working implementation or really tried going down that route yet. Are there any other patterns for making manageable yet complex AI behaviors?
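
    For reference, the core of a behavior tree is small enough to sketch; this is a minimal C# outline (names are mine, not a particular library's), where composites like Selector and Sequence supply the hierarchy that flat FSMs lack:

        enum Status { Success, Failure, Running }

        interface INode { Status Tick(); }

        // Runs children in order until one does not fail ("try these in priority order").
        class Selector : INode
        {
            private readonly INode[] children;
            public Selector(params INode[] children) { this.children = children; }
            public Status Tick()
            {
                foreach (INode child in children)
                {
                    Status s = child.Tick();
                    if (s != Status.Failure) return s;
                }
                return Status.Failure;
            }
        }

        // Runs children in order until one fails ("do these steps in sequence").
        class Sequence : INode
        {
            private readonly INode[] children;
            public Sequence(params INode[] children) { this.children = children; }
            public Status Tick()
            {
                foreach (INode child in children)
                {
                    Status s = child.Tick();
                    if (s != Status.Success) return s;
                }
                return Status.Success;
            }
        }

    Leaves are conditions and actions; because behaviors compose instead of enumerating transitions, adding one doesn't multiply states the way it does in an FSM.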


  • Enemies don't shoot. What is wrong? [closed]

    - by Bryan
    I want every enemy to shoot bullets independently. If an enemy's bullet has left the screen, the enemy can shoot a new bullet, but not earlier. For the moment, though, the enemies don't shoot. Not a single bullet. I guess there is something wrong with my Enemy class, but I can't find a bug and I get no error message. What is wrong?

        public class Map
        {
            Texture2D myEnemy, myBullet;
            Player Player;
            List<Enemy> enemieslist = new List<Enemy>();
            List<Bullet> bulletslist = new List<Bullet>();
            float fNextEnemy = 0.0f;
            float fEnemyFreq = 3.0f;
            int fMaxEnemy = 3;
            Vector2 Startposition = new Vector2(200, 200);
            GraphicsDeviceManager graphicsDevice;

            public Map(GraphicsDeviceManager device)
            {
                graphicsDevice = device;
            }

            public void Load(ContentManager content)
            {
                myEnemy = content.Load<Texture2D>("enemy");
                myBullet = content.Load<Texture2D>("bullet");
                Player = new Player(graphicsDevice);
                Player.Load(content);
            }

            public void Update(GameTime gameTime)
            {
                Player.Update(gameTime);
                float delta = (float)gameTime.ElapsedGameTime.TotalSeconds;

                for (int i = enemieslist.Count - 1; i >= 0; i--)
                {
                    // Update enemy
                    Enemy enemy = enemieslist[i];
                    enemy.Update(gameTime, this.graphicsDevice, Player.playershape.Position, delta);

                    // Try to remove an enemy
                    if (enemy.Remove == true)
                    {
                        enemieslist.Remove(enemy);
                        enemy.Remove = false;
                    }
                }

                this.fNextEnemy += delta;

                // New enemy
                if (fMaxEnemy > 0)
                {
                    if ((this.fNextEnemy >= fEnemyFreq) && (enemieslist.Count < 3))
                    {
                        Vector2 enemyDirection = Vector2.Normalize(Player.playershape.Position - Startposition) * 100f;
                        enemieslist.Add(new Enemy(Startposition, enemyDirection, Player.playershape.Position));
                        fMaxEnemy -= 1;
                        fNextEnemy -= fEnemyFreq;
                    }
                }
            }

            public void Draw(SpriteBatch batch)
            {
                Player.Draw(batch);
                foreach (Enemy enemies in enemieslist)
                {
                    enemies.Draw(batch, myEnemy);
                }
                foreach (Bullet bullets in bulletslist)
                {
                    bullets.Draw(batch, myBullet);
                }
            }
        }

        public class Enemy
        {
            List<Bullet> bulletslist = new List<Bullet>();
            private float nextShot = 0;
            private float shotFrequency = 2.0f;
            Vector2 vPos;
            Vector2 vMove;
            Vector2 vPlayer;
            public bool Remove;
            public bool Shot;

            public Enemy(Vector2 Pos, Vector2 Move, Vector2 Player)
            {
                this.vPos = Pos;
                this.vMove = Move;
                this.vPlayer = Player;
                this.Remove = false;
                this.Shot = false;
            }

            public void Update(GameTime gameTime, GraphicsDeviceManager graphics, Vector2 PlayerPos, float delta)
            {
                nextShot += delta;

                for (int i = bulletslist.Count - 1; i >= 0; i--)
                {
                    // Update bullet
                    Bullet bullets = bulletslist[i];
                    bullets.Update(gameTime, graphics, delta);

                    // Try to remove a bullet... collision, hit, or outside screen.
                    if (bullets.Remove == true)
                    {
                        bulletslist.Remove(bullets);
                        bullets.Remove = false;
                    }
                }

                if (nextShot >= shotFrequency)
                {
                    this.Shot = true;
                    nextShot -= shotFrequency;
                }

                // Does the enemy shoot?
                if ((Shot == true) && (bulletslist.Count < 1))
                {
                    // New bullet
                    Vector2 bulletDirection = Vector2.Normalize(PlayerPos - this.vPos) * 200f;
                    bulletslist.Add(new Bullet(this.vPos, bulletDirection, PlayerPos));
                    Shot = false;
                }

                if (!Remove)
                {
                    this.vMove = Vector2.Normalize(PlayerPos - this.vPos) * 100f;
                    this.vPos += this.vMove * delta;

                    if (this.vPos.X > graphics.PreferredBackBufferWidth + 1)
                    {
                        this.Remove = true;
                    }
                    else if (this.vPos.X < -20)
                    {
                        this.Remove = true;
                    }
                    if (this.vPos.Y > graphics.PreferredBackBufferHeight + 1)
                    {
                        this.Remove = true;
                    }
                    else if (this.vPos.Y < -20)
                    {
                        this.Remove = true;
                    }
                }
            }

            public void Draw(SpriteBatch batch, Texture2D myTexture)
            {
                if (!Remove)
                {
                    batch.Draw(myTexture, this.vPos, Color.White);
                }
            }
        }

        public class Bullet
        {
            Vector2 vPos;
            Vector2 vMove;
            Vector2 vPlayer;
            public bool Remove;

            public Bullet(Vector2 Pos, Vector2 Move, Vector2 Player)
            {
                this.Remove = false;
                this.vPos = Pos;
                this.vMove = Move;
                this.vPlayer = Player;
            }

            public void Update(GameTime gameTime, GraphicsDeviceManager graphics, float delta)
            {
                if (!Remove)
                {
                    this.vPos += this.vMove * delta;

                    if (this.vPos.X > graphics.PreferredBackBufferWidth + 1)
                    {
                        this.Remove = true;
                    }
                    else if (this.vPos.X < -20)
                    {
                        this.Remove = true;
                    }
                    if (this.vPos.Y > graphics.PreferredBackBufferHeight + 1)
                    {
                        this.Remove = true;
                    }
                    else if (this.vPos.Y < -20)
                    {
                        this.Remove = true;
                    }
                }
            }

            public void Draw(SpriteBatch spriteBatch, Texture2D myTexture)
            {
                if (!Remove)
                {
                    spriteBatch.Draw(myTexture, this.vPos, Color.White);
                }
            }
        }
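
    One thing that stands out on inspection, offered as a guess rather than a verified fix: each Enemy keeps its bullets in its own private bulletslist, and those bullets are updated in Enemy.Update, but nothing ever draws them. Map.Draw iterates Map's own bulletslist, which always stays empty, and Enemy.Draw draws only the enemy sprite. So the bullets may well be firing and flying off-screen invisibly. A minimal sketch of one way to make them visible:

        // In Enemy: draw this enemy's bullets along with the enemy itself.
        public void Draw(SpriteBatch batch, Texture2D enemyTexture, Texture2D bulletTexture)
        {
            if (!Remove)
            {
                batch.Draw(enemyTexture, this.vPos, Color.White);
            }
            foreach (Bullet bullet in bulletslist)
            {
                bullet.Draw(batch, bulletTexture);
            }
        }

        // In Map.Draw, pass both textures through:
        foreach (Enemy enemy in enemieslist)
        {
            enemy.Draw(batch, myEnemy, myBullet);
        }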


  • Sprite animation problem

    - by hustlerinc
    I have a sprite I need to animate. The sprite sheet has 7 images in total, but the animation is 10 frames (2 positions are repeated). The order I want to go through the frames is: 3 - 4 - 5 - 6 - 4 - 3 - 2 - 1 - 0 - 2. My problem is: how can I skip a frame once I reach the end of each direction? The reason I want to skip is to save me from creating duplicate positions in the sprite sheet. I'm doing this in JavaScript.
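
    The standard trick is an indirection table: store the playback order once and index the sheet through it, so no frame needs to exist twice in the artwork. A two-line sketch (shown in C#; it maps one-to-one onto a JavaScript array):

        // Sheet has 7 images (0-6); the animation walks this 10-entry order.
        int[] sequence = { 3, 4, 5, 6, 4, 3, 2, 1, 0, 2 };
        int frame = sequence[tick % sequence.Length];   // tick = animation counter

    Each update you advance tick and draw the sheet cell that sequence points to, so the repeats at the turn-around points come from the table, not from duplicated frames.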


  • AS3 - At exactly 23 empty alpha channels, images below stop drawing

    - by user46851
    I noticed, while trying to draw large numbers of circles, that occasionally there would be some kind of visual bug where some circles wouldn't draw properly. Well, I narrowed it down, and noticed that if there are 23 or more objects with 00 for an alpha value on the same spot, then the objects below don't draw. It appears to be on a pixel-by-pixel basis, since parts of the image still draw. Originally this problem was noticed with a class that inherited Sprite. It was confirmed to also be a problem with Sprites, and now Bitmaps, too. If anyone can find a lower-level class than Bitmap which doesn't have this problem, please speak up so we can try to find the origin of the problem. I prepared a small test class that demonstrates what I mean. You can change the integer passed to Test() in init() in order to see the three tests I came up with to clearly show the problem. Is there any kind of workaround, or is this just a limit I have to work with? Has anyone experienced this before? Is it possible I'm doing something wrong, despite the bare-bones implementation?

        package {
            import flash.display.Sprite;
            import flash.events.Event;
            import flash.display.Bitmap;
            import flash.display.BitmapData;

            public class Main extends Sprite {
                public function Main():void {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void {
                    removeEventListener(Event.ADDED_TO_STAGE, init);
                    // entry point
                    Test(3);
                }

                private function Test(testInt:int):void {
                    if (testInt == 1) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var i:int = 0; i < 22; i++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                    }
                    if (testInt == 2) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var j:int = 0; j < 23; j++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                    }
                    if (testInt == 3) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var k:int = 0; k < 22; k++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                        var M:Bitmap = new Bitmap(new BitmapData(100, 100, true, 0x00000000));
                        M.x += 50;
                        M.y += 50;
                        addChild(M);
                    }
                }
            }
        }


  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels, and it will be run by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display functions. I have access to pass pixel information using a memory-mapped block, so is there any way to do that with pygame instead of passing it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial


  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of stuff like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to go about reducing the draw calls. What I can't seem to understand is how it actually saves draw call counts, on the basis of the logic being something like this:

        Without chunks:
            foreach voxel in myvoxels
                DrawIfVisible()

        With chunks:
            foreach chunk in mychunks
                DrawIfVisible()
            which then does:
                foreach voxel in myvoxels
                    DrawIfVisible()

    So surely you saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, effectively saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me ... the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
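
    The point the chunk articles are making, restated: a chunk is not a loop that issues one draw call per visible voxel; it bakes all of its visible faces into a single vertex/index buffer when it is built or modified, so the per-frame cost is one draw call per chunk regardless of how many thousand voxels it contains. A sketch of the per-frame loop in XNA terms (buffer building omitted, names mine):

        // One draw call per visible chunk, not per voxel: each chunk owns a
        // prebuilt mesh containing only the faces that touch air.
        foreach (Chunk chunk in visibleChunks)
        {
            device.SetVertexBuffer(chunk.VertexBuffer);
            device.Indices = chunk.IndexBuffer;
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                0, 0, chunk.VertexCount, 0, chunk.TriangleCount);
        }

    So a 16×16×16 chunk turns up to 4096 voxels' worth of geometry into one call, and hidden faces between solid neighbors never enter the buffer at all.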


  • Proper way to encapsulate a Shader into different modules

    - by y7haar
    I am planning to build a shader system which can be accessed through different components/modules in C++. Each component has its own functionality, like transform-related stuff (handling the MVP matrix, ...), a texture handler, light calculation, etc. So here's an example: I would like to display an object which has a texture and a toon-shading material applied, and it should be movable. So I could write ONE shading program that handles all 3 functionalities, and they are accessed through 3 different components (texture handler, toon shading, transform). This means I have to take care of feeding a GLSL shader with different uniforms/attributes, which implies knowing all the uniform and attribute locations the GLSL shader owns. It would also be necessary to provide different algorithms to calculate the value for each input variable, with similar functions grouped together in one component. A possible way would be to wrap each shader in its own definition file written in JSON/XML and parse that file in C++ to get all input members, then create and compile the resulting GLSL. But maybe there is another way that is not so complex? So I'm searching for a way to build a system like that, but I'm not sure yet which is the best approach.
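
    One workable shape for this, sketched as a suggestion rather than the answer: let each module declare the uniforms it owns and write them through a thin binder whose locations are resolved once by name at link time. A C# outline of the contract (all names are mine; Matrix4/Vector3 stand in for your math types):

        // Each module owns a slice of the shader's inputs and fills them per draw.
        interface IShaderModule
        {
            IEnumerable<string> UniformNames { get; }   // what this module feeds
            void Apply(IUniformBinder binder);          // called per frame/object
        }

        // The binder hides location lookup; it caches name -> location after link.
        interface IUniformBinder
        {
            void SetMatrix(string name, Matrix4 value);
            void SetVector(string name, Vector3 value);
        }

    With that split, the transform, texture, and toon modules each stay ignorant of the others, and the JSON/XML definition file reduces to listing which modules a material composes.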


  • glTexImage2D not loading my data

    - by Clyde
    Can anyone suggest why this code doesn't work? When I draw using this texture, all I get is black. If I use GLUtils.texImage2D() to load a PNG file, it works correctly.

        ByteBuffer bb = ByteBuffer.allocateDirect(128 * 128 * 4).order(ByteOrder.nativeOrder());
        bb.position(0);
        for (int row = 0; row != 128; row++) {
            for (int i = 0; i != 128; i++) {
                bb.put((byte)0x80);
                bb.put((byte)0xFF);
                bb.put((byte)0xFF);
                bb.put((byte)i);
            }
        }
        int[] handle = new int[1];
        GLES20.glEnable(GLES20.GL_TEXTURE_2D);
        GLES20.glGenTextures(1, handle, 0);
        DrawAdapter.checkGlError("Gen textures");
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        DrawAdapter.checkGlError("Bind textures");
        bb.position(0);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        DrawAdapter.checkGlError("glTexImage2D");
        return handle[0];
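
    One likely cause, offered as an assumption rather than a diagnosis: in OpenGL ES 2.0 the default minification filter is GL_NEAREST_MIPMAP_LINEAR, so a texture uploaded without a mipmap chain is "incomplete" and samples as black, and the working GLUtils.texImage2D path may simply set filters elsewhere in that code. The usual fix is to select non-mipmap filters right after binding, sketched here in OpenTK-style C# (the Android equivalent is GLES20.glTexParameteri with the matching enum values):

        // Assumption: the texture is mipmap-incomplete. Choose filters that
        // do not require a mipmap chain, immediately after glBindTexture.
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
            (int)TextureMinFilter.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter,
            (int)TextureMagFilter.Linear);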


  • Scrolling background with changing textures

    - by Simran kaur
    I have two cubic structures that are my tracks; they scroll, basically to give the effect of movement on an object. In my OnBecameInvisible() method, I change their tiling using mainTextureScale:

        void OnBecameInvisible() {
            renderer.material.mainTextureScale = new Vector2(1, numberOfLanes);
            this.transform.position = new Vector3(this.transform.position.x, this.transform.position.y, 20.0f);
        }

    The tiling works fine, but the alternate tracks have their tiling set to 0, which gives an undesirable effect. Requirement: I want to be able to set the tiling of every track that is visible on the screen. How do I do it?


  • How can I render multiple windows with DirectX 9 in C++?

    - by Friso1990
    I'm trying to render multiple windows using DirectX 9 and swap chains, but even though I create 2 windows, I only see the first one that I've created. My RendererDX9 header is this:

        #include <d3d9.h>
        #include <Windows.h>
        #include <vector>
        #include "RAT_Renderer.h"

        namespace RAT_ENGINE
        {
            class RAT_RendererDX9 : public RAT_Renderer
            {
            public:
                RAT_RendererDX9();
                ~RAT_RendererDX9();

                void Init(RAT_WindowManager* argWMan);
                void CleanUp();
                void ShowWin();

            private:
                LPDIRECT3D9 renderInterface;        // Used to create the D3DDevice
                LPDIRECT3DDEVICE9 renderDevice;     // Our rendering device
                LPDIRECT3DSWAPCHAIN9* swapChain;    // Swap chain to make multi-window rendering possible
                WNDCLASSEX wc;
                std::vector<HWND> hwindows;

                void Render(int argI);
            };
        }

    And my .cpp file is this:

        #include "RAT_RendererDX9.h"

        static LRESULT CALLBACK MsgProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam);

        namespace RAT_ENGINE
        {
            RAT_RendererDX9::RAT_RendererDX9() : renderInterface(NULL), renderDevice(NULL)
            {
            }

            RAT_RendererDX9::~RAT_RendererDX9()
            {
            }

            void RAT_RendererDX9::Init(RAT_WindowManager* argWMan)
            {
                wMan = argWMan;

                // Register the window class
                WNDCLASSEX windowClass = { sizeof(WNDCLASSEX), CS_CLASSDC, MsgProc, 0, 0,
                    GetModuleHandle(NULL), NULL, NULL, NULL, NULL, "foo", NULL };
                wc = windowClass;
                RegisterClassEx(&wc);

                for (int i = 0; i < wMan->getWindows().size(); ++i)
                {
                    HWND hWnd = CreateWindow("foo", argWMan->getWindow(i)->getName().c_str(),
                        WS_OVERLAPPEDWINDOW,
                        argWMan->getWindow(i)->getX(), argWMan->getWindow(i)->getY(),
                        argWMan->getWindow(i)->getWidth(), argWMan->getWindow(i)->getHeight(),
                        NULL, NULL, wc.hInstance, NULL);
                    hwindows.push_back(hWnd);
                }

                // Create the D3D object, which is needed to create the D3DDevice.
                renderInterface = (LPDIRECT3D9)Direct3DCreate9(D3D_SDK_VERSION);

                // Set up the structure used to create the D3DDevice. Most parameters are
                // zeroed out. We set Windowed to TRUE, since we want to do D3D in a
                // window, and then set the SwapEffect to "discard", which is the most
                // efficient method of presenting the back buffer to the display. And
                // we request a back buffer format that matches the current desktop display
                // format.
                D3DPRESENT_PARAMETERS deviceConfig;
                ZeroMemory(&deviceConfig, sizeof(deviceConfig));
                deviceConfig.Windowed = TRUE;
                deviceConfig.SwapEffect = D3DSWAPEFFECT_DISCARD;
                deviceConfig.BackBufferFormat = D3DFMT_UNKNOWN;
                deviceConfig.BackBufferHeight = 1024;
                deviceConfig.BackBufferWidth = 768;
                deviceConfig.EnableAutoDepthStencil = TRUE;
                deviceConfig.AutoDepthStencilFormat = D3DFMT_D16;

                // Create the Direct3D device. Here we are using the default adapter (most
                // systems only have one, unless they have multiple graphics hardware cards
                // installed) and requesting the HAL (which is saying we want the hardware
                // device rather than a software one). Software vertex processing is
                // specified since we know it will work on all cards. On cards that support
                // hardware vertex processing, though, we would see a big performance gain
                // by specifying hardware vertex processing.
                renderInterface->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwindows[0],
                    D3DCREATE_SOFTWARE_VERTEXPROCESSING, &deviceConfig, &renderDevice);

                this->swapChain = new LPDIRECT3DSWAPCHAIN9[wMan->getWindows().size()];
                this->renderDevice->GetSwapChain(0, &swapChain[0]);
                for (int i = 0; i < wMan->getWindows().size(); ++i)
                {
                    renderDevice->CreateAdditionalSwapChain(&deviceConfig, &swapChain[i]);
                }

                renderDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW); // Counterclockwise culling to save resources
                renderDevice->SetRenderState(D3DRS_AMBIENT, 0xffffffff);   // Turn on ambient lighting
                renderDevice->SetRenderState(D3DRS_ZENABLE, TRUE);         // Turn on the z-buffer
            }

            void RAT_RendererDX9::CleanUp()
            {
                renderDevice->Release();
                renderInterface->Release();
            }

            void RAT_RendererDX9::Render(int argI)
            {
                // Clear the back buffer to a blue color
                renderDevice->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 255), 1.0f, 0);

                LPDIRECT3DSURFACE9 backBuffer = NULL;

                // Set draw target
                this->swapChain[argI]->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
                this->renderDevice->SetRenderTarget(0, backBuffer);

                // Begin the scene
                renderDevice->BeginScene();
                // End the scene
                renderDevice->EndScene();

                swapChain[argI]->Present(NULL, NULL, hwindows[argI], NULL, 0);
            }

            void RAT_RendererDX9::ShowWin()
            {
                for (int i = 0; i < wMan->getWindows().size(); ++i)
                {
                    ShowWindow(hwindows[i], SW_SHOWDEFAULT);
                    UpdateWindow(hwindows[i]);

                    // Enter the message loop
                    MSG msg;
                    while (GetMessage(&msg, NULL, 0, 0))
                    {
                        if (PeekMessage(&msg, NULL, 0U, 0U, PM_REMOVE))
                        {
                            TranslateMessage(&msg);
                            DispatchMessage(&msg);
                        }
                        else
                        {
                            Render(i);
                        }
                    }
                }
            }
        }

        LRESULT CALLBACK MsgProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            switch (msg)
            {
            case WM_DESTROY:
                //CleanUp();
                PostQuitMessage(0);
                return 0;
            case WM_PAINT:
                //Render();
                ValidateRect(hWnd, NULL);
                return 0;
            }
            return DefWindowProc(hWnd, msg, wParam, lParam);
        }

    I've made a sample function to create multiple windows:

        void RunSample1()
        {
            // Create the window manager.
            RAT_ENGINE::RAT_WindowManager* wMan = new RAT_ENGINE::RAT_WindowManager();
            // Create the render manager.
            RAT_ENGINE::RAT_RenderManager* rMan = new RAT_ENGINE::RAT_RenderManager();

            // Create a window.
            // This is currently needed to initialize the render manager and create a renderer.
            wMan->CreateRATWindow("Sample 1 - 1", 10, 20, 640, 480);
            wMan->CreateRATWindow("Sample 1 - 2", 150, 100, 480, 640);

            // Initialize the render manager.
            rMan->Init(wMan);

            // Show the window.
            rMan->getRenderer()->ShowWin();
        }

    How do I get the multiple windows to work?


  • Why are my sprite sheet's frames not visible in Cocos Builder?

    - by Ramy Al Zuhouri
    I have created a sprite sheet with Zwoptex. Then I just dragged the .plist and .png files into my CocosBuilder project. After this I wanted to take a sprite frame and set it on a sprite, but the sprite image is empty! At first I thought it was just empty in CocosBuilder, and that it must work correctly when imported into a project. But if I try to run that scene with CCBReader, I still see an empty sprite. Why?


  • Constrained A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to constrain a bit. Basically, I use A* to find the shortest path between two randomly placed rooms in 3D space, and then build a corridor between them. The problem I found is that it sometimes makes chimney-like corridors that are not ideal, so I constrain the A* so that if the last movement was up or down, the next one must be sideways. Everything is fine, but in some corner cases it fails to find a path (when there is obviously one), like here between the blue and red dot (I'm in Unity, by the way, but I don't think it matters):

    Here is the code of the actual A* (a bit long, and somewhat redundant):

        while (current != goal)
        {
            // add stair up / stair down
            foreach (Node<GridUnit> test in current.Neighbors)
            {
                if (!test.Data.empty && test != goal)
                    continue;

                // bug at arrival
                if (test == goal && penul != null)
                {
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if (!Mathf.Approximately(currentDiff.y, 0))
                    {
                        // wanna drop on the last
                        if (!coplanar(test.Data.bounds.center, current.Data.bounds.center,
                                      current.Data.parentUnit.bounds.center, to.Data.bounds.center))
                        {
                            continue;
                        }
                        else
                        {
                            if (Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) &&
                                Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z))
                            {
                                continue;
                            }
                        }
                    }
                }

                if (current.Data.parentUnit != null)
                {
                    Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center;
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if (!Mathf.Approximately(previousDiff.y, 0))
                    {
                        if (!Mathf.Approximately(currentDiff.y, 0))
                        {
                            // you wanna drop now:
                            continue;
                        }
                        if (current.Data.parentUnit.parentUnit != null)
                        {
                            if (!coplanar(test.Data.bounds.center, current.Data.bounds.center,
                                          current.Data.parentUnit.bounds.center,
                                          current.Data.parentUnit.parentUnit.bounds.center))
                            {
                                continue;
                            }
                            else
                            {
                                if (Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) &&
                                    Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z))
                                {
                                    continue;
                                }
                            }
                        }
                    }
                }

                g = current.Data.g + HEURISTIC(current.Data, test.Data);
                h = HEURISTIC(test.Data, goal.Data);
                f = g + h;

                if (open.Contains(test) || closed.Contains(test))
                {
                    if (test.Data.f > f)
                    {
                        // found a shorter path passing through that point
                        test.Data.f = f;
                        test.Data.g = g;
                        test.Data.h = h;
                        test.Data.parentUnit = current.Data;
                    }
                }
                else
                {
                    // never encountered before
                    test.Data.f = f;
                    test.Data.h = h;
                    test.Data.g = g;
                    test.Data.parentUnit = current.Data;
                    open.Add(test);
                }
            }

            closed.Add(current);

            if (open.Count == 0)
            {
                Debug.Log("nothingfound");
                // nothing more to test, no path found, stay at "from"
                List<GridUnit> r = new List<GridUnit>();
                r.Add(from.Data);
                return r;
            }

            // sort open from smallest to biggest travel cost
            open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y)
            {
                return (int)(x.Data.f - y.Data.f);
            });

            // get the smallest travel cost node
            Node<GridUnit> smallest = open[0];
            current = smallest;
            open.RemoveAt(0);
        }

        // build the path going backward
        List<GridUnit> ret = new List<GridUnit>();
        if (penul != null)
        {
            ret.Insert(0, to.Data);
        }
        GridUnit cur = goal.Data;
        ret.Insert(0, cur);
        do
        {
            cur = cur.parentUnit;
            ret.Insert(0, cur);
        } while (cur != from.Data);
        return ret;

    You see, at the start of the foreach I constrain the A* as I said. If you have any insight it would be cool. Thanks.
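
    A structural way out, offered as a suggestion: the dead ends come from constraining the neighbor test while the search state is still just the cell, so a cell closed when entered vertically can never be re-expanded when entered sideways, even when that would lead somewhere. Making the direction of arrival part of the A* state removes those false dead ends without loosening the corridor rule. A minimal C# sketch of the idea (names are mine, not from the code above):

        // Key the open/closed sets on (cell, cameVertically) instead of cell:
        // the same cell may be reachable one way but not the other.
        struct SearchState
        {
            public GridUnit Cell;
            public bool CameVertically;   // was the step into Cell up or down?
        }

        // g-scores, parents and the closed set then become:
        Dictionary<SearchState, float> gScore;
        Dictionary<SearchState, SearchState> parent;
        HashSet<SearchState> closed;

    With this, the "no two vertical moves in a row" rule is enforced only when expanding a state whose CameVertically is true, and the search stays complete: it prunes illegal continuations, not cells.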


  • Vertex data split into separate buffers or one one structure?

    - by kiba2
    Is it better to have all vertex data in one structure, like this:

        class MyVertex
        {
            int x, y, z;                    // position
            int u, v;                       // texture coordinates
            int normalx, normaly, normalz;  // normal
        };

    or to have each component (position, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because it'd always be the same for each instance of a shared vertex, and that seems to be true for things like character models (e.g., the normal should be an average of adjacent normals for smooth lighting). One instance where this doesn't seem to work is other kinds of meshes, like say a cube, where the texture coordinates for each face may be the same, but that makes them differ where vertices are shared. Does everybody normally keep them separate? Won't this make them less space-efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for position) vs. non-indexed buffers in the same VBO?
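
    On the last question: core OpenGL has no per-attribute indices. glDrawElements applies one index stream to every enabled attribute, so a vertex that differs in any attribute (a cube corner with three normals, a UV seam) must be duplicated whether the layout is interleaved or not; that duplication is the accepted cost in both layouts. For the layout itself, a C# sketch of the interleaved form the post describes (floats rather than ints, which is the usual choice):

        // One interleaved vertex: position, normal and UV travel together,
        // which is cache-friendly since the GPU reads all three per vertex.
        struct MyVertex
        {
            public float X, Y, Z;      // position
            public float NX, NY, NZ;   // normal
            public float U, V;         // texture coordinates
        }

    Separate (non-interleaved) buffers also work and can pay off when one attribute changes every frame while the others are static; for static meshes, interleaving is the common default.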


  • CUDA 4.1 Update

    - by N0xus
    I'm currently working on porting a particle system to update on the GPU via CUDA. With CUDA, I've already passed the required data over to the GPU and allocated and copied the data via the host. When I build the project, it all compiles fine, but when I run it, the project says I need to allocate my h_position pointer. This pointer is my host pointer and is meant to hold the data. I know I need to pass the current particle positions to the required cudaMemcpy call, and they are currently stored in a list, with a for loop iterating over each particle and calling the following line of code:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    My current host-side CUDA code looks like this:

        float* h_position; // Your host pointer. This holds the data (I assume it's already filled with the data.)
        float* d_position; // Your device pointer, we will allocate and fill this
        float* d_velocity;
        float* d_time;

        int threads_per_block = 128; // You should play with this value
        int blocks = m_maxParticles / threads_per_block + ((m_maxParticles % threads_per_block) ? 1 : 0);

        const int N = 10;
        size_t size = N * sizeof(float);

        cudaMalloc((void**)&d_position, m_maxParticles * sizeof(float));
        cudaMemcpy(d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

    Both of these can be found inside my UpdateParticle() method. I had originally thought it would be a simple case of changing the h_position variable in the cudaMemcpy to m_particleList[i], but then I get the following error:

        no suitable conversion function from "ParticleSystemClass::ParticleType" to "const void *" exists

    I've probably messed up somewhere, but could someone please help fix the issues I'm facing? Everything else seems to be running fine; it's just when I try to run the program that certain things hit the fan.

