Search Results

Search found 43935 results on 1758 pages for 'development process'.


  • Identify which CCSprite is touched in Cocos2d

    - by PeterK
    I am trying to learn Cocos2d and am experimenting with Ray Wenderlich's whack-a-mole tutorial: www.raywenderlich.com/2560/how-to-create-a-mole-whacking-game-with-cocos2d-part-1 In that tutorial three CCSprites pop up and you click on them. However, I am trying to identify which mole (a rat in my case) is popping up and place a CCSprite above it. Initially this looked like an easy task, but I am failing. I am trying to NSLog "HIT LEFT". I would guess the problem is in the if-statement and the last "227" height parameter. The left rat's boundingBox = {{99.5, 146.5}, {165, 227}} (from NSLog). The key code is in the ccTouchBegan function:

        -(BOOL) ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
            for (CCSprite *rat in rats) {
                if (rat.userData == FALSE) continue;
                if (CGRectContainsPoint(rat.boundingBox, touchLocation)) {
                    //left:  rat boundingBox = {{99.5, 146.5}, {165, 227}}
                    //mid:   rat boundingBox = {{349.5, 146.5}, {165, 227}}
                    //right: rat boundingBox = {{599.5, 146.5}, {165, 227}}
                    //>>>> Here is where I try to get a hit <<<<
                    if (CGRectContainsPoint(CGRectMake(99.5, 146.55, 165, 227), touchLocation)) {
                        NSLog(@">>>>HIT LEFT<<<<<");
                    }

    I would really appreciate a few ideas on how to get this to work.

    Read the article

  • Rendering performance in FlasCC + UDK when compared to Stage3d and UDK on Windows?

    - by Arthur Wulf White
    Adobe recently released the Flash C++ Compiler, which UDK uses to target Flash Player. Developers can now access UDK for browser applications. Does this mean greater performance than using a Stage3D engine (Away3D 4), and how noticeable a difference would it make in rendering speed? Is there any benchmark you could propose that would allow comparing them fairly? I am asking this to help myself understand the performance consequences of deciding to use UDK in a browser-based game. I would also like to know how it compares with UDK running natively on Windows. I am not asking which technology to use or which is better; I am only interested in optimizing rendering speed in a 3D browser game with Flash.

    Read the article

  • Detecting long held keys on keyboard

    - by Robinson Joaquin
    I just want to ask whether I can check for a "KEY" (keyboard) that is held/pressed for a long time, because I am going to create a clone of Breakout with air hockey for 2 different human players. Here is the list of my concerns: Do I need another/3rd-party library for key holds? Is multi-threading needed? I don't know anything about this multi-threading stuff and I don't plan on using it (I'm just a NEWBIE). One more thing: what if the two players press their respective keys at the same time? How can I program it to avoid errors, or worse, one player's key being prioritized before the key of the other? Example: Player 1 = W for UP & S for DOWN; Player 2 = O for UP & L for DOWN (example: W & L are pressed at the same time). PS: I use GLUT for the visuals of the game.
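
    A minimal sketch of the usual technique (not from the original post): keep a boolean state per key, set it on key-down, clear it on key-up, and poll the whole state every frame. No threading is needed, and simultaneous keys are handled naturally because each frame reads both players' keys. In GLUT the callbacks would be registered with glutKeyboardFunc and glutKeyboardUpFunc; the sketch below is plain Java with hypothetical onKeyDown/onKeyUp hooks and an illustrative Paddle type, since the idea is the same in any toolkit.

        import java.util.BitSet;

        // Per-key state tracking; onKeyDown/onKeyUp are hypothetical hooks that
        // your windowing library calls when a key goes down or comes back up.
        class KeyState {
            private final BitSet held = new BitSet(256);
            void onKeyDown(char key) { held.set(Character.toLowerCase(key)); }
            void onKeyUp(char key)   { held.clear(Character.toLowerCase(key)); }
            boolean isHeld(char key) { return held.get(Character.toLowerCase(key)); }
        }

        class Paddle {
            float y;
            void move(float dy) { y += dy; }
        }

        class Game {
            // Called once per frame; both players are polled every frame, so W and L
            // held at the same time both take effect, with no ordering or priority issues.
            static void updatePaddles(KeyState keys, Paddle p1, Paddle p2, float dt) {
                float speed = 250f;
                if (keys.isHeld('w')) p1.move( speed * dt);
                if (keys.isHeld('s')) p1.move(-speed * dt);
                if (keys.isHeld('o')) p2.move( speed * dt);
                if (keys.isHeld('l')) p2.move(-speed * dt);
            }
        }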

    Read the article

  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties:
      - No stretching (all textures tile appropriately, so there is no stretching or shearing of the texture)
      - The textures display on the correct axis relative to the model they map to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should match up with the y axis of the object. Some other faces have upside-down letters too)
      - The texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off)
      - The model is closed (all faces are completely enclosed by other faces)
    A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand, and it is probably a hard problem to automatically map textures in general, but since these textures are seamless and I just need uniform coverage it seems like an easier problem. I'm looking for an algorithmic approach to this that I can apply in general, not a tool that does it. What approach would work for this? Is there an existing one? (I assume so)

    Read the article

  • Problems in exporting terrain from autodesk 3ds

    - by Jatin Kumar
    I am trying to make a small Counter-Strike sort of game. For the terrain part I have exported the terrain in 3ds format from Autodesk 3ds Max and imported it into OpenGL using lib3ds. It is working fine, but with a few problems:
      - The terrain is mainly made up of some cuboid boxes with textures on them, placed on a big flat surface with a boundary wall. In OpenGL I have enabled anti-aliasing, but there is still too much aliasing on the boundaries (visible when rotating the camera).
      - I have tiled the floor with an image, but in OpenGL it is just the single image stretched over the complete surface.
      - I have exported an animated model (skeleton + mesh + material + animation) from 3ds Max and used the cal3d library for reading it. The model also has a gun, which is not appearing in OpenGL, and it too has too much of an aliasing problem.
    I have googled around but couldn't find any relevant solutions. Thanks in advance

    Read the article

  • What could cause a pixel shader to paint outside the lines of the vertex shader output?

    - by Rei Miyasaka
    From what I understand, the pixels that a pixel shader operates on are specified implicitly by the SV_POSITION output (in DirectX) of the vertex shader. What then could cause a pixel shader to render in the middle of nowhere? I used the new Visual Studio 2012 graphics debugger to visualize my vertex and pixel shader output. This is the output from a DrawIndexed() call that draws a cube: the pink part is the rendered output of the pixel shader, which takes the cube on its left as its input. The vertex shader code:

        cbuffer Buf
        {
            float4x4 final;
        };

        struct In
        {
            float4 pos:POSITION;
            float3 norm:NORMAL;
            float2 texuv:TEXCOORD;
        };

        struct Out
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        Out main(In input)
        {
            Out output;
            output.pos = mul(input.pos, final);
            output.col = float4(1.0f, 0.5f, 0.5f, 1.0f);
            output.tex = input.texuv;
            return output;
        }

    And the pixel shader:

        struct In
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        float4 main(In input) : SV_TARGET
        {
            return input.col;
        }

    The raster stage is the only thing between the vertex shader and the pixel shader, so my suspicion is that it's some raster stage settings. But the raster stage shouldn't change the shape of the vertex shader output so drastically, should it?

    Read the article

  • How to Hide the code of HTML5 games [closed]

    - by jeyanthinath
    Possible Duplicate: HTML5 game obfuscation. I am beginning to develop games in HTML5, and I have a doubt: when the game is used online, its source can be visible to others even if we use complex code and references to JavaScript files. So what is the use of HTML5 if everyone can download the code and use their own modified version? Is it possible to hide the code of HTML5 web-page games, or is there some other way to make it not visible to users? If not, what is the point of HTML5, as it is open to the user as well?

    Read the article

  • Is there an algorithm for a pool game?

    - by Dmitri
    Hello! I am looking for an algorithm to calculate the direction and speed of balls in a pool game. I am sure there has to be some type of open source code for this, since pool games are some of the oldest computer games I can remember. I mean, when one ball hits another, I need an algorithm to calculate the direction of both of them. It will depend on the exact angle at which they hit each other and on their speed. I want to practice Java coding, so I am looking for Java code or a package that has this type of code.
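
    A minimal sketch of the standard response (not from the original question), assuming equal-mass, frictionless balls: when two balls touch, project each velocity onto the line joining the two centers (the collision normal) and exchange those normal components; the tangential components stay unchanged. All class and method names below are illustrative.

        // Elastic collision response for two equal-mass balls (a sketch).
        class Ball {
            double x, y;    // center position
            double vx, vy;  // velocity
        }

        class PoolPhysics {
            // Call when the two balls are touching (distance <= sum of radii).
            static void resolveCollision(Ball a, Ball b) {
                double nx = b.x - a.x, ny = b.y - a.y;   // collision normal
                double len = Math.sqrt(nx * nx + ny * ny);
                if (len == 0) return;                    // centers coincide; nothing sensible to do
                nx /= len; ny /= len;

                // Velocity components along the normal.
                double va = a.vx * nx + a.vy * ny;
                double vb = b.vx * nx + b.vy * ny;

                // Equal masses: the normal components are simply exchanged;
                // the components perpendicular to the normal are untouched.
                double d = vb - va;
                a.vx += d * nx; a.vy += d * ny;
                b.vx -= d * nx; b.vy -= d * ny;
            }
        }

    This is the frictionless, equal-mass special case; real pool adds rolling friction, spin and cushion response, but this already reproduces the "direction depends on where they hit and how fast" behaviour described above.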

    Read the article

  • Advantages and disadvantages of libgdx [on hold]

    - by Paul
    I've been an Android developer for a while and am thinking about getting into gaming. While looking for a game dev framework, I found that libgdx provides very friendly documentation and functionality, so I would like to use it if there is no big obstacle. But when I tried to see how many developers employ this library, I could not find that many. Is there anything wrong with this library? In other words, I would like to know its advantages and disadvantages from any experienced developer. UPDATE: After reviewing its documentation and trying to build simple games with libgdx, I decided to go with it, as its documentation is good enough and its community is very active. What I liked the most is that it provides a bunch of demo games that I can learn a lot from.

    Read the article

  • How to efficiently render resizable GUI elements in DirectX?

    - by PolGraphic
    I wonder what would be the most efficient way to render GUI elements. When we're talking about constant-size elements (that can still be moving), a texture atlas seems to be good. But what about resizable elements, say a panel with textured borders? Is there any better way than just rendering 9 rectangles with textures on them (I guess one texture and different texture coordinates for the top-left corner, borders, middle, etc., used in the shader)?
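
    A minimal sketch of the nine-slice idea mentioned above (not from the original question): the four corners keep their pixel size, the edges stretch along one axis, and the center stretches along both, so one small texture covers any panel size. The Quad type and all names below are illustrative, not a real API.

        // Nine-slice panel: compute the 9 destination rectangles and their
        // texture coordinates from a panel rectangle and a fixed border size.
        class Quad {
            float x, y, w, h;        // destination rectangle in screen units
            float u0, v0, u1, v1;    // texture coordinates (0..1)
            Quad(float x, float y, float w, float h,
                 float u0, float v0, float u1, float v1) {
                this.x = x; this.y = y; this.w = w; this.h = h;
                this.u0 = u0; this.v0 = v0; this.u1 = u1; this.v1 = v1;
            }
        }

        class NineSlice {
            // px, py, pw, ph: panel rectangle; border: corner size in pixels;
            // texBorderUV: the same border expressed in texture coordinates (0..0.5).
            static Quad[] build(float px, float py, float pw, float ph,
                                float border, float texBorderUV) {
                float[] xs = { px, px + border, px + pw - border, px + pw };
                float[] ys = { py, py + border, py + ph - border, py + ph };
                float[] us = { 0f, texBorderUV, 1f - texBorderUV, 1f };
                float[] vs = { 0f, texBorderUV, 1f - texBorderUV, 1f };
                Quad[] quads = new Quad[9];
                int i = 0;
                for (int row = 0; row < 3; row++) {
                    for (int col = 0; col < 3; col++) {
                        quads[i++] = new Quad(
                            xs[col], ys[row],
                            xs[col + 1] - xs[col], ys[row + 1] - ys[row],
                            us[col], vs[row], us[col + 1], vs[row + 1]);
                    }
                }
                return quads;  // corners keep their size; edges and the middle stretch
            }
        }

    All nine quads sample the same texture, so they can live in one atlas page and be submitted in a single batch; the only per-panel work is recomputing these rectangles when the panel is resized.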

    Read the article

  • Center directional light shadow to the cameras eye

    - by Caesar
    I'm currently drawing my directional light shadow using this view and projection:

        XMFLOAT3 dir((float)pitch, (float)yaw, (float)roll);
        XMFLOAT3 center(0.0f, 0.0f, 0.0f);
        XMVECTOR lightDir = XMLoadFloat3(&dir);
        XMVECTOR lightPos = radius * lightDir;
        XMVECTOR targetPos = XMLoadFloat3(&center);
        XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
        XMMATRIX V = XMMatrixLookAtLH(lightPos, targetPos, up); // This is the view

        // Transform bounding sphere to light space.
        XMFLOAT3 sphereCenterLS;
        XMStoreFloat3(&sphereCenterLS, XMVector3TransformCoord(targetPos, V));

        // Ortho frustum in light space encloses scene.
        float l = sphereCenterLS.x - radius;
        float b = sphereCenterLS.y - radius;
        float n = sphereCenterLS.z - radius;
        float r = sphereCenterLS.x + radius;
        float t = sphereCenterLS.y + radius;
        float f = sphereCenterLS.z + radius;
        XMMATRIX P = XMMatrixOrthographicOffCenterLH(l, r, b, t, n, f); // This is the projection

    This works perfectly if the center of my scene is at (0.0, 0.0, 0.0). What I would like to do is move the center of the scene relative to the camera's position. How can I do that?

    Read the article

  • What are the cons of using DrawableGameComponent for every instance of a game object?

    - by Kensai
    I've read in many places that DrawableGameComponents should be saved for things like "levels" or some kind of managers instead of using them, for example, for characters or tiles (Like this guy says here). But I don't understand why this is so. I read this post and it made a lot of sense to me, but these are the minority. I usually wouldn't pay too much attention to things like these, but in this case I would like to know why the apparent majority believes this is not the way to go. Maybe I'm missing something.

    Read the article

  • Basic tutorial/introduction to 3D matrices, ideally in C++, without OpenGL or DirectX

    - by René Nyffenegger
    I am wondering if there is a simple tutorial that covers the basics of how to initialize rotation, translation and projection matrices, how to multiply them, and how to get the screen coordinates for a 3D point afterwards. Ideally, the tutorial comes with compilable code and does not depend on any third-party library. Searching the internet, I found lots of tutorials, so that is not the problem. Yet it seemed all of these either covered OpenGL or DirectX, or they were theoretical in nature.
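
    Not part of the original question, but as a compact illustration of what such a tutorial covers, here is a self-contained sketch of the arithmetic: building rotation, translation and projection matrices, multiplying them, and projecting a 3D point to pixel coordinates. It is written in Java only because the arithmetic is language-neutral; it translates line for line to C++ and uses a column-vector convention (p' = M * p).

        // 4x4 matrix basics (a sketch, column-vector convention).
        class Mat4 {
            double[][] m = new double[4][4];

            static Mat4 identity() {
                Mat4 r = new Mat4();
                for (int i = 0; i < 4; i++) r.m[i][i] = 1.0;
                return r;
            }

            static Mat4 translation(double tx, double ty, double tz) {
                Mat4 r = identity();
                r.m[0][3] = tx; r.m[1][3] = ty; r.m[2][3] = tz;
                return r;
            }

            static Mat4 rotationY(double angleRad) {
                Mat4 r = identity();
                double c = Math.cos(angleRad), s = Math.sin(angleRad);
                r.m[0][0] = c;  r.m[0][2] = s;
                r.m[2][0] = -s; r.m[2][2] = c;
                return r;
            }

            // Simple perspective projection (vertical fov in radians, camera looking down -z).
            static Mat4 perspective(double fovY, double aspect, double near, double far) {
                Mat4 r = new Mat4();
                double f = 1.0 / Math.tan(fovY / 2.0);
                r.m[0][0] = f / aspect;
                r.m[1][1] = f;
                r.m[2][2] = (far + near) / (near - far);
                r.m[2][3] = (2.0 * far * near) / (near - far);
                r.m[3][2] = -1.0;
                return r;
            }

            Mat4 multiply(Mat4 other) {           // this * other
                Mat4 r = new Mat4();
                for (int i = 0; i < 4; i++)
                    for (int j = 0; j < 4; j++)
                        for (int k = 0; k < 4; k++)
                            r.m[i][j] += m[i][k] * other.m[k][j];
                return r;
            }

            // Transform a 3D point, do the perspective divide, map to pixels.
            double[] projectToScreen(double x, double y, double z, int width, int height) {
                double[] p = { x, y, z, 1.0 }, out = new double[4];
                for (int i = 0; i < 4; i++)
                    for (int k = 0; k < 4; k++)
                        out[i] += m[i][k] * p[k];
                double ndcX = out[0] / out[3], ndcY = out[1] / out[3];  // clip space -> NDC
                double sx = (ndcX * 0.5 + 0.5) * width;                 // NDC -> pixels
                double sy = (1.0 - (ndcY * 0.5 + 0.5)) * height;        // y flipped for screen space
                return new double[] { sx, sy };
            }
        }

    Usage follows the usual order: Mat4 mvp = Mat4.perspective(1.0, 16.0/9.0, 0.1, 100.0).multiply(view).multiply(model), then mvp.projectToScreen(x, y, z, w, h); the model matrix is applied first because the point multiplies from the right.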

    Read the article

  • Cropping a line (laser beam) in XNA

    - by electroflame
    I have a laser sprite that I wish to crop. I want to crop it so when it collides with an item, I can calculate the distance between the starting point, and the ending point, and only draw that. This eliminates the "overdraw" of a laser drawing past an item. Essentially, I'm trying to crop a line, but also keep that line "attached" to the nose of my ship. The line should not be drawn past the nose of my ship, that should be the starting point. There is no rotation to worry about. Currently, I thought that doing this through SpriteBatch would be best. This is my current SpriteBatch code:

        spriteBatch.Draw(Laser.sprite,
            new Rectangle((int)Laser.position.X, (int)Laser.position.Y, Laser.sprite.Width, LaserHeight),
            new Rectangle(0, 0, (int)(Laser.sprite.Width), LaserHeight),
            new Color(255, 255, 255, (byte)MathHelper.Clamp(Laser.Alpha, 0, 255)),
            Laser.rotation,
            new Vector2(Laser.sprite.Width/2, LaserHeight/2),
            SpriteEffects.None, 0);

    But this doesn't quite work. It does only draw part of the sprite, but when LaserHeight is incremented, it lengthens the line in both ways! I believe this is due to some stupid error on my part with the Origin of the draw. Quick recap: I need to have my laser sprite drawn with the bottom of it at the nose of my ship, and then use LaserHeight to crop the image so only part of it is drawn. I have a feeling my explanation is a bit...lacking. So if you require more information, please say so and I will try to provide more information. Thanks in advance!

    Read the article

  • stack management in CLR

    - by enableDeepak
    I understand the basic concept of the stack and the heap, but it would be great if anyone could resolve the following confusions:
      - Is there a single stack for the entire application process, or is a new stack created for each thread that starts?
      - Is there a single heap for the entire application process, or is a new heap created for each thread that starts?
      - If stacks are created for each thread, then how does the process manage the sequential flow of threads (and hence stacks)?

    Read the article

  • tunnel effect cocos2d

    - by samfisher
    I am looking to create a similar tunnel effect in Cocos2D (iOS). Could anyone suggest any pointers? ref Video 1 ref Video 2 So far I have tried several ring-shaped sprites with decreasing scale, positioned centered on the same point, with Z decreasing as well for each smaller sprite. With that, I animate them with CCScaleTo, changing the scale to 2.0 over the animation duration, but it does not come anywhere near the tunnel effect shown in the references. Thanks, sam

    Read the article

  • How to change speed without changing path travelled?

    - by Ben Williams
    I have a ball which is being thrown from one side of a 2D space to the other. The formula I am using for calculating the ball's position at any one point in time is: x = x0 + vx0*t y = y0 + vy0*t - 0.5*g*t*t where g is gravity, t is time, x0 is the initial x position, vx0 is the initial x velocity. What I would like to do is change the speed of this ball, without changing how far it travels. Let's say the ball starts in the lower left corner, moves upwards and rightwards in an arc, and finishes in the lower right corner, and this takes 5s. What I would like to be able to do is change this so it takes 10s or 20s, but the ball still follows the same curve and finishes in the same position. How can I achieve this? All I can think of is manipulating t but I don't think that's a good idea. I'm sure it's something simple, but my maths is pretty shaky.
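
    One standard trick (not from the original question): to stretch the same trajectory over k times the duration, scale both initial velocity components by 1/k and gravity by 1/k^2. Substituting into the formulas above, the ball passes through exactly the same (x, y) points, just k times slower. A small sketch, with illustrative names:

        // Slow a projectile down by a factor k without changing its path.
        // With x(t) = x0 + vx0*t and y(t) = y0 + vy0*t - 0.5*g*t^2, using
        // vx0/k, vy0/k and g/(k*k) gives the same positions at time k*t.
        class Projectile {
            double x0, y0, vx0, vy0, g;

            Projectile(double x0, double y0, double vx0, double vy0, double g) {
                this.x0 = x0; this.y0 = y0; this.vx0 = vx0; this.vy0 = vy0; this.g = g;
            }

            double[] positionAt(double t) {
                return new double[] { x0 + vx0 * t, y0 + vy0 * t - 0.5 * g * t * t };
            }

            // Same arc, traversed k times slower (k = 2 doubles the flight time).
            Projectile slowedDownBy(double k) {
                return new Projectile(x0, y0, vx0 / k, vy0 / k, g / (k * k));
            }
        }

    For example, if the original throw takes 5 s from the lower-left corner to the lower-right corner, slowedDownBy(2) produces a throw that follows the identical curve and lands in the same spot after 10 s.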

    Read the article

  • 2D isometric: screen to tile coordinates

    - by Dr_Asik
    I'm writing an isometric 2D game and I'm having difficulty figuring out precisely which tile the cursor is on. Here's a drawing: where xs and ys are screen coordinates (pixels), xt and yt are tile coordinates, W and H are tile width and tile height in pixels, respectively. My notation for coordinates is (y, x), which may be confusing, sorry about that. The best I could figure out so far is this:

        int xtemp = xs / (W / 2);
        int ytemp = ys / (H / 2);
        int xt = (xs - ys) / 2;
        int yt = ytemp + xt;

    This seems almost correct but is giving me a very imprecise result, making it hard to select certain tiles, or sometimes it selects a tile next to the one I'm trying to click on. I don't understand why and I'd like it if someone could help me understand the logic behind this. Thanks!
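
    Not from the original question, but a sketch of the inverse mapping under one common convention, where tile (xt, yt) projects to screen xs = (xt - yt) * W/2, ys = (xt + yt) * H/2 and tile (0,0) sits at screen (0,0); your own origin and offsets may differ. One frequent source of imprecision is doing the divisions in integers part-way through instead of rounding once at the end. Names and tile sizes below are illustrative.

        // Screen -> tile conversion for a 2:1 isometric grid (a sketch).
        class IsoGrid {
            static final int W = 64;  // tile width in pixels (example value)
            static final int H = 32;  // tile height in pixels (example value)

            static int[] screenToTile(float xs, float ys) {
                // Inverting xs = (xt - yt) * W/2, ys = (xt + yt) * H/2:
                //   xt = ys/H + xs/W,   yt = ys/H - xs/W
                // Work in floats and only floor at the very end.
                float xt = ys / H + xs / W;
                float yt = ys / H - xs / W;
                return new int[] { (int) Math.floor(xt), (int) Math.floor(yt) };
            }
        }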

    Read the article

  • libgdx intersection problem between rectangle and circle

    - by Chris
    My collision detection in libgdx is somehow buggy. player.png is 20*80px and ball.png 25*25px. Code:

        @Override
        public void create() {
            // ...
            batch = new SpriteBatch();
            playerTex = new Texture(Gdx.files.internal("data/player.png"));
            ballTex = new Texture(Gdx.files.internal("data/ball.png"));

            player = new Rectangle();
            player.width = 20;
            player.height = 80;
            player.x = Gdx.graphics.getWidth() - player.width - 10;
            player.y = 300;

            ball = new Circle();
            ball.x = Gdx.graphics.getWidth() / 2;
            ball.y = Gdx.graphics.getHeight() / 2;
            ball.radius = ballTex.getWidth() / 2;
        }

        @Override
        public void render() {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            camera.update();

            // draw player, ball
            batch.setProjectionMatrix(camera.combined);
            batch.begin();
            batch.draw(ballTex, ball.x, ball.y);
            batch.draw(playerTex, player.x, player.y);
            batch.end();

            // update player position
            if(Gdx.input.isKeyPressed(Keys.DOWN)) player.y -= 250 * Gdx.graphics.getDeltaTime();
            if(Gdx.input.isKeyPressed(Keys.UP)) player.y += 250 * Gdx.graphics.getDeltaTime();
            if(Gdx.input.isKeyPressed(Keys.LEFT)) player.x -= 250 * Gdx.graphics.getDeltaTime();
            if(Gdx.input.isKeyPressed(Keys.RIGHT)) player.x += 250 * Gdx.graphics.getDeltaTime();

            // don't let the player leave the field
            if(player.y < 0) player.y = 0;
            if(player.y > 600 - 80) player.y = 600 - 80;

            // check collision
            if (Intersector.overlaps(ball, player))
                Gdx.app.log("overlaps", "yes");
        }
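
    Not from the original post, but one thing worth checking in the code above: Circle.x/y is the circle's center, while SpriteBatch.draw(texture, x, y) treats x/y as the bottom-left corner of the texture, so drawing the ball at ball.x, ball.y shifts the visible sprite up and to the right of the collision circle by one radius. A sketch of the draw calls inside render() with that offset applied:

        // Keep the drawn ball aligned with its collision circle.
        // Circle.x / Circle.y are the CENTER; batch.draw() wants the corner.
        batch.begin();
        batch.draw(ballTex, ball.x - ball.radius, ball.y - ball.radius);
        batch.draw(playerTex, player.x, player.y);  // Rectangle.x/y already is the corner
        batch.end();

        // In create(), dividing by 2f avoids truncating the 25px ball to radius 12:
        // ball.radius = ballTex.getWidth() / 2f;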

    Read the article

  • About Alpha blending sprites in Direct3D9

    - by ambrozija
    I have a Direct3D9 application that is rendering ID3DXSprites. The problem I am experiencing is best described in this situation: I have a texture that is totally opaque. On top of it I draw a rectangle filled with a solid color and an alpha of 128. On top of the rectangle I have text that is totally opaque. I draw all of this and get the resulting image through a GetRenderTarget call. The problem is that on the resulting image, in the area where the transparent rectangle is, I have semi-transparent pixels. It is not a problem that the rectangle is transparent; the problem is that the resulting image is. The question is how to set up the blending so that in this situation I don't get transparent pixels in the resulting image. I use the sprite with D3DXSPRITE_ALPHABLEND, which sets the device state to D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA. I tried a couple of combinations of SetRenderState, like D3DBLEND_SRCALPHA, D3DBLEND_DESTALPHA etc., but couldn't make it work. Thanks.

    Read the article

  • If I use my own normal values, should I turn off winding order culling?

    - by Phil
    I've discovered that I managed to program a series of boxes with indexed vertices in such a way that every other triangle (Half of each face) has a backwards winding order. As a result, XNA is culling half of them. However, my Vertex objects contain normal data that I have explicitly set, and I am going to implement my own backface culling shortly to reduce the size of the VertexBuffer. Should I turn off winding order culling and manage it myself, or should I make sure the winding order is consistent and let XNA handle it?

    Read the article

  • I need to sell an almost-complete MMORPG project. How can I do that?

    - by Tomasz
    I need your help. We have to sell an MMORPG that is at an advanced stage of development. The game has a unique engine written specifically for it, plus graphics, sound, a map editor, a web site, etc. As usual in an MMORPG, we can develop characters and monsters. We can fight other characters or cooperate to solve challenges. We can fight using our own monsters, or by throwing our own cards with spells. Unfortunately we have no idea how to promote the game. The funds have run out and I think the whole team has given up. How can I find a buyer? Where can I find one? Thank you for your help.

    Read the article

  • JMonkey Engine (JME) load Blender scene with textures?

    - by leigero
    I am having the hardest time trying to accomplish the simplest task. I have created a floor and 4 walls (not that complicated) in Blender. I added a basic material and a cloud texture so they have something to look at other than gray. When I import them into JMonkey they show up as solid white objects with no shading or depth: white silhouettes. I thought this might be a lighting issue, but I have an ambient light added to the scene. I can remove that light or adjust its intensity and it has no effect on the scene. I exported all Blender files into OgreXML format, then converted them to .j3o format in JMonkey. I renamed the textures to match their corresponding meshes and this didn't do anything. Does anybody know how to create a flat object and put it into JMonkey with a texture? This sounds simple and there is absolutely no information on this. This should be step 1!
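
    Not from the original post, but a minimal sketch of the usual JME3 setup for this situation: lit materials render flat white (or black) unless at least one light is in the scene, and if the exported material got lost in the OgreXML-to-.j3o conversion, a Lighting material with the texture as its DiffuseMap can be re-applied by hand. The asset paths below are placeholders.

        import com.jme3.app.SimpleApplication;
        import com.jme3.light.AmbientLight;
        import com.jme3.light.DirectionalLight;
        import com.jme3.material.Material;
        import com.jme3.math.ColorRGBA;
        import com.jme3.math.Vector3f;
        import com.jme3.scene.Spatial;
        import com.jme3.texture.Texture;

        public class SceneTest extends SimpleApplication {

            @Override
            public void simpleInitApp() {
                // Load the converted Blender scene (.j3o produced by the SDK).
                Spatial scene = assetManager.loadModel("Models/room.j3o");
                rootNode.attachChild(scene);

                // Lit materials need lights; without them geometry shows no shading.
                DirectionalLight sun = new DirectionalLight();
                sun.setDirection(new Vector3f(-1f, -2f, -1f).normalizeLocal());
                rootNode.addLight(sun);

                AmbientLight ambient = new AmbientLight();
                ambient.setColor(ColorRGBA.White.mult(0.3f));
                rootNode.addLight(ambient);

                // If the exported material did not survive conversion, re-apply one
                // manually (here on the whole scene; normally you'd target a Geometry).
                Material mat = new Material(assetManager, "Common/MatDefs/Light/Lighting.j3md");
                Texture tex = assetManager.loadTexture("Textures/floor.png");
                tex.setWrap(Texture.WrapMode.Repeat);
                mat.setTexture("DiffuseMap", tex);
                scene.setMaterial(mat);
            }

            public static void main(String[] args) { new SceneTest().start(); }
        }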

    Read the article

  • Most efficient way to send images across processes

    - by Heinrich Ulbricht
    Goal: Pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista or Win7.

    Detailed description: The first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images to display them. The first process uses a third-party API to paint the images from the device to an HDC. This HDC has to be provided by me. Note: there is already a connection open between the two processes. They are communicating via anonymous pipes and share memory-mapped file views.

    Thoughts: How would I achieve this goal with as little work as possible? And I mean both work for me and for the computer. I am using Delphi, so maybe there is some component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory-mapped file, unpack it on the other side and paint it there to the destination HDC. I also read about an IPicture interface which can be used to marshal images. What are your ideas? I appreciate every thought on this!

    Read the article

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
    I have a deferred rendering system that works well with objects that appear solid and are drawn using CounterClockwise culling. I have a problem with Clockwise culled objects that are supposed to represent hollow objects and display only their inside faces. The image below shows a CounterClockwise culled object (left) and a Clockwise culled object (right). The Clockwise culled object's faces display what would be displayed on the CounterClockwise face. How can I get the lighting to light the inner faces of Clockwise culled objects while continuing to light the outer CounterClockwise faces as normal? My lighting method is below:

        private void DeferredLighting(GameTime gameTime)
        {
            // Set the render target for the lights
            game.GraphicsDevice.SetRenderTarget(lightMap);

            // Clear the render target to (0, 0, 0, 0)
            game.GraphicsDevice.Clear(Color.Transparent);

            // Set the render states
            game.GraphicsDevice.BlendState = BlendState.Additive;
            game.GraphicsDevice.DepthStencilState = DepthStencilState.None;
            game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

            // Set sampler state to Point as the Surface type requires it in XNA 4.0
            game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;

            // Set the camera properties for all lights
            BaseLight.SetCameraProperties(game.ActiveCamera);

            // Draw the lights
            int numLights = lights.Count;
            for (int i = 0; i < numLights; ++i)
            {
                if (lights[i].Diffuse.W > 0f)
                {
                    lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap);
                }
            }

            // Resolve the render target
            game.GraphicsDevice.SetRenderTarget(null);
        }

    I have tried adjusting the render states, but no combination works for both objects.

    Read the article
