Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.

Page 622/1295

  • camera movement along with model

    - by noddy
    I am making a game in which a cube travels through a maze, with the goal of crossing it safely. I have two problems with this. First, the cube needs smooth movement, as if it were sliding over a frictionless surface, and I need to drive this from an event callback function. Second, I need the camera to move along with the cube. Could someone recommend a good tutorial on positioning a camera relative to a moving object? (A sketch of one approach follows this entry.)

    Read the article
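
    A minimal, engine-agnostic sketch of one common approach to both parts of the question above: ease the camera toward an offset from the cube every frame, which both follows the cube and smooths out small jolts in its motion. The Vec3 type, the lerp helper, the offset, and the smoothing constant are assumptions for illustration, not part of any particular engine.

        #include <cmath>

        struct Vec3 {
            float x, y, z;
        };

        // Linear interpolation between a and b by factor t (0..1).
        static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
            return { a.x + (b.x - a.x) * t,
                     a.y + (b.y - a.y) * t,
                     a.z + (b.z - a.z) * t };
        }

        // Called from the per-frame update/event callback with the elapsed time dt.
        void followCube(const Vec3& cubePos, Vec3& cameraPos, float dt) {
            const Vec3 offset = { 0.0f, 5.0f, -8.0f };   // behind and above the cube (assumed)
            const Vec3 target = { cubePos.x + offset.x,
                                  cubePos.y + offset.y,
                                  cubePos.z + offset.z };
            // Frame-rate independent smoothing: the camera eases toward its target
            // instead of snapping, which hides small jitters in the cube's motion.
            const float smoothing = 1.0f - std::exp(-5.0f * dt);
            cameraPos = lerp(cameraPos, target, smoothing);
            // The camera is then pointed at cubePos with whatever look-at call the
            // engine provides.
        }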

  • LibGDX onTouch() method kill on touch

    - by johnny-b
    How can I add this to my application? I want to use the touchDown() method from my InputProcessor implementation to kill the enemies on screen. How do I do that, and do I have to change anything in the enemy class?

        @Override
        public boolean touchDown(int screenX, int screenY, int pointer, int button) {
            return false;
        }

    Here is my enemy class:

        public class Bullet extends Sprite {
            private Vector2 velocity;
            private float lifetime;

            public Bullet(float x, float y) {
                velocity = new Vector2(0, 0);
            }

            public void update(float delta) {
                // Accelerate toward the ball's current position.
                float targetX = GameWorld.getBall().getX();
                float targetY = GameWorld.getBall().getY();
                float dx = targetX - getX();
                float dy = targetY - getY();
                float distToTarget = (float) Math.sqrt(dx * dx + dy * dy);
                velocity.x += dx * delta;
                velocity.y += dy * delta;
            }
        }

    I am rendering all graphics in a GameRender class and a GameWorld class; if you need more info, please let me know. Thank you.

    Read the article

  • XNA 2D Rotated Rectangle Collision Response

    - by Kyle Uithoven
    I am using rotated rectangles that collide using the Separating Axis Theorem, and they work perfectly fine for collision detection via Intersects and Contains. However, I am now starting to use faster objects in my game, and at higher velocities the two objects overlap during a collision. I would like to add a collision response where I find out how much they overlap along X and Y and push them back outside of each other. I would like to use something like this: http://go.colorize.net/xna/2d_collision_response_xna/index.html, but I am having trouble adapting it to handle the rotation of the bounds. Is this possible? Are there any resources out there that I can look at? (A sketch of the minimum-translation idea follows this entry.)

    Read the article
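
    With SAT, the usual response is to track, over all candidate axes, the axis with the smallest overlap and push the rectangles apart along it (the minimum translation vector). A hedged C++ sketch of that bookkeeping follows; the Vec2/Projection types and the corner/axis lists are assumptions standing in for whatever the existing detection code already produces, and the direction of the push may still need flipping based on the relative positions of the two centers.

        #include <algorithm>
        #include <cfloat>
        #include <vector>

        struct Vec2 { float x, y; };
        struct Projection { float min, max; };   // interval of a shape projected onto an axis

        // Project all corners of a (rotated) rectangle onto a unit-length axis.
        Projection projectOntoAxis(const std::vector<Vec2>& corners, const Vec2& axis) {
            Projection p = { FLT_MAX, -FLT_MAX };
            for (const Vec2& c : corners) {
                float d = c.x * axis.x + c.y * axis.y;   // dot product
                p.min = std::min(p.min, d);
                p.max = std::max(p.max, d);
            }
            return p;
        }

        // Returns true and fills mtv with the smallest push that separates A from B.
        bool resolveSAT(const std::vector<Vec2>& cornersA,
                        const std::vector<Vec2>& cornersB,
                        const std::vector<Vec2>& axes,   // edge normals of both rectangles
                        Vec2& mtv)
        {
            float smallestOverlap = FLT_MAX;
            Vec2 smallestAxis = { 0.0f, 0.0f };

            for (const Vec2& axis : axes) {
                Projection pa = projectOntoAxis(cornersA, axis);
                Projection pb = projectOntoAxis(cornersB, axis);
                float overlap = std::min(pa.max, pb.max) - std::max(pa.min, pb.min);
                if (overlap <= 0.0f)
                    return false;                        // separating axis found: no collision
                if (overlap < smallestOverlap) {
                    smallestOverlap = overlap;
                    smallestAxis = axis;
                }
            }
            // Move A out of B along the least-overlapping axis (split the correction
            // between both bodies if both are allowed to move).
            mtv = { smallestAxis.x * smallestOverlap, smallestAxis.y * smallestOverlap };
            return true;
        }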

  • Is OpenGL 1.x deprecated?

    - by QuasarDonkey
    I'm familiar with OpenGL 1.x. I typically use SDL with OpenGL 1.4 on Linux, and I've never run into problems, even on my modern system. I've read on the OpenGL site about deprecation and compatibility contexts, but I'm still unclear as to whether it's safe to keep using old versions of OpenGL, as opposed to using old features in newer versions. "When functionality is marked deprecated ... future versions of OpenGL may remove it." Does deprecation simply imply that those functions can't be used alongside newer features? More specifically, are there any systems today (other than embedded) where OpenGL 1.x isn't available? I mean the old-skool stuff like glBegin, glEnd, glDrawPixels, etc. Note: I'm not a professional games developer, so you'll have to excuse my ignorance. I'm working on a mostly 2D game that I would like to keep multi-platform, supporting at least Linux, Mac, and Windows.

    Read the article

  • Adding gesture recognizer (or dragging) to CCSprite

    - by user339946
    I'm trying to allow a CCSprite to be dragged across the screen. I've succeeded so far by doing it at the layer level (from this tutorial). However, this only allows one sprite to be dragged at a time, since the method implementation can only identify a single sprite to move. I'd like to add a gesture recognizer, or somehow implement ccTouchesBegan/Moved in my own little CCSprite subclass. However, from what I understand, you can't just add gesture recognizers to CCSprites, and ccTouchMoved isn't available on CCSprites either? I'm really confused about how to implement touches in Cocos2D. What is the easiest way to add some position-translation code to a CCSprite so it can be dragged around? Thanks! (A sketch of the usual per-sprite approach follows this entry.)

    Read the article
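
    The usual per-sprite route in Cocos2D is to make the sprite (or a thin subclass) a targeted touch delegate, so each sprite claims the touch that begins inside its own bounding box and then follows it in the moved callback; that removes the single-sprite limitation of handling everything in the layer. Below is a framework-agnostic C++ sketch of that pattern; the Vec2/Rect types and the world-space touch coordinates are assumptions, and the two handlers correspond to the ccTouchBegan/ccTouchMoved pair.

        struct Vec2 { float x, y; };

        struct Rect {
            float x, y, w, h;
            bool contains(const Vec2& p) const {
                return p.x >= x && p.x <= x + w && p.y >= y && p.y <= y + h;
            }
        };

        // Each sprite claims the touch that starts inside its own bounds, so several
        // sprites can be dragged at once (one per finger).
        class DraggableSprite {
        public:
            // Returns true if this sprite claims the touch (mirrors ccTouchBegan).
            bool onTouchBegan(const Vec2& touchWorld) {
                if (!bounds().contains(touchWorld)) return false;
                grabOffset_ = { position_.x - touchWorld.x, position_.y - touchWorld.y };
                dragging_ = true;
                return true;
            }
            // Mirrors ccTouchMoved: keep the grabbed point under the finger.
            void onTouchMoved(const Vec2& touchWorld) {
                if (!dragging_) return;
                position_ = { touchWorld.x + grabOffset_.x, touchWorld.y + grabOffset_.y };
            }
            void onTouchEnded() { dragging_ = false; }

        private:
            Rect bounds() const {
                return { position_.x - size_.x * 0.5f, position_.y - size_.y * 0.5f,
                         size_.x, size_.y };
            }
            Vec2 position_ = { 0, 0 };
            Vec2 size_ = { 64, 64 };
            Vec2 grabOffset_ = { 0, 0 };
            bool dragging_ = false;
        };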

  • Cocos2D Command-Line Application

    - by Hasyimi Bahrudin
    Is it possible to create a terminal application that uses cocos2d? I've tried with cocos2d 2.x, but it requires a MacGLView to be initialized. I need this so that I can write a terminal application that generates a screenshot from a given TMX file, with an optional preferred width or height parameter (for resizing). Then I can automate the generation of map previews for my game instead of taking screenshots manually. It's not practical to load the actual TMX and resize it inside the game (which is what I'm currently doing), because each TMX file has 7 layers, my tile sheet is huge, and I have lots of levels.

    Read the article

  • How do I create an efficient long, pannable, sprite-animated scene in a Windows Store game?

    - by Groo
    I am creating my first Windows Store application in XAML, and I cannot seem to find a proper example for my requirements. The basic idea of the app is a large scrollable canvas that lazily starts animating the visible parts of the view (with some audio played as well) as soon as the user stops panning over certain content. My original idea was to use a StackPanel and add a bunch of custom controls, each of which would animate itself once visible (with a short delay), but I have a couple of concerns:

    1. If the entire canvas is ~50 screen widths wide, is it feasible to load all content at the beginning, or do I need to plan some lazy loading during scrolling? For example, when I select a certain region in the Bing Travel app, it seems to lazily load tiles as I scroll toward the end.

    2. Since the content is stretched 100% vertically, and these animations are vectorized to be resolution independent, I am not sure whether XAML (CompositionTarget) will be able to handle this, or whether I have to go to DirectX (MonoGame or C++) to get rid of flicker.

    Even better, is there an example for Windows 8 that uses a 100% vertically sized GridView with custom animated controls inside?

    Read the article

  • How can you easily determine the textureRect for tiled maps in SFML 2.0?

    - by ThePlan
    I'm working on a 2D map prototype, and I've come to the rendering part. I have a tilesheet in which each tile is 30x30 pixels, with a 1px border to delimit them. In SFML, the usual way to draw part of a tilesheet is to declare an IntRect with the sub-rectangle coordinates and then call setTextureRect() on a sprite. In a small game that would work, but I have well over 45 tiles and I'm adding more every day; I can't declare 45 IntRects, one for every material, and the map is not optimized yet, so it would only get worse. How can I simplify this task? All I need is a very simple and flexible way of extracting a region of the tilesheet. Basically I have a Tile class, I create multiple instances of tiles (in vectors), and each tile has a position and a material. I parse a map file, set the materials of the map according to the parsed file, and then all I need to do is render. Right now I need to do something like this (see the sketch after this entry for an index-based alternative):

        switch (tile.getMaterial()) {
            case GRASS:
                material_sprite.setTextureRect(something);
                window.draw(material_sprite);
                break;
            case WATER:
                material_sprite.setTextureRect(something);
                window.draw(material_sprite);
                break;
            // handle more cases
        }

    Read the article
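
    A minimal sketch of computing the sub-rectangle from a numeric tile index instead of a per-material switch, assuming the material enum values are numbered in the same order as the tiles in the sheet. The 30px tile size and 1px border come from the question; the column count is an assumption to adjust to the actual sheet.

        #include <SFML/Graphics.hpp>

        // Maps a tile index (0, 1, 2, ...) to its sub-rectangle in the tilesheet.
        sf::IntRect tileRect(int index, int tileSize = 30, int border = 1, int columns = 16)
        {
            const int stride = tileSize + border;   // distance between tile origins
            const int col = index % columns;
            const int row = index / columns;
            return sf::IntRect(col * stride, row * stride, tileSize, tileSize);
        }

        // Usage: if the material enum follows sheet order, the switch collapses to
        //     material_sprite.setTextureRect(tileRect(static_cast<int>(tile.getMaterial())));
        //     window.draw(material_sprite);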

  • Help with Meshes and Shading

    - by Brian Diehr
    In a game I'm making in LibGDX, I wish to incorporate a ripple effect. I'm confused as to how I get access to the individual pixels on the screen, or any way to influence them (apart from what I can do with SpriteBatch). I suspect I have to do it through OpenGL, and that it has something to do with applying a mesh? Which brings me to my question: what exactly is a mesh? I've been working on my game for about half a year and am doing great with the other aspects of it, but I find this more advanced stuff isn't as well documented. Thanks!

    Read the article

  • What is the minimum shader I need to run basic calculations on the GPU?

    - by Jinxi
    I read that the hull shader, domain shader, geometry shader, and pixel shader can be used optionally. So, is the vertex shader optional too? If not: what does a basic vertex shader look like? Just a simple pass-through? Is the vertex shader necessary to tell what kind of data structure (fans, strips, or meshes) is used? What can I do with just the vertex shader? Do the fixed functions work without programming any of the programmable stages?

    Read the article

  • Using PhysX, how can I predict where I will need to generate procedural terrain collision shapes?

    - by Sion Sheevok
    In this situation, I have terrain height values I generate procedurally. For rendering, I use the camera's position to generate an appropriately sized height map. For collision, however, I need height fields generated in areas where objects may intersect the terrain. My current potential solution, which may be naive, is to iterate over all "awake" physics actors, use their bounds/extents and velocities to generate spheres in which they may reside after a physics update, then generate height values for ranges encompassing clustered groups of actors. Much of that data is likely already calculated by PhysX, however. Is there some API (maybe a set of queries, or even callbacks from the spatial system) that I could use to predict where terrain height values will be needed?

    Read the article

  • Can GJK be used with the same "direction finding method" every time?

    - by the_Seppi
    In my deliberations on GJK (after watching http://mollyrocket.com/849) I came up with the idea that it is not necessary to use different methods for getting the new direction in the doSimplex function. E.g. if the point A is closest to the origin, the video's author uses the vector AO (the negative position vector of A) as the direction in which the next point is searched. If an edge (with A as an endpoint) is closest, he creates a vector normal to this edge, lying in the plane formed by the edge and AO. If a face is the feature closest to the origin, he uses yet another method (which I can't recite from memory right now). However, while thinking about the implementation of GJK in my current game, I noticed that the direction from the newest simplex point toward the origin would always seem to make a good direction vector. Of course, the next vertex found by the support function could form a simplex that is less likely to enclose the origin, but I assume it would still work. Since I'm currently experiencing problems with my (yet unfinished) implementation, I wanted to ask whether this way of forming the direction vector is usable or not. (The standard edge-case direction is sketched after this entry for comparison.)

    Read the article
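
    For comparison, the direction the tutorial derives for the edge (line) case is the triple cross product (AB x AO) x AB, i.e. the component of AO perpendicular to the edge. A short C++ sketch of that computation, with a plain Vec3 type assumed; using plain AO instead, as the question proposes, still points toward the origin but is not perpendicular to the current feature, so the support function can return points already in the simplex and progress can stall.

        struct Vec3 {
            float x, y, z;
        };

        static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static Vec3 cross(const Vec3& a, const Vec3& b) {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }

        // Line case of doSimplex: A is the newest simplex point, B the older one.
        // Returns the direction perpendicular to AB that points toward the origin.
        Vec3 lineCaseDirection(const Vec3& a, const Vec3& b)
        {
            const Vec3 ab = sub(b, a);
            const Vec3 ao = { -a.x, -a.y, -a.z };   // from A toward the origin
            return cross(cross(ab, ao), ab);        // (AB x AO) x AB
        }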

  • XNA Spritebatch sorting by texture vs depth

    - by Motig
    I am refining my 2D game engine, and I want to look into sorting sprite batches by texture (because I'm quite often using the same textures repeatedly). However, I also want to retain a few 'layers' of depth (i.e. ground < buildings < units < GUI, etc.). My question is: which of the following is the better approach in terms of performance?

    1. Create multiple SpriteBatches and Begin() and End() them in order; or
    2. Create a single SpriteBatch and call Begin() and End() multiple times, once for each layer (in order)?

    Read the article

  • What's the closest thing to Apple's SpriteKit on Android devices? [on hold]

    - by Krumelur
    I've been playing around with the iOS 7 SpriteKit APIs and I totally love them. As I'm pretty much a n00b on Android, I'm wondering what the best alternative would be if I wanted to go cross-platform. I find Cocos2D's learning curve pretty steep, whereas with SpriteKit it's a matter of minutes to get something on the screen. Then there's MonoGame and Cocos2D for MonoGame; I haven't tried either one, I must admit.

    Read the article

  • Top Down bounds of vision

    - by Rorrik
    Obviously, from a first-person viewpoint the player sees only what is in front of them (with the exception of radars, rear-view mirrors, etc.). My game has a top-down perspective, but I still want to limit what the character sees based on their facing. I've already worked out having objects obstruct vision, but there are two other factors that I worry would be disorienting and want to get right. First, I want the player to have reduced peripheral vision and very little view behind them; the assumption is that they can turn their head and so see fairly well out to the sides, but hardly at all behind without turning the whole body. How do I make it clear that you are not seeing behind you? Second, I want the map to turn so the player is always facing up; part of the game is to experience a kind of maze, and the player should be able to lose track of north. How can I rotate the map rather than the player avatar without causing confusion? (A sketch of an angle-based visibility test follows this entry.)

    Read the article
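
    A minimal sketch of the angle test that usually backs this kind of facing-dependent vision: compare the direction to an object with the facing direction and fade out anything that falls outside the forward cone. The cone angles and the residual "awareness" value behind the player are placeholders to tune, not recommendations.

        #include <cmath>

        struct Vec2 { float x, y; };

        static float dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

        // Returns a visibility factor in [0, 1]: 1 inside the forward cone, fading
        // toward a small residual value behind the player. facing must be unit length.
        float visibility(const Vec2& playerPos, const Vec2& facing, const Vec2& objectPos)
        {
            Vec2 toObject = { objectPos.x - playerPos.x, objectPos.y - playerPos.y };
            const float dist = std::sqrt(dot(toObject, toObject));
            if (dist < 1e-4f) return 1.0f;                 // object is on top of the player
            toObject = { toObject.x / dist, toObject.y / dist };

            const float cosAngle   = dot(facing, toObject);                    // 1 = ahead, -1 = behind
            const float fullVision = std::cos(70.0f  * 3.14159265f / 180.0f);  // forward cone (placeholder)
            const float noVision   = std::cos(140.0f * 3.14159265f / 180.0f);  // effectively blind past here

            if (cosAngle >= fullVision) return 1.0f;
            if (cosAngle <= noVision)   return 0.1f;       // faint awareness behind the player
            // Smoothly reduce peripheral vision between the two cones.
            return 0.1f + 0.9f * (cosAngle - noVision) / (fullVision - noVision);
        }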

  • OpenGL ES 2.0 gluUnProject

    - by secheung
    I've spent more time than I should have trying to get my ray-picking program working. I'm pretty convinced my math is solid with respect to line-plane intersection, but I believe the problem lies in converting the touch on the screen into 3D world space. Here's my code (see the sketch after this entry for the intersection step):

        public void passTouchEvents(MotionEvent e) {
            int[] viewport = {0, 0, viewportWidth, viewportHeight};
            // Flip Y: screen coordinates grow downward, GL window coordinates grow upward.
            float x = e.getX(), y = viewportHeight - e.getY();

            float[] pos1 = new float[4];
            float[] pos2 = new float[4];

            // Unproject the touch at the near (winZ = 0) and far (winZ = 1) planes to
            // get two points defining the pick ray.
            GLU.gluUnProject(x, y, 0.0f, mViewMatrix, 0, mProjectionMatrix, 0,
                             viewport, 0, pos1, 0);
            GLU.gluUnProject(x, y, 1.0f, mViewMatrix, 0, mProjectionMatrix, 0,
                             viewport, 0, pos2, 0);
        }

    Just as a reference, I've tried transforming the coordinates (0, 0, 0) and got an offset. I'd appreciate an answer using OpenGL ES 2.0 code. Thanks.

    Read the article
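
    Once the two unprojected points are available, building the pick ray and intersecting it with a plane is plain vector math; a C++ sketch is below, with a simple Vec3 type assumed. One common gotcha with Android's GLU.gluUnProject in particular is that it returns homogeneous coordinates, so each 4-component result generally needs to be divided by its w component (index 3) before being treated as a 3D point, which would also explain an unexpected offset when unprojecting (0, 0, 0).

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Intersects the ray through nearPoint/farPoint (the winZ = 0 and winZ = 1
        // unprojections, already divided by w) with the plane dot(n, p) = d.
        // Returns false if the ray is parallel to the plane or points away from it.
        bool rayPlaneIntersect(const Vec3& nearPoint, const Vec3& farPoint,
                               const Vec3& planeNormal, float planeD, Vec3& hit)
        {
            const Vec3 dir = sub(farPoint, nearPoint);
            const float denom = dot(planeNormal, dir);
            if (std::fabs(denom) < 1e-6f) return false;

            const float t = (planeD - dot(planeNormal, nearPoint)) / denom;
            if (t < 0.0f) return false;

            hit = { nearPoint.x + dir.x * t,
                    nearPoint.y + dir.y * t,
                    nearPoint.z + dir.z * t };
            return true;
        }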

  • What is causing this behavior with the movement of Pong Ball in 2D? [closed]

    - by thegermanpole
    Edit: after running it through the debugger, it turned out I had the display function set to (x, x)... TIL how to use a debugger.

    I've been trying to teach myself C++ and SDL with the Lazy Foo tutorial and seem to have run into a roadblock. The code below is the movement function of my Dot class, which controls the ball. The ball seems to ignore yvel and moves with xvel toward the bottom right. The code should be pretty readable; the rest of the relevant facts are: all variables are named and constants are in caps; dotrad is the radius of my dot; yvel and xvel are set to 5 in the constructor; the dot is created at x and y equal to 100. When I comment out the x movement block it doesn't move, but if I comment out the y movement block, it keeps going down and to the right.

        void Dot::move()
        {
            // Bounce off the top and bottom edges.
            if (((y + yvel + dotrad) <= SCREEN_HEIGHT) && (0 <= (y - dotrad + yvel))) {
                y += yvel;
            } else {
                yvel = -1 * yvel;
            }
            // Bounce off the left and right edges.
            if (((x + xvel + dotrad) <= SCREEN_WIDTH) && (0 <= (x - dotrad + xvel))) {
                x += xvel;
            } else {
                xvel = -1 * xvel;
            }
        }

    Read the article

  • Should I draw directly on CCLayer or CCSprite?

    - by einverne
    I am a little confused in my cocos2d-x C++ project. I want to draw lines following the user's finger touches. Here is the layout of the CCScene: there are two squares on the screen. I want to show an animation in the first square and let the second one be drawn on with lines from the user's touches. Right now these two squares are CCSprites, and I can already draw dots in the second one on the CCLayer. But I am a little confused whether I should draw the lines on the sprite or on the layer, or whether there are other ways to organize the code. (A sketch of one way to structure this follows this entry.)

    Read the article
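
    One way to keep the question from mattering much is to give the user-drawn lines their own node: a small "canvas" object, added as a child of the layer and positioned over the second square, that stores the touch points and draws the segments itself. A framework-agnostic C++ sketch of that idea follows; the Point type is an assumption, and drawSegment is a placeholder for whatever line primitive the framework offers (in cocos2d-x 2.x that would be ccDrawLine called from a node's draw() override, or drawing into a CCRenderTexture if the strokes should persist cheaply).

        #include <cstddef>
        #include <vector>

        struct Point { float x, y; };

        // Dedicated canvas node for user-drawn lines, kept separate from both the
        // background sprite and the layer's game logic.
        class LineCanvas {
        public:
            void onTouchBegan(const Point& p) { strokes_.push_back({ p }); }
            void onTouchMoved(const Point& p) {
                if (!strokes_.empty()) strokes_.back().push_back(p);
            }

            // Called from the node's draw override each frame.
            void draw() const {
                for (const std::vector<Point>& stroke : strokes_) {
                    for (std::size_t i = 1; i < stroke.size(); ++i) {
                        drawSegment(stroke[i - 1], stroke[i]);
                    }
                }
            }

        private:
            // Placeholder for the framework's line-drawing primitive.
            static void drawSegment(const Point& a, const Point& b) { (void)a; (void)b; }
            std::vector<std::vector<Point>> strokes_;   // one polyline per finger stroke
        };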

  • How can I perform a masked erase in SDL2?

    - by Kvisle
    I'm trying to implement some shadow/lighting effects in my 2D project, and I've concluded that if there were an easy way to perform a masked erase on an SDL_Texture, it would make the drawing operations quite cheap. Let's say I have a texture of the part of the level where light is not meant to be rendered. I also have a texture with my "light map"; I want to use this to draw just the omni lights from my light sources. Then I want to use the first texture to 'subtract' the portions of the light map that are not to be rendered in the final scene. Finally, I draw my "light map" texture on top of the scene with additive blending enabled. This sounds like a good theory in my head, but I can't see any functions in the SDL2 API that let me do a masked erase from a texture. Am I overlooking something? Does anything like this exist? (A sketch built from render targets and blend modes follows this entry.)

    Read the article
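
    There is no single masked-erase call in the SDL2 renderer, but the effect can be assembled from a render-target texture and blend modes. The sketch below assumes the mask is stored inverted relative to the question's description (white where light may remain, black where it must be erased), so drawing it with the built-in multiply blend mode SDL_BLENDMODE_MOD zeroes out the masked areas of the light map; it also assumes a renderer created with render-target support. Texture contents and sizes are placeholders.

        #include <SDL.h>

        // Sketch: combine a light map and an inverted mask into one texture, then
        // add the result over the already-drawn scene. lightTarget must be created
        // with SDL_TEXTUREACCESS_TARGET.
        void composeLighting(SDL_Renderer* renderer,
                             SDL_Texture* lightMap,      // omni lights only
                             SDL_Texture* invertedMask,  // white = keep light, black = erase
                             SDL_Texture* lightTarget)   // scratch render target
        {
            // 1) Copy the light map into the scratch render target.
            SDL_SetRenderTarget(renderer, lightTarget);
            SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
            SDL_RenderClear(renderer);
            SDL_RenderCopy(renderer, lightMap, NULL, NULL);

            // 2) Multiply by the mask: black mask pixels zero out the light.
            SDL_SetTextureBlendMode(invertedMask, SDL_BLENDMODE_MOD);
            SDL_RenderCopy(renderer, invertedMask, NULL, NULL);

            // 3) Back to the default target: the scene is drawn elsewhere, then the
            //    masked light map is added on top.
            SDL_SetRenderTarget(renderer, NULL);
            SDL_SetTextureBlendMode(lightTarget, SDL_BLENDMODE_ADD);
            SDL_RenderCopy(renderer, lightTarget, NULL, NULL);
        }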

  • How do I rotate a sprite with CCBezierTo in cocos2d-x?

    - by user1609578
    In cocos2d-x, I move a sprite with CCBezierTo like this:

        // configs used for the two CCBezierTo actions
        bezier.controlPoint_1 = ccp(m_fish->getPositionX() + 200, visibleSize.height/2 + 300);
        bezier.controlPoint_2 = ccp(m_fish->getPositionX() + 400, visibleSize.height/2 - 300);
        bezier.endPosition    = ccp(m_fish->getPositionX() + 600, visibleSize.height/2);

        bezier1.controlPoint_1 = ccp(m_fish->getPositionX() + 800, visibleSize.height/2 + 300);
        bezier1.controlPoint_2 = ccp(m_fish->getPositionX() + 1000, visibleSize.height/2 - 300);
        bezier1.endPosition    = ccp(m_fish->getPositionX() + 1200, visibleSize.height/2);

        bezierForward = CCBezierTo::create(6, bezier);
        nextBezier    = CCBezierTo::create(6, bezier1);

        m_fish->runAction(CCSequence::create(bezierForward, nextBezier, NULL));

    How can I make my sprite rotate while moving it with CCBezierTo? (A sketch of orienting the sprite along its path follows this entry.)

    Read the article
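
    If the goal is a fixed spin, a CCRotateBy can simply be run in parallel with the bezier by wrapping both in a CCSpawn. If the goal is for the sprite to face along the curve, one approach that needs no extra actions is to derive the rotation from the movement delta each frame (e.g. from a scheduled update). A framework-agnostic C++ sketch follows; the degree conversion sign and any art-facing offset are assumptions to adjust, since cocos2d-x treats positive rotation as clockwise.

        #include <cmath>

        struct Vec2 { float x, y; };

        // Call once per frame with the sprite's current position; returns the rotation
        // (in degrees) that points the sprite along its motion, or the previous value
        // if the sprite has not moved this frame.
        class FaceAlongPath {
        public:
            float update(const Vec2& currentPos) {
                const float dx = currentPos.x - lastPos_.x;
                const float dy = currentPos.y - lastPos_.y;
                lastPos_ = currentPos;
                if (dx * dx + dy * dy > 1e-6f) {
                    // atan2 gives a counter-clockwise angle in radians; negate to get
                    // clockwise degrees (assumed convention for cocos2d-x rotations).
                    lastRotation_ = -std::atan2(dy, dx) * 180.0f / 3.14159265f;
                }
                return lastRotation_;
            }
        private:
            Vec2 lastPos_ = { 0.0f, 0.0f };
            float lastRotation_ = 0.0f;
        };

        // Usage idea inside a scheduled update:
        //     m_fish->setRotation(facer.update({ m_fish->getPositionX(), m_fish->getPositionY() }));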

  • Is there a good reason I shouldn't use a java applet for a game?

    - by ryeguy
    I want to make a multiplayer browser-based game. The nice thing about using an applet is that I can write the client and the server in the same language (Java/Clojure/Scala/etc.). I know there's HTML5 and JavaScript, but server-side JavaScript isn't as mature as the JVM platform, and browser support is still kind of flaky. Applets don't seem to be widely used (except for RuneScape), but is there a reason they're unsuitable, or is it just because of the bad reputation they developed in their infancy?

    Read the article

  • Why does this exported cube have too many vertices?

    - by Joewsh
    I'm trying to export md5mesh models. Just as a test, I decided to export a simple cube (i.e. with 8 vertices). When I opened the .md5mesh file, it lists the following:

        numverts 24
        numtris 12
        numweights 24

    Obviously the number of triangles makes sense: 6 faces * 2 after triangulation = 12. The model only has one bone, so it also makes sense that there is one weight for each vertex. The question is, though: why is the file listing 24 vertices? Is the problem the exporter, or is this normal for md5mesh files? Is it something you have to rectify when you come to parsing the file in the engine? I don't want to parse or draw duplicated vertices without reason. I'm guessing it's something to do with shading and normals; is it a case of listing each vertex three times, once for each facing normal?

    Read the article

  • What technology to use for turn-based game server? [closed]

    - by mekanikus
    If I create a mobile turn-based game that supports 2 up to 6 players per match, I expect the server to handle on the order of hundreds of games, and I'm aiming for something free and inexpensive. What software and hardware would be enough for me? I've found game server engines such as Photon, but is that too much for a simple game? I'm thinking of using a REST approach with the F3 PHP framework and MySQL. Will that be adequate? Will a single physical server be enough? What hardware would you recommend for the server?

    Read the article

  • Drawing "Stenciled" Sprites and making them glow

    - by Code Assassin
    Currently, in my game, I'm not using XNA's SpriteBatch to render anything (I am using Farseer Physics' debug view), and I was wondering how I would render something like this using only XNA. My second question is: once I have drawn these stenciled sprites, how would I give the "stenciled" lines a glow effect? I haven't done anything like this before, so it is a very confusing experience for me. Any pointers?

    Read the article

  • FBO rendering different result between Galaxy S2 and S3

    - by BruceJones
    I'm working on a pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. It proceeds like so:

    1. Bind texture A to the framebuffer.
    2. Draw the balls.
    3. Bind texture B to the framebuffer.
    4. Draw texture A on a fullscreen quad using the fade shader.
    5. Bind the screen as the framebuffer.
    6. Draw texture B using the normal textured-quad shader.

    Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. The fade shader:

        private final String fragmentShaderCode =
                "precision highp float;" +
                "uniform sampler2D u_Texture;" +
                "varying vec2 v_TexCoordinate;" +
                "vec4 color;" +
                "void main(void)" +
                "{" +
                "    color = texture2D(u_Texture, v_TexCoordinate);" +
                "    color.a *= 0.8;" +
                "    gl_FragColor = color;" +
                "}";

    This works fine on the Samsung Galaxy S3/Note 2, but it causes a strange effect and doesn't work on the Galaxy S2 or Note 1. See the pictures (Galaxy S3/Note 2 vs Galaxy S2/Note). Can anyone explain the difference?

    Read the article
