Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.


  • Error when updating enumerated value?

    - by igrad
    Once upon a time, there was a Player class (simplified version):

        enum animState { RUNNING, JUMPING, FALLING, IDLING };

        class Player
        {
        public:
            Player(int x, int y);
            void handle();
            void show();
            ~Player();
        private:
            int m_x;
            int m_y;
            animState playerAnimState;
        }

    There was also a "handle" member function, which took care of all movement and collisions for the player:

        #include "player.h"

        void Player::handle()
        {
            if (/* player presses 'D' key */)
            {
                m_x++;
                playerAnimState = RUNNING;
            }
            // Other stuff that is just there to look nice

    Through lots of experimentation with "//" and "/**/", I've found that I consistently get an error at "playerAnimState = RUNNING". Have I broken some enumeration rule? Does my laptop really suck that bad? I hate to post a "fix my code for me" question, but I'm not very seasoned with enums.

    Read the article

  • Communication between Box2D and libGDX Stage (Scene2D) running in separate threads

    - by atok
    I'm making a physics-based 2D game using libGDX and Box2D. I want to move the execution of the simulation out of the render thread. I use immutable messages and a BlockingQueue to pass information about player actions to the physics thread, where Box2D applies forces and runs a frame of the simulation. In the next step I would like to sync the changes back and update the Scene2D Actors accordingly. Making an immutable copy of the state of the game world and sending it back using Gdx.app.postRunnable() is one option, but it seems inefficient. Is there any other option?
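
    One way to avoid queuing a Runnable per frame is to publish an immutable snapshot through an AtomicReference that the render thread polls. A minimal sketch, assuming a single dynamic body; BodySnapshot and PhysicsBridge are hypothetical names, not from the question:

        import java.util.concurrent.atomic.AtomicReference;

        // The physics thread publishes an immutable snapshot after each step;
        // the render thread polls it before drawing. No locking is needed
        // because a snapshot is never mutated after publication.
        final class BodySnapshot {
            final float x, y, angle;
            BodySnapshot(float x, float y, float angle) {
                this.x = x; this.y = y; this.angle = angle;
            }
        }

        final class PhysicsBridge {
            private final AtomicReference<BodySnapshot> latest = new AtomicReference<>();

            // Called on the physics thread after world.step(...)
            void publish(BodySnapshot snapshot) { latest.set(snapshot); }

            // Called on the render thread; returns null if nothing new yet
            BodySnapshot poll() { return latest.getAndSet(null); }
        }

    The render thread calls poll() at the top of render() and, on a non-null result, updates the Actors; the physics thread never blocks, and no per-frame Runnable is allocated.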

    Read the article

  • OpenGL: Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer in order to work on it there (apply special filters). The result is then considered a "new texture", which is later displayed. This works fine, except when the texture contains transparent or semi-transparent parts. My current guess is that, within the render buffer, the texture is "merged" with a kind of grey background, which obviously affects the R, G, B color components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are "tainted" by the grey background.
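
    If the offscreen pass uses ordinary alpha blending, two settings commonly produce exactly this grey fringe: the buffer not being cleared to transparent, and destination alpha being mangled by the default blend function. A hedged sketch in Android Java / GLES 2 terms (the same calls exist in desktop GL; the actual setup in the question may differ):

        import android.opengl.GLES20;

        // Two things keep alpha intact in an offscreen buffer. First, clear it
        // to transparent black, not grey. Second, blend the alpha channel with
        // different factors than the color channels, so destination alpha ends
        // up as correct coverage instead of being squared by SRC_ALPHA.
        public final class OffscreenBlend {
            static void setUp() {
                GLES20.glClearColor(0f, 0f, 0f, 0f);   // transparent, not grey
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                GLES20.glEnable(GLES20.GL_BLEND);
                GLES20.glBlendFuncSeparate(
                        GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA,  // color
                        GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);       // alpha
            }
        }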

    Read the article

  • From release to business

    - by geneotech
    So let's say that I've finished programming a simple indie MMO game similar to Tibia. I've got a stable server application that is ready to launch, a tested, bug-free client application that is ready to play, and the game's official website (ready to host) with a payment system and a client that is free to download. Let's say none of them break copyright laws, and no matter how impossible it sounds, let's for now say it's true. My game divides accounts into two groups: free and premium. Someone with a premium account is granted access to all game features, which of course need server authorisation to work properly. Let's say the premium account can be bought on the website for a fixed price per month. Free accounts mean that everyone can actually play, but without paying you get limited access; this is what the mentioned payment system will be for. Well, I'm a complete novice at these business issues, so in short: what, in terms of law, are the steps from here to the state where my game earns money in a fully legal way? Also, is there, for example, some kind of verification that the game gives users what it actually offers when they pay on its website? I live in Europe, if that changes anything.

    Read the article

  • Assigning a different texture based on picking (XNA)

    - by Thomas Carmichael
    I'm making a game using XNA. I have some simple objects like a cube and a sphere, and I would like to change the texture of one face of these objects based on picking. That is, when the cursor is over one face, it turns red. The only way I've seen to do this is to overload the content processor, as here: http://xbox.create.msdn.com/en-US/education/catalog/sample/picking_triangle, but it seems like it shouldn't be this complicated. I'm using .x models, and would like to be able to implement this for more complex models beyond cubes/spheres/etc. in the future. Is this the best/only way to go about it? I'll figure it out if that's what's necessary, but it seems like there should be a simpler way to assign a different texture to a face than what I've seen; I just don't know what it is.

    Read the article

  • Scaling sprite velocity / co-ordinates in Android

    - by user22241
    I'm trying to find the answer to a question that I've had for a long time, but am having trouble finding it! I hope someone can help :-) I'm trying to find information on how to scale sprite velocity / movement / co-ordinates. What I mean is: how do I get a sprite to move at the same speed relative to the screen size / DPI, so that it takes the same amount of real time to get from one side of the screen to the other? All of the posts about sprite scaling that I can find on the various forums relate to the size of the sprite. That part I'm OK with so far; it's just that when I move a sprite, it gets there at a different speed depending on the DPI / resolution of the device. I hope I'm making sense. This is the code I have so far; instead of using explicit amounts, like 1, I'm using something like the following:

        platSpeedFloat = (1 * (dpi / 160)); // Use '1' so on an MDPI screen, the sprite will move by 1 physical pixel

    Then basically what I'm doing is something like this (all variables previously declared):

        platSpeedSave += platSpeedFloat;  // Add the platSpeedFloat value to the current platSpeedSave value
        platSpeed = (int) platSpeedSave;  // Cast to int so it can be checked in the following statement
        if (platSpeed == platSpeedSave)   // Check the casted int value against the float value stored previously
        {
            floorY = floorY - platSpeed;  // If they match then change the Y value
            platSpeedSave = 0;            // Reset
        }

    I'd be grateful if someone could assist. The above doesn't seem to work: the sprite moves 'faster' on lower-DPI screens. Thanks
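
    For what it's worth, a common alternative is to express speed in density-independent pixels per second and scale by frame time, rather than accumulating per-frame pixel increments and resetting on an integer match. A minimal sketch with hypothetical names (Platform and SPEED_DP_PER_SEC are mine, not from the code above):

        import android.util.DisplayMetrics;

        // Speed is declared in dp per second, converted to physical pixels once
        // using the device density, and scaled by the frame's elapsed time. The
        // sprite then covers the same physical distance per second on any DPI.
        public class Platform {
            private static final float SPEED_DP_PER_SEC = 60f; // tune to taste
            private float y;             // keep position as a float, round only when drawing
            private final float speedPx;

            public Platform(DisplayMetrics metrics) {
                this.speedPx = SPEED_DP_PER_SEC * metrics.density; // dp -> px
            }

            public void update(float deltaSeconds) {
                y -= speedPx * deltaSeconds; // fractional movement, no per-frame int rounding
            }

            public int drawY() { return Math.round(y); }
        }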

    Read the article

  • Why does my FPS drop gradually over time?

    - by mmankt
    I'm working on this game: yt alpha preview. I ran into a huge game-breaking problem: after 10-15 minutes of gameplay the FPS drops from 60 to 30 and is very unstable. I'm using tons of physics and particles; I'm deleting and nulling everything I can after it's supposed to be removed, I remove objects from vectors, etc. The memory usage is stable at around 150 MB, so a leak is unlikely (or invisible?). After a round ends and I delete everything, I play a new round and performance is still terrible. I spent two days trying to figure this out and I just can't fix it. Maybe I'm missing something? I know it's hard to help with no code, but I would just have to post my whole source.
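
    A cheap way to narrow this down is to time each subsystem per frame and log the totals once a second; if physics or particles grow over 10-15 minutes while memory stays flat, the leak is in object counts (bodies, contact listeners, live particles) rather than bytes. A minimal sketch (FrameProfiler is a hypothetical helper, not from the project):

        // Wrap each subsystem call, e.g. profiler.time("physics", () -> world.step(...)),
        // then call maybeReport() once per frame and watch which label creeps up.
        public class FrameProfiler {
            private final java.util.LinkedHashMap<String, Long> totals = new java.util.LinkedHashMap<>();
            private long lastReport = System.nanoTime();

            public void time(String label, Runnable section) {
                long t0 = System.nanoTime();
                section.run();
                totals.merge(label, System.nanoTime() - t0, Long::sum);
            }

            public void maybeReport() {
                long now = System.nanoTime();
                if (now - lastReport >= 1_000_000_000L) {
                    System.out.println(totals); // replace with your logger
                    totals.clear();
                    lastReport = now;
                }
            }
        }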

    Read the article

  • Application window as polygon texture?

    - by nekome
    Is there a way, or method, to have some application rendered as a texture on a polygon in a 3D scene, and also have full interactivity with it? I'm talking about the Windows platform, and maybe OpenGL, but I guess it doesn't matter whether it's OGL or DX. For example: I run Calculator using WINAPI functions (preferably hidden, not showing on the desktop) and I want to render it inside a 3D scene on some polygon, but still be able to type or click buttons and have it respond. My idea for realizing this is to have WINAPI take a screenshot (or render it to memory if possible) of the Calculator and pass it to OpenGL as a texture each frame (I'm experimenting with SDL through pygame), and, for mouse interactivity, to use coordinate translation to calculate where on the application window the action would land, then use WINAPI functions such as SetCursorPos to set the cursor, and others to simulate a click or something else. I haven't found any tutorials on a topic similar to this one. Am I on the right track? Is there a better way to do this, if it's possible at all?

    Read the article

  • OpenGL, objects disappear at a certain z distance

    - by smoth190
    I'm writing a managed OpenGL library in C++, and I'm having a pretty annoying problem. Whenever I set an object's position to -2.0 or lower, the object disappears. And at distances 0 through -1.9, it doesn't appear to move away from the camera. I have a world matrix (which is multiplied by the object's position to move it), a view matrix (which is just the identity matrix currently), and a projection matrix, which is set up with GLM's glm::perspective method like this:

        FOV: 45.0f
        Aspect ratio: 1
        zNear: 0.1f
        zFar: 100.0f

    The order the matrices are multiplied by in the shader is world, view, projection, then position. I can move the object along the X and Y axes perfectly fine. I have depth testing enabled, using GL_LEQUAL. I can change the actual vertex positions to anything I want, and they move away from the camera or towards it perfectly fine. It just seems to be the world matrix acting up. I'm using glm::mat4 for the world matrix, and glm::vec3 for positions. What's going on here? I'm also using OpenGL 3.1, GLSL version 140 (1.4?).

    Read the article

  • Wait for Unity 4.3? [on hold]

    - by RoarG
    We are currently in pre-production on a 2D game in Unity, but with the arrival of Unity 4.3 around the corner we are contemplating waiting for the 4.3 release instead of starting on 4.2, mainly since 4.3 gets native 2D support. Is this something to be worried about? Would we need to start over to make use of the advantages that 4.3 brings, or does it matter at all? What benefits does one get in Unity 4.3 when developing a 2D game, and is it a lot of tweaking and rework if we start the project before the 4.3 release?

    Read the article

  • Which is better: a computer science degree or a software engineering degree?

    - by Shadow wolf
    I'm asking here since you all have experience in or around game programming, which is what I want to do, and I'm trying to find as many options as I can before my senior year, which is next year. Do you have any opinions on which of the two would give me a better education in programming? Please don't bring up anything other than the two degrees; I know there are game programming degrees out there, but I'd like to see which of these two would provide the best alternative.

    Read the article

  • What are the starting games I need to make? [Best steps for a beginner game developer?] [closed]

    - by Man With Steel Nerves...
    Possible Duplicate: What are good games to "earn your wings" with? Hi, I'm new to creating games; previously I had only done porting. I need some suggestions for making a game. What are the basic game mechanics I should start with? Should I write a Tic-Tac-Toe game? That actually seems very basic to me. I'm totally confused about where to start. I'd like to create big games, but after starting I feel the game is too heavy to handle. Can anyone list the basic needs of a gameplay programmer? I don't mind using any platform (Flash, C++, Objective-C), but I need to know what game logic I should know before I start a big game.

    Read the article

  • How can I efficiently create/store/implement animations as I add to my game?

    - by nickbadal
    My game's characters are made up of different parts (head/body/legs/etc.) and whatever items they have equipped. As I'm creating the animation system for my game, I want to anticipate a large number of combinations of different pieces for each character. Originally, I had planned on having a frame-by-frame animation for each piece for each animation, and then layering them to combine them into a character, but this seems like it would be a lot of work for my artist, and the memory/disk size would start to add up as well, since we would need a sprite for every frame of every customization of every piece, in every animation, for every character. What efficient ways are there to create/implement these animations as we add more and more configurations to our game?
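
    One widely used layout for this is to store one frame strip per part per animation and compose the character at draw time, so cost grows with parts plus items rather than with their combinations. A minimal sketch with hypothetical names (FrameStrip, LayeredCharacter; the actual draw call is engine-specific and stubbed out):

        import java.util.ArrayList;
        import java.util.List;

        // One FrameStrip per part per animation; a character is an ordered list
        // of strips advanced by the same clock and drawn back-to-front. Adding
        // an item adds one strip per animation, not a redraw of every combination.
        class FrameStrip {
            final int[] frames;        // indices into a shared sprite sheet
            final float frameDuration; // seconds per frame

            FrameStrip(int[] frames, float frameDuration) {
                this.frames = frames;
                this.frameDuration = frameDuration;
            }

            int frameAt(float time) {
                return frames[(int) (time / frameDuration) % frames.length];
            }
        }

        class LayeredCharacter {
            private final List<FrameStrip> layers = new ArrayList<>(); // legs, body, head, items...
            private float clock;

            void addLayer(FrameStrip strip) { layers.add(strip); }

            void update(float delta) { clock += delta; }

            void draw() {
                for (FrameStrip layer : layers) {           // back-to-front
                    int frame = layer.frameAt(clock);
                    // drawSprite(frame, x, y); engine-specific, omitted here
                }
            }
        }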

    Read the article

  • How to properly code in Unity? [on hold]

    - by Vincent B.
    I'm fairly new to Unity (though I've touched it and made a few prototypes with it) and I'd like to know how I'm supposed to work with it. I'm a programming student, so I'm used to C/C++ with SDL/SFML, writing code and only using input/graphics/network libs. I followed a few Unity guides and they were much more about drag & drop on scenes and a bit of scripting to activate it all, which disturbed me. So I found a way to use only one GameObject and a singleton to launch code and display stuff (for 2D games at least). At the end of the day I make games without using "Instantiate" or such at all. Is that the right way? Am I supposed to do this? How populated are your scenes (in a professional environment)? When should I stop coding and start using the editor?

    Read the article

  • Why are prefab customizations being applied to only a single character? [on hold]

    - by m0rgul
    I've built my (networked) game to the point where I have a room, some characters and a character customization screen. In the character customization screen I get to choose a gender, choose from three different wardrobe options, and assign custom colors to these wardrobe items. Then I use a custom object to store these options, serialize them, store them in PlayerPrefs, load the next scene, and apply them to my chosen character in that scene. Then I go and spawn some more characters, customize them, etc. The problem is that my character is always customized, but all the other characters in the scene are default copies of the prefab. The prefabs themselves are a generic male and a generic female, both with three different wardrobes to choose from (included in the prefab). When I spawn my character, I go to the saved options in PlayerPrefs, destroy the two discarded wardrobes, apply color to my chosen wardrobe and then spawn the character. How can I make the other characters also show up in their customized form rather than as the generic prefab (which shows all three wardrobes at the same time)?

    Read the article

  • (Android) How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android, in Java. Afterwards I want to overlay this texture full-screen over my game. In theory, this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];
        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer
        gl.glGenTextures(1, renderTex, 0);           // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);
        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);
        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. When rendering this texture fullscreen, with a light mask (size 200x200) placed at (0, 0) and at (texW/2, texH/2), you can see that the coordinate system doesn't seem to start at (0,0): that light overlaps the screen edge, and the images are not drawn as squares (my light-cone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
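
    A detail worth checking here: binding the FBO does not change the viewport or the projection, so drawing still uses the screen's coordinate system unless both are reset to the texture's size. A hedged sketch of that reset (method and parameter names are mine; gl and gl11ep are the interfaces used in the question):

        import javax.microedition.khronos.opengles.GL10;
        import javax.microedition.khronos.opengles.GL11ExtensionPack;

        // Size the viewport and an orthographic projection to the texture so
        // that (0,0) is the texture's corner and 1 unit equals 1 texel; then
        // restore the on-screen viewport afterwards.
        public final class FboHelper {
            static void beginOffscreen(GL10 gl, GL11ExtensionPack gl11ep, int fbo, int texW, int texH) {
                gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fbo);
                gl.glViewport(0, 0, texW, texH);   // match the texture, not the screen
                gl.glMatrixMode(GL10.GL_PROJECTION);
                gl.glLoadIdentity();
                gl.glOrthof(0, texW, 0, texH, -1, 1);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
            }

            static void endOffscreen(GL10 gl, GL11ExtensionPack gl11ep, int screenW, int screenH) {
                gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
                gl.glViewport(0, 0, screenW, screenH); // back to the on-screen viewport
            }
        }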

    Read the article

  • C++ and SDL: How would I add tile layers with my area class as a singleton?

    - by Tony
    I'm trying to wrap my head around how to get this done, if at all possible. So basically I have an Area class, a Map class and a Tile class. My Area class is a singleton, and this is causing some confusion. I'm trying to draw in this order: Background / Tiles / Entities / Overlay Tiles / UI.

        void C_Application::OnRender()
        {
            // Fill the screen black
            SDL_FillRect(Surf_Screen, &Surf_Screen->clip_rect, SDL_MapRGB(Surf_Screen->format, 0x00, 0x00, 0x00));

            // Draw background

            // Draw tiles
            C_Area::AreaControl.OnRender(Surf_Screen, -C_Camera::CameraControl.GetX(), -C_Camera::CameraControl.GetY());

            // Draw entities
            for (unsigned int i = 0; i < C_Entity::EntityList.size(); i++) {
                if (!C_Entity::EntityList[i]) { continue; }
                C_Entity::EntityList[i]->OnRender(Surf_Screen);
            }

            // Draw overlay tiles

            // Draw UI

            // Update the Surf_Screen surface
            SDL_Flip(Surf_Screen);
        }

    Would be nice if someone could give a little input. Thanks.

    Read the article

  • Easy Method to Change Color on UI Elements

    - by A13X
    This isn't a language-specific thing as far as I'm concerned. I was wondering what might be a quick way to change the COLOR of a certain on-screen element, such as a button and its associated text. I would assume there is a trick to building a graphics engine so that individual pixels or groups of sprites can have their colors easily shifted. A lot of game interface buttons and such have this so you know when an event like a click has occurred. Any pseudo code would be helpful. I am working in Android (not the XML fluff), but again, this probably is not a very specific question, just an inquiry on how to go about it.
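
    On Android specifically, one low-effort approach is to tint a single sprite at draw time with a color filter instead of shipping a second bitmap per state. A minimal sketch with a hypothetical class name (TintableButton is not a framework class; most engines expose the same idea as a per-sprite "tint" or "color" multiplied into the texture):

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.graphics.PorterDuff;
        import android.graphics.PorterDuffColorFilter;

        // One bitmap, two looks: a MULTIPLY color filter darkens/tints the
        // sprite when pressed, and null restores the original colors.
        public class TintableButton {
            private final Bitmap sprite;
            private final Paint paint = new Paint();
            private boolean pressed;

            public TintableButton(Bitmap sprite) { this.sprite = sprite; }

            public void setPressed(boolean pressed) { this.pressed = pressed; }

            public void draw(Canvas canvas, float x, float y) {
                paint.setColorFilter(pressed
                        ? new PorterDuffColorFilter(Color.RED, PorterDuff.Mode.MULTIPLY)
                        : null);
                canvas.drawBitmap(sprite, x, y, paint);
            }
        }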

    Read the article

  • Using git (or any version control) regarding migration from one language to another [on hold]

    - by Max Benin
    I'm polishing an old project that I released some years ago; the main purpose is to rearrange some folder structures and port the entire code base from ActionScript to Haxe. All the game features, assets and design will remain the same. I have some doubts about versioning the project in this circumstance. Assuming that the only drastic change is the code migration, is it correct to maintain the new project's changes in the same repository? I was thinking of tagging it something like v1.1, or branching the entire project, but I'm afraid of deviating from established versioning patterns. What's the best-practice way to handle this version control question? Thanks.

    Read the article

  • Algorithm to zoom a plotted function

    - by astinx
    I'm making a game in Android and I need to plot a function. My algorithm is this:

        @Override
        protected void onDraw(Canvas canvas) {
            float e = 0.5f;
            // from the -x axis to +x, evaluate f(x)
            for (float x = -z(canvas.getWidth()); x < z(canvas.getWidth()); x += e) {
                float x1, y1;
                x1 = x;
                y1 = f(x);
                canvas.drawPoint((canvas.getWidth() / 2) + x1, (canvas.getHeight() / 2) - y1, paintWhite);
            }
            super.onDraw(canvas);
        }

    This is how it works: if my function is, for example, f(x) = x^2, then z(x) is sqrt(x). I evaluate each point between -z(x) and z(x) and then I draw them. As you can see, I use half the size of the screen to put the function in the middle of the screen. The problem isn't that the code isn't working; it actually plots the function. But if the screen is 320x480, the function will be really tiny, like in the image below. My question is: how can I change this algorithm to scale the function? BTW, what I'm really trying to do is trace a route along which to animate a sprite later, so simply scaling the image isn't going to help me. I need to change the algorithm in order to draw the same interval in a larger space. Any tip helps, thanks! [Images: current working result / desired result] UPDATE: I will try to explain the problem in more detail. Given the interval [-15...15] (-z(x) to z(x)), I need to map these points onto a bigger interval, [-320...320] (-x to x). For example, when you use some plotting software like this one: although the div that contains the function is 640 px wide, you don't see an interval from -320 to 320; you see an interval from -6 to 6. How can I achieve this?
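
    One way to do that mapping: choose the mathematical interval to display (say [-6, 6]) and derive a pixels-per-unit scale from the canvas width, then multiply every plotted coordinate by it. A hedged sketch of the same onDraw, reusing f() and paintWhite from the code above (xMax is my name for the half-width of the viewed interval):

        @Override
        protected void onDraw(Canvas canvas) {
            float xMax = 6f;                              // show x in [-6, 6]
            float sx = canvas.getWidth() / (2f * xMax);   // pixels per unit, horizontally
            float sy = sx;                                // same scale vertically keeps the aspect ratio
            float cx = canvas.getWidth() / 2f;
            float cy = canvas.getHeight() / 2f;
            float e = 1f / sx;                            // step one screen pixel at a time, in math units
            for (float x = -xMax; x < xMax; x += e) {
                canvas.drawPoint(cx + x * sx, cy - f(x) * sy, paintWhite);
            }
            super.onDraw(canvas);
        }

    The same (sx, sy, cx, cy) transform converts the plotted points into screen coordinates for the sprite's route, so the path scales with the screen as well.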

    Read the article

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth on a few solutions. I'm using C#, MonoGame, and Tiled. For my tile maps, these are the choices I can see in front of me:

      - Store each tile as its own array, each one having 3-4 layers: texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route; my biggest map will probably be 160x120 tiles, though my average map will be about 40x30. The number of tiles might be cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel like I would know how to save maps, make changes, and separate the map into chunks for collision checks.

      - Store the static parts of my tile map in multiple arrays acting as the different layers, and just use entities for anything that isn't static. All of the other tile data, such as collisions, depth, etc., would be stored in their own layers as well, I guess? This way just seems messy to me, though.

    Regardless of which one I choose, I'm also unsure how to plan all of that other tile data. I could write a bunch of code that knows which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code. My other issue is collision. I want to at least support angled collision that slides you around the corners of objects like LTTP does, if not more oddball shapes as well. So do I:

      - Store collision as a flag for binary collision? Could I get this to support angles? Would it be fine to store collision as an integer and have each number represent a certain angle of collision?

      - Store a list of rectangles or other shapes and do collision that way?

    Sorry for the large two-part (three-part?) question. I felt these needed to be asked together, as I believe each choice influences the others. A sketch of one possible layout follows.
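
    For the tile side, a common compromise is plain 2D arrays of tile indices per static layer, with collision keyed by tile id rather than stored per tile instance, so a re-exported tileset only means updating one lookup. A concept sketch (written in Java here; the idea carries straight over to C# and MonoGame, and all names are hypothetical):

        // Static layers are arrays of tileset indices; collision shapes are
        // looked up per tile id, so they are stored once per id, not per tile.
        public class TileMap {
            final int width, height;
            final int[][] ground;  // tile indices into the tileset
            final int[][] overlay; // drawn above entities; -1 = empty

            public TileMap(int width, int height) {
                this.width = width;
                this.height = height;
                this.ground = new int[height][width];
                this.overlay = new int[height][width];
            }

            // 0 = none, 1 = full square, 2..5 = the four 45-degree corner
            // wedges, and so on. In practice, read this from a custom property
            // on the tile in Tiled so the mapping survives tileset re-exports.
            static int collisionShapeFor(int tileId) {
                return 0; // placeholder
            }
        }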

    Read the article

  • What is "top new free" on Google Play?

    - by Lumis
    On the Android Market, i.e. Google Play, there used to be a page with the latest new games, so every game had a chance to get noticed and make its way up, especially if it was good. But now I see a "top new free" page and no longer the latest apps. I don't understand how something can be both "top" and "new". Does anybody know how this works? If there are no longer pages with the very latest uploaded games, then new apps will barely be seen to exist even if they are excellent, and new programmers have very little chance of getting noticed. Any good advice on how to promote a new Android app these days?

    Read the article

  • How do I find the unit vector of another vector in Java?

    - by Shijima
    I'm writing a Java formula based on this tutorial: 2-D elastic collisions without Trigonometry. I am in the section "Elastic Collisions in 2 Dimensions". Part of step 1 says: "Next, find the unit vector of n, which we will call un. This is done by dividing by the magnitude of n." My code below represents the normal vector of the 2 objects (I'm using a simple array to represent the normal vector):

        int[] normal = new int[2];
        normal[0] = ball2.x - ball1.x;
        normal[1] = ball2.y - ball1.y;

    I am unsure what the tutorial means by dividing by the magnitude of n to get un. What is un? How can I calculate it with my Java array?
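
    For reference, "dividing by the magnitude" means scaling n to length 1: un points the same way as n but has length exactly 1. A minimal sketch (note it uses float[] rather than the int[] above, since integer division would truncate the components):

        // |n| = sqrt(nx^2 + ny^2); divide each component by it to get un.
        static float[] unitVector(float nx, float ny) {
            float magnitude = (float) Math.sqrt(nx * nx + ny * ny);
            if (magnitude == 0f) {
                throw new ArithmeticException("zero-length vector has no direction");
            }
            return new float[] { nx / magnitude, ny / magnitude };
        }

        // With the question's values:
        //   float[] un = unitVector(ball2.x - ball1.x, ball2.y - ball1.y);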

    Read the article

  • Rotate triangle so that its tip points in the direction of the point on the screen that we last touched

    - by Sid
    OpenGL ES - Android. Hello all, I am unable to rotate the triangle in such a way that its tip always points to my finger. What I did: constructed a triangle with GL.GL_TRIANGLES and added touch events to it. I can rotate the triangle along my Z-axis successfully, and I even made a vector class for it. What I need: each time I touch the screen, I want to rotate the triangle to face the touch point. Need some help. Here's what I implemented; where am I going wrong? My code:

        public class Graphic2DTriangle {
            private FloatBuffer vertexBuffer;
            private ByteBuffer indexBuffer;
            private float[] vertices = { -1.0f, -1.0f, 0.0f,
                                          2.0f,  0.0f, 0.0f,
                                         -1.0f,  1.0f, 0.0f };
            private byte[] indices = { 0, 1, 2 };

            public Graphic2DTriangle() {
                ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
                vbb.order(ByteOrder.nativeOrder()); // Use native byte order
                vertexBuffer = vbb.asFloatBuffer(); // Convert byte buffer to float
                vertexBuffer.put(vertices);         // Copy data into buffer
                vertexBuffer.position(0);           // Rewind
                // Setup index-array buffer. Indices in byte.
                indexBuffer = ByteBuffer.allocateDirect(indices.length);
                indexBuffer.put(indices);
                indexBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
                gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            }
        }

    My SurfaceView class, where I've done some touch events:

        public class BallThrowGLSurfaceView extends GLSurfaceView {
            MySquareRender _renderObj;
            View _viewObj;
            float oldX, oldY, dX, dY;
            final float TOUCH_SCALE_FACTOR = 0.6f;
            Vector2 touchPos = new Vector2();
            float angle = 0;

            public BallThrowGLSurfaceView(Context context) {
                super(context);
                _renderObj = new MySquareRender(context);
                this.setRenderer(_renderObj);
                this.setRenderMode(RENDERMODE_WHEN_DIRTY);
            }

            @Override
            public boolean onTouchEvent(MotionEvent event) {
                touchPos.x = event.getX();
                touchPos.y = event.getY();
                Log.i("Co-ord", touchPos.x + "hh" + touchPos.y);
                switch (event.getAction()) {
                case MotionEvent.ACTION_MOVE:
                    dX = touchPos.x - oldX;
                    dY = touchPos.y - oldY;
                    if (touchPos.y > getHeight() / 2) { dX = dX * -1; }
                    if (touchPos.x < getWidth() / 2) { dY = dY * -1; }
                    _renderObj.mAngle += (dX + dY) * TOUCH_SCALE_FACTOR;
                    requestRender();
                    Log.i("AngleCo-ord", _renderObj.mAngle + "hh");
                }
                oldX = touchPos.x;
                oldY = touchPos.y;
                Log.i("OldCo-ord", oldX + " hh " + oldY);
                return true;
            }
        }

    Last but not least, my Vector2 class:

        public class Vector2 {
            public static float TO_RADIANS = (1 / 180.0f) * (float) Math.PI;
            public static float TO_DEGREES = (1 / (float) Math.PI) * 180;
            public float x, y;

            public Vector2() { }
            public Vector2(float x, float y) { this.x = x; this.y = y; }
            public Vector2(Vector2 other) { this.x = other.x; this.y = other.y; }

            public Vector2 cpy() { return new Vector2(x, y); }
            public Vector2 set(float x, float y) { this.x = x; this.y = y; return this; }
            public Vector2 set(Vector2 other) { this.x = other.x; this.y = other.y; return this; }
            public Vector2 add(float x, float y) { this.x += x; this.y += y; return this; }
            public Vector2 add(Vector2 other) { this.x += other.x; this.y += other.y; return this; }
            public Vector2 sub(float x, float y) { this.x -= x; this.y -= y; return this; }
            public Vector2 sub(Vector2 other) { this.x -= other.x; this.y -= other.y; return this; }
            public Vector2 mul(float scalar) { this.x *= scalar; this.y *= scalar; return this; }

            public float len() { return FloatMath.sqrt(x * x + y * y); }

            public Vector2 nor() {
                float len = len();
                if (len != 0) { this.x /= len; this.y /= len; }
                return this;
            }

            public float angle() {
                float angle = (float) Math.atan2(y, x) * TO_DEGREES;
                if (angle < 0) angle += 360;
                return angle;
            }

            public Vector2 rotate(float angle) {
                float rad = angle * TO_RADIANS;
                float cos = FloatMath.cos(rad);
                float sin = FloatMath.sin(rad);
                float newX = this.x * cos - this.y * sin;
                float newY = this.x * sin + this.y * cos;
                this.x = newX;
                this.y = newY;
                return this;
            }

            public float dist(Vector2 other) { float distX = this.x - other.x; float distY = this.y - other.y; return FloatMath.sqrt(distX * distX + distY * distY); }
            public float dist(float x, float y) { float distX = this.x - x; float distY = this.y - y; return FloatMath.sqrt(distX * distX + distY * distY); }
            public float distSquared(Vector2 other) { float distX = this.x - other.x; float distY = this.y - other.y; return distX * distX + distY * distY; }
            public float distSquared(float x, float y) { float distX = this.x - x; float distY = this.y - y; return distX * distX + distY * distY; }
        }

    PS: I am able to handle the touch events and can rotate the triangle with the touch of my finger. But I want ONE VERTEX of the triangle to point at my finger's position, whatever that position is.
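
    For the "point at the finger" part, a common approach is to set the angle absolutely from atan2 instead of accumulating drag deltas. A hedged sketch of a replacement onTouchEvent, assuming the triangle stays centred on screen and mAngle is applied as a rotation about the Z axis (e.g. glRotatef(mAngle, 0, 0, 1)):

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            // Work relative to the triangle's centre (the centre of the view
            // here), and flip y because screen y grows downward while GL y
            // grows upward. atan2 then gives the angle the tip must face;
            // the tip points along +x in the model data above, so no offset
            // is needed.
            float dx = event.getX() - getWidth() / 2f;
            float dy = (getHeight() / 2f) - event.getY();
            _renderObj.mAngle = (float) Math.toDegrees(Math.atan2(dy, dx));
            requestRender();
            return true;
        }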

    Read the article

  • Managing game objects/components

    - by Xeon06
    Good day everyone. By far the biggest problem that has always dawned on me when programming games is how to structure my code. It just becomes an incredible mess after a while. The reason for that is that I have no idea how different classes should interact with each other. Let's have an example. Say I have a class Player, a class PlayerInput and a class Map. The Player class contains information such as the location of the player, whereas the PlayerInput class handles changing that location, but by first making sure it's within a walkable area from the Map class. How to structure this? My usual approach is to pass those components as parameters in the constructors of the components that need them, like so:

        var map = new Map();
        var player = new Player();
        var input = new PlayerInput(player, map);

    The problem with that is that it quickly gets messy: when you add new components you have to go through your constructors and update them, and it doesn't work well if you have mirroring references:

        var physics = new Physics(input); // Oops, doesn't work
        var input = new Input(physics);

    So, how do you guys usually manage this? Thanks.
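
    One common way out of both problems (growing constructor lists and mutual references) is a small registry object that everything receives, with lookups deferred to an init step that runs after registration. A minimal sketch in Java with hypothetical names (GameContext is not a framework class):

        import java.util.HashMap;
        import java.util.Map;

        // Components are registered by type, then resolve each other later.
        // Physics <-> Input cycles stop being a construction-order problem
        // because nothing is looked up until everything exists.
        public final class GameContext {
            private final Map<Class<?>, Object> services = new HashMap<>();

            public <T> void register(Class<T> type, T service) { services.put(type, service); }

            public <T> T get(Class<T> type) { return type.cast(services.get(type)); }
        }

        // Usage (hypothetical):
        //   GameContext ctx = new GameContext();
        //   ctx.register(Player.class, new Player());
        //   ctx.register(PlayerInput.class, new PlayerInput());
        //   ctx.register(Physics.class, new Physics());
        //   ctx.get(PlayerInput.class).init(ctx); // looks up Player, Map, Physics here

    Adding a component then means one register call plus a lookup in whatever init needs it, instead of threading a new parameter through every constructor.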

    Read the article
