Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.


  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using? And does it matter whether I'm using DirectX or OpenGL? Edit: The final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. I'll only use w*h vertices in indexing, and I know there is some memory wasted there, but I want to know whether it also affects FPS. (Note that the mesh is only generated once each time you play the game.)
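
    For what it's worth, unused vertices cost only memory as long as the index buffer never references them; the GPU only processes what the indices ask for, in either API. A minimal sketch of that indexing in plain, API-agnostic Java (names hypothetical):

        // Builds triangle-list indices for a w*h cell grid laid over a
        // (w+1)*(h+1) vertex grid; the extra vertices are simply never indexed.
        public class GridIndices {
            static int[] build(int w, int h) {
                int[] indices = new int[w * h * 6];      // two triangles per cell
                int i = 0;
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        int topLeft = y * (w + 1) + x;   // row stride is w+1
                        int topRight = topLeft + 1;
                        int bottomLeft = topLeft + (w + 1);
                        int bottomRight = bottomLeft + 1;
                        indices[i++] = topLeft;
                        indices[i++] = bottomLeft;
                        indices[i++] = topRight;
                        indices[i++] = topRight;
                        indices[i++] = bottomLeft;
                        indices[i++] = bottomRight;
                    }
                }
                return indices;
            }
        }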

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that a lot of frameworks provide the real elapsed frame time (which should average around 16-17 ms) via something like GetTimeElapsedSinceLastFrame, which gives you the wall-clock time. But should we use this information in a basic physics simulation? It looks to me to be a bad idea. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). All the calculations jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time? (Besides stopwatch timing for racing games.) I don't see a need to use real time instead of a fixed 16 ms time step.
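
    The usual compromise is a fixed-timestep loop: wall-clock time only decides how many fixed ticks to run, the simulation itself never sees it, and a clamp keeps a stall (like that virus scanner) from producing a huge jump. A minimal sketch in Java, with a toy simulation standing in for real physics:

        // Fixed-timestep loop sketch: physics advances in fixed 1/60 s ticks
        // no matter how irregular the wall-clock frame times are.
        public class FixedStepLoop {
            static final double STEP = 1.0 / 60.0;   // the "virtual" tick
            static final double MAX_FRAME = 0.25;    // clamp a long stall

            public static void main(String[] args) {
                double accumulator = 0;
                long previous = System.nanoTime();
                double x = 0, vx = 3.0;              // toy simulation state
                for (int frames = 0; frames < 300; frames++) {
                    long now = System.nanoTime();
                    double frame = Math.min((now - previous) / 1e9, MAX_FRAME);
                    previous = now;
                    accumulator += frame;
                    while (accumulator >= STEP) {    // run zero or more fixed ticks
                        x += vx * STEP;              // simulation only ever sees STEP
                        accumulator -= STEP;
                    }
                    System.out.printf("x = %.3f%n", x); // stand-in for rendering
                }
            }
        }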

    Read the article

  • How to prevent overlapping of gunshot sounds when using fast-firing weapons

    - by G3tinmybelly
    So I am now trying to find sounds for my guns, but when I grab a gun sound effect and play it in my game, a lot of the sounds either sound terrible or have this horrible echoing effect, because as a gun shoots, the previous sound is sometimes still playing:

        public void shoot(float x, float y, float direction) {
            if (empty) {
                PlayHUD.message = "No more bullets!";
                return;
            }
            if (reloading) {
                return;
            }
            if (System.currentTimeMillis() - lastShot < fireRate) {
                //AssetsLoader.lmgSound.stop();
                return;
            }
            float dx = (float) (-13 * Math.cos(direction) + 75 * Math.sin(direction));
            float dy = (float) (-14 * -Math.sin(direction) + 75 * Math.cos(direction));
            float dx1 = (float) (-13 * Math.cos(direction) + 75 * Math.sin(direction));
            float dy1 = (float) (-14 * -Math.sin(direction) + 75 * Math.cos(direction));
            PlayState.effects.add(new MuzzleFlashEffect(x + dx1, y + dy1,
                    (float) Math.toDegrees(-direction)));
            PlayState.projectiles.add(new Bullet(this, x + dx, y + dy,
                    (float) (direction + Math.toRadians(MathUtils.random(-accuracy, accuracy)))));
            if (OptionState.soundOn) {
                AssetsLoader.lmgSound.play(OptionState.volume);
            }
            bulletsInClip--;
            lastShot = System.currentTimeMillis();
        }

    This is the code where the sound plays. The sound is played every time this method is called, and it happens so often that there is this terrible echoing. Any idea how to fix this?
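
    One common fix, assuming a libGDX-style Sound where play() returns an instance id, is to cap the number of overlapping voices and steal the oldest one when the cap is hit; a sketch (class name hypothetical):

        import com.badlogic.gdx.audio.Sound;

        // Caps overlapping instances of one gunshot sound: when all voices
        // are in use, the oldest instance is stopped before a new one starts.
        public class ThrottledSound {
            private final Sound sound;
            private final long[] ids;   // ring buffer of live instance ids
            private int next = 0;

            public ThrottledSound(Sound sound, int maxVoices) {
                this.sound = sound;
                this.ids = new long[maxVoices];
            }

            public void play(float volume) {
                sound.stop(ids[next]);          // no-op if that id already finished
                ids[next] = sound.play(volume);
                next = (next + 1) % ids.length;
            }
        }

    Randomizing the pitch slightly on each shot (libGDX's play(volume, pitch, pan) overload) also helps break up the phasing effect of identical overlapping samples.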

    Read the article

  • Unit turning in navmesh-based pathfinding

    - by Haddayn
    I'm working on an RTS game, and I'm using navmeshes for unit pathfinding. I know how to find a general path within a navmesh, but how do you determine whether a unit has enough space to turn? I have units of different shapes (mostly rectangles of different dimensions) and with different turning radii. Additionally, some units can turn in place, and some can move in reverse. So, how do I find a path that a unit can follow, considering that it cannot rotate easily?

    Read the article

  • String on a model

    - by alecnash
    I am trying to put a string on a Model and I want it to be dynamic. I did some research and came up with drawing the text on a texture and then setting it on the model. I use something like this:

        public static Texture2D SpriteFontTextToTexture(SpriteFont font, string text,
                                                        Color backgroundColor, Color textColor)
        {
            Size = font.MeasureString(text);
            RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, (int)Size.X, (int)Size.Y);
            GraphicsDevice.SetRenderTarget(renderTarget);
            GraphicsDevice.Clear(Color.Transparent);
            Spritbatch.Begin();
            // have to redo the ColorTexture
            Spritbatch.Draw(ColorTexture.Create(GraphicsDevice, 1024, 1024, backgroundColor),
                            Vector2.Zero, Color.White);
            Spritbatch.DrawString(font, text, Vector2.Zero, textColor);
            Spritbatch.End();
            GraphicsDevice.SetRenderTarget(null);
            return renderTarget;
        }

    When I was working with primitives instead of models, everything worked fine because I set the texture exactly where I wanted, but with the model (a RoundedRect 3D button) it doesn't come out the way I want. Is there a way to have the text centered on only one side?
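
    The centering itself is just measuring the text and offsetting by half the leftover space; in the XNA code above that would mean drawing at (targetSize - font.MeasureString(text)) / 2 into a render target sized to the model face. For illustration, a Java2D sketch of the same idea:

        import java.awt.*;
        import java.awt.image.BufferedImage;

        // Renders text centered into a fixed-size texture image, so the
        // model's UVs for that face can stay unchanged.
        public class TextTexture {
            public static BufferedImage render(String text, Font font, int w, int h,
                                               Color background, Color textColor) {
                BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = img.createGraphics();
                g.setColor(background);
                g.fillRect(0, 0, w, h);
                g.setFont(font);
                g.setColor(textColor);
                FontMetrics fm = g.getFontMetrics();
                int x = (w - fm.stringWidth(text)) / 2;            // center horizontally
                int y = (h - fm.getHeight()) / 2 + fm.getAscent(); // center the baseline
                g.drawString(text, x, y);
                g.dispose();
                return img;
            }
        }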

    Read the article

  • Cube rotation DX10

    - by German
    Well, I'm reading Frank Luna's DirectX 10 book and, while trying to understand the first demo, I found something that's not very clear, at least to me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:

        // Convert Spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from -z.
        float x =  5.0f * sinf(mPhi) * sinf(mTheta);
        float z = -5.0f * sinf(mPhi) * cosf(mTheta);
        float y =  5.0f * cosf(mPhi);

    I mean, the comment says what they do - they convert the spherical coordinates to Cartesian coordinates - but mathematically, why? Why is the x value calculated from the product of the sines of both angles, z from the product of a sine and a cosine, and y from just the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain mathematically why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain what exactly happens in those lines of code, or just give me a link that helps me understand the math. Thanks in advance!
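
    The intuition: the camera sits on a sphere of radius 5. Projecting its position onto the y axis gives r*cos(phi); what remains, r*sin(phi), is the radius of the horizontal circle the camera lies on, and theta splits that between x and z. A small Java check of the formulas (the angle values are arbitrary):

        // Spherical-to-Cartesian sketch: phi measured down from +y, theta
        // counterclockwise from -z, r is the camera's distance from the target.
        public class SphericalDemo {
            public static void main(String[] args) {
                float r = 5.0f, phi = 0.9f, theta = 2.3f;
                // Project onto the y axis: the "height" of the point on the sphere.
                float y = r * (float) Math.cos(phi);
                // What remains is a circle of radius r*sin(phi) in the xz plane;
                // theta then splits that radius between x and z.
                float horizontal = r * (float) Math.sin(phi);
                float x = horizontal * (float) Math.sin(theta);
                float z = -horizontal * (float) Math.cos(theta);
                // Sanity check: the point really lies on the sphere of radius r.
                System.out.println(Math.sqrt(x * x + y * y + z * z)); // ~5.0
            }
        }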

    Read the article

  • Two GUITextures that do not work together

    - by London2423
    I have two GUITextures that move a cube left and right. It's pretty strange, but together they don't work; if I activate only one, it works. To be more specific: if I have the left GUITexture alone in the game, the cube moves left. If I have the right GUITexture activated alone, the cube moves right. All seems fine, I thought, but if I have both of them, the cube moves only right and not left. Where is the mistake? Here is the code inside the cube GameObject for the right move:

        void OnMousedown () {
            transform.position += Vector3.right * Time.deltaTime;
        }

    For the left move:

        void OnMousedown () {
            transform.position += Vector3.left * Time.deltaTime;
        }

    And this is the left GUITexture code:

        // move the cube left
        Cube.GetComponent<Left> ().enabled = true;
        left.transform.position += Vector3.left * Time.deltaTime;

    This is the right GUITexture:

        // move the cube right
        Cube.GetComponent<Left> ().enabled = true;
        right.transform.position += Vector3.right * Time.deltaTime;

    What is the reason for this? I hope someone can help me.

    Read the article

  • UDK - How to make sure a PhysicalMaterial mask actually works?

    - by tomacmuni
    Hello, I have been reading the UDK documentation about physical materials and masks. I have my 1-bit BMP mask and the two physical material assets I want to fire off in the black and white channels. I have applied my material to both a rigid body and a skeletal mesh, and neither apparently uses the mask. If I assign a regular physical material (one that doesn't use a mask), it works fine, but this defeats the point because it gives only one hit reaction. The documentation states that it is possible to extend a class on which we want to use a physical material, based on the KActor class's usage. How do I do that? Here is the quote: "The following properties [i.e., ImpactEffect - particle system to spawn at the point of impact, and ImpactSound - sound to play when an impact occurs] allow you to attach sounds and effects to physical collisions. These only work on classes which support them, which at the moment is only KActor. By looking at the implementation in KActor though, you can add this functionality to other classes (or you can subclass KActor)." Essentially, how do I make sure a PhysicalMaterial mask actually works? What code could be added to a skeletal mesh class, perhaps, to get it going? Any help appreciated.

    Read the article

  • Can I develop a game using C++ and deploy to Xbox 360?

    - by Murphy
    I'm a C# developer and an XNA enthusiast, but I'm really disappointed with the game engines available for XNA. I was using Torque X, which is really good, but GarageGames no longer supports Torque X for XNA 4.1. I searched for other engines, but only SunBurn seemed worth it, and I would have to pay - and I already spent money on Torque. Based on this, I'm thinking about starting to develop in C++. Can I develop with some C++ engines and deploy to Xbox 360?

    Read the article

  • Client/Server game even in solo: any big problem?

    - by Klaim
    I'm making a game whose basic design is strongly built around multiplayer, but which should also provide a really interesting, self-sufficient solo game - a bit like a real-time strategy game. The events and actions taken shouldn't be as massive and immediate as in an FPS, so you can think of the networking as you would an RTS's. It's a PC game targeting Windows, Mac OS X and Linux (Ubuntu & Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over performance. So far I have always assumed that just making the game run as two applications, client & server, even in solo mode, was OK. However, now that I'm starting on the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:

    Pros: the game works only one way, so if I fix a bug it should apply to all game modes, whatever the distance to the server; and basic networking issues would be detected early, including behaviour with installed protection software such as firewalls (I am not a specialist, so this might be wrong).

    Cons: I suppose that even if it should be fast enough, networking client and server on the same computer would still be slower than no networking and passing messages within one process's memory. Maybe debugging would be more difficult? I don't have experience with this case, but so far I assume Visual Studio lets me debug multiple processes, so it shouldn't be really different. Also, remote debugging.

    My question is: is there a big disadvantage that I missed? Or maybe there are advantages that I missed that should encourage me to just continue with client-server-only game sessions?
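
    One middle ground is to hide the transport behind an interface, so a solo session can swap the socket for an in-process queue while keeping the single client-server code path; a minimal Java sketch of the seam (all names hypothetical):

        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        // Hypothetical transport seam: game code only ever sees Channel, so
        // solo mode can use an in-process queue with no branching elsewhere.
        interface Channel {
            void send(byte[] message);
            byte[] poll();                 // null if nothing is pending
        }

        // Solo mode: the "network" is just a queue in the same process.
        class LoopbackChannel implements Channel {
            private final Queue<byte[]> inbox = new ConcurrentLinkedQueue<>();
            public void send(byte[] message) { inbox.add(message); }
            public byte[] poll() { return inbox.poll(); }
        }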

    Read the article

  • Pygame - CollideRect - But How Do They Collide?

    - by Chakotay
    I'm having some trouble figuring out how to handle collisions that occur specifically on the top or bottom of a rect. How can I detect those collisions? Here's a little code snippet to give you an idea of my approach. As it is now, it doesn't matter where the ball collides:

        # The ball has hit one of the paddles. Send it back in another direction.
        if paddleRect.colliderect(ballRect) or paddle2Rect.colliderect(ballRect):
            ballHitPaddle.play()
            if directionOfBall == 'upleft':
                directionOfBall = 'upright'
            elif directionOfBall == 'upright':
                directionOfBall = 'upleft'
            elif directionOfBall == 'downleft':
                directionOfBall = 'downright'
            elif directionOfBall == 'downright':
                directionOfBall = 'downleft'

    Thanks in advance.

    EDIT:

        Paddle rect:
               top
              ____
             |    |
             |    |
             |    |  sides
             |    |
              ----
             bottom

    I need to know if the ball has hit either the top or the bottom.
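
    colliderect only answers yes or no, so a common trick is to compute the overlap depth on each side and treat the smallest one as the side that was hit; a sketch of the idea in Java (rects given pygame-style as x, y, w, h):

        // Decides which face of the paddle the ball rect hit, by comparing
        // penetration depths on each axis: the smallest one is the hit side.
        public class CollisionSide {
            static String side(float bx, float by, float bw, float bh,
                               float px, float py, float pw, float ph) {
                float fromLeft   = (bx + bw) - px;   // ball's right past paddle's left
                float fromRight  = (px + pw) - bx;
                float fromTop    = (by + bh) - py;   // ball's bottom past paddle's top
                float fromBottom = (py + ph) - by;
                float min = Math.min(Math.min(fromLeft, fromRight),
                                     Math.min(fromTop, fromBottom));
                if (min == fromTop)    return "top";
                if (min == fromBottom) return "bottom";
                if (min == fromLeft)   return "left";
                return "right";
            }
        }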

    Read the article

  • The input doesn't recognize that I release the key?

    - by joapet99
    I'm opening a window (a JOptionPane) in response to a collision. However, if the player is holding a key down when the window pops up, the input never registers a key release when the key is let go. I don't think you can just check it with an isReleased function on the input, since the input state is effectively corrupted at that point. Can you help me? This is how I check whether the key is down:

        if (input.isKeyDown(Input.KEY_A) && TestLevel.isFighting == false) {
            if (owner.canMoveLeft) {
                position.x -= speed * delta;
            }
        }

    I am not handling the key release myself, but checking whether the key is down should work - and it doesn't.
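
    A common fix is to treat losing focus (which is what the JOptionPane causes) as "all keys released". If the framework's input state can't be reset directly, you can track key state yourself; a plain-AWT sketch:

        import java.awt.event.*;
        import java.util.HashSet;
        import java.util.Set;

        // Keeps its own key-down set and empties it whenever the window
        // loses focus (e.g. to a dialog), so no key is left "stuck" down.
        public class KeyStateTracker extends KeyAdapter implements FocusListener {
            private final Set<Integer> down = new HashSet<>();

            public boolean isDown(int keyCode) { return down.contains(keyCode); }

            @Override public void keyPressed(KeyEvent e)  { down.add(e.getKeyCode()); }
            @Override public void keyReleased(KeyEvent e) { down.remove(e.getKeyCode()); }
            @Override public void focusGained(FocusEvent e) { }
            @Override public void focusLost(FocusEvent e)   { down.clear(); } // the fix
        }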

    Read the article

  • Cross-platform builds with OGRE3D via CMake. Any tips?

    - by frarees
    I've been trying to compile a simple project for both OS X and Windows using OGRE3D, but I've hit some problems along the way. I'm using CMake to create my platform-specific project files (a VS solution and an Xcode project). Some problems I found: OGRE3D source is distributed in two flavors, Windows sources and UNIX/OSX sources. On OS X, compiling the dependencies (freetype, FreeImage and especially OIS) is such a pain. I don't know how to handle precompiled dependencies (they exist for both Win & Mac). This may sound like a noob question, but I would appreciate some tips on this: resources, forum posts, anything. Does any "cross-platform base project for OGRE3D" exist on the net? It would be really helpful if someone who has already managed to do this could shed some light. By the way, I'm not basing the project on OGRE3D; it's just that it is the biggest library I'm probably using, so I depend a lot on it. Thanks in advance!

    Read the article

  • C# XNA render an entire frame to a texture2d

    - by redcodefinal
    I asked a question here: C# XNA Make rendered screen a texture2d. But I ended up not getting the exact result I was looking for, since I didn't ask the question right. In a game I am writing, I render an extremely large city out of objects; this can cause lag when moving the camera to view things that are off screen. I need a way to render the ENTIRE city, even the parts that are off screen, and make it into a Texture2D. The answer I chose for the last question didn't work entirely right, because it only captures what is on screen, not what is off.
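
    In XNA terms this is a RenderTarget2D sized to the whole city rather than to the backbuffer, drawn once and reused; the practical ceiling is the GPU's maximum texture size, beyond which the city has to be baked in tiles. For illustration, the same render-to-texture idea in libGDX-flavoured Java:

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.GL20;
        import com.badlogic.gdx.graphics.Pixmap;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.glutils.FrameBuffer;

        // Render-to-texture sketch: draw the whole city once into an
        // offscreen buffer of its full size, then reuse it as a texture.
        public class CityBaker {
            public static Texture bake(int cityWidth, int cityHeight, Runnable drawCity) {
                FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888,
                                                  cityWidth, cityHeight, false);
                fbo.begin();                    // redirect rendering to the buffer
                Gdx.gl.glClearColor(0, 0, 0, 0);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                drawCity.run();                 // issue the normal draw calls
                fbo.end();                      // back to the screen
                return fbo.getColorBufferTexture();
            }
        }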

    Read the article

  • Bitmap rotation jitter around pivot

    - by Manderin87
    I am working on an Asteroids clone, and I have the ship graphic loaded as a 96x96 bitmap. When the player rotates the ship, I rotate the bitmap by a degree value (float).

    Rotation function:

        if (m_Matrix == null) {
            m_Matrix = new Matrix();
        } else {
            m_Matrix.reset();
        }
        m_Matrix.setRotate(degree, m_BaseImage.getWidth() / 2, m_BaseImage.getHeight() / 2);
        m_RotatedImage = Bitmap.createBitmap(m_BaseImage, 0, 0,
                m_BaseImage.getWidth(), m_BaseImage.getHeight(), m_Matrix, true);

    Draw function:

        m_Paint.setAntiAlias(true);
        m_Paint.setFilterBitmap(true);
        m_Paint.setDither(true);
        canvas.drawBitmap(m_RotatedImage,
                (int) posX - m_RotatedImage.getWidth() / 2,
                (int) posY - m_RotatedImage.getHeight() / 2, m_Paint);

    When the bitmap is drawn, it jitters slightly around the pivot. Can anyone fix this, or tell me why the bitmap is jittering around the pivot? It needs to be smooth.
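
    A likely culprit: the bounding box of a pre-rotated bitmap changes size with the angle, so centering by the rotated bitmap's dimensions (with int casts on top) shifts the image slightly every frame. Rotating the canvas at draw time around a fixed pivot avoids that; an Android-style sketch:

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Paint;

        // Rotates at draw time around a fixed pivot instead of baking a new
        // rotated bitmap each frame, so the pivot never depends on the
        // angle-dependent size of a pre-rotated bitmap.
        public class ShipRenderer {
            public void draw(Canvas canvas, Bitmap base, float posX, float posY,
                             float degree, Paint paint) {
                canvas.save();
                canvas.rotate(degree, posX, posY);       // pivot at the ship position
                canvas.drawBitmap(base,
                        posX - base.getWidth() / 2f,     // float math, no int casts
                        posY - base.getHeight() / 2f, paint);
                canvas.restore();
            }
        }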

    Read the article

  • Best way to create neon glow line effect in AS3?

    - by Phil
    What's the best way to create a neon glow line effect in Flash AS3 - say, similar to what's going on in the game Gravitron360 on the Xbox 360? Would it be a good idea to create movie clips with plain lines drawn in them and then apply a glow filter to them? Or perhaps just apply the glow filter to the entire layer the movie clips are on? Or draw them manually and create a glow effect by converting the lines to fills and then softening the edges? (That wouldn't blend as well, but would be the cheapest CPU-wise.) Thanks for any help.

    Read the article

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down view over a collection of points and lines in a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is: how do I render a point A(Xp, Yp), given the following details? The offset of the camera POV from the origin of the map is (Xc, Yc), meaning the camera center is positioned on top of that point; if there's a point at (Xc, Yc), it is rendered at the center of the screen. The rotation angle is alpha, and the scale is S. (Read my answer first; I suspect there is a more optimized solution, thanks.) My question is how to include the following improvement: I read in the AS3 Bible book that, regarding ShaderInput, "You can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images." Meaning: if I am performing the same linear function on a lot of items, I can do it all at once if I use shaders correctly, and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0/
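
    For reference, the per-point transform is: translate so the camera is at the origin, rotate by alpha, scale by S, then offset to the screen center. A Java sketch (names hypothetical):

        // Maps a world-space point to screen space for a camera centered on
        // (camX, camY), rotated by alpha and zoomed by scale.
        public class Camera2D {
            static float[] worldToScreen(float px, float py,
                                         float camX, float camY,
                                         float alpha, float scale,
                                         float screenW, float screenH) {
                float dx = px - camX, dy = py - camY;           // camera-relative
                float cos = (float) Math.cos(alpha), sin = (float) Math.sin(alpha);
                float rx = dx * cos - dy * sin;                 // rotate by alpha
                float ry = dx * sin + dy * cos;
                return new float[] { rx * scale + screenW / 2f, // scale, then center
                                     ry * scale + screenH / 2f };
            }
        }

    Since this is one affine transform applied uniformly to every point, it is exactly the kind of per-element linear work that can be pushed into a shader over the whole data set, which is what the Pixel Bender passage is hinting at.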

    Read the article

  • Can a local multiplayer iOS game display differently for each device?

    - by Rahil627
    I've seen games which display different data for two devices, but not more than two. If possible, can it be accomplished using GameKit? EDIT: More specifically, I was thinking local multiplayer via bluetooth or wi-fi on an iOS device. Most games I've seen display the same screen synchronized across all of the devices. I understand games that network across the internet do this, often using a server, but I haven't seen any examples of a 3+ device local multiplayer iOS game. I just want to make sure it wasn't some kind of limitation.

    Read the article

  • How to synchronize the ball in a network pong game?

    - by Thaars
    I'm developing a multiplayer network pong game, my first game ever. The current state: I have the physics engine running with the same configuration on the server and the clients. The player's own paddle movement is predicted and merely confirmed by the authoritative server; if a difference between them is detected, I correct the position on the client by interpolation. The opponent's paddle is also interpolated 200 ms to 100 ms in the past, because the server broadcasts snapshots to each client every 100 ms. So far it works very well, but now I have to simulate the ball, and I have trouble understanding the procedure. I've read Valve's (and many other) articles about fast-paced multiplayer several times and understood their approach. Maybe I can compare my ball to their bullets, but their advantage is that the bullets are not visible. When I have to display the ball, and I see my paddle in the present, the opponent in the past, and the server somewhere in between, how can I synchronize the ball across all instances and ensure that it always gets hit by the paddle, even if the paddle is moving fast? Currently my ball's position is simply set by a server update, so it can happen that the ball bounces back even when the paddle is a few pixels away (because of a delayed server position). Until now I've had no synced clock across the instances. I send a client step index with each update to the server; once the server has done its job, it sends the snapshot back to the clients with the last step index of each client. Then I look up the stored position at the returned step index and compare them. Do I need a common clock to sync the ball? EDIT: I've tried to sync a common clock for the server and all clients with a timestamp, but I think it's better to use my own stepping instead of a timestamp (so I don't need to calculate with the ping and so on - and the timestamp will never be exact). The physics run 60 times per second, and I now use that to keep them synchronized. Is that a good way? When the ball is simulated by each client, the angle after bouncing can differ because of the differing paddle positions (the opponent is 200 ms in the past). When the server sends its ball position, velocity and angle (because it knows the position of each paddle and is authoritative), the ball could be in a very different position because of the different angles after bouncing (since the clients receive the server data after 100 ms). How is it possible to interpolate such a huge difference? I posted this question some days ago on Stack Overflow but got no answer yet; maybe this is the better place for it.
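
    The step-index bookkeeping described above might look something like this sketch: store the predicted ball state per step, and when a snapshot stamped with that step returns, compare and decide whether to rewind and replay (all names hypothetical):

        import java.util.HashMap;
        import java.util.Map;

        // Step-indexed reconciliation sketch: the client records its predicted
        // ball state each simulation step; when the server's snapshot (stamped
        // with the client step it corresponds to) arrives, compare and correct.
        public class BallHistory {
            static class State { float x, y, vx, vy; }

            private final Map<Integer, State> history = new HashMap<>();

            void record(int step, State predicted) { history.put(step, predicted); }

            boolean needsCorrection(int serverStep, State serverState, float epsilon) {
                State predicted = history.remove(serverStep);
                if (predicted == null) return true;           // no record: trust server
                float dx = predicted.x - serverState.x;
                float dy = predicted.y - serverState.y;
                return dx * dx + dy * dy > epsilon * epsilon; // diverged: rewind/replay
            }
        }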

    Read the article

  • Data structures for a 3D array

    - by Smallbro
    Currently I've been using a 3D array for the tiles in my 2D world; the third dimension comes in when moving down into caves and whatnot. That is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that a tile can no longer occupy the same (x, y) space as a tile on a different z level. My current structure means that each block has its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    However, now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I decide to use the same spot on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages.
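
    One way to keep the 2D array's memory savings and still allow several blocks per (x, y) is to make each grid cell a small sparse map keyed by z; a Java sketch:

        import java.util.HashMap;
        import java.util.Map;

        // Sparse z-axis on top of a dense 2D grid: each column stores only the
        // z levels that actually contain a block, so empty depth costs nothing
        // and two blocks can share (x, y) at different z.
        public class BlockMap {
            static class Block { /* tile data */ }

            private final Map<Integer, Block>[][] columns;

            @SuppressWarnings("unchecked")
            public BlockMap(int width, int height) {
                columns = new HashMap[width][height];
            }

            public void set(int x, int y, int z, Block b) {
                if (columns[x][y] == null) columns[x][y] = new HashMap<>();
                columns[x][y].put(z, b);
            }

            public Block get(int x, int y, int z) {
                Map<Integer, Block> column = columns[x][y];
                return column == null ? null : column.get(z);
            }
        }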

    Read the article

  • Java 2D game: how to make a player jump

    - by user2957632
    Hi, I'm making a 2D platformer running game, kind of like Jetpack Joyride, for example, where you are constantly running but there are obstacles and you need to jump over them. I want the character to jump and come back down. Here is some of my code:

        if (listen.ml == true) {
            if (y > 120) {
                jump = true;
                y -= 1;
            }
        }
        if (y == 121 && jump == true) {
            jump = false;
            y += 1;
        }
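
    Moving y by a fixed amount per frame can't produce an arc; the standard approach is a vertical velocity that gravity pulls back down each update. A sketch in Java (the constants are guesses to tune):

        // Jump via velocity plus gravity: a press sets an upward velocity once,
        // gravity accelerates it back down, and landing snaps to the ground
        // and re-enables jumping.
        public class Jumper {
            static final float GROUND_Y = 120f;
            static final float JUMP_VELOCITY = -300f;   // up is negative y here
            static final float GRAVITY = 900f;          // pixels per second^2

            float y = GROUND_Y;
            float velocityY = 0f;
            boolean onGround = true;

            void update(boolean jumpPressed, float delta) {
                if (jumpPressed && onGround) {          // only jump from the ground
                    velocityY = JUMP_VELOCITY;
                    onGround = false;
                }
                velocityY += GRAVITY * delta;           // gravity pulls downward
                y += velocityY * delta;
                if (y >= GROUND_Y) {                    // landed
                    y = GROUND_Y;
                    velocityY = 0f;
                    onGround = true;
                }
            }
        }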

    Read the article

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader which uses several textures to render the final pixel color. The textures are not really textures; they are in fact "input data" to be used in the formula that generates the final color. The problem I've got is that the textures are getting bilinearly filtered, and therefore so is the input data. This results in many unwanted side effects, especially when the final rendered texture is "zoomed" compared to the original resolution. Removing the side effects is a complex task and only results in "average" rendering. I was thinking: well, all my problems seem to come from the default bilinear filtering on this input data. I can't move to GL_NEAREST either, since it would create "blocky" rendering. So I guess the better way to proceed is to be fully in charge of the interpolation. For this to work, I would need the input data at its "natural" resolution (so that means 4 samples) and the relative position between the sampled points. Is that possible, and if so, how? [EDIT] Since I started this question, I found this internet entry, which seems to (mostly) answer my needs: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me, though: the dimensions of the texture must be provided as an argument, and it seems there is no way to "find this information transparently". Adding an argument to the rendering pipeline is unwelcome, since it's not under my responsibility, and it translates into added complexity for others.
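
    (On the extra-argument worry: GLSL 1.30 and later expose textureSize(), which returns a texture's dimensions inside the shader, so on such targets no uniform is needed.) The interpolation itself is straightforward to own once you have the four samples and the fractional position; here is the bilinear formula written out in plain Java over a float grid, where any custom weighting curve can replace the linear weights:

        // The formula a hand-rolled shader filter implements: sample the four
        // surrounding texels and blend by the fractional position. Swap the
        // linear weights for any custom curve to control the interpolation.
        public class Bilinear {
            static float sample(float[][] data, float u, float v) {
                int x0 = (int) Math.floor(u), y0 = (int) Math.floor(v);
                int x1 = Math.min(x0 + 1, data.length - 1);
                int y1 = Math.min(y0 + 1, data[0].length - 1);
                float fx = u - x0, fy = v - y0;      // position between texels
                float top    = data[x0][y0] * (1 - fx) + data[x1][y0] * fx;
                float bottom = data[x0][y1] * (1 - fx) + data[x1][y1] * fx;
                return top * (1 - fy) + bottom * fy;
            }
        }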

    Read the article

  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class that has a property returning the normal of that PolyLine instance, a ConvexPolySprite class that has a List<PolyLine> LineSegments, and a CircleSprite class that has a Center property and a Radius property. I am using a static class for the collision detection method, and I am testing it on a single line segment from Vector2(200, 0) to Vector2(300, 200). The problem is that it detects the collision anywhere along the path of the line, out into space, and I cannot figure out why. Thanks in advance.

        public class PolyLine
        {
            /// <summary>Upper left-hand corner of the owner of this instance.</summary>
            public Vector2 ParentPosition { get; set; }

            /// <summary>Relative start point of the line segment.</summary>
            public Vector2 RelativeStartPoint { get; set; }

            /// <summary>Relative end point of the line segment.</summary>
            public Vector2 RelativeEndPoint { get; set; }

            /// <summary>Absolute position of the starting point of the line segment.</summary>
            public Vector2 AbsoluteStartPoint
            {
                get { return ParentPosition + RelativeStartPoint; }
            }

            /// <summary>Absolute position of the end point of the line segment.</summary>
            public Vector2 AbsoluteEndPoint
            {
                get { return ParentPosition + RelativeEndPoint; }
            }

            public Vector2 NormalizedLeftNormal
            {
                get
                {
                    Vector2 P = AbsoluteEndPoint - AbsoluteStartPoint;
                    P.Normalize();
                    float x = P.X;
                    float y = P.Y;
                    return new Vector2(-y, x);
                }
            }

            /// <summary>Sole ctor.</summary>
            public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd)
            {
                ParentPosition = parentPosition;
                RelativeEndPoint = relEnd;
                RelativeStartPoint = relStart;
            }
        }

        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            var distance = Vector2.Dot(circle.Position - poly.LineSegments[0].AbsoluteEndPoint,
                                       poly.LineSegments[0].NormalizedLeftNormal) + circle.Radius;
            if (distance <= 0)
            {
                return false;
            }
            else
            {
                return true;
            }
        }
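
    The posted test measures only the signed distance from the circle's center to the infinite line through the segment, which is why it reports hits anywhere along the line's extension. A segment-aware test projects the center onto the segment, clamps to the endpoints, and compares the closest point against the radius; a Java sketch of the math:

        // Circle-vs-line-segment test: find the closest point on the segment
        // to the circle's center (clamping to the endpoints), then compare
        // that distance to the radius. Unlike a half-space test, this cannot
        // report hits out past the ends of the segment.
        public class SegmentCircle {
            static boolean collides(float cx, float cy, float r,
                                    float ax, float ay, float bx, float by) {
                float abx = bx - ax, aby = by - ay;
                float acx = cx - ax, acy = cy - ay;
                float abLen2 = abx * abx + aby * aby;
                float t = abLen2 == 0 ? 0 : (acx * abx + acy * aby) / abLen2;
                t = Math.max(0, Math.min(1, t));            // clamp onto the segment
                float px = ax + t * abx, py = ay + t * aby; // closest point
                float dx = cx - px, dy = cy - py;
                return dx * dx + dy * dy <= r * r;
            }
        }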

    Read the article

  • Using Bullet physics engine to find the moment of object contact before penetration

    - by MooMoo
    I would like to use the Bullet physics engine to simulate objects in a 3D world. One of the objects will be moved using the position from a 3D mouse control; I will call it the "mouse object", and any other object in the world "object A". I define the time before the mouse object and object A collide as t-1, and the time at which the mouse object penetrates object A as t. Now there is a rendering problem: when I move the mouse very fast, the mouse object ends up inside object A before object A starts to move. I would like the mouse object to stop right away, attached to object A. Also, if object A moves, the mouse object should follow (stay attached to) object A rather than stop at the first collision. This is what I did: I find the position of the mouse object at times t-1 and t, which I name pos(t-1) and pos(t). The contact time is somewhere between t-1 and t; I name it t_contact, so the contact position (without penetration) between the mouse object and object A is pos(t_contact). I then create multiple mouse objects using the equation pos[n] = pos(t-1) + C * (pos(t) - pos(t-1)), where 0 <= C <= 1. If I choose C = 0.1, 0.2, 0.3, 0.4 ... 1.0, I get pos[n] for 10 values. Then I test collisions for all 10 of these mouse objects and choose the one that separates "no collision" from "collision". I feel this method is super inefficient. I am not sure how other people find the time of contact or the position of contact when object A can move.
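
    The sampling idea can be kept but made much cheaper by bisecting instead of scanning: keep one time known to be separated and one known to be penetrating, and halve the interval each step, so ~10 tests resolve 1/1024 of the motion instead of 1/10. A Java sketch (the overlap test is a stand-in for a Bullet query); note that Bullet also ships its own continuous collision (CCD) settings on rigid bodies, which may avoid hand-rolling this entirely:

        // Bisection for the time of contact between a known-separated time t0
        // and a known-penetrating time t1, both normalized to 0..1 along the
        // mouse object's motion for this frame.
        public class TimeOfContact {
            interface OverlapTest { boolean overlapsAt(float t); }

            static float findContact(OverlapTest test, int iterations) {
                float t0 = 0f;          // known separated
                float t1 = 1f;          // known penetrating
                for (int i = 0; i < iterations; i++) {
                    float mid = 0.5f * (t0 + t1);
                    if (test.overlapsAt(mid)) t1 = mid; else t0 = mid;
                }
                return t0;              // last separated time: place the object here
            }
        }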

    Read the article

  • Drawing map tiles for iPhone game

    - by user17778
    I'm working on a turn-based strategy game for the iPhone that has a hexagon-grid based map in it. I'm in the process of drawing up the actual tiles for the different landscapes (i.e. forest, grassland, etc.) and was wondering what program to draw the tile images in. I would assume Adobe Illustrator since a vector-based image may allow for smooth images even when the user is zoomed in really close. Is this right? Thanks!

    Read the article
