Search Results

Search found 32277 results on 1292 pages for 'module development'.


  • Math > Logic for a Logarithmic Score Meter

    - by oodavid
    I'm trying to implement a score meter whereby I specify a maximum value (say 15,000) and I can render values on it in a logarithmic manner. [The original post includes an ASCII mockup of four meters, filled progressively fuller at 200 pts, 1,000 pts, 5,000 pts and 15,000 pts.] The upper bound needs to be variable, and I need to be able to convert a score to a percentage. Using the above mockup as an example:

        score2pct(15000, 200)   = 0.2
        score2pct(15000, 1000)  = 0.4
        score2pct(15000, 5000)  = 0.8
        score2pct(15000, 15000) = 1

    Does anyone have any pointers for me?
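
    One standard logarithmic mapping gives equal meter width to equal score ratios: pct = log(score / s0) / log(max / s0), where s0 is the score that sits at 0%. A minimal sketch in C#; s0 is an assumption you'd tune, and note that the mockup's four percentages aren't exactly consistent with any single log curve:

        using System;

        static class ScoreMeter
        {
            // Equal score *ratios* occupy equal meter widths; s0 maps to 0, max maps to 1.
            public static double Score2Pct(double max, double score, double s0 = 100)
            {
                if (score <= s0) return 0;
                return Math.Min(Math.Log(score / s0) / Math.Log(max / s0), 1.0);
            }
        }

    With s0 = 100 and max = 15,000 this gives roughly 0.14, 0.46, 0.78 and 1 for the four example scores, reasonably close to the mockup.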


  • Writing to a structured buffer with a compute shader (D3D11)

    - by Vertexwahn
    I have some problems writing to a structured buffer. First I create a structured buffer that is filled with float values from 0 to 99. Afterwards the structured buffer is copied to a CPU-accessible buffer so its contents can be printed to the console. The output is as expected (the numbers 0 to 99 appear on the console). Then I use a compute shader that should change the contents of the structured buffer:

        RWStructuredBuffer<float> Result : register(u0);

        [numthreads(1, 1, 1)]
        void CS_main(uint3 GroupId : SV_GroupID)
        {
            Result[GroupId.x] = GroupId.x * 10;
        }

    But the compute shader does not change the contents of the structured buffer. The source code can be found here (main.cpp): https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/main.cpp?at=default FillCS.hlsl: https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/FillCS.hlsl?at=default


  • Best way to detect if vec3 is between vec3(x) and vec3(y) in glsl

    - by elect
    As titled, I am sampling from a texture, and if the color is roughly gray (between vec3(.8) and vec3(.9)) and a uniform is 1, I need to substitute that color with another one. I am not a GLSL veteran, but I am pretty sure there is a more elegant and compact (not to mention faster) way than this:

        vec3 textureColor = texture(texture0, oUV);
        if (settings.w == 1 &&
            textureColor.r > .8 && textureColor.r < .9 &&
            textureColor.g > .8 && textureColor.g < .9 &&
            textureColor.b > .8 && textureColor.b < .9)


  • SWF file not playing after being published

    - by rsquare
    I'm trying to run the "connector" example that comes bundled with the SmartFoxServer 2X downloads, where it connects to the server and loads the correct configuration file. When I run it in Adobe Flash Professional 5, it runs correctly and connects to the server, but after being published as a SWF movie it doesn't work: it loads the configuration file but can't connect, and gives the error "connection failure: ERROR 2048". This is the example I'm talking about.


  • D3D9 Alpha Blending on the surfaces

    - by Indeera
    I have a surface (OffScreenPlain or RenderTarget, D3DFMT_A8R8G8B8) to which I copy pixels (ARGB) from a third-party function. Before the pixels are copied, the bits are accessed with LockRect. This surface is then StretchRect'ed to the backbuffer, which is also D3DFMT_A8R8G8B8. Surface and backbuffer have different dimensions. Filtering is set to D3DTEXF_NONE. Just after creating the D3D device I set the following render states:

        D3DRS_ALPHABLENDENABLE -> TRUE
        D3DRS_BLENDOP          -> D3DBLENDOP_ADD
        D3DRS_SRCBLEND         -> D3DBLEND_SRCALPHA
        D3DRS_DESTBLEND        -> D3DBLEND_INVSRCALPHA

    But I see no alpha blending happening. I've verified that alpha is specified in the pixels. I've done a simple test by creating a vertex buffer and drawing a triangle (DrawPrimitive), which displays with alpha blending. In this test the surface was StretchRect'ed first and then DrawPrimitive was called; the surface content displays without alpha blending while the triangle displays with alpha blending. What am I missing here? Thanks


  • Is there a way to prevent users from adjusting their gamma correction to 'cheat' their way out of a 'dark' area?

    - by Athix
    In almost every game I've come across that includes a dark situation designed to change the way a user interacts with the environment, there are always some players who turn up their monitor's gamma correction in order to negate the desired effect. Is there a way to prevent users from adjusting their gamma correction to 'cheat' their way out of a challenge? (the darkness) I'd imagine if you could reliably retrieve the current gamma correction of the user's monitor, you could use that to more or less prevent the advantage it would otherwise grant without causing the normal users any inconvenience.


  • Playing part of a sfx audio file in HTML5 using WebAudio

    - by Matthew James Davis
    I have compiled all of my sound effects into one sequenced .ogg file, and I have the start and stop times for each sound effect. How do I play the individual effects? That is, how do I play part of an audio file? More specifically, I've created a dictionary

        {
            'sword_hit': {
                src: 'sfx.ogg',
                start: 265, // ms
                length: 212 // ms
            }
        }

    that my play_sound() function can use to look up 'sword_hit' and play the correct audio file at the correct start time for the correct duration. I simply need to know how to tell the Web Audio API to start playing at start ms and only play for length ms.


  • Best way to do large XNA animations?

    - by Harold
    What's the best way to handle large animations in XNA 4.0? I have created a sprite sheet with each sprite being 250x400 (more of an image than a sprite, but hey ho), and there are approximately 45 frames in the animation. This causes problems for XNA, as it says that the maximum texture size for the Reach profile is 2048. I'd rather not change to HiDef, as I've heard that means your game is less compatible with some computers and systems. Does anyone have any idea what the best thing to do is? The only thing I could come up with is to have a list of textures to flick through, but that's not ideal.
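
    One workaround that stays on the Reach profile (a sketch; the class and frame layout are my own assumptions) is to split the animation across a few sheets that each fit within 2048x2048 and pick the sheet plus source rectangle per frame. With 250x400 frames, one 2048x2048 sheet holds 8 x 5 = 40 frames, so 45 frames need only two sheets:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Assumes frames are laid out left-to-right, top-to-bottom on each sheet.
        class SheetAnimation
        {
            readonly Texture2D[] sheets;   // each sheet no larger than 2048x2048 (Reach limit)
            readonly int frameW, frameH;
            readonly int framesPerSheet;

            public SheetAnimation(Texture2D[] sheets, int frameW, int frameH)
            {
                this.sheets = sheets;
                this.frameW = frameW;
                this.frameH = frameH;
                framesPerSheet = (sheets[0].Width / frameW) * (sheets[0].Height / frameH);
            }

            public void Draw(SpriteBatch sb, int frame, Vector2 position)
            {
                Texture2D sheet = sheets[frame / framesPerSheet];
                int local = frame % framesPerSheet;
                int cols = sheet.Width / frameW;
                Rectangle src = new Rectangle((local % cols) * frameW, (local / cols) * frameH, frameW, frameH);
                sb.Draw(sheet, position, src, Color.White);
            }
        }

    Compared to one texture per frame, this keeps texture switches rare (at most one per 40 frames here) while every sheet stays under the Reach size cap.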


  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size with the formula

        gl_PointSize = in_point_size * some_factor / distance_to_camera

    to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative?

    I was thinking of manually generating a billboard quad for each particle, and I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate, and I would put all of that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for every particle. Because of how glDrawArrays/glDrawElements work, I see no way to optimise this. Do you know of a better way to organise per-particle data? Should I use buffers or vertex arrays, or is there no difference, given that I have to update all the particles' data every frame anyway?

    About the particle simulation: where should it run, on the CPU or on the vertex processor? Something tells me the mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without a particle size limitation, for mobile devices, would be appreciated. (The animation of the camera passing through the particles should look realistic.)


  • Proper way to use a RenderTarget2D to draw multiple textures?

    - by TheBroodian
    In the process of trying to resolve a split-screen issue, I've been trying to use a RenderTarget2D to draw a portion of my scene to a Texture2D, and then again to another Texture2D, but the end result of both Texture2Ds is coming out the same. Can anybody tell me what I'm doing wrong?

        Texture2D camera1Render;
        Texture2D camera2Render;

        GraphicsDevice.SetRenderTarget(RenderTarget);
        GraphicsDevice.Clear(Color.Transparent);
        map.Draw(mapDisplayDevice, Camera1, new Location(0, 0), false);
        camera1Render = RenderTarget;

        GraphicsDevice.Clear(Color.Transparent);
        map.Draw(mapDisplayDevice, Camera2, new Location(0, 0), false);
        camera2Render = RenderTarget;

        SetRenderTarget(null);
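
    A likely culprit, judging from the snippet alone: RenderTarget2D is a reference type, so camera1Render = RenderTarget does not copy the image, it just stores another reference to the same surface; after the second draw, both variables show Camera2's view. A minimal sketch of the usual fix, one target per camera (viewWidth/viewHeight and the field names are assumptions):

        // One surface per camera, created once (e.g. in LoadContent).
        RenderTarget2D camera1Target = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight);
        RenderTarget2D camera2Target = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight);

        // Each view is drawn into its own target.
        GraphicsDevice.SetRenderTarget(camera1Target);
        GraphicsDevice.Clear(Color.Transparent);
        map.Draw(mapDisplayDevice, Camera1, new Location(0, 0), false);

        GraphicsDevice.SetRenderTarget(camera2Target);
        GraphicsDevice.Clear(Color.Transparent);
        map.Draw(mapDisplayDevice, Camera2, new Location(0, 0), false);

        GraphicsDevice.SetRenderTarget(null);
        // RenderTarget2D derives from Texture2D, so both targets can now be
        // drawn to the backbuffer with SpriteBatch, one per screen half.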


  • Having troubles with LibNoise.XNA and generating tileable maps

    - by Jon
    Following up on my previous post, I found a wonderful port of LibNoise for XNA. I've been working with it for about 8 hours straight and I'm tearing my hair out - I just can not get maps to tile, I can't figure out how to do this. Here's my attempt:

        Perlin perlin = new Perlin(1.2, 1.95, 0.56, 12, 2353, QualityMode.Medium);
        RiggedMultifractal rigged = new RiggedMultifractal();
        Add add = new Add(perlin, rigged);

        // Initialize the noise map
        int mapSize = 64;
        this.m_noiseMap = new Noise2D(mapSize, perlin);
        //this.m_noiseMap.GeneratePlanar(0, 1, -1, 1);

        // Generate the textures
        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[0] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        this.m_noiseMap.GeneratePlanar(mapSize, mapSize * 2, mapSize, mapSize * 2);
        this.m_textures[1] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[2] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

    The first and third ones generate fine; they create a Perlin noise map. However the middle one, which I wanted to be a continuation of the first (as per my original post), is just a bunch of static. How exactly do I get this to generate maps that connect to each other, by entering in the mapSize * tile, using the same seed, settings, etc.?
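
    Two things stand out in that middle call (a guess from the snippet alone, assuming the arguments are (left, right, top, bottom) as the other calls suggest). First, the windows aren't adjacent: the first texture samples [-1, 1] while the second jumps to [64, 128]. Second, that second window is 64 noise units wide rendered into a 64-pixel map, one sample per unit, which is why it reads as static. Adjacent tiles need adjacent windows of the same size over the same module and seed, for example:

        // Tile (0, 0): sample the unit window [0,1] x [0,1].
        this.m_noiseMap.GeneratePlanar(0, 1, 0, 1);
        this.m_textures[0] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        // Tile (1, 0): the window immediately to the right, [1,2] x [0,1].
        // Same module, same seed, same window size, so the shared edge at x = 1 lines up.
        this.m_noiseMap.GeneratePlanar(1, 2, 0, 1);
        this.m_textures[1] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

    In general, tile (tx, ty) would sample GeneratePlanar(tx, tx + 1, ty, ty + 1).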


  • Game programming basics under Windows

    - by dreta
    I've been trying to learn some Windows programming using the Win32 API. Now, I'm used to working with the OS layer abstracted away, mostly thanks to libraries like SFML or Allegro. Could you guys help me out and tell me if I'm thinking right here? Is the place for my game loop where I'm reading the messages?

        while (TRUE)
        {
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            else
            {
                // my game loop goes here
            }
        }

    Now the slightly bigger issue, that is, drawing. Do I run my drawing where I normally would, inside the game loop after the game logic? Or do I do it when WM_PAINT is handled, and just call InvalidateRect(hwnd, NULL, TRUE); when I want to draw? This does feel weird: WM_PAINT is a queued message, so I don't know for sure when it'll be handled. So if I wanted to avoid that, do I just get the device handle inside the game loop and only ValidateRect(hwnd, NULL); in the WM_PAINT case (besides the ValidateRect(hwnd, NULL); called after drawing in the game loop)? Actually, now that I think about it, do I even need WM_PAINT in this situation, or can I skip it and let DefWindowProc handle it (does it validate the screen if WM_PAINT isn't processed)? If it matters, I'm setting up my code for OpenGL.


  • Precise Touch Screen Dragging Issue: Trouble Aligning with the Finger due to Different Screen Resolution

    - by David Dimalanta
    Please, I need your help. I'm trying to make a game that drags an image/sprite so that it follows my finger precisely, with no offset. When I try it on the 900x1280 (X = 900, Y = 1280) screen of a Google Nexus 7 tablet, it follows precisely. However, if I test on a phone smaller than 900x1280, my finger and the image won't align properly, although dragging still works. This is the code I used to make a sprite follow my finger, in touchDragged():

        x = ((screenX + Gdx.input.getX()) / 2) - (fruit.width / 2);
        y = ((camera_2.viewportHeight * multiplier) - ((screenY + Gdx.input.getY()) / 2) - (fruit.width / 2));

    This code keeps the finger and the image/sprite together while dragging, but only at 900x1280. You'll be wondering why there's camera_2.viewportHeight in my code; it's there for two reasons: to prevent inverted dragging (e.g. when you swipe downwards with your finger, the sprite moves upwards instead), and as a baseline for reading coordinates, I think. I then added another orthographic camera, named camera_1, with different settings; I recently used it to adjust the falling object in metres per pixel, and it seems effective on its own for smartphones with smaller resolutions. This is what I used:

    show():

        camera_1 = new OrthographicCamera();
        camera_1.viewportHeight = 280; // a smaller viewport height so the object falls faster, reducing the chance of drag force
        camera_1.viewportWidth = 196;  // keep it proportional to the original screen view size as far as possible
        camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
        camera_1.update();

    touchDragged():

        x = ((screenX + (camera_1.viewportWidth / Gdx.input.getX())) / 2) - (fruit.width / 2);
        y = ((camera_1.viewportHeight * multiplier) - ((screenY + (camera_1.viewportHeight / Gdx.input.getY())) / 2) - (fruit.width / 2));

    But instead of the sprite closely following my finger, there is still a space/gap between the sprite (a blueberry, in my case) and the finger, apparently dependent on the screen resolution's coordinates. My expectation isn't met, since I want my finger and the sprite/image to stay close together while dragging, until I release. I need to figure out how to make the sprite closely follow my finger while dragging on all screen sizes, not just one.


  • Markup format or script for data files?

    - by Aaron
    The game I'm designing will be mainly written in a high-level scripting language (leaning towards either Lua or Squirrel) with a C++ core. In addition to scripts, I'm also going to need various data files. Many data files will hold static information, such as graphical assets and monster types. I'd also want to create and update data files at runtime for user information, like option settings and game saves. Can I get away with using plain script files (i.e. .lua or .nut files) for my data files, or is it better to use dedicated markup formats like XML or YAML? If I use script files, loaded separately from my true scripts, then I wouldn't need an extra library to read those files, and scripting languages like Lua have table syntax that lends itself to data definition. On the other hand, I'd have to write my own schema-checking code, and these languages don't seem to support serialization "out of the box" the way the markup-format libraries do.


  • HedgeWars code confusion

    - by BluFire
    I looked at an open-source project (HedgeWars) that was built using several programming languages, such as C++ and Java. While I was looking through the code, I couldn't help noticing that all the math and physics were missing from the Java code. I imported the project file called "SDL-android-project", which is a subfolder of the "android build" and project files. My question is: where are all the math and physics in the code? Do I have to look at the C++ code to see them? I think Hedgewars was originally programmed in C++, but the files confuse me because of their size and the fact that several programming languages are involved.


  • Register Game Object Components in Game Subsystems? (Component-based Game Object design)

    - by topright
    I'm creating a component-based game object system. Some tips:

    - A GameObject is simply a list of Components.
    - There are GameSubsystems, for example rendering, physics, etc. Each GameSubsystem contains pointers to some of the Components. A GameSubsystem is a very powerful and flexible abstraction: it represents any slice (or aspect) of the game world.

    There is a need for a mechanism to register Components in GameSubsystems (when a GameObject is created and composed). There are 4 approaches:

    1. Chain of responsibility pattern. Every Component is offered to every GameSubsystem, and each GameSubsystem decides which Components to register (and how to organize them). For example, GameSubsystemRender can register Renderable Components.
       Pro: Components know nothing about how they are used. Low coupling.
       (a) We can add a new GameSubsystem. For example, a GameSubsystemTitles that registers all ComponentTitles, guarantees that every title is unique, and provides an interface for querying objects by title. Of course, ComponentTitle should not be rewritten or inherited in this case.
       (b) We can reorganize existing GameSubsystems. For example, GameSubsystemAudio, GameSubsystemRender and GameSubsystemParticleEmitter can be merged into a GameSubsystemSpatial (to place all audio, emitter and render Components in the same hierarchy and use parent-relative transforms).
       Con: An every-to-every check, which is very inefficient.
       Con: Subsystems know about Components.
    2. Each Subsystem searches for Components of specific types.
       Pro: Better performance than in Approach 1.
       Con: Subsystems still know about Components.
    3. Each Component registers itself in the GameSubsystem(s). We know at compile time that there is a GameSubsystemRenderer, so ComponentImageRender will call something like GameSubsystemRenderer::register(ComponentRenderBase*).
       Pro: Performance. No unnecessary checks as in Approach 1.
       Con: Components are badly coupled with GameSubsystems.
    4. Mediator pattern. The GameState (which contains the GameSubsystems) can implement registerComponent(Component*).
       Pro: Components and GameSubsystems know nothing about each other.
       Con: In C++ it would look like an ugly and slow typeid switch.

    Questions: Which approach is better and most used in component-based design? What does practice say? Any suggestions about implementing Approach 4 (see the sketch below)? Thank you.
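
    On Approach 4: one way to avoid the typeid switch is to give each aspect a small interface and have the mediator probe each Component against those interfaces once, at registration time. Sketched in C# for brevity (all names here are made up); in C++ the same shape works with one dynamic_cast per aspect interface:

        using System.Collections.Generic;

        // Hypothetical aspect interfaces implemented by concrete Components.
        interface IRenderable { void Render(); }
        interface IPhysical  { void Step(float dt); }

        class GameState
        {
            readonly List<IRenderable> renderables = new List<IRenderable>();
            readonly List<IPhysical>   physicals   = new List<IPhysical>();

            // Mediator: Components and GameSubsystems never reference each other.
            // The cost is one interface probe per aspect, paid once per Component.
            public void RegisterComponent(object component)
            {
                if (component is IRenderable r) renderables.Add(r);
                if (component is IPhysical  p) physicals.Add(p);
            }
        }

    Components stay ignorant of the subsystems, the subsystems only see the aspect interfaces, and the per-Component cost is paid once at composition time rather than every frame.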


  • Best practices of texture size

    - by psal
    I wanted to know how I should determine a good texture size. Currently I always create UV textures that are 1024x1024 px, but if I create, for example, a big house with a 1024 px texture, it looks pretty bad. So, should I create different texture sizes (512, 1024, ...) for different mesh sizes? Or is it better to always make high-resolution textures and then reduce them in the software (i.e. increasing the LODBias setting in UDK reduces the size of the texture)? Thanks for your answer. P.S. Sorry for my English!


  • Problems with moving 2D circle/box collision detection

    - by dario3004
    This is my first game ever and I'm a newbie in computer physics. I've got this code for collision detection, and it works fine for BOTTOM and TOP collisions. It misses collisions with the paddle's edges and corners, so I've (roughly) tried to implement them. This is the main method called for bouncing; it checks whether the ball bounces off a wall, off the top (+ right/left side) or off the bottom (+ right/left side):

        protected void handleBounces(float px, float py) {
            handleWallBounce(px, py);
            if (mBall.y < getHeight() / 4) {
                if (handleRedFastBounce(mRed, px, py)) return;
                if (handleRightSideBounce(mRed, px, py)) return;
                if (handleLeftSideBounce(mRed, px, py)) return;
            }
            if (mBall.y > getHeight() / 4 * 3) {
                if (handleBlueFastBounce(mBlue, px, py)) return;
                if (handleRightSideBounce(mBlue, px, py)) return;
                if (handleLeftSideBounce(mBlue, px, py)) return;
            }
        }

    This is the code for the BOTTOM bounce:

        protected boolean handleRedFastBounce(Paddle paddle, float px, float py) {
            if (mBall.goingUp() == false)
                return false;
            // next position
            tx = mBall.x;
            ty = mBall.y - mBall.getRadius();
            // actual position
            ptx = px;
            pty = py - mBall.getRadius();
            dyp = ty - paddle.getBottom();
            xc = tx + (tx - ptx) * dyp / (ty - pty);
            if (ty < paddle.getBottom() && pty > paddle.getBottom()
                    && xc > paddle.getLeft() && xc < paddle.getRight()) {
                mBall.x = xc;
                mBall.y = paddle.getBottom() + mBall.getRadius();
                mBall.bouncePaddle(paddle);
                playSound(mPaddleSFX);
                increaseDifficulty();
                return true;
            } else
                return false;
        }

    As far as I understood it, it should work like the diagram in the original post. So I tried to write the "left side" and "right side" bounce methods:

        protected boolean handleLeftSideBounce(Paddle paddle, float px, float py) {
            // next position
            tx = mBall.x + mBall.getRadius();
            ty = mBall.y;
            // actual position
            ptx = px + mBall.getRadius();
            pty = py;
            dyp = tx - paddle.getLeft();
            yc = ty + (pty - ty) * dyp / (ptx - tx);
            if (ptx < paddle.getLeft() && tx > paddle.getLeft()) {
                System.out.println("left side bounce1");
                System.out.println("yc: " + yc + " top: " + paddle.getTop() + " bottom: " + paddle.getBottom());
                if (yc > paddle.getTop() && yc < paddle.getBottom()) {
                    System.out.println("left side bounce2");
                    mBall.y = yc;
                    mBall.x = paddle.getLeft() - mBall.getRadius();
                    mBall.bouncePaddle(paddle);
                    playSound(mPaddleSFX);
                    increaseDifficulty();
                    return true;
                }
            }
            return false;
        }

    I think I'm quite near the solution, but I'm having big trouble with the new yc formula. I've tried many versions of it, but since I don't know the theory behind it, I can't adjust it for the Y axis. Since the Y axis is inverted, I even tried this:

        yc = ty - (pty - ty) * dyp / (ptx - tx);

    I tried Googling it, but I can't seem to find a solution. This method also fails when the ball touches a corner, and I don't think it's a good approach anyway, because it only tests one point of the ball, so there will probably be many cases in which the ball won't bounce.
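
    For what it's worth, a common way to handle edges and corners uniformly is to drop the per-edge formulas and instead find the point of the paddle rectangle closest to the ball's centre: if that point is within one radius, the ball hit, and the vector from that point to the centre is the bounce normal. A sketch in C# (the names are mine; it ports directly to the Java-style code above):

        using System;

        static class CircleRect
        {
            // True if a circle at (cx, cy) with radius r overlaps the rectangle;
            // also returns the collision normal, pointing from the rect towards the circle.
            public static bool Intersects(float cx, float cy, float r,
                                          float left, float top, float right, float bottom,
                                          out float nx, out float ny)
            {
                // Closest point of the rectangle to the circle's centre (clamped).
                float px = Math.Max(left, Math.Min(cx, right));
                float py = Math.Max(top, Math.Min(cy, bottom));

                float dx = cx - px;
                float dy = cy - py;
                nx = dx; ny = dy; // un-normalized; normalize before reflecting
                // (If the centre is inside the rectangle, dx = dy = 0; push the
                //  ball out along the axis of smallest overlap in that case.)
                return dx * dx + dy * dy <= r * r;
            }
        }

    Reflecting the velocity about the normalized (nx, ny) then covers top, bottom, sides and corners with one piece of code, instead of a separate xc/yc formula per edge.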


  • Basic procedural generated content works, but how could I do the same in reverse?

    - by andrew
    My 2D world is made up of blocks. At the moment I create a block and assign it a number between 1 and 4. The number assigned to the nth block is always the same (i.e. if the player walks backwards or restarts the game) and is generated in the function below. As shown by an animation in the original post, the colours represent the numbers.

        function generate_data(n)
            math.randomseed(n) -- resets the random so that the 'random' number for n is always the same
            math.random() -- fixes lua random bug
            local no = math.random(4)
            --print(no, n)
            return no
        end

    Now I want to limit the next block's number: a block of 1 will always have a block 2 after it, while block 2 will have either a block 1, 2 or 3 after it, etc. Previously, all the block data was randomly generated up front and then saved; that data was then loaded and used instead of being randomly generated on the fly. Working that way, I could easily specify what the next block would be, and it would be saved for consistency. I have since removed this saving/loading in favour of procedural generation, as I realised that save files would get very big after travelling. Back to the present: while travelling forward (to the right), it is easy to limit what the next block's number will be; I can generate it at the same time as the other data. The problem is that while travelling backwards (to the left), I cannot think of a way to load the previous block so that it is always the same. Does anyone have any ideas on how I could sort this out?
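
    One deterministic option: make block n's number a pure function of n by replaying the constrained choices from a known starting block, so walking left just means recomputing the same chain. A sketch in C# for illustration (the successor table is an assumption standing in for your real rules; the same idea ports directly to Lua with math.randomseed(i)):

        using System;

        static class BlockGen
        {
            // Allowed successors per block type; replace with your real adjacency rules.
            static readonly int[][] Allowed =
            {
                new[] { 2 },       // after 1 always comes 2
                new[] { 1, 2, 3 }, // after 2 comes 1, 2 or 3
                new[] { 2, 3, 4 },
                new[] { 3, 4 },
            };

            // Deterministic: depends only on n, so the same n always yields the same
            // block, whether the player reaches it walking forwards or backwards.
            public static int BlockType(int n)
            {
                int type = 1; // fixed type for block 0
                for (int i = 1; i <= n; i++)
                {
                    var rng = new Random(i); // plays the role of math.randomseed(i)
                    int[] options = Allowed[type - 1];
                    type = options[rng.Next(options.Length)];
                }
                return type;
            }
        }

    Recomputing from zero is O(n) per block, but since the player moves one block at a time, caching the last (n, type) pair, or storing occasional checkpoint values, keeps each step cheap; stepping backwards means recomputing from the nearest checkpoint, because the chain only runs forwards.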


  • Bouncing ball slowing down over time

    - by user46610
    I use Unreal Engine 4 to bounce a ball off walls in a 2D space, but over time the ball gets slower and slower. Movement happens in the ball's Tick function:

        FVector location = GetActorLocation();
        location.X += this->Velocity.X * DeltaSeconds;
        location.Y += this->Velocity.Y * DeltaSeconds;
        SetActorLocation(location, true);

    When a wall gets hit, I get a Hit event with the normal of the collision. This is how I calculate the new velocity of the ball:

        FVector2D V = this->Velocity;
        FVector2D N = FVector2D(HitNormal.X, HitNormal.Y);
        FVector2D newVelocity = -2 * (V.X * N.X + V.Y * N.Y) * N + V;
        this->Velocity = newVelocity;

    Over time, the more the ball bounces around, the smaller the velocity gets. How do I prevent speed loss when bouncing off walls like that? It's supposed to be a perfect bounce, without friction or anything.


  • MonoGame not all letters being drawn with DrawString

    - by Lex Webb
    I'm currently making a dynamic user interface for my game and am setting up text on my buttons. I'm having an odd issue where, when I use a specific piece of code to determine the text position, not all of the text passed to DrawString is rendered. Even weirder, if I insert another DrawString after it, drawing more text at a different place, different parts of the text will be drawn. The code for drawing my button with the text attached is:

        public override void Draw(SpriteBatch sb, GameTime gt)
        {
            sb.Draw(currentImage, GetRelativeRectangle(), Color.White);
            sb.DrawString(font, text, new Vector2(
                this.GetRelativeDrawOffset().X + this.Width / 2 - font.MeasureString(text).X / 2,
                this.GetRelativeDrawOffset().Y + this.Height / 2 - font.MeasureString(text).Y / 2),
                textColor);
        }

    The methods in the creation of the Vector2 simply get the draw position of the button; I'm then doing some calculation to center the text. With the text set to 'Test', only part of it is drawn (see the screenshot in the original post). And when I insert this piece of code below the first DrawString:

        sb.DrawString(font, "test", new Vector2(500, 50), Color.Pink);

    different parts of the text get drawn. I should mention that the grey square is being drawn in the same SpriteBatch, before the button and the text. Any ideas as to what could be causing this? I have a feeling it may be due to draw order, but I have no idea how to control that.


  • Isometric Camera trouble - can't rotate or move correctly

    - by Deukalion
    I'm trying to create a 3D editor, but I've been having some trouble with the camera and understanding each component. I've created two cameras that work OK, but now I'm trying to implement an isometric camera in XNA, without success on the rotation and movement of the camera. All I can get working is zoom. (There is a cube at x=3f, y=3f, z=1f in the center of the scene.) This is the constructor for my IsometricCamera3D (it inherits from ICamera, with methods for rotation, movement and zoom, and properties for the World/View/Projection matrices):

        public IsometricCamera3D(GraphicsDevice device, float startClip = -1000f, float endClip = 1000f)
        {
            matrix_projection = Matrix.CreateOrthographic(device.Viewport.Width, device.Viewport.Height, startClip, endClip);
            rotation = Vector3.Zero;

            matrix_view = Matrix.CreateScale(zoom) *
                          Matrix.CreateRotationY(MathHelper.ToRadians(45 + 180)) *
                          Matrix.CreateRotationX(MathHelper.ToRadians(30)) *
                          Matrix.CreateRotationZ(MathHelper.ToRadians(120)) *
                          Matrix.CreateTranslation(rotation.X, rotation.Y, rotation.Z);
        }

    The problem is that when I rotate it, all that happens is that the cube gets more or less shiny, and nothing else changes. What is wrong, and how should I create my view matrix to move and rotate it correctly? Rotate, Move and Zoom look like MethodName(Vector3 rotation/movement) and Zoom(float value); they just increase the stored value, then call an update that recreates the view matrix according to the code in the constructor. Currently, in my editor, I use the middle mouse button plus mouse movement to rotate the camera, but it's not working like the other cameras. In my default camera I use the World matrix to move, but I guess that's not the best way to go, which is why I'm trying this.
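
    Note that in the constructor above, Matrix.CreateTranslation(rotation.X, ...) applies the stored "rotation" vector as a translation after the fixed rotations, so changing it slides the view rather than orbiting the camera. One restructuring that usually makes rotation and movement fall out naturally (a sketch; the field names and orbit distance are my own, not your ICamera API) is to store a look-at target plus a yaw angle and rebuild the view with Matrix.CreateLookAt. Rotating changes yaw, panning moves the target, and the fixed 30-degree pitch keeps the isometric look:

        using System;
        using Microsoft.Xna.Framework;

        // Hypothetical camera state.
        Vector3 target = Vector3.Zero;           // panning moves this
        float yaw = MathHelper.ToRadians(45f);   // rotating changes this
        float pitch = MathHelper.ToRadians(30f); // classic isometric tilt
        float zoom = 1f;

        Matrix BuildView()
        {
            // Direction from the target out to the camera on a tilted orbit circle.
            Vector3 dir = new Vector3(
                (float)Math.Cos(yaw),
                (float)Math.Tan(pitch),
                (float)Math.Sin(yaw));
            // The distance is arbitrary with an orthographic projection.
            Vector3 eye = target + Vector3.Normalize(dir) * 100f;
            return Matrix.CreateLookAt(eye, target, Vector3.Up) * Matrix.CreateScale(zoom);
        }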


  • Determine corners of a specific plane in the frustum

    - by Takumi
    I'm working on a game with a 2D view in a 3D world; it's a kind of shoot'em up. I have a spaceship at the center of the screen and I want enemies to appear at the borders of my window. But I don't know how to determine the positions of the borders of the window. For example, my camera is at (0,0,0) and looking forward (0,0,1). I set my spaceship at (0,0,50). I also know the near plane (1) and the far plane (1000). I think I'd have to find the 4 corners of the plane in the frustum whose z position is 50, and with these corners I can determine the borders. But I don't know how to determine x and y.
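
    For a perspective projection, the visible rectangle at a given depth follows directly from the vertical field of view and the aspect ratio: halfHeight = d * tan(fovY / 2) and halfWidth = halfHeight * aspect. A small sketch (fovY and aspect are assumptions; use whatever you passed to your projection matrix):

        using System;

        static class Frustum
        {
            // Visible half-extents at depth d for a camera at the origin looking down +Z.
            // The four corners of that frustum slice are (+-halfW, +-halfH, d).
            public static void SliceAtDepth(float fovY, float aspect, float d,
                                            out float halfW, out float halfH)
            {
                halfH = d * (float)Math.Tan(fovY / 2f);
                halfW = halfH * aspect;
            }
        }

    With d = 50, spawning enemies just outside (+-halfW, +-halfH) places them right at the window borders on the spaceship's plane.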


  • How can I store all my level data in a single file instead of spread out over many files?

    - by Jon
    I am currently generating my level data and saving it to disk, to ensure that any modifications made to the level are kept. I am storing "chunks" of 2048x2048 pixels in a file: whenever the player moves over a section that doesn't have a file associated with its position, a new file is created. This works great and is very fast. My issue is that as you play, the file count gets larger and larger. I'm wondering what techniques can be used to reduce the file count without taking a performance hit. I am interested in how you would store/seek/update this data efficiently in a single file instead of multiple files.
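
    Since every chunk has the same dimensions, one approach (a sketch; the record size and grid width are assumptions to adapt) is a single file of fixed-size records, where the chunk at grid cell (x, y) lives at a computed offset. Reading or updating a chunk is then one seek plus one read/write, essentially the same cost as one file per chunk:

        using System.IO;

        class ChunkFile
        {
            const int ChunkBytes = 2048 * 2048 * 4; // adjust to your chunk's serialized size
            const int GridWidth  = 1024;            // assumed maximum number of chunks per row
            readonly FileStream file;

            public ChunkFile(string path)
            {
                file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
            }

            static long Offset(int x, int y) => ((long)y * GridWidth + x) * ChunkBytes;

            public void Write(int x, int y, byte[] data)
            {
                file.Seek(Offset(x, y), SeekOrigin.Begin);
                file.Write(data, 0, data.Length);
            }

            public byte[] Read(int x, int y)
            {
                byte[] data = new byte[ChunkBytes];
                file.Seek(Offset(x, y), SeekOrigin.Begin);
                file.Read(data, 0, data.Length); // sketch: assumes one call fills the buffer
                return data;
            }
        }

    If the world is sparse, a small header that maps (x, y) to the next free record slot keeps the file compact instead of reserving space for chunks that were never visited; it also handles negative coordinates, which the plain offset formula above does not.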


  • Best way to implement mouse-based movement in MMOG

    - by fiftyeight
    I want to design an MMO where players click the destination they want to walk to with their mouse and the character moves there, similar to how RuneScape does it. I think it should be easier than keyboard movement, since the client can simply send the server the destination each time the player clicks one. The main thing I'm trying to decide is what to do when there are obstacles in the way. It's no problem to implement a simple path-finding solution on the client; the question is whether the server should do path-finding as well, since that would probably take too much computation power on the server. What I thought is that when there is an obstacle, the client will send only the first coordinate it plans to go to, and then, when the character gets there, it will send the next coordinate automatically. For example, if there is a rock in the way, the character will decide on a route made of two destinations, so it goes around the rock, and when it arrives at the first destination it sends the next coordinate. That way, if the player changes destination in the middle, the client won't send unnecessary information. Is this a good way to implement it, and is there a standard way MMOGs usually do it? EDIT: I should also mention that the server will make sure all movements are legal and that there aren't any walls in the way, etc. The way I've described it, this should be quite easy, since all movements will be sent as straight lines, so the server just has to check that there aren't any obstacles along each line.
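
    Since the server only ever sees straight segments, the legality check can stay cheap: walk the grid cells along the segment and reject the move if any cell is blocked. A sketch in C# using Bresenham's line algorithm, so the cost is linear in the segment length (the blocked array stands in for whatever collision data the server keeps):

        using System;

        static class MoveValidator
        {
            // True if every grid cell on the straight line from (x0, y0) to (x1, y1) is walkable.
            public static bool SegmentIsWalkable(bool[,] blocked, int x0, int y0, int x1, int y1)
            {
                int dx = Math.Abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
                int dy = -Math.Abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
                int err = dx + dy;

                while (true)
                {
                    if (blocked[x0, y0]) return false;     // obstacle on the path: reject the move
                    if (x0 == x1 && y0 == y1) return true; // reached the destination
                    int e2 = 2 * err;
                    if (e2 >= dy) { err += dy; x0 += sx; }
                    if (e2 <= dx) { err += dx; y0 += sy; }
                }
            }
        }

    The server would run this once per received waypoint, which is far cheaper than full server-side path-finding.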

