Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.


  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
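
    A minimal sketch of run-length encoding a single row of tile ids in Java, assuming tiles are plain int ids; the point is that an edit only touches the one 128-entry row it lands in, not the whole map. The class and method names are illustrative, not taken from the question:

      import java.util.ArrayList;
      import java.util.List;

      public class RowRle {
          // Encodes a row of tile ids as (count, id) pairs.
          public static List<int[]> encode(int[] row) {
              List<int[]> runs = new ArrayList<>();
              int i = 0;
              while (i < row.length) {
                  int id = row[i];
                  int count = 1;
                  while (i + count < row.length && row[i + count] == id) {
                      count++;
                  }
                  runs.add(new int[] { count, id });
                  i += count;
              }
              return runs;
          }

          // Expands the (count, id) pairs back into a plain row of the given width.
          public static int[] decode(List<int[]> runs, int width) {
              int[] row = new int[width];
              int pos = 0;
              for (int[] run : runs) {
                  for (int k = 0; k < run[0]; k++) {
                      row[pos++] = run[1];
                  }
              }
              return row;
          }
      }

    Editing a tile would then mean decoding one row, changing one cell and re-encoding that row - cheap next to the 128x128x4 ids of a whole map.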

    Read the article

  • Keeping rotation between two objects

    - by user99
    In my XNA game I have two objects that collide. When the first object collides with the other, it is able to latch on to it and move it about the world. I am having a problem with the math here (math isn't my strong point). I currently have the second object latch on to the first and move around with it, but I cannot get it to keep its original direction. So, if the object is facing up, it should keep that direction relative to how it is being rotated with the original item. Any tips on how I could best achieve this?
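
    A framework-agnostic sketch (in plain Java rather than XNA/C#) of the usual approach: at the moment of latching, record the second object's rotation and position relative to the first, then rebuild its world transform from the first object's current transform every frame. All names here are made up for the example:

      public class Attachment {
          // Captured once, at the moment of latching.
          private final double relativeAngle;    // childAngle - parentAngle
          private final double offsetX, offsetY; // child position in the parent's local frame

          public Attachment(double parentX, double parentY, double parentAngle,
                            double childX, double childY, double childAngle) {
              relativeAngle = childAngle - parentAngle;
              // Rotate the world-space offset into the parent's local frame.
              double dx = childX - parentX, dy = childY - parentY;
              double c = Math.cos(-parentAngle), s = Math.sin(-parentAngle);
              offsetX = dx * c - dy * s;
              offsetY = dx * s + dy * c;
          }

          // Returns {x, y, angle} for the child, given the parent's current transform.
          public double[] childWorldTransform(double parentX, double parentY, double parentAngle) {
              double c = Math.cos(parentAngle), s = Math.sin(parentAngle);
              double x = parentX + offsetX * c - offsetY * s;
              double y = parentY + offsetX * s + offsetY * c;
              return new double[] { x, y, parentAngle + relativeAngle };
          }
      }

    In XNA terms the same idea is usually expressed by multiplying the child's stored local Matrix by the parent's world Matrix each frame.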

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in terms of my problem at hand. I'm creating a floating platform that I would like to simply travel from one designated point to another, return to the first, and just pass between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel in multiples of whole tile values of world data, so if the platform is not stationary, it will travel at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon reaching one tile length's distance, I would like it to slow to a stop at a given tile coordinate and then repeat the process in reverse. The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to the value storing the platform's current speed once it reaches one tile's length of distance (assuming the platform is traveling more than one tile length, but to keep things simple, let's just assume it is) - but then the question is: what would the correct value of acceleration be to produce this effect? How would I find that value?
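
    A small sketch of the constant-deceleration math behind that third part, assuming the platform should go from its current speed v to zero over the remaining distance d (here one tile length); from v^2 = 2*a*d, the required deceleration is v^2 / (2*d). Names and numbers are illustrative:

      public final class PlatformKinematics {
          // Deceleration magnitude needed to go from speed v to 0 over distance d.
          public static float stoppingDeceleration(float v, float d) {
              return (v * v) / (2.0f * d);
          }

          public static void main(String[] args) {
              float maxSpeed = 96.0f;   // pixels per second, reached after one tile
              float tileLength = 32.0f; // pixels
              float a = stoppingDeceleration(maxSpeed, tileLength);
              System.out.println("decelerate at " + a + " px/s^2 to stop exactly one tile away");
          }
      }

    In a fixed-timestep update this means speed -= a * dt each frame once the slow-down point is reached, clamping to zero and snapping the position to the exact tile coordinate to absorb floating-point drift.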

    Read the article

  • How should I interpret these DirectX Caps Viewer values?

    - by tobi
    Briefly: what do the following nodes in DirectX Caps Viewer mean, and what is the difference between them? 1. DXGI Devices 2. Direct3D9 Devices 3. DirectDraw Devices The most interesting for me is 1 vs 2. In Direct3D9 Devices, under the HAL node, I can see that my GeForce 8800GT supports PixelShaderVersion 3.0. However, under DXGI Devices I have DX 10, DX 10.1 and DX 11 with Shader Model 4.0 (actually, why DX 11? My card is not compatible with DX 11). I am implementing a DX 11 application (including d3d11.h) with shaders compiled against version 4.0, so I can clearly see that 4.0 is supported. What is the difference between 1 and 2? Could you give me some of the theory behind the nodes?

    Read the article

  • Using a permutation table for simplex noise without storing it

    - by J. C. Leitão
    Generating Simplex noise requires a permutation table for randomisation (e.g. see this question or this example). In some applications, we need to persist the state of the permutation table. This can be done by creating the table, e.g. using def permutation_table(seed): table_size = 2**10 # arbitrary for this question l = range(1, table_size + 1) random.seed(seed) # ensures the same shuffle for a given seed random.shuffle(l) return l + l # see shared link why l + l; is a detail and storing it. Can we avoid storing the full table by generating the required elements every time they are needed? Specifically, I currently store the table and index it using table[i] (table is a list). Can I avoid storing it by having a function that computes element i, e.g. get_table_element(seed, i)? I'm aware that cryptography has already solved this problem with block ciphers; however, I found it too complex to dig into and implement a block cipher. Does anyone know of a simple block cipher implementation for this problem?
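
    One way to get a get_table_element(seed, i) without storing anything is a keyed bijection computed on the fly. Below is a minimal sketch of a balanced Feistel construction, written in Java for illustration (the question's code is Python, but the structure carries over): a few rounds of a seeded mixing function give a permutation of [0, 2^(2*halfBits)) - for the 1024-entry table above, halfBits = 5. This is not a vetted cipher, just the shape of one, and all names are made up for the example:

      public final class SeededPermutation {
          private final int halfBits; // domain size is 2^(2 * halfBits)
          private final int halfMask;
          private final long seed;

          public SeededPermutation(int halfBits, long seed) {
              this.halfBits = halfBits;
              this.halfMask = (1 << halfBits) - 1;
              this.seed = seed;
          }

          // Seeded round function; any decent integer mix works here.
          private int round(int value, int r) {
              long h = value * 0x9E3779B97F4A7C15L + seed * 31L + r * 0x632BE59BD9B4E019L;
              h ^= (h >>> 29);
              h *= 0xBF58476D1CE4E5B9L;
              h ^= (h >>> 32);
              return (int) (h & halfMask);
          }

          // Maps index i to a unique index in the same range - a permutation, no table stored.
          public int get(int i) {
              int left = (i >>> halfBits) & halfMask;
              int right = i & halfMask;
              for (int r = 0; r < 4; r++) {          // 4 Feistel rounds
                  int next = left ^ round(right, r); // L' = R, R' = L xor F(R)
                  left = right;
                  right = next;
              }
              return (left << halfBits) | right;
          }
      }

    Because each Feistel round is invertible, get is a bijection for any seed, so it can stand in for table[i] (wrapping the index with i & 1023 covers the doubled l + l lookup).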

    Read the article

  • Drawing order in XNA

    - by marc wellman
    When manually setting the drawing order of game components via the int DrawableGameComponent.DrawOrder property, can one use arbitrary integers, as long as an order is defined, like component1 = drawing order: 2, component2 = drawing order: 5, component3 = drawing order: 10, component4 = drawing order: 323, or do these integers have to be consecutive and start at zero, like component1 = drawing order: 0, component2 = drawing order: 1, component3 = drawing order: 2, component4 = drawing order: 3?

    Read the article

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error: D:\java\java\jogl>javac JOGL2Template.java <== compile ok D:\java\java\jogl>java JOGL2Template <== execute error Exception in thread "main" java.lang.ExceptionInInitializerError at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176) at JOGL2Template.<init>(JOGL2Template.java:24) at JOGL2Template.main(JOGL2Template.java:57) Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\ java\lib\gluegen-rt-natives-windows-i586.jar at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350) at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324) at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJa rCache.java:328) at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarC ache.java:283) at com.jogamp.common.os.Platform$3.run(Platform.java:308) at java.security.AccessController.doPrivileged(Native Method) at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298) at com.jogamp.common.os.Platform.<clinit>(Platform.java:207) ... 3 more Here is the JOGL2Template.java source code: import java.awt.Dimension; import java.awt.Frame; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import javax.media.opengl.GLAutoDrawable; import javax.media.opengl.GLCapabilities; import javax.media.opengl.GLEventListener; import javax.media.opengl.GLProfile; import javax.media.opengl.awt.GLCanvas; import com.jogamp.opengl.util.FPSAnimator; import javax.swing.JFrame; /* * JOGL 2.0 Program Template For AWT applications */ public class JOGL2Template extends JFrame implements GLEventListener { private static final int CANVAS_WIDTH = 640; // Width of the drawable private static final int CANVAS_HEIGHT = 480; // Height of the drawable private static final int FPS = 60; // Animator's target frames per second // Constructor to create profile, caps, drawable, animator, and initialize Frame public JOGL2Template() { // Get the default OpenGL profile that best reflect your running platform. GLProfile glp = GLProfile.getDefault(); // Specifies a set of OpenGL capabilities, based on your profile. GLCapabilities caps = new GLCapabilities(glp); // Allocate a GLDrawable, based on your OpenGL capabilities. GLCanvas canvas = new GLCanvas(caps); canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT)); canvas.addGLEventListener(this); // Create a animator that drives canvas' display() at 60 fps. final FPSAnimator animator = new FPSAnimator(canvas, FPS); addWindowListener(new WindowAdapter() { // For the close button @Override public void windowClosing(WindowEvent e) { // Use a dedicate thread to run the stop() to ensure that the // animator stops before program exits. new Thread() { @Override public void run() { animator.stop(); System.exit(0); } }.start(); } }); add(canvas); pack(); setTitle("OpenGL 2 Test"); setVisible(true); animator.start(); // Start the animator } public static void main(String[] args) { new JOGL2Template(); } @Override public void init(GLAutoDrawable drawable) { // Your OpenGL codes to perform one-time initialization tasks // such as setting up of lights and display lists. } @Override public void display(GLAutoDrawable drawable) { // Your OpenGL graphic rendering codes for each refresh. } @Override public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { // Your OpenGL codes to set up the view port, projection mode and view volume. 
} @Override public void dispose(GLAutoDrawable drawable) { // Hardly used. } } Any ideas what might be the cause of these errors?

    Read the article

  • Making a game with responsive resolution

    - by alexandervrs
    I am making a game, but I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are: Make the HUD stay fixed to the sides no matter what the resolution, using one size of HUD graphics below a certain resolution and another size at larger ones. Use large HD sprites/backgrounds whose sizes are powers of 2, so they scale nicely. Use the player's native resolution. Scale the game area (not the HUD) to fit (resulting in some zooming, and cropping the sides of the game area if necessary for widescreen - no stretching), but always fill the screen. Have min and max resolution limits for small and very large displays, where you would just change the resolution(?) or scale up/down to fit. What I am a bit confused about, though, is what math formula I would use to scale the game area correctly based on the resolution, no matter the aspect ratio, so that it fully fits on a square screen and clips a little at the sides for widescreen. Pseudocode would help as well. :)
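
    A small sketch of the usual fill-the-screen math, assuming a fixed design resolution of 1600x900: scaling by the larger of the two width/height ratios always fills the screen with no stretching, and the (possibly negative) offsets centre the game area so whichever dimension overflows is cropped evenly on both sides. Names are illustrative:

      public final class FillScale {
          public static final float DESIGN_W = 1600f, DESIGN_H = 900f;

          // Returns {scale, offsetX, offsetY} for drawing the game area on the given screen.
          public static float[] fillAndCrop(float screenW, float screenH) {
              // max(...) fills and crops; min(...) would letterbox instead.
              float scale = Math.max(screenW / DESIGN_W, screenH / DESIGN_H);
              float offsetX = (screenW - DESIGN_W * scale) * 0.5f; // negative when width is cropped
              float offsetY = (screenH - DESIGN_H * scale) * 0.5f; // negative when height is cropped
              return new float[] { scale, offsetX, offsetY };
          }

          public static void main(String[] args) {
              float[] r = fillAndCrop(1280f, 1024f); // a squarer display
              System.out.printf("scale=%.3f offsetX=%.1f offsetY=%.1f%n", r[0], r[1], r[2]);
          }
      }

    A design-space position then maps to the screen as designX * scale + offsetX (and likewise for Y); the HUD skips this transform and anchors directly to the real screen edges.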

    Read the article

  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
    I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times with their friends' and see world records. I was going to generate a table for each track submitted, to store the best times of each player who plays the track. However, I can't predict how many will be uploaded, and I imagine too many tables might cause problems - or is this a valid method? I considered saving each player's best times in a string in a single table field, like so: level1:00.45;level2:00.43;level3:00.12 If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem, because the text would eventually reach the varchar length limit. I also considered storing the times data in XML files. This would avoid database issues, and server disk space can be increased if needed. But I imagine this would be very slow: to update one player's best time on one level, I would have to check every node in the file to find their time record. Apologies for the wall of text. Any suggestions would be appreciated.
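
    For comparison, the usual normalized layout is a single best-times table keyed by (player, track), rather than one table per track or a delimited string. A minimal JDBC-flavoured sketch in Java, with made-up table and column names and MySQL-style upsert syntax assumed:

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;

      public final class BestTimes {
          // One row per (player, track); uploading a new track adds rows, never tables.
          public static final String SCHEMA =
              "CREATE TABLE best_times (" +
              "  player_id INT NOT NULL," +
              "  track_id  INT NOT NULL," +
              "  best_ms   INT NOT NULL," +
              "  PRIMARY KEY (player_id, track_id))";

          // Records a finish, keeping only the fastest time per player and track.
          public static void recordTime(Connection db, int playerId, int trackId, int timeMs)
                  throws SQLException {
              String sql = "INSERT INTO best_times (player_id, track_id, best_ms) VALUES (?, ?, ?) "
                         + "ON DUPLICATE KEY UPDATE best_ms = LEAST(best_ms, VALUES(best_ms))";
              try (PreparedStatement st = db.prepareStatement(sql)) {
                  st.setInt(1, playerId);
                  st.setInt(2, trackId);
                  st.setInt(3, timeMs);
                  st.executeUpdate();
              }
          }
      }

    World records and friend comparisons then become ordinary queries (e.g. the minimum best_ms per track_id), with no per-track table explosion and no varchar limit to worry about.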

    Read the article

  • High CPU usage on Pong clone

    - by max
    I just made my first game, a clone of Pong, using OpenGL and C++. But it's using ~50% of the CPU, which I guess is very high for a game like this. How can I improve that? Can you please look over my code and tell me what I am doing wrong? Any feedback is welcome. http://pastebin.com/L5zE3axh Also, it would be extremely helpful if you could give some general pointers on how to develop OpenGL games efficiently. Thanks in advance!
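
    One common cause of this symptom, independent of the linked code, is a main loop that redraws as fast as it can without vsync or sleeping, so one core stays busy. A minimal frame-limiter sketch, written in Java for illustration (the same idea applies in the C++ loop, e.g. by enabling vsync or sleeping out the rest of the frame); names are illustrative:

      public final class FrameLimiter {
          // Sleeps the current thread so frames take at least 1/targetFps seconds.
          public static void limit(long frameStartNanos, int targetFps) throws InterruptedException {
              long frameBudget = 1_000_000_000L / targetFps;
              long remaining = frameBudget - (System.nanoTime() - frameStartNanos);
              if (remaining > 0) {
                  Thread.sleep(remaining / 1_000_000L, (int) (remaining % 1_000_000L));
              }
          }

          public static void main(String[] args) throws InterruptedException {
              while (true) {
                  long start = System.nanoTime();
                  // update(); render();  // game work would go here
                  limit(start, 60);       // give the CPU back for the rest of the ~16.6 ms frame
              }
          }
      }

    Capping the frame rate (or letting the driver do it with a swap interval of 1) usually brings a simple Pong clone's CPU use down to a few percent.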

    Read the article

  • Unity iOS optimization and draw calls

    - by vzm
    I am curious about what methods I should use to optimize my Unity project for iOS hardware. I have very few image effects running (a directional light with low-res shadows), and I used the combine children script from the standard assets to lessen the load on the CPU. My project currently runs with 45-57 draw calls in non-intensive segments and up to 178 in intensive segments. I heard that static batching relieves some of the stress, but the game has the environment moving around the player instead of the player moving around the environment. Is there any alternative I could look at to improve the draw call count?

    Read the article

  • Why does creating dynamic bodies in JBox2D freeze my app?

    - by Amplify91
    My game hangs/freezes when I create dynamic bullet objects with Box2D and I don't know why. I am making a game where the main character can shoot bullets by the user tapping on the screen. Each touch event spawns a new FireProjectileEvent that is handled properly by an event queue, so I know my problem is not trying to create a new body while the Box2D world is locked. My bullets are then created and managed by an object pool class like this: public Projectile getProjectile(){ for(int i=0;i<mProjectiles.size();i++){ if(!mProjectiles.get(i).isActive){ return mProjectiles.get(i); } } return mSpriteFactory.createProjectile(); } mSpriteFactory.createProjectile() leads to the physics component of the Projectile class creating its Box2D body. I have narrowed the issue down to this method, which looks like this: public void create(World world, float x, float y, Vec2 vertices[], boolean dynamic){ BodyDef bodyDef = new BodyDef(); if(dynamic){ bodyDef.type = BodyType.DYNAMIC; }else{ bodyDef.type = BodyType.STATIC; } bodyDef.position.set(x, y); mBody = world.createBody(bodyDef); PolygonShape dynamicBox = new PolygonShape(); dynamicBox.set(vertices, vertices.length); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = dynamicBox; fixtureDef.density = 1.0f; fixtureDef.friction = 0.0f; mBody.createFixture(fixtureDef); mBody.setFixedRotation(true); } If the dynamic parameter is set to true, my game freezes before crashing, but if it is false, it will create a projectile exactly how I want - it just doesn't function properly (because a projectile is not a static object). Why does my program fail when I try to create a dynamic object at runtime but not when I create a static one? I have other dynamic objects (like my main character) that work fine. Any help would be greatly appreciated. This is a screenshot of a method profile I did: Especially notable is number 8. I'm just still unsure what I'm doing wrong. Other notes: I am using JBox2D 2.1.2.2 (upgraded from 2.1.2.1 to try to fix this problem). When the application freezes, if I hit the back button, it appears to move my game backwards by one update tick. Very strange.

    Read the article

  • Can I animate render targets or the swap chain?

    - by Eric F.
    I want to animate some synthetic video bits to fullscreen w/o tearing. Can I set up D3D 9/10/11 in exclusive mode, and have it present a series of buffers that I'm writing to? I know how to copy system memory bits into a texture, then draw that texture as a fullscreen quad, but it seems like overkill. Why should I use the triangle rasterizer when I want to do something so simple? All I want to do is set up a long (4-8 buffer) swapchain and set the bits of the back buffer that is about to be displayed. Or, I want to allocate 4-8 RenderTargets, and on each frame, copy the bits from system memory to the RenderTarget, then set it as the next thing to display. I've never seen or heard about anybody doing this, but it seems so dead simple!

    Read the article

  • What 2D game engines are there available for C++?

    - by dysoco
    I just realized there are no C++ 2D game engines that I know of - for example, something like Pygame in Python, or Slick2D in Java. We have the following: SDL - too low level, not a game engine. SFML - handles more things than SDL and is more modern, but still not a game engine. I like it, but I have found it a little buggy in the 2.0 version. Irrlicht - it's a game engine, but 3D focused. Ogre3D - same as Irrlicht. Allegro - this is a game engine, but it's C based, and I'd like a modern C++ library. Monocle Engine - this looks like what I need... but sadly there is no documentation, no community... nothing; all I have is the GitHub repo. So, do you know of any? I'd like to use C++, not C# and not Java: I'm just more comfortable with C++.

    Read the article

  • Super-quick MIDI generator with nonrestrictive license?

    - by Ricket
    I'm working on my Ludum Dare entry and trying to figure out how in the world I'm ever going to get background music. I found WolframTones, but the license is too restrictive: Unless otherwise specified, this Site and content presented on this Site are for your personal and noncommercial use. You may not modify, copy, distribute, transmit, display, perform, reproduce, publish, license, create derivative works from, transfer, or sell any information or content obtained from this Site. For commercial and other uses, contact us. But I really like the interface! It's a lot like sfxr - click a genre and download a song. That's so cool. Is there another program that does this same sort of thing but without a restrictive license, so that I can generate a bgm and use it in my game?

    Read the article

  • Are there existing FOSS component-based frameworks?

    - by Tesserex
    The component-based game programming paradigm is becoming much more popular. I was wondering, are there any projects out there that offer a reusable component framework? In any language - I guess I don't care about that. It's not for my own project, I'm just curious. Specifically, I mean: are there projects that include a base Entity class, a base Component class, and maybe some standard components? It would then be much easier to start a game if you didn't want to reinvent the wheel, or maybe you want a GraphicsComponent that does sprites with Direct3D but figure it's already been done a dozen times. A quick Googling turns up Rusher. Has anyone heard of this / does anyone use it? If there are no popular ones, then why not? Is it too difficult to make something like this reusable, and do they need heavy customization? In my own implementation I found a lot of boilerplate that could be shoved into a framework.
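
    For reference, the skeleton such a framework provides is small; here is a minimal sketch in Java (all names invented for the example) of the base Entity/Component pair the question describes, with the attach/lookup/update plumbing that tends to become boilerplate:

      import java.util.ArrayList;
      import java.util.List;

      abstract class Component {
          protected Entity owner;                // set when the component is attached
          void onAttach(Entity owner) { this.owner = owner; }
          abstract void update(double dt);       // per-frame behaviour lives in subclasses
      }

      class Entity {
          private final List<Component> components = new ArrayList<>();

          <T extends Component> T add(T component) {
              component.onAttach(this);
              components.add(component);
              return component;
          }

          <T extends Component> T get(Class<T> type) {
              for (Component c : components) {
                  if (type.isInstance(c)) return type.cast(c);
              }
              return null;
          }

          void update(double dt) {
              for (Component c : components) c.update(dt);
          }
      }

    A GraphicsComponent or PhysicsComponent would then subclass Component; standard components plus this plumbing is essentially what a reusable framework would ship.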

    Read the article

  • Issues with shooting in an HTML5 platformer game

    - by fnx
    I'm coding a 2D sidescroller using only JavaScript and HTML5 canvas, and in my game I have two problems with shooting: 1) The player shoots a continuous stream of bullets. I want the player to be able to shoot only a single bullet even though the shoot button is being held down. 2) Also, I get an error "Uncaught TypeError: Cannot call method 'draw' of undefined" when all the bullets are removed. My shooting code goes like this: When the player shoots, I do game.bullets.push(new Bullet(this, this.scale)); and after that: function Bullet(source, dir) { this.id = "bullet"; this.width = 10; this.height = 3; this.dir = dir; if (this.dir == 1) { this.x = source.x + source.width - 5; this.y = source.y + 16; } if (this.dir == -1) { this.x = source.x; this.y = source.y + 16; } } Bullet.prototype.update = function() { if (this.dir == 1) this.x += 8; if (this.dir == -1) this.x -= 8; for (var i in game.enemies) { checkCollisions(this, game.enemies[i]); } // Check if bullet leaves the viewport if (this.x < game.viewX * 32 || this.x > (game.viewX + game.tilesX) * 32) { removeFromList(game.bullets, this); } } Bullet.prototype.draw = function() { // bullet flipping uses orientation of the player var posX = game.player.scale == 1 ? this.x : (this.x + this.width) * -1; game.ctx.scale(game.player.scale, 1); game.ctx.drawImage(gameData.getGfx("bullet"), posX, this.y); } I handle removing with this function: function removeFromList(list, object) { for (i in list) { if (object == list[i]) { list.splice(i, 1); break; } } } And finally, in the main game loop I have this: for (var i in game.bullets) { game.bullets[i].update(); game.bullets[i].draw(); } I have tried adding if (game.bullets.length > 0) to the main game loop before the above draw&update calls, but I still get the same error.

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally I guess you'd use BlendState.AlphaBlend because normally when you load your textures through the pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup costs. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?

    Read the article

  • OpenGL ES 2.0: Mixing 2D with 3D

    - by Bunkai.Satori
    Is it possible to mix 2D and 3D graphics in a single OpenGL ES 2.0 game? I have plenty of 2D graphics in my game. The 2D graphics are represented by pairs of triangular polygons (making up rectangles) with textures on them. I use an orthographic matrix to render the whole scene. However, I need to add some 3D effects to my game. Therefore, I wish to use a perspective camera to render the meshes. Is it possible to mix an orthographic and a perspective camera in one scene? If yes, is there going to be a large performance cost? Is there any recommended approach to do this effectively? I will have 90% 2D graphics and only 10% 3D. The target platform is OpenGL ES 2.0 (iOS, Android). I use C++ to develop. Thank you.
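
    In ES 2.0 the camera is just a uniform, so mixing the two usually means drawing the 3D meshes with a perspective projection first and then the 2D quads with an orthographic one (depth test off for the 2D pass); the extra cost is essentially the uniform change and state switch, not a second scene. A small sketch, in Java for illustration, of building the two column-major projection matrices by hand (illustrative names, standard formulas):

      public final class Projections {
          // Column-major perspective projection, as gluPerspective would build it.
          public static float[] perspective(float fovYDegrees, float aspect, float near, float far) {
              float f = (float) (1.0 / Math.tan(Math.toRadians(fovYDegrees) * 0.5));
              float[] m = new float[16];
              m[0]  = f / aspect;
              m[5]  = f;
              m[10] = (far + near) / (near - far);
              m[11] = -1f;
              m[14] = (2f * far * near) / (near - far);
              return m;
          }

          // Column-major orthographic projection covering the given rectangle.
          public static float[] ortho(float left, float right, float bottom, float top,
                                      float near, float far) {
              float[] m = new float[16];
              m[0]  = 2f / (right - left);
              m[5]  = 2f / (top - bottom);
              m[10] = -2f / (far - near);
              m[12] = -(right + left) / (right - left);
              m[13] = -(top + bottom) / (top - bottom);
              m[14] = -(far + near) / (far - near);
              m[15] = 1f;
              return m;
          }
      }

    Per frame: upload perspective(...) and draw the 3D pass, then upload ortho(0, screenW, screenH, 0, -1, 1) and draw the 2D quads on top.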

    Read the article

  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1 compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly because when I use immediate mode without shaders the texture displays fine but I will include that part just in case. Also, when I change the gl_FragColor to something like vec4 (0.0, 1.0, 1.0, 1.0) then I get a nice blue quad so I know that my shader is able to set the colour. It appears to be either the texture is not being bound correctly in the shader or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not xna). Vertex Shader: void main() { gl_TexCoord[0] = gl_MultiTexCoord0; // Set the position of the current vertex gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment Shader: uniform sampler2D diffuseTexture; void main() { // Set the output color of our current pixel gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st); //gl_FragColor = vec4 (0.0,1.0,1.0,1.0); } Drawing Code: int vb, eb; GL.GenBuffers(1, out vb); GL.GenBuffers(1, out eb); // Position Texture float[] verts = { 0.1f, 0.1f, 0.0f, 0.0f, 0.0f, 1.9f, 0.1f, 0.0f, 1.0f, 0.0f, 1.9f, 1.9f, 0.0f, 1.0f, 1.0f, 0.1f, 1.9f, 0.0f, 0.0f, 1.0f }; uint[] indices = { 0, 1, 2, 0, 2, 3 }; //upload data to the VBO GL.BindBuffer(BufferTarget.ArrayBuffer, vb); GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw); //Upload texture int buffer = GL.GenTexture(); GL.BindTexture(TextureTarget.Texture2D, buffer); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear); GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate); GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data); //Draw GL.UseProgram(shaderProgram); GL.EnableClientState(ArrayCap.VertexArray); GL.EnableClientState(ArrayCap.TextureCoordArray); GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0); GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3); GL.ActiveTexture(TextureUnit.Texture0); GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0); GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);

    Read the article

  • CW/CCW Rotation of a Vector

    - by user23132
    Suppose I have a vector A, and after an arbitrary rotation I get vector B. I want to apply this same rotation to other vectors as well, but I'm having trouble doing that. My idea is to calculate the vector C perpendicular to the plane of A and B (by calculating A x B). This vector C is the axis I'll need to rotate around. To find the angle I used the dot product between A and B; the acos of the dot product returns the smallest angle between A and B, the angle ang. The rotation I need is then: rotate by ang degrees around the C axis. The problem is that I don't know whether this rotation is a CW or CCW rotation, since the acos of the dot product gives no information about the sign of the angle. There's a trick in 2D (A.x * B.y - A.y * B.x) that you can use to discover whether vector A is to the left or right of vector B, but I don't know how to do this in 3D space. Can anyone help me?
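
    A small sketch of the usual resolution, assuming normalized vectors: in 3D the sign of the angle only has meaning relative to a chosen axis, and if the axis is taken to be C = A x B itself, then rotating from A towards B about C by acos(A.B) is always the positive (counter-clockwise, right-handed) direction. More generally, the signed angle about an arbitrary unit axis n can be computed with atan2, as below (names are illustrative):

      public final class SignedAngle {
          // Signed angle (radians) from a to b about the unit axis n, right-handed.
          public static double about(double[] a, double[] b, double[] n) {
              double[] c = cross(a, b);
              double sin = dot(c, n);            // projection of a x b onto the axis
              double cos = dot(a, b);
              return Math.atan2(sin, cos);       // positive = CCW about n, negative = CW
          }

          static double[] cross(double[] u, double[] v) {
              return new double[] {
                  u[1] * v[2] - u[2] * v[1],
                  u[2] * v[0] - u[0] * v[2],
                  u[0] * v[1] - u[1] * v[0]
              };
          }

          static double dot(double[] u, double[] v) {
              return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
          }
      }

    The 2D trick in the question is just the z component of this same cross product; dot(A x B, n) is its 3D counterpart.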

    Read the article

  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the samplebrowser, and I already checked that one. However, some questions arise: In the sample: D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = g_World; mView = g_View; mProj = g_Projection; mWorldViewProj = mWorld * mView * mProj; VS_CONSTANT_BUFFER* pConstData; g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData ); pConstData->mWorldViewProj = mWorldViewProj; pConstData->fTime = fBoundedTime; g_pConstantBuffer10->Unmap(); They are copying their D3DXMATRIX'es to D3DXMATRIXA16. Checked on msdn, these new matrices are 16 byte aligned and optimised for intel pentium 4. So as my first question: 1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if no, why don't we just use D3DXMATRIXA16 all the time? I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times: cbuffer cbNeverChanges { matrix View; }; cbuffer cbChangeOnResize { matrix Projection; }; cbuffer cbChangesEveryFrame { matrix World; float4 vMeshColor; }; Then how would I set these buffers all at different times? g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 ); gives me the possibility to set multiple buffers, but that is within one call. 2) Is that okay even if my constant buffers are updated at different times? And do I suppose I have to make sure the constantbuffers are in the same position in the array as the order they appear in the shader?

    Read the article

  • How to mimic the same fixed-size horizon as in this racing game?

    - by Aybe
    I am trying to replicate the same horizon (buildings and sky) as in the image below: As you can see, the player has advanced in a straight line, yet the horizon still has the same size. This is my attempt using 3D; while it's okay when the player is on the start line, it's not so great when the player has advanced as much as in image no. 2. This is an overview of where the horizon buildings and sky are located: Obviously this won't achieve such an effect when one is close to it, so I've tried to scale up the horizon on all axes, but the problem is that the buildings are too small depending on where you look at them from. How can one mimic such rendering?
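
    One common way to get a horizon that never changes size (not necessarily what the original game does) is to treat it like a skybox: draw the distant backdrop with a view matrix whose translation has been removed, so camera rotation still pans it but camera movement never gets closer to it. A tiny sketch of that idea in Java, assuming a column-major 4x4 view matrix as used by OpenGL-style APIs:

      public final class HorizonView {
          // Returns a copy of the column-major view matrix with its translation zeroed,
          // so the backdrop rotates with the camera but never moves relative to it.
          public static float[] withoutTranslation(float[] view) {
              float[] m = view.clone();
              m[12] = 0f; // x translation
              m[13] = 0f; // y translation
              m[14] = 0f; // z translation
              return m;
          }
      }

    Draw order per frame: clear, draw the horizon quad/dome with withoutTranslation(view) and depth writes off, then draw the track and cars with the normal view matrix.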

    Read the article

  • FBX Importer - Texture Name

    - by CmasterG
    I have a problem with the FBX SDK. I read in the data for the vertex position and the uv coordinates. It works fine, but now I want to read for each polygon to which texture it belongs, so that I can have models with multiple textures. Can anyone tell me how I can get the texture name (file name) for my polygon. My code to read in vertex position and uv coordinates is the following: int i, j, lPolygonCount = pMesh->GetPolygonCount(); FbxVector4* lControlPoints = pMesh->GetControlPoints(); int vertexId = 0; for (i = 0; i < lPolygonCount; i++) { int lPolygonSize = pMesh->GetPolygonSize(i); for (j = 0; j < lPolygonSize; j++) { int lControlPointIndex = pMesh->GetPolygonVertex(i, j); FbxVector4 pos = lControlPoints[lControlPointIndex]; current_model[vertex_index].x = pos.mData[0] - pivot_offset[0]; current_model[vertex_index].y = pos.mData[1] - pivot_offset[1]; current_model[vertex_index].z = pos.mData[2]- pivot_offset[2]; FbxVector4 vertex_normal; pMesh->GetPolygonVertexNormal(i,j, vertex_normal); current_model[vertex_index].nx = vertex_normal.mData[0]; current_model[vertex_index].ny = vertex_normal.mData[1]; current_model[vertex_index].nz = vertex_normal.mData[2]; //read in UV data FbxStringList lUVSetNameList; pMesh->GetUVSetNames(lUVSetNameList); //get lUVSetIndex-th uv set const char* lUVSetName = lUVSetNameList.GetStringAt(0); const FbxGeometryElementUV* lUVElement = pMesh->GetElementUV(lUVSetName); if(!lUVElement) continue; // only support mapping mode eByPolygonVertex and eByControlPoint if( lUVElement->GetMappingMode() != FbxGeometryElement::eByPolygonVertex && lUVElement->GetMappingMode() != FbxGeometryElement::eByControlPoint ) return; //index array, where holds the index referenced to the uv data const bool lUseIndex = lUVElement->GetReferenceMode() != FbxGeometryElement::eDirect; const int lIndexCount= (lUseIndex) ? lUVElement->GetIndexArray().GetCount() : 0; FbxVector2 lUVValue; //get the index of the current vertex in control points array int lPolyVertIndex = pMesh->GetPolygonVertex(i,j); //the UV index depends on the reference mode //int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex; int lUVIndex = pMesh->GetTextureUVIndex(i, j); lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex); current_model[vertex_index].tu = (float)lUVValue.mData[0]; current_model[vertex_index].tv = (float)lUVValue.mData[1]; vertex_index ++; } } float v1[3], v2[3], v3[3]; v1[0] = current_model[vertex_index - 3].x; v1[1] = current_model[vertex_index - 3].y; v1[2] = current_model[vertex_index - 3].z; v2[0] = current_model[vertex_index - 2].x; v2[1] = current_model[vertex_index - 2].y; v2[2] = current_model[vertex_index - 2].z; v3[0] = current_model[vertex_index - 1].x; v3[1] = current_model[vertex_index - 1].y; v3[2] = current_model[vertex_index - 1].z; collision_model->addTriangle(v1,v2,v3);

    Read the article

  • determine collision angle on a rotating body

    - by jorb
    Update: new diagram and updated description. I have a contact listener set up to try to determine the side a collision happened on, relative to a body's rotation. One way to solve this is to find the value of the yellow angle between the red and blue vectors drawn above. The angle can be found by taking the arc cosine of the dot product of the two vectors (Evan pointed this out). One of my points of confusion is the difference in domain between the atan2 function, HTML canvas coordinates and the Box2D rotation information. I know I have to account for this somehow... Screenshot below. Questions: Does Box2D provide these angles more directly in the collision information? Am I even on the right track? If so, any hints? I have the following JavaScript so far: Ship.prototype.onCollide = function (other_ent,cx,cy) { var pos = this.body.GetPosition(); //collision position relative to body var d_cx = pos.x - cx; var d_cy = pos.y - cy; //length of initial vector var len = Math.sqrt(Math.pow(pos.x -cx,2) + Math.pow(pos.y-cy,2)); //body angle - can over rotate hence mod 2*Pi var ang = this.body.GetAngle() % (Math.PI * 2); //vector representing body's angle - same magnitude as the first var b_vx = len * Math.cos(ang); var b_vy = len * Math.sin(ang); //dot product of the two vectors var dot_prod = d_cx * b_vx + d_cy * b_vy; //new calculation of difference in angle - NOT WORKING! var d_ang = Math.acos(dot_prod); var side; if (Math.abs(d_ang) < Math.PI/2 ) side = "front"; else side = "back"; console.log("length",len); console.log("pos:",pos.x,pos.y); console.log("offs:",d_cx,d_cy); console.log("body vec",b_vx,b_vy); console.log("body angle:",ang); console.log("dot product",dot_prod); console.log("result:",d_ang); console.log("side",side); console.log("------------------------"); }
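
    A small sketch of the usual fix for the NOT WORKING line, assuming the goal is the signed 2D angle between the body's facing vector and the vector from the body to the contact point: Math.acos needs the dot product of normalized vectors and still loses the sign, whereas atan2 of the 2D cross product and the dot product gives a signed angle directly. Written in Java for illustration (the original is JavaScript; Math.atan2 behaves the same way), with illustrative names:

      public final class CollisionSide {
          // Signed angle (radians, -PI..PI) from the body's facing vector to the
          // vector pointing from the body centre to the contact point.
          public static double signedAngle(double facingX, double facingY,
                                           double toContactX, double toContactY) {
              double cross = facingX * toContactY - facingY * toContactX; // 2D "cross product"
              double dot   = facingX * toContactX + facingY * toContactY;
              return Math.atan2(cross, dot); // no normalization needed, sign tells CW vs CCW
          }

          public static String side(double facingX, double facingY,
                                    double toContactX, double toContactY) {
              double ang = signedAngle(facingX, facingY, toContactX, toContactY);
              return Math.abs(ang) < Math.PI / 2 ? "front" : "back";
          }
      }

    Canvas's flipped y axis only flips the sign of the result, so the front/back classification is unaffected; a left/right test would just swap.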

    Read the article
