Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • Need a bounding box for CCSprite that includes all children/subchildren

    - by prototypical
    I have a CCSprite that has CCSprite children, and those children have CCSprite children of their own. The contentSize property doesn't seem to include all children/subchildren, and appears to only work for the base node. I could write a recursive method that traverses a CCSprite's children/subchildren and calculates a proper bounding box, but I'm curious whether I'm missing something and it's possible to get that information without doing so. I'll be a little surprised if such a method doesn't exist, but I can't seem to find it.
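
    In case it helps, the recursive fallback is short. Here is a sketch in C# with a hypothetical Node type standing in for CCSprite (a real cocos2d version would be Objective-C and would also need to convert each child's box into the parent's coordinate space):

        using System.Collections.Generic;
        using System.Drawing;

        // Hypothetical node mirroring CCSprite: a local bounding box plus children.
        class Node
        {
            public RectangleF BoundingBox;
            public List<Node> Children = new List<Node>();
        }

        static class BoundsHelper
        {
            // Union of a node's box with every descendant's box.
            public static RectangleF FullBoundingBox(Node node)
            {
                RectangleF box = node.BoundingBox;
                foreach (Node child in node.Children)
                    box = RectangleF.Union(box, FullBoundingBox(child));
                return box;
            }
        }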


  • Best way to load rigid bodies from file

    - by Mel
    I'm trying to switch to Bullet for physics simulation. Let me just say first that I am so pleased with Bullet's accuracy and performance. After messing around with it for a bit, I'm now trying to load rigid bodies from files. Most of my models are in Blender, and with some searching I was able to export them in .bullet format. However, loading the files into Bullet doesn't look like an easy task. I've come across this page that points me to a sample application that loads .bullet files, but it goes on to say that this loader is just a starting point. Is there any open source library out there that will allow me to load rigid bodies from a file? I don't really want to spend that much time trying to create my own loader.


  • What are the cons of using only non-member functions and POD?

    - by Miro
    I'm creating my own game engine. I've read these articles and this question about DOD (data-oriented design), which said not to use member functions and classes, and I've also heard some criticism of that idea. I could write it with member functions or with non-member functions; the code would be similar either way. So what are the benefits and cons of each approach, and as the project grows, does either of them give clearer, more manageable code? With POD (plain old data) and non-member functions I don't have to make struct members public, and I can still use an object id outside of the engine, like OpenGL does with all its stuff, so it's not about encapsulation.
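
    To make the comparison concrete, here is a sketch of the id-plus-free-functions style the question describes, in C# for illustration (all names are made up):

        // Data is a plain struct; behavior lives in free (static) functions
        // that take an id, in the OpenGL style mentioned above.
        public struct BodyData
        {
            public float X, Y;
            public float VelX, VelY;
        }

        public static class Physics
        {
            private static BodyData[] bodies = new BodyData[1024];

            public static void Integrate(int id, float dt)
            {
                bodies[id].X += bodies[id].VelX * dt;
                bodies[id].Y += bodies[id].VelY * dt;
            }
        }

    The member-function version would hang Integrate off BodyData directly; the data layout is identical either way, which is why the two styles look so similar at small scale.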


  • Alternatives to multiple sprite batches for achieving 2D particle system depth

    - by Ergwun
    In my 2D XNA game, I render all my sprites with a single sprite batch using SpriteSortMode.BackToFront and BlendState.AlphaBlend. I'm adding a particle system based on the App Hub particles sample. Since this uses SpriteSortMode.Deferred and BlendState.Additive, I will need two SpriteBatch.Begin/SpriteBatch.End pairs: one for 'regular' sprites and one for particles. In my top-down shooter, if I want explosions to appear under planes but above the ground, then I believe I will need three Begin/End pairs: first to draw everything under the explosions, then the explosions, then everything above the explosions. If I want particle effects at multiple different depths, I'm going to need even more Begin/End pairs. This is all easy to code, but I'm wondering if there is an alternative way to handle this?
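
    For reference, the three-layer version described above looks like this sketch (XNA-style C#; the Draw* helpers are assumed):

        // Ground and anything below explosions: depth-sorted, premultiplied alpha.
        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
        DrawGroundAndLowerSprites(spriteBatch);
        spriteBatch.End();

        // Explosion particles: draw order doesn't matter with additive blending.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
        DrawExplosionParticles(spriteBatch);
        spriteBatch.End();

        // Planes and everything above the explosions.
        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
        DrawUpperSprites(spriteBatch);
        spriteBatch.End();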


  • OpenGL ES, orthographic projection and viewport

    - by DarkDeny
    I want to make a simple 2D game on iOS to familiarize myself with OpenGL ES. I started with a Ray Wenderlich tutorial (How To Create A Simple 2D iPhone Game with OpenGL ES 2.0 and GLKit). The tutorial is quite good, but I'm missing some pieces of the puzzle. Ray creates the orthographic projection using magic numbers like 480 and 320. It is not clear to me why he chose these numbers, and as far as I can see, the sprite is not mapped to the iPad simulator screen one pixel to one pixel. I tried to play with the parameters the ortho matrix is created with, but I cannot figure out the math here. How can I calculate the numbers (bottom, top, left, right, near, far) to pass to the orthographic projection matrix so that a sprite is shown on screen at its original size?
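
    The magic numbers are just the iPhone's screen size in points (480x320 in landscape). For a one-to-one pixel mapping, the orthographic volume should span the drawable's actual pixel size; a sketch in XNA-style C# (GLKit's GLKMatrix4MakeOrtho takes the same six parameters):

        // Maps one world unit to one pixel: x spans [0, width], y spans [0, height].
        // near/far only need to bracket the sprites' z values (sprites sit at z = 0).
        Matrix projection = Matrix.CreateOrthographicOffCenter(
            left: 0f, right: viewportWidth,
            bottom: 0f, top: viewportHeight,
            zNearPlane: -1f, zFarPlane: 1f);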


  • Unity3D - Android pause screen - double click issue

    - by user3666251
    I made a pause script for the game I'm developing for Android. I added the script to a GUITexture I created and placed at the top right of the screen. The issue is this: the player taps the pause button, taps resume, and then wants to pause the game again. When he taps pause the second time, the buttons don't show up unless he taps once more. This is the script:

        #pragma strict

        var paused = false;
        var isButtonVisible : boolean = true;

        function OnMouseDown () {
            this.paused = !this.paused;
            Time.timeScale = 0;
            isButtonVisible = true;
        }

        function OnGUI () {
            if (isButtonVisible) {
                if (this.paused) {
                    if (GUI.Button(Rect(Screen.width/2 - 100, Screen.height/2 + 3, 200, 50), "Restart")) {
                        Application.LoadLevel(Application.loadedLevel);
                        Time.timeScale = 1;
                        isButtonVisible = false;
                    }
                    if (GUI.Button(Rect(Screen.width/2 - 100, Screen.height/2 - 50, 200, 50), "Resume")) {
                        Time.timeScale = 1;
                        isButtonVisible = false;
                    }
                    // Insert the rest of the pause menu logic
                    if (GUI.Button(Rect(Screen.width/2 - 100, Screen.height/2 + 56, 200, 50), "Main Menu")) {
                        Application.LoadLevel("MainMenu");
                        isButtonVisible = false;
                        Time.timeScale = 1;
                    }
                }
            }
        }

    Thank you.
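
    A likely culprit (an observation, not a confirmed diagnosis): OnMouseDown forces Time.timeScale to 0 on every tap, even the tap that should unpause, and the separate isButtonVisible flag drifts out of sync with paused. A minimal sketch of the toggle with a single flag, written as a Unity C# script for illustration (Unity accepts C# alongside UnityScript; the layout values mirror the original):

        using UnityEngine;

        // Pause toggle driven by one 'paused' flag, so the time scale and the
        // menu visibility can no longer disagree with each other.
        public class PauseButton : MonoBehaviour
        {
            private bool paused = false;

            void OnMouseDown()
            {
                paused = !paused;
                Time.timeScale = paused ? 0f : 1f;
            }

            void OnGUI()
            {
                if (!paused) return; // the menu shows exactly while paused

                if (GUI.Button(new Rect(Screen.width / 2 - 100, Screen.height / 2 - 50, 200, 50), "Resume"))
                {
                    paused = false;
                    Time.timeScale = 1f;
                }
                if (GUI.Button(new Rect(Screen.width / 2 - 100, Screen.height / 2 + 3, 200, 50), "Restart"))
                {
                    Time.timeScale = 1f;
                    Application.LoadLevel(Application.loadedLevel);
                }
                if (GUI.Button(new Rect(Screen.width / 2 - 100, Screen.height / 2 + 56, 200, 50), "Main Menu"))
                {
                    Time.timeScale = 1f;
                    Application.LoadLevel("MainMenu");
                }
            }
        }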


  • How do I create a camera?

    - by Morphex
    I am trying to create a generic camera class for a game engine that works for different types of cameras (orbital, 6DoF, FPS), but I have no idea how to go about it. I have read about quaternions and matrices, but I do not understand how to implement them. In particular, it seems you need "Up", "Forward" and "Right" vectors, a quaternion for rotations, and View and Projection matrices. For example, an FPS camera only rotates around the world Y axis and the camera's Right axis; the 6DoF camera always rotates around its own axes; and the orbital camera just sits at a set distance from a fixed target point that it always looks at. The concepts are there; implementing this is not trivial for me. SharpDX already has matrices and quaternions implemented, but I don't know how to use them to create a camera. Can anyone point out what I am missing or what I got wrong? I would really appreciate a tutorial, a piece of code, or just a plain explanation of the concepts.
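
    A minimal sketch of the usual starting point, in XNA-style C# (SharpDX has equivalent Matrix and Quaternion types; the class shape is illustrative, not canonical):

        using Microsoft.Xna.Framework;

        // Minimal FPS-style camera: a position plus yaw/pitch angles, with the
        // view matrix derived from them every frame.
        public class Camera
        {
            public Vector3 Position;
            public float Yaw;   // rotation about the world Y axis
            public float Pitch; // rotation about the camera's right axis

            public Matrix View
            {
                get
                {
                    // A 6DoF camera would instead keep a persistent Quaternion
                    // and multiply incremental rotations into it each frame.
                    Matrix rotation = Matrix.CreateFromYawPitchRoll(Yaw, Pitch, 0f);
                    Vector3 forward = Vector3.Transform(Vector3.Forward, rotation);
                    Vector3 up = Vector3.Transform(Vector3.Up, rotation);
                    return Matrix.CreateLookAt(Position, Position + forward, up);
                }
            }

            public Matrix Projection(float aspectRatio)
            {
                return Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspectRatio, 0.1f, 1000f);
            }
        }

    An orbital camera falls out of the same pieces: store a target and a distance and derive Position = target - forward * distance, so it always looks at the target.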


  • Linking one uniform variable to many shaders

    - by Winged
    Let's say I have 3 programs, and in each of those programs there is a view matrix uniform which should be the same in all of them. Right now, when my camera moves, I need to re-upload the modified matrix to every program separately. Is it possible to create some kind of global uniform that is shared by all programs linked to it, so I could upload the matrix just once? I tried creating a globalUniforms object which looked kinda like this:

        var globalUniforms = {
            program: {},
            // (...)
            vMatrixUniform: null,
            // (...)
            initialize: function() {
                vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');
            }
        };

    so I could link it to the proper programs like this:

        program.vMatrixUniform = globalUniforms.vMatrixUniform;

    and then pass the matrix like this:

        if (camera.isDirty.viewMatrix !== false) {
            camera.isDirty.viewMatrix = false;
            gl.uniformMatrix4fv(globalUniforms.vMatrixUniform, false, camera.viewMatrix.element);
        }

    but unfortunately it throws an error:

        Uncaught exception: gl.INVALID_VALUE was caused by call to: getUniformLocation
        called from line 272, column 2 in () in mysite/js/mesh.js:
        vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');

    Summing up: is there a more efficient way of managing shaders that follows my logic?
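
    For context, plain OpenGL ES 2.0 / WebGL 1.0 has no cross-program uniforms (uniform buffer objects only arrive in later versions), so the usual pattern is to cache each program's location once at link time and upload to every program only when the camera is dirty. A sketch of that pattern in OpenTK-style C# (the WebGL calls are direct analogues; ShaderProgram is an illustrative wrapper):

        using System.Collections.Generic;
        using OpenTK.Graphics.OpenGL;

        // Illustrative wrapper: one GL program plus its cached uniform location.
        class ShaderProgram
        {
            public int Handle;
            public int ViewMatrixLocation;

            public ShaderProgram(int handle)
            {
                Handle = handle;
                // Query once at link time, not every frame.
                ViewMatrixLocation = GL.GetUniformLocation(handle, "uVMatrix");
            }
        }

        static class UniformUploader
        {
            // Called only when the camera is dirty: one upload per program.
            public static void UploadViewMatrix(IEnumerable<ShaderProgram> programs, ref OpenTK.Matrix4 view)
            {
                foreach (var program in programs)
                {
                    GL.UseProgram(program.Handle);
                    GL.UniformMatrix4(program.ViewMatrixLocation, false, ref view);
                }
            }
        }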


  • One True Event Loop

    - by CyberShadow
    Simple programs that collect data from only one system need only one event loop. For example, Windows applications have the message loop, POSIX network programs usually have a select/epoll/etc. loop at their core, and pure SDL games use SDL's event loop. But what if you need to collect events from several subsystems? Such as an SDL game which doesn't use SDL_net for networking. I can think of several solutions:

    1. Polling (ugh).
    2. Put each event loop in its own thread, and either:
       2.1. Send messages to the main thread, which collects and processes the events, or
       2.2. Place the event-processing code of each thread in a critical section, so that the threads can wait for events asynchronously but process them synchronously.
    3. Choose one subsystem for the main event loop, and pass events from other subsystems via that subsystem as custom messages (for example, the Windows message loop and custom messages, or a socket select() loop and passing events via a loopback connection).

    Option 2.1 is more interesting on platforms where message-passing is a well-developed threading primitive (e.g. in the D programming language), but 2.2 looks like the best option to me.
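
    A sketch of option 2.1 in C# (the event types and subsystem threads are illustrative):

        using System.Collections.Concurrent;
        using System.Threading;

        // Each subsystem thread forwards its events into one queue that the
        // main thread drains; processing stays single-threaded.
        abstract class GameEvent { }
        sealed class InputEvent : GameEvent { }
        sealed class NetworkEvent : GameEvent { }

        static class EventHub
        {
            static readonly BlockingCollection<GameEvent> queue = new BlockingCollection<GameEvent>();

            // Called from subsystem threads (SDL poller, socket reader, ...).
            public static void Post(GameEvent e)
            {
                queue.Add(e);
            }

            // The single "true" loop, run on the main thread.
            public static void Run(CancellationToken cancel)
            {
                foreach (GameEvent e in queue.GetConsumingEnumerable(cancel))
                {
                    // No locking needed: only this thread processes events.
                    Dispatch(e);
                }
            }

            static void Dispatch(GameEvent e) { /* game logic */ }
        }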


  • How difficult is it to add vibration feedback to an open source driving game?

    - by Jonathan Day
    Hi, I'm looking to use SuperTuxKart as the basis for a PhD research project. A key requirement for the game is to provide vibration feedback through the controller (obviously dependent on the controller itself). I don't believe the game currently includes this feature, and I'm trying to get a feel for how big a challenge it would be to add. My background is as a J2EE and PHP developer/architect, so I don't know C++ as such, but I'm prepared to give it a crack if there are resources and guides to assist and it's not a herculean task. Alternatively, if you know of any open source games that do include vibration feedback, please feel free to let me know! Preferably the game would be of the style where the player has to navigate a character (or the character's vehicle) around a repeatable course/map. TIA, JD


  • Is 2 lines of push/pop code for each pre-draw state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have 2 lines of code that look identical except for one being push() and the other being pop(). The goal is to eradicate this repetitiveness, and I hoped to do so by creating an interface through which a client can pass the class/struct refs that he wants restored after the rendering calls. Also note that many beginner programmers will be using this, so forcing lambda expressions or other advanced C# features into client code is not a good idea. I attempted to accomplish my goal using Daniel Earwicker's Ptr class:

        public class Ptr<T>
        {
            Func<T> getter;
            Action<T> setter;

            public Ptr(Func<T> g, Action<T> s)
            {
                getter = g;
                setter = s;
            }

            public T Deref
            {
                get { return getter(); }
                set { setter(value); }
            }
        }

    an extension method:

        // doesn't work for structs since this is just syntactic sugar
        public static Ptr<T> GetPtr<T>(this T obj)
        {
            return new Ptr<T>(() => obj, v => obj = v);
        }

    and a Push function:

        // returns a Pop Action for later calling
        public static Action Push<T>(ref T structure) where T : struct
        {
            T pushedValue = structure; // copies the struct data
            Ptr<T> p = structure.GetPtr();
            return new Action(() => { p.Deref = pushedValue; });
        }

    However, this doesn't work, as stated in the code. How might I accomplish my goal? Example of code to be refactored:

        protected override void RenderLocally(GraphicsDevice device)
        {
            if (!(bool)isCompiled) { Compile(); }

            // TODO: make sure state settings don't implicitly delete any buffers/resources
            RasterizerState oldRasterState = device.RasterizerState;
            DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat;
            DepthStencilState oldBufferState = device.DepthStencilState;
            {
                // Rendering code
            }
            device.RasterizerState = oldRasterState;
            device.DepthStencilState = oldBufferState;
            device.PresentationParameters.DepthStencilFormat = oldFormat;
        }
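
    One common C# alternative (a sketch, separate from the Ptr idea): wrap the save/restore pair in a disposable scope so a using block restores state automatically; beginner code then needs one line and no lambdas:

        using System;
        using Microsoft.Xna.Framework.Graphics;

        // Saves the covered device states on construction and restores them on
        // Dispose, replacing each paired save/restore with a single 'using'.
        public sealed class DeviceStateScope : IDisposable
        {
            private readonly GraphicsDevice device;
            private readonly RasterizerState rasterizer;
            private readonly DepthStencilState depthStencil;
            private readonly BlendState blend;

            public DeviceStateScope(GraphicsDevice device)
            {
                this.device = device;
                rasterizer = device.RasterizerState;
                depthStencil = device.DepthStencilState;
                blend = device.BlendState;
            }

            public void Dispose()
            {
                device.RasterizerState = rasterizer;
                device.DepthStencilState = depthStencil;
                device.BlendState = blend;
            }
        }

        // Usage:
        // using (new DeviceStateScope(device))
        // {
        //     // rendering code that changes states freely
        // }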


  • Any significant performance cost to using BlendState.NonPremultiplied?

    - by Donutz
    Normally you'd use BlendState.AlphaBlend, because when you load your textures through the pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.NonPremultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup cost. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?
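
    For reference, the startup cost in question is a per-pixel pass along these lines (a sketch in XNA-style C#, assuming a Texture2D loaded from a PNG at runtime):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class TextureHelper
        {
            // Premultiplies a runtime-loaded texture in place: each color
            // channel is scaled by alpha so it draws correctly with
            // BlendState.AlphaBlend.
            public static void PremultiplyAlpha(Texture2D texture)
            {
                var pixels = new Color[texture.Width * texture.Height];
                texture.GetData(pixels);
                for (int i = 0; i < pixels.Length; i++)
                {
                    pixels[i] = Color.FromNonPremultiplied(
                        pixels[i].R, pixels[i].G, pixels[i].B, pixels[i].A);
                }
                texture.SetData(pixels);
            }
        }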


  • Love2D engine for Lua; what about 3D?

    - by shadowprotocol
    Lua has been really awesome to learn; it's so simple. I really enjoy scripting languages, and I had an equally enjoyable time learning Python. The Love engine, http://love2d.org/, is really awesome, but I'm looking for something that can handle 3D as well. Is there anything that accommodates 3D in Lua? I'm still intrigued by LOVE's particle system anyway, and may just turn my idea into a 2D project with particle lighting :) EDIT: I removed comments about Python - I want this to be a Lua topic. Thanks


  • Particle effect after the bullet

    - by Siddharth
    In my game, I fire a bullet from the gun, and I generate particles behind the bullet so that it looks like a fire trail. My problem is that the positions I sample from the bullet each frame are far apart: the bullet moves fast, so the coordinates I use for particle generation are widely spaced, producing a dot-dot effect instead of a continuous flow of particles behind the bullet. Please provide any guidance for my problem.
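
    A common fix (sketched in XNA-style C#; SpawnParticle stands in for the real particle-system call): spawn particles along the segment the bullet covered this frame rather than only at its current position:

        using Microsoft.Xna.Framework;

        static class TrailEmitter
        {
            // Emits particles along the path covered since the last frame so a
            // fast bullet still leaves an unbroken trail.
            public static void EmitTrail(Vector2 previousPosition, Vector2 currentPosition, float spacing)
            {
                float distance = Vector2.Distance(previousPosition, currentPosition);
                int count = (int)(distance / spacing) + 1;
                for (int i = 0; i <= count; i++)
                {
                    float t = i / (float)count;
                    SpawnParticle(Vector2.Lerp(previousPosition, currentPosition, t));
                }
            }

            static void SpawnParticle(Vector2 position) { /* assumed particle-system hook */ }
        }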


  • Ideas for time-keeping in a web-based RPG?

    - by ashy_32bit
    I'm assigned the task of doing the preliminary research for a web-based MMORPG. My biggest problem here is "web based" vs "MMORPG". I did some research about time-keeping systems, and I'm totally confused as to how something as real-time as an MMORPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative for time-keeping, but can it work in an MMO setting?

    EDIT: Take a battle, for example: player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And then how exactly does the battle go on between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way: you can ask the server and get a response, but the server can't update you unless asked; this is well known and simply explained). So I thought maybe I could change the whole time-keeping model from real-time to something less real-time (towards a turn-based RPG, for example) and somehow work around the whole problem of "interactivity".

    EDIT2: It is not that I don't want to use any server-side technologies. For sure it is not going to work client-side-only, even for the most trivial of multiplayer games, let alone an RPG. So sure, there will be a (probably complex) server-side component to it (the so-called game engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology and how it limits the game mechanics (like how real-time or turn-based the game can be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). An HTTP server cannot inform you when anything of interest happens in the game world unless you refresh the page (as some suggested) or use some bidirectional technology (a totally different animal) like Flash, WebSockets, HTML5, etc. So maybe the question is: is it possible to implement an MMORPG using only HTML5/PHP and no periodic page refreshes? If so, what would be the rules to make it an MMORPG? Can't explain it any clearer. Sorry :D


  • Why does my 3D model not translate the way I expect? [closed]

    - by ChocoMan
    In my first image, my model displays correctly. But when I move the model's position along the Z-axis (forward) I get this, even though the Y value doesn't change. And if I keep going, the model disappears into the ground. Any suggestions as to how I can get the model to translate properly visually? Here is how I'm drawing the model and the terrain in Draw():

        cameraPosition = new Vector3(camX, camY, camZ);

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[mShockwave.Bones.Count];
        mShockwave.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix[] ttransforms = new Matrix[terrain.Bones.Count];
        terrain.CopyAbsoluteBoneTransformsTo(ttransforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in mShockwave.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(modelRotation)
                    * Matrix.CreateTranslation(modelPosition);
                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            mesh.Draw();
        }

        // Terrain test
        foreach (ModelMesh meshT in terrain.Meshes)
        {
            foreach (BasicEffect effect in meshT.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = ttransforms[meshT.ParentBone.Index]
                    * Matrix.CreateRotationY(0)
                    * Matrix.CreateTranslation(terrainPosition);
                effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            prepare3d();
            meshT.Draw();
            DrawText();
        }
        base.Draw(gameTime);

    I'm suspecting there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.
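
    One detail that stands out (an observation plus a sketch, not a confirmed fix): the model and the terrain are drawn with different view matrices, because one look-at targets modelPosition and the other terrainPosition. A scene normally shares a single view matrix per frame:

        // Build the camera matrices once per frame and reuse them for every
        // mesh, so moving the model cannot change how the rest of the scene is
        // viewed. 'cameraTarget' is an assumed fixed look-at point.
        Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);

        // ...then inside both mesh loops:
        // effect.View = view;
        // effect.Projection = projection;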


  • (Abstract) Game engine design

    - by lukeluke
    I am writing a simple 2D game (for mobile platforms) for the first time. From an abstract point of view, I have the main player controlled by the human, the enemies, elements that will interact with the main player, and other living elements controlled by a simple AI (both enemies and non-enemies). The main player will be totally controlled by the player; the other actors will be controlled by AI. So I have a class CActor and a class CActorLogic to start with. I would define a CActor subclass CHero (the main player, controlled with some input device). This class will probably implement some type of listener in order to capture input events. The other actors controlled by the AI will probably be specific subclasses of CActor (a subclass per type, obviously). This seems reasonable. The CActor class should have a reference to a method of CActorLogic, which we will call something like CActorLogic::Advance() or similar. Actors should also have a visual representation, so I would introduce a CActorRepresentation class with a method like Render() that will draw the actor (that is, the right frame of the right animation). Where to change the animation? Well, the actor logic method Advance() should take care of checking collisions and other things. I would like to discuss the design of a game engine (actors, entities, objects, messages, input handling, visualization of object states - that is, rendering, sound output and so on) not from a low-level point of view, but from a high-level point of view, like the one described above. My question is: is there any book or online resource that will help me organize things using an object-oriented approach? Thanks
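
    A bare skeleton of the classes described above, sketched in C# for concreteness (the original names are kept; the bodies are placeholders):

        // Skeleton mirroring the design in the question: data, logic, and
        // presentation split into separate classes.
        public abstract class CActorLogic
        {
            // AI- or input-driven update; collision checks and animation
            // switches would live here.
            public abstract void Advance(CActor actor, float deltaTime);
        }

        public class CActorRepresentation
        {
            public virtual void Render(CActor actor) { /* draw the current animation frame */ }
        }

        public class CActor
        {
            public CActorLogic Logic;                   // decides how the actor behaves
            public CActorRepresentation Representation; // decides how it looks

            public void Update(float deltaTime)
            {
                Logic.Advance(this, deltaTime);
            }
        }

        public class CHero : CActor { /* listens for input events */ }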


  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true:

    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg Is there an equally simple way to determine if those squares are touching?
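
    For the axis-aligned case the four comparisons collapse into a short test (sketched in C#); for the rotated case the same idea generalizes to the separating axis theorem, where you test the edge normals of both squares instead of just the x and y axes:

        // Axis-aligned overlap test: the squares touch unless one lies entirely
        // to one side of the other.
        struct Box
        {
            public float Left, Right, Top, Bottom;
        }

        static bool Overlaps(Box a, Box b)
        {
            return a.Left <= b.Right
                && b.Left <= a.Right
                && a.Top <= b.Bottom   // y grows downward, as in most 2D screen spaces
                && b.Top <= a.Bottom;
        }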


  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = -(zFar + zNear) / (zFar - zNear);
            Result[2][3] = -valType(1);
            Result[3][2] = -(valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There don't seem to be any errors in the code. So I tried to invert the projection. For a view-space point (x, y, z, 1), this matrix gives the clip-space z and w coordinates

        z' = -((zFar + zNear) / (zFar - zNear)) * z - (2 * zFar * zNear) / (zFar - zNear)
        w' = -z

    and dividing z' by w' gives the post-projective depth d (which lies in the depth buffer), so I need to solve for z, which finally gives

        z = (2 * zFar * zNear) / (d * (zFar - zNear) - (zFar + zNear))

    Now, the problem is I don't get the correct position (I have compared the reconstructed position with a rendered position). I then tried using the formula I get by doing the same derivation for this Matrix, and for some reason that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?


  • Using a permutation table for simplex noise without storing it

    - by J. C. Leitão
    Generating simplex noise requires a permutation table for randomisation (e.g. see this question or this example). In some applications, we need to persist the state of the permutation table. This can be done by creating the table, e.g. using

        import random

        def permutation_table(seed):
            table_size = 2**10  # arbitrary for this question
            l = range(1, table_size + 1)
            random.seed(seed)  # ensures the same shuffle for a given seed
            random.shuffle(l)
            return l + l  # see shared link why l + l; is a detail

    and storing it. Can we avoid storing the full table by generating the required elements every time they are needed? Specifically, I currently store the table and index it with table[i] (table is a list). Can I avoid storing it by having a function that computes element i on demand, e.g. get_table_element(seed, i)? I'm aware that cryptography already solved this problem using block cyphers; however, I found it too complex to go deep and implement a block cypher myself. Does anyone know of a simple block cypher implementation suited to this problem?
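
    A sketch of the block-cipher idea specialized to this case (in C#, with illustrative constants): a balanced Feistel network on the 10-bit index is a seed-keyed bijection on 0..1023, regardless of the round function, so element i can be computed on demand without materializing the table:

        static class TablelessPermutation
        {
            // Computes element i of a pseudorandom permutation of [0, 1024)
            // keyed by 'seed', without storing the table. (The Python version
            // above permutes 1..1024; shift by one if that matters.)
            public static uint Element(uint seed, uint i)
            {
                uint left = (i >> 5) & 0x1F; // high 5 bits
                uint right = i & 0x1F;       // low 5 bits
                for (uint round = 0; round < 4; round++)
                {
                    // Any deterministic mixing works as the round function;
                    // the Feistel structure guarantees invertibility.
                    uint f = (right * 2654435761u ^ seed ^ round) & 0x1F;
                    uint newLeft = right;
                    right = left ^ f;
                    left = newLeft;
                }
                return (left << 5) | right;
            }
        }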


  • What makes games responsive to user input?

    - by zaftcoAgeiha
    Many games have been praised for their responsive gameplay, where each user input corresponds to a quick and precise character movement (e.g. Super Meat Boy, Shank, ...). What makes those games responsive, and what prevents other games from achieving the same? How much of it is due to the game framework used to queue mouse/keyboard events and render/update the game, and how much is attributed to better coding?


  • How can I deal with actor translations and other "noise" in third-party motion capture data?

    - by Charles
    I'm working on a game, and I've run into a problem with motion capture data. My team is using 3DS Max 2011 and trying to put free motion capture files on our models. The problem we're having is that it has become extremely hard to find motion capture data that stays in place. We've found some great motion captures of things like walking and jumping, but the actors themselves move within the data, so when we attach these animations to our models and bring them into XNA, the models walk forward even when they should technically be standing still (and then there's also the problem of them resetting at the end of the animation). How can we clean up, at runtime or asset-processing time, the animation in these motion capture files?
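
    One standard cleanup, sketched here with hypothetical keyframe types since the exact animation classes depend on the content pipeline: strip the root bone's horizontal translation from every keyframe so the clip animates in place, and let gameplay code move the model instead:

        using Microsoft.Xna.Framework;

        // Hypothetical keyframe: which bone it animates and its local transform.
        struct Keyframe
        {
            public int Bone;
            public Matrix Transform;
        }

        static class RootMotionCleaner
        {
            // Removes in-place drift: strips X/Z translation from every
            // root-bone key, keeping Y so crouches and jumps still read.
            public static void RemoveRootMotion(Keyframe[] keys, int rootBone)
            {
                for (int i = 0; i < keys.Length; i++)
                {
                    if (keys[i].Bone != rootBone) continue;
                    Matrix m = keys[i].Transform;
                    m.Translation = new Vector3(0f, m.Translation.Y, 0f);
                    keys[i].Transform = m;
                }
            }
        }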


  • Implementing a wheeled character controller

    - by Lazlo
    I'm trying to implement Boxycraft's character controller in XNA (with Farseer), as Bryan Dysmas did (minus the jumping part, yet). My current implementation sometimes seems to glitch between two parallel planes, and it fails to climb 45 degree slopes. (YouTube videos in links; the plane glitch is subtle.) How can I fix it? From the textual description, I seem to be doing it right. Here is my implementation (it seems like a huge wall of text, but it's easy to read; I wish I could simplify and isolate the problem more, but I can't):

        public Body TorsoBody { get; private set; }
        public PolygonShape TorsoShape { get; private set; }
        public Body LegsBody { get; private set; }
        public Shape LegsShape { get; private set; }
        public RevoluteJoint Hips { get; private set; }
        public FixedAngleJoint FixedAngleJoint { get; private set; }
        public AngleJoint AngleJoint { get; private set; }

        ...

        this.TorsoBody = BodyFactory.CreateRectangle(this.World, 1, 1.5f, 1);
        this.TorsoShape = new PolygonShape(1);
        this.TorsoShape.SetAsBox(0.5f, 0.75f);
        this.TorsoBody.CreateFixture(this.TorsoShape);
        this.TorsoBody.IsStatic = false;

        this.LegsBody = BodyFactory.CreateCircle(this.World, 0.5f, 1);
        this.LegsShape = new CircleShape(0.5f, 1);
        this.LegsBody.CreateFixture(this.LegsShape);
        this.LegsBody.Position -= 0.75f * Vector2.UnitY;
        this.LegsBody.IsStatic = false;

        this.Hips = JointFactory.CreateRevoluteJoint(this.TorsoBody, this.LegsBody, Vector2.Zero);
        this.Hips.MotorEnabled = true;
        this.AngleJoint = new AngleJoint(this.TorsoBody, this.LegsBody);
        this.FixedAngleJoint = new FixedAngleJoint(this.TorsoBody);
        this.Hips.MaxMotorTorque = float.PositiveInfinity;
        this.World.AddJoint(this.Hips);
        this.World.AddJoint(this.AngleJoint);
        this.World.AddJoint(this.FixedAngleJoint);

        ...

        public void Move(float m) // -1, 0, +1
        {
            this.Hips.MotorSpeed = 0.5f * m;
        }


  • Inverse projection: question about w coordinate

    - by fayeWilly
    I have to perform, in a shader, an inverse projection from the u/v of a render target. What I do is:

        NDC = 2 * (u, v, depth) - 1
        tmp = (P*V)^-1 * (NDC, 1.0)
        world space = tmp / tmp.w

    This apparently works, but I am confused about the w division there. Why does this work? Shouldn't there be a multiplication by some w somewhere (since in the "forward" pipeline there is the perspective division)? Thank you, Faye
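
    For what it's worth, here is the algebra behind the last step, sketched in the question's own notation. The forward pipeline computes

        clip = (P*V) * (x, y, z, 1)
        NDC  = clip / clip.w

    so applying (P*V)^-1 to (NDC, 1.0) gives

        (P*V)^-1 * (clip / clip.w) = (x, y, z, 1) / clip.w

    i.e. the world-space point scaled by the unknown factor 1/clip.w. That factor survives in the result's own w component (tmp.w = 1/clip.w), so dividing tmp by tmp.w cancels it exactly. The division by w on the way back is the inverse of the perspective division on the way forward; no explicit multiplication is needed because the scale factor is carried along in w.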


  • Managing constant buffers without the FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the sample browser, and I have already checked it. However, some questions arise. In the sample:

        D3DXMATRIXA16 mWorldViewProj;
        D3DXMATRIXA16 mWorld;
        D3DXMATRIXA16 mView;
        D3DXMATRIXA16 mProj;

        mWorld = g_World;
        mView = g_View;
        mProj = g_Projection;
        mWorldViewProj = mWorld * mView * mProj;

        VS_CONSTANT_BUFFER* pConstData;
        g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData );
        pConstData->mWorldViewProj = mWorldViewProj;
        pConstData->fTime = fBoundedTime;
        g_pConstantBuffer10->Unmap();

    they copy their D3DXMATRIX structures to D3DXMATRIXA16. Checked on MSDN: these matrices are 16-byte aligned and optimized for the Intel Pentium 4. So, my first question:

    1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if not, why don't we just use D3DXMATRIXA16 all the time?

    I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times:

        cbuffer cbNeverChanges
        {
            matrix View;
        };

        cbuffer cbChangeOnResize
        {
            matrix Projection;
        };

        cbuffer cbChangesEveryFrame
        {
            matrix World;
            float4 vMeshColor;
        };

    Then how would I set these buffers at different times?

        g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 );

    gives me the possibility to set multiple buffers, but only within one call.

    2) Is that okay even if my constant buffers are updated at different times? And I suppose I have to make sure the constant buffers sit at the same positions in the array as the order in which they appear in the shader?
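
    On question 2, binding each cbuffer to its own register slot lets you update them independently; sketched here in SharpDX-style C# for brevity (the C++ VSSetConstantBuffers calls are analogous, and explicit register(b0..b2) annotations in the HLSL make the slot assignment deterministic):

        using SharpDX.Direct3D11;

        static class ConstantBufferBinding
        {
            // Three independently-updated constant buffers, one per update
            // frequency, each bound to its own slot. Buffer creation and the
            // per-frequency Map/Unmap updates are omitted.
            public static void Bind(DeviceContext context,
                Buffer neverChanges, Buffer changeOnResize, Buffer changesEveryFrame)
            {
                context.VertexShader.SetConstantBuffer(0, neverChanges);
                context.VertexShader.SetConstantBuffer(1, changeOnResize);
                context.VertexShader.SetConstantBuffer(2, changesEveryFrame);
            }
        }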

