Search Results

Search found 19338 results on 774 pages for 'game loop'.

Page 372 of 774

  • Error in Print Function in Bubble Sort MIPS?

    - by m00nbeam360
    Sorry that this is such a long block of code, but do you see any obvious syntax errors in it? I feel like the problem is that the code isn't printing correctly, since the sort and swap routines came from my textbook. Please help if you can!

                 .data
        save:    .word 1,2,4,2,5,6
        size:    .word 6

                 .text
        swap:    sll  $t1, $a1, 2        # shift bits by 2
                 add  $t1, $a1, $t1      # set $t1 address to v[k]
                 lw   $t0, 0($t1)        # load v[k] into t1
                 lw   $t2, 4($t1)        # load v[k+1] into t1
                 sw   $t2, 0($t1)        # swap addresses
                 sw   $t0, 4($t1)        # swap addresses
                 jr   $ra                # return

        sort:    addi $sp, $sp, -20      # make enough room on the stack for five registers
                 sw   $ra, 16($sp)       # save the return address on the stack
                 sw   $s3, 12($sp)       # save $s3 on the stack
                 sw   $s2, 8($sp)        # save $s2 on the stack
                 sw   $s1, 4($sp)        # save $s1 on the stack
                 sw   $s0, 0($sp)        # save $s0 on the stack
                 move $s2, $a0           # copy the parameter $a0 into $s2 (save $a0)
                 move $s3, $a1           # copy the parameter $a1 into $s3 (save $a1)
                 move $s0, $zero         # start of for loop, i = 0
        for1tst: slt  $t0, $s0, $s3      # $t0 = 0 if $s0 >= $s3 (i >= n)
                 beq  $t0, $zero, exit1  # go to exit1 if $s0 >= $s3 (i >= n)
                 addi $s1, $s0, -1       # j = i - 1
        for2tst: slti $t0, $s1, 0        # $t0 = 1 if $s1 < 0 (j < 0)
                 bne  $t0, $zero, exit2  # go to exit2 if $s1 < 0 (j < 0)
                 sll  $t1, $s1, 2        # $t1 = j * 4 (shift by 2 bits)
                 add  $t2, $s2, $t1      # $t2 = v + (j*4)
                 lw   $t3, 0($t2)        # $t3 = v[j]
                 lw   $t4, 4($t2)        # $t4 = v[j+1]
                 slt  $t0, $t4, $t3      # $t0 = 0 if $t4 >= $t3
                 beq  $t0, $zero, exit2  # go to exit2 if $t4 >= $t3
                 move $a0, $s2           # 1st parameter of swap is v (old $a0)
                 move $a1, $s1           # 2nd parameter of swap is j
                 jal  swap               # swap
                 addi $s1, $s1, -1
                 j    for2tst            # jump to test of inner loop
                 j    print
        exit2:   addi $s0, $s0, 1        # i = i + 1
                 j    for1tst            # jump to test of outer loop
        exit1:   lw   $s0, 0($sp)        # restore $s0 from stack
                 lw   $s1, 4($sp)        # restore $s1 from stack
                 lw   $s2, 8($sp)        # restore $s2 from stack
                 lw   $s3, 12($sp)       # restore $s3 from stack
                 lw   $ra, 16($sp)       # restore $ra from stack
                 addi $sp, $sp, 20       # restore stack pointer
                 jr   $ra                # return to calling routine

                 .data
        space:   .asciiz " "             # space to insert between numbers
        head:    .asciiz "The sorted numbers are:\n"

                 .text
        print:   add  $t0, $zero, $a0    # starting address of array
                 add  $t1, $zero, $a1    # initialize loop counter to array size
                 la   $a0, head          # load address of print heading
                 li   $v0, 4             # specify Print String service
                 syscall                 # print heading
        out:     lw   $a0, 0($t0)        # load fibonacci number for syscall
                 li   $v0, 1             # specify Print Integer service
                 syscall                 # print fibonacci number
                 la   $a0, space         # load address of spacer for syscall
                 li   $v0, 4             # specify Print String service
                 syscall                 # output string
                 addi $t0, $t0, 4        # increment address
                 addi $t1, $t1, -1       # decrement loop counter
                 bgtz $t1, out           # repeat if not finished
                 jr   $ra                # return

    Read the article

  • 3ds Max CAT to XNA

    - by user12214
    Has anyone successfully used the CAT bone system in 3ds Max and exported the file into XNA? If so, what was your method? There are apparently a number of ways of doing this, but the ones I've tried have not worked. I used the Panda Exporter, which creates a .X file. This seems to be the most recent way of going about it, but when the file is loaded in XNA there is an error saying something about the bone weights. This happens whether I export with or without CAT bones.

    Read the article

  • Jittery Movement, Uncontrollably Rotating + Front of Sprite?

    - by Vipar
    So I've been looking around to try to figure out how to make my sprite face my mouse. So far the sprite moves to where my mouse is via some vector math. Now I'd like it to rotate and face the mouse as it moves. From what I've found, this calculation seems to be what keeps reappearing:

        Sprite Rotation = Atan2(Direction Vector's Y, Direction Vector's X)

    I express it like so:

        sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X);

    If I just go with the above, the sprite seems to jitter left and right ever so slightly but never rotates out of that position. Seeing as Atan2 returns the rotation in radians, I found another piece of calculation to add to the above, which turns it into degrees:

        sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X) * 180 / PI;

    Now the sprite rotates. The problem is that it spins uncontrollably the closer it gets to the mouse. One of the problems with the above calculation is that it assumes that +y goes up rather than down on the screen. I recorded two videos: the first shows the slightly jittery movement (a lot more visible when not recording), the second shows it with the added rotation (Jittery Movement). So my questions are: How do I fix that weird jittery movement when the sprite stands still? Some have suggested a "snap" where I set the position of the sprite directly to the mouse position when it's really close, but no matter what I do the snapping is noticeable. How do I make the sprite stop spinning uncontrollably? And is it possible to simply define the front of the sprite and use that to make it "face" the right way?
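    A minimal sketch of one way to handle both issues, assuming an XNA/MonoGame-style setup (screen Y grows downward, SpriteBatch rotation is in radians and increases clockwise, and the sprite artwork points to the right); the method and parameter names, including the dead-zone size, are illustrative rather than taken from the poster's project:

        using System;
        using Microsoft.Xna.Framework;

        static class FaceMouse
        {
            // Returns the new rotation, or the old one if the cursor is too close to the
            // sprite (a small dead zone avoids the sign-flipping jitter described above).
            public static float GetRotation(Vector2 spritePos, Vector2 mousePos,
                                            float currentRotation, float deadZone = 2f)
            {
                Vector2 toMouse = mousePos - spritePos;
                if (toMouse.LengthSquared() < deadZone * deadZone)
                    return currentRotation;              // keep facing the last direction

                // Atan2 handles all four quadrants; keep the result in radians for SpriteBatch.
                return (float)Math.Atan2(toMouse.Y, toMouse.X);
                // If the artwork points "up" instead of "right", add MathHelper.PiOver2 here.
            }
        }

    The degree conversion is what causes the spinning: SpriteBatch expects radians, so a value multiplied by 180/PI wraps around many times per revolution.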

    Read the article

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (it may be .png, .jpg, ...) and I want it converted in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it gets converted to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan 'E' by now; this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.

    Read the article

  • XNA Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when dragging them. I am comparing my application to the standard TouchGestureSample sample from MSDN. For some reason, in my application the gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and I am running both in the standard Windows Phone emulator. I am setting my breakpoint immediately after this line of code in a simple Update method:

        GestureSample gesture = TouchPanel.ReadGesture();

    Typical values in my app:

        Delta = {X:-13.56522 Y:4.166667}  Position = {X:184.6956 Y:417.7083}

    Typical values in the sample app:

        Delta = {X:7 Y:16}  Position = {X:497 Y:244}

    Has anyone seen this issue? Does anyone have any suggestions? Thank you.
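    A hedged aside (the names below are illustrative, not the poster's code): one common cause of objects travelling roughly twice the drag distance is applying both Position and Delta, or reading gestures from more than one place. A minimal sketch of the usual pattern, which drains every pending gesture exactly once per Update and applies only Delta:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input.Touch;

        public class DragExample
        {
            private Vector2 spritePosition;  // hypothetical dragged object

            // Assumes TouchPanel.EnabledGestures includes GestureType.FreeDrag (set at startup).
            public void Update(GameTime gameTime)
            {
                // Consume every gesture queued since the last frame, exactly once.
                while (TouchPanel.IsGestureAvailable)
                {
                    GestureSample gesture = TouchPanel.ReadGesture();
                    if (gesture.GestureType == GestureType.FreeDrag)
                        spritePosition += gesture.Delta;  // Delta is per-gesture; no extra scaling
                }
            }
        }

    Fractional values by themselves are usually harmless; they typically show up when the emulator or device scales touch input to a resolution different from the back buffer.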

    Read the article

  • Javascript A* path finding

    - by Veyha
    I am trying to learn A* path finding. I am using this library: https://github.com/qiao/PathFinding.js. But there is one thing I don't understand how to do. To find a path from player.x/player.y (player.x and player.y are both 0) to 10/10, I use this code:

        var path = finder.findPath(player.x, player.y, 10, 10, grid);

    This gives an array of where I need to move, but how do I apply this array to my player.x and player.y? The path structure looks like this:

        path = [[0, 0], [1, 0], [1, 1], ..., [10, 10]]

    Read the article

  • Turn Based Event Algorithm

    - by GamersIncoming
    I'm currently working on a small roguelike in XNA, which sees the player in a randomly generated series of dungeons fending off creeps, as you might expect. As with most roguelikes, the player makes a move, and then each of the creeps currently active on screen makes a move in turn, until all creeps have updated and it returns to the player's go. On paper, the simple algorithm is straightforward:

        Player takes turn
        Turn number increments
        For each active creep, update position
        Once all active creeps have updated, allow player to take next turn

    However, when it comes to actually writing this in more detail, the concept becomes a bit more tricky for me. So my question is this: what is the best way to handle events taking turns to trigger, where the completion of each event triggers the next, when dealing with a large number of creeps (probably stored as an array of enemy objects)? And is there an easier way to create some kind of engine that just takes all objects that need updating and chains them together so their updates follow suit? I'm not asking for code, just algorithms and theory in the direction of objects triggering updates one after the other, in a turn-based manner. Thanks in advance.

    Edited: Here's the code I currently have, which is horrible :/

        if (player.getTurnOver() && updateWait == 0)
        {
            if (creep[creepToUpdate].getActive())
            {
                creep[creepToUpdate].moveObject(player, map1);
                updateWait = 10;
            }
            if (creepToUpdate < creep.Length - 1)
            {
                creepToUpdate++;
            }
            else
            {
                creepToUpdate = 0;
                player.setTurnOver(false);
            }
        }
        if (updateWait > 0)
        {
            updateWait--;
        }
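    Staying at the algorithm level, a hedged sketch may still make the idea concrete (all names are hypothetical, not the poster's code): keep the actors in a queue and let only the head of the queue act each frame. When an actor reports that its turn is finished, dequeue it; when the queue empties, rebuild it for the next round. The completion of each turn is what releases the next one, so no per-creep counters or wait timers are needed:

        using System.Collections.Generic;

        public interface IActor
        {
            bool IsActive { get; }
            // Returns true when this actor's turn is finished (move applied, animation done, ...).
            bool TakeTurn();
        }

        public class TurnScheduler
        {
            private readonly Queue<IActor> turnQueue = new Queue<IActor>();

            public void StartRound(IActor player, IEnumerable<IActor> creeps)
            {
                turnQueue.Clear();
                turnQueue.Enqueue(player);
                foreach (IActor creep in creeps)
                    if (creep.IsActive)
                        turnQueue.Enqueue(creep);
            }

            // Call once per frame; only the actor at the head of the queue gets to act.
            // Returns true when the round is over and StartRound should be called again.
            public bool Update()
            {
                if (turnQueue.Count == 0)
                    return true;

                if (turnQueue.Peek().TakeTurn())
                    turnQueue.Dequeue();    // finished: the next actor acts on the next frame

                return turnQueue.Count == 0;
            }
        }

    The same scheduler works whether a turn resolves instantly or takes several frames of animation, because TakeTurn() simply keeps returning false until the actor is done.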

    Read the article

  • Basic 3D Collision detection in XNA 4.0

    - by NDraskovic
    I have a problem detecting a collision between 2 models using BoundingSpheres in XNA 4.0. The code I'm using is very simple:

        private bool IsCollision(Model model1, Matrix world1, Model model2, Matrix world2)
        {
            for (int meshIndex1 = 0; meshIndex1 < model1.Meshes.Count; meshIndex1++)
            {
                BoundingSphere sphere1 = model1.Meshes[meshIndex1].BoundingSphere;
                sphere1 = sphere1.Transform(world1);

                for (int meshIndex2 = 0; meshIndex2 < model2.Meshes.Count; meshIndex2++)
                {
                    BoundingSphere sphere2 = model2.Meshes[meshIndex2].BoundingSphere;
                    sphere2 = sphere2.Transform(world2);

                    if (sphere1.Intersects(sphere2))
                        return true;
                }
            }
            return false;
        }

    The problem is that when I call this method from the Update method, the program behaves as if it always returned true (which of course is not correct). The calling code is very simple (although this is only test code):

        if (IsCollision(model1, worldModel1, model2, worldModel2))
        {
            Window.Title = "Intersects";
        }

    What is causing this?
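    A hedged sketch of one thing worth checking (not a guaranteed fix): ModelMesh.BoundingSphere is stored in the mesh's local bone space, so the bone's absolute transform usually has to be folded into the world matrix before the spheres are compared; otherwise every part's sphere sits around the model origin and can easily overlap. It also helps to print the transformed Center and Radius once, since an unexpectedly huge radius (e.g. from model scale) would also make the test return true constantly. Names mirror the question's code:

        // Sketch: world-space bounding sphere for one mesh of a model.
        private static BoundingSphere GetWorldSphere(Model model, int meshIndex, Matrix world)
        {
            Matrix[] boneTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(boneTransforms);

            ModelMesh mesh = model.Meshes[meshIndex];
            return mesh.BoundingSphere.Transform(boneTransforms[mesh.ParentBone.Index] * world);
        }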

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for a few days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be as trivial as "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques I have seen so far for controlling the FPS use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when the rendering started, but then rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.

    Read the article

  • Indexed Drawing in OpenGL not working

    - by user2050846
    I am trying to render 2 types of primitives: points (a point cloud) and triangles (a mesh). I am rendering the points simply, without any index arrays, and they are rendered fine. To render the meshes I am using indexed drawing, with the face list array holding the indices of the vertices to be rendered as triangles. Vertices and their corresponding vertex colors are stored in their corresponding buffers. But the indexed drawing command does not draw anything. The code is as follows.

    Main display function:

        void display() {
            simple->enable();
            simple->bindUniform("MV", modelview);
            simple->bindUniform("P", projection);

            // rendering Point Cloud
            glBindVertexArray(vao);

            // Vertex buffer Point Cloud
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Color Buffer point Cloud
            glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Render Colored Point Cloud
            //glDrawArrays(GL_POINTS, 0, model->vertexCount);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            // ---------------- END ----------------//

            //// Floor Rendering
            glBindBuffer(GL_ARRAY_BUFFER, fl);
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *)48);
            glDrawArrays(GL_QUADS, 0, 4);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            // ---------------- END ----------------//

            // Rendering the Meshes
            //////////// PART OF CODE THAT IS NOT DRAWING ANYTHING ////////////////////
            glBindVertexArray(vid);
            for (int i = 0; i < NUM_MESHES; i++) {
                glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
                glEnableVertexAttribArray(0);
                glEnableVertexAttribArray(1);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
                glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0,
                                      (void *)(meshes[i]->vertexCount * sizeof(glm::vec3)));
                //glDrawArrays(GL_TRIANGLES, 0, meshes[i]->vertexCount);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
                //cout << gluErrorString(glGetError());
                glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_FLOAT, (void *)0);
                glDisableVertexAttribArray(0);
                glDisableVertexAttribArray(1);
            }

            glUseProgram(0);
            glutSwapBuffers();
            glutPostRedisplay();
        }

    Point cloud buffer allocation and initialization:

        void initGLPointCloud() {
            glGenBuffers(1, &vertexbuffer);
            glGenBuffers(1, &colorbuffer);
            glGenBuffers(1, &fl);

            // Populates the position buffer
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3),
                         &model->positions[0], GL_STATIC_DRAW);

            // Populates the color buffer
            glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
            glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3),
                         &model->colors[0], GL_STATIC_DRAW);

            model->FreeMemory(); // To free the no longer needed memory, as the data has already
                                 // been copied to the graphics card and won't be used again.
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    Mesh buffer initialization:

        void initGLMeshes(int i) {
            glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
            glBufferData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3) * 2,
                         NULL, GL_STATIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0, meshes[i]->vertexCount * sizeof(glm::vec3),
                            &meshes[i]->positions[0]);
            glBufferSubData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3),
                            meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->colors[0]);

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * sizeof(glm::vec3),
                         &meshes[i]->faces[0], GL_STATIC_DRAW);

            meshes[i]->FreeMemory();
            //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        }

    Initializes the rendering, loads and creates the shader, and calls the mesh and point cloud initializers:

        void initRender() {
            simple = new GLSLShader("shaders/simple.vert", "shaders/simple.frag");

            // Point Cloud
            // Sets up VAO
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);
            initGLPointCloud();

            // floorData
            glBindBuffer(GL_ARRAY_BUFFER, fl);
            glBufferData(GL_ARRAY_BUFFER, sizeof(floorData), &floorData[0], GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);

            // Meshes
            for (int i = 0; i < NUM_MESHES; i++) {
                if (i == 0) // set up the new vertex array state for indexed drawing
                {
                    glGenVertexArrays(1, &vid);
                    glBindVertexArray(vid);
                    glGenBuffers(NUM_MESHES, mVertex);
                    glGenBuffers(NUM_MESHES, mColor);
                    glGenBuffers(NUM_MESHES, mFace);
                }
                initGLMeshes(i);
            }

            glEnable(GL_DEPTH_TEST);
        }

    Any help would be much appreciated. I have been breaking my head on this problem for 3 days and it is still unsolved.

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth, uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least until the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • Kinect joint coordinates and XNA animation

    - by Sweta Dwivedi
    I have written a program to record the x, y, z coordinates of the hand joint, and I want to animate my 2D or 3D models according to these coordinates. However, the output x, y, z coordinates only fluctuate between roughly -0 and 1, never more than that, so I assume I will need to multiply them back by the screen width and height. Even then, it still doesn't seem to animate according to the original x, y, z points. Are there any transformations I might be missing?

        while ((line = r.ReadLine()) != null)
        {
            string[] temp = line.Split(',');
            int x = (int)(float.Parse(temp[0]) * maxWidth);
            int y = (int)(float.Parse(temp[1]) * maxHeight);
        }
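    A hedged sketch of the kind of mapping that usually works for simple 2D animation. The -1..+1 range below is an assumption based on the values described above, and the Kinect SDK's own coordinate mapper is the proper tool for accurate results; this only illustrates the idea:

        using Microsoft.Xna.Framework;

        static class JointToScreen
        {
            // handX/handY are the recorded joint coordinates from the question.
            public static Vector2 Map(float handX, float handY, float screenWidth, float screenHeight)
            {
                // Map roughly -1..+1 to 0..1, flipping Y because screen Y grows downward,
                // then scale to the screen size.
                float nx = MathHelper.Clamp((handX + 1f) * 0.5f, 0f, 1f);
                float ny = MathHelper.Clamp(1f - (handY + 1f) * 0.5f, 0f, 1f);
                return new Vector2(nx * screenWidth, ny * screenHeight);
            }
        }

    Multiplying the raw values directly by the screen size would send the negative half of the range off screen and squeeze the rest, which would explain the animation not following the recorded points.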

    Read the article

  • Interaction using Kinect in XNA

    - by Sweta Dwivedi
    So I have written a program to play a sound file whenever my RightHand joint touches the 3D model. It goes like this: even though the code somehow works, it is not very accurate. For example, it plays the sound when my hand is slightly under my 3D object, not exactly on my 3D object. How do I make it more accurate? Here is the code (handX and handY are the values coming from the skeleton data, RightHand.Joint.X etc.); also, this calculation doesn't work with animated sprites, which I need it to do:

        foreach (_3DModel s in Solar)
        {
            float x = (float)Math.Floor(((handX * 0.5f) + 0.5f) * (resolution.X));
            float y = (float)Math.Floor(((handY * -0.5f) + 0.5f) * (resolution.Y));
            float z = (float)Math.Floor((handZ) / 4 * 20000);

            if (Math.Sqrt(Math.Pow(x - s.modelPosition.X, 2) + Math.Pow(y - s.modelPosition.Y, 2)) < 15)
            {
                //Exit();
                PlaySound("hyperspace_activate");
                Console.WriteLine("1" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " +
                                  "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
            }
            else
            {
                Console.WriteLine("2" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " +
                                  "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
            }
        }
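    A hedged observation with a sketch (names are hypothetical, not the poster's project): the hand is converted to screen-space pixels, while s.modelPosition looks like a world-space position, so the two are compared in different spaces and the hit area ends up offset. Projecting the model's position into screen space first, and then comparing distances in pixels, usually lines the two up:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class HandHitTest
        {
            public static bool HandOverModel(Viewport viewport, Matrix view, Matrix projection,
                                             Vector3 modelWorldPos, Vector2 handScreenPos,
                                             float radiusInPixels = 15f)
            {
                // World -> screen; X/Y of the result are pixels, Z is depth.
                Vector3 screen = viewport.Project(modelWorldPos, projection, view, Matrix.Identity);
                return Vector2.Distance(new Vector2(screen.X, screen.Y), handScreenPos) < radiusInPixels;
            }
        }

    The same projection works for an animated model as long as its current world position is passed in each frame.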

    Read the article

  • Box2D relations

    - by Valentino Ru
    As far as I know, the unit in Box2D is meters. When I use Box2D in Processing with JBox2D, I set the "world size" to the window size specified in setup(). Now I'm wondering whether there is any function that scales down the world. For example, how can I simulate the throw of a tennis ball within a room without using a window of only 5 x 5 pixels? Additionally, is there any good documentation, like the Java API?

    Read the article

  • How does Minecraft render its sunset and sky?

    - by Nick
    In Minecraft, the sunset looks really beautiful and I've always wanted to know how they do it. Do they use several skyboxes rendered over each other? That is, one for the sky (which can turn dark and light depending on the time of day), one for the sun and moon, and one for the orange horizon effect? I was hoping someone could enlighten me. I wish I could enter a wireframe mode or something like that, but as far as I know that is not possible.

    Read the article

  • Calculate vector direction

    - by Starkers
    Is the direction angle always measured from the positive x axis? Does a vector in the (+, +) quadrant always have a direction between 0 and 90 degrees, in (-, +) between 90 and 180, in (-, -) between 180 and 270, and in (+, -) between 270 and 360? Also, how should we calculate the direction using tan? Would that mean nested if statements to find out what quadrant we're in, and then applying the appropriate "work-arounds"? E.g. if we were in the (-, +) quadrant (like in the diagram), would we find the angle from the positive x axis as 90 + tan^-1(y/x), the "90 +" only being used because we're in the (-, +) quadrant? (That's just a quick solution and may be off; I just want to know whether we use nested if statements to get the angle from the positive x axis.) Finally, should we give the direction in degrees or radians?
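    A hedged sketch of the usual approach (C#-flavoured, purely illustrative): Math.Atan2 already resolves the quadrant, so no nested if statements are needed; one wrap-around brings the result into [0, 360), measured counter-clockwise from the positive x axis:

        using System;

        static class VectorDirection
        {
            public static double DirectionDegrees(double x, double y)
            {
                double radians = Math.Atan2(y, x);       // range (-pi, pi], quadrant-aware
                double degrees = radians * 180.0 / Math.PI;
                return (degrees + 360.0) % 360.0;        // wrap negatives into [0, 360)
            }
        }

    For example, DirectionDegrees(-1, 1) is 135, the same as 180 + tan^-1(1/-1) for the (-, +) quadrant. Whether to work in degrees or radians depends on what the rest of the code expects; most math libraries take radians.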

    Read the article

  • Unable to find good parameters for behavior of a puck in Farseer

    - by Krumelur
    EDIT: I have tried all kinds of variations now. The last one was to adjust the linear velocity in each step: newVel = oldVel * 0.9f. All of these, including your proposals, kind of work; however, in the end, if the velocity is low enough, the puck sticks to the wall and slides either vertically or horizontally. I can't get rid of this behavior. I want it to bounce, no matter how slow it is. Restitution is 1 for all objects, friction is 0 for all, and there is no gravity. What am I possibly doing wrong?

    In my battle to learn and understand Farseer, I'm trying to set up a simple air-hockey-like table with a puck on it. The table's border body is made up from:

        Vertices aBorders = new Vertices( 4 );
        aBorders.Add( new Vector2( -fHalfWidth, fHalfHeight ) );
        aBorders.Add( new Vector2( fHalfWidth, fHalfHeight ) );
        aBorders.Add( new Vector2( fHalfWidth, -fHalfHeight ) );
        aBorders.Add( new Vector2( -fHalfWidth, -fHalfHeight ) );
        FixtureFactory.AttachLoopShape( aBorders, this );
        this.CollisionCategories = Category.All;
        this.CollidesWith = Category.All;
        this.Position = Vector2.Zero;
        this.Restitution = 1f;
        this.Friction = 0f;

    The puck body is defined with:

        FixtureFactory.AttachCircle( DIAMETER_PHYSIC_UNITS / 2f, 0.5f, this );
        this.Restitution = 0.1f;
        this.Friction = 0.5f;
        this.BodyType = FarseerPhysics.Dynamics.BodyType.Dynamic;
        this.LinearDamping = 0.5f;
        this.Mass = 0.2f;

    I'm applying a linear impulse to the puck:

        this.oPuck.ApplyLinearImpulse( new Vector2( 1f, 1f ) );

    The problem is that the puck and the walls appear to be sticky. This means that the puck's velocity drops to zero too quickly below a certain velocity. The puck gets reflected by the walls a couple of times and then just sticks to the left wall and continues sliding down the left wall. This looks very unrealistic. What I'm really looking for is that a puck-wall collision slows down the puck only a tiny little bit. After tweaking all values left and right, I am wondering if I'm doing something wrong. Maybe some expert can comment on the parameters?
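    A hedged sketch (not a guaranteed fix) of the body settings that usually keep a Farseer puck bouncing. Here "puck" stands for the dynamic circle body from the question, and the wall keeps Restitution = 1 and Friction = 0. The usual suspects for a puck that sticks at low speed are damping (it bleeds velocity every step, so slow bounces die against the surface), friction on either body, the manual newVel = oldVel * 0.9f scaling, and the body being put to sleep by the solver:

        using FarseerPhysics.Dynamics;

        static class PuckSetup
        {
            public static void Configure(Body puck)
            {
                puck.BodyType = BodyType.Dynamic;
                puck.Restitution = 1f;        // fully elastic bounce off the walls
                puck.Friction = 0f;           // no tangential loss along a wall
                puck.LinearDamping = 0f;      // damping is what kills slow bounces over time
                puck.SleepingAllowed = false; // a sleeping body stops reacting until woken
                puck.IsBullet = true;         // optional: continuous collision for fast shots
            }
        }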

    Read the article

  • The right way to add images to Monogame/Windows

    - by ashes999
    I'm starting out with MonoGame. For now, I'm only targeting Windows (desktop -- not Windows 8 specifically). I've used a couple of XNA products in the past (raw XNA, FlatRedBall, SilverSprite), so I may have a misunderstanding about how I should add images to my content. How do I add images to my project? Currently, I created a new MonoGame project, added a folder called "Content," and added images under there; the only caveat is that I need to set the Copy to Output Directory action to one of the Copy options. It seems strange, because my "raw" XNA project just last week had a Content project in it (an XNA Framework Content Pipeline project, according to VS2010), which compiled my images to XNB (I think). It seems like MonoGame doesn't use the same content pipeline, but I'm not sure.

    Edit: My question is not "how do I get the XNA content pipeline to work with MonoGame?" My question is "why would I want to use the XNA content pipeline in MonoGame?", because there are (at least) two solutions that I can see today:

    1. Add the images to the MonoGame project and set the Copy to Output Directory option to copy.
    2. Add an XNA content pipeline project, add my images to that instead, and reference it from my MonoGame project.

    Which solution should I use, and why? I currently have a working version using the first option.
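    A hedged sketch of the two loading paths described above (file names and paths are examples, not MonoGame requirements):

        using System.IO;
        using Microsoft.Xna.Framework.Graphics;

        static class TextureLoading
        {
            // Option 1: raw image copied to the output directory, decoded at run time.
            public static Texture2D LoadRawPng(GraphicsDevice device, string path = "Content/player.png")
            {
                using (FileStream stream = File.OpenRead(path))
                    return Texture2D.FromStream(device, stream);
            }

            // Option 2: pre-built .xnb from a content pipeline project, loaded by asset name:
            //     Texture2D tex = Content.Load<Texture2D>("player");   // no file extension
        }

    The pipeline route pre-processes images (format conversion, premultiplied alpha, compression), which is the main reason to prefer it once a project grows; Texture2D.FromStream decodes on the fly and, in XNA at least, does not premultiply alpha for you.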

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking.

    The framebuffer objects:

        FBO 1 has a color attachment and a depth attachment.
        FBO 2 has a color attachment.

    The render passes:

        1. Render g-buffer: normals and depth (used by the outline & DoF blur shaders); output to FBO no. 1.
        2. Render solid geometry, bold outlines (as in a toon shader), and fog; output to FBO no. 2. (Can all render via a single fragment shader -- I think.)
        3. (optional) DoF blur the scene; output to the default framebuffer, OR ELSE render FBO 2 directly to the default framebuffer.
        4. (optional) Mesh wireframes; composite over what's already in the default framebuffer.

    Does this order seem viable? Any obvious mistakes?

    Read the article

  • Periodic updates of an object in Unity

    - by Blue
    I'm trying to make a collider appear every 1 second, but I can't get the code right. I tried enabling the collider in the Update function and putting a yield in to make it update every second or so, but it's not working (it gives me an error: "Update() cannot be a coroutine."). How would I fix this? Would I need a timer system to toggle the collider?

        var waitTime : float = 1;
        var trigger : boolean = false;

        function Update () {
            if (!trigger) {
                collider.enabled = false;
                yield WaitForSeconds(waitTime);
            }
            if (trigger) {
                collider.enabled = true;
                yield WaitForSeconds(waitTime);
            }
        }
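    A hedged sketch in Unity C# (the question uses UnityScript): Update() cannot be a coroutine, but a coroutine started from Start() can loop and toggle the collider on a timer; waitTime mirrors the variable in the question and the component name is illustrative:

        using System.Collections;
        using UnityEngine;

        public class ColliderToggler : MonoBehaviour
        {
            public float waitTime = 1f;

            // Declaring Start() as IEnumerator makes Unity run it as a coroutine.
            IEnumerator Start()
            {
                Collider col = GetComponent<Collider>();
                while (true)
                {
                    col.enabled = !col.enabled;                  // flip the collider...
                    yield return new WaitForSeconds(waitTime);   // ...then wait before flipping again
                }
            }
        }

    InvokeRepeating with a small toggle method would work just as well; the point is to move the waiting out of Update().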

    Read the article

  • Hashing 3D position into 2D position

    - by notabene
    I am doing volumetric raycasting and am currently working on depth jitter. I have a 3D position on the ray and want to sample a 2D noise texture to jitter the depth. The function for converting (or hashing) the 3D position to 2D has to produce absolutely different numbers for small changes (especially because I am sampling in texture space, so sample values differ very, very little) and has to be "shader-wise" -- so forget about branches, cycles, etc. I'm looking forward to your nice and fast solutions.

    Read the article

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 for rendering some simple things (a movie) as the backmost layer, then on top of that some text messages, and now I wanted to add some buttons to that. Before adding the buttons everything seemed to work fine, and I was using an ID3DXSprite for the text as well (ID3DXFont). Now I am loading some graphics for the buttons, but they seem to be scaled to something like 1.2 times their original size. In my test window I centered the graphic, but being too big it just doesn't fit well; for example, the client area is 640x360 and the graphic is 440 wide, so I expect 100 pixels on the left and right. The left side is fine (I took a screenshot and "counted" the pixels in Photoshop), but on the right there are only about 20 pixels. My rendering code is very simple (I am omitting error checks, et cetera, for brevity):

        // initially the viewport was set to the width/height of the client area

        // clear device
        m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER,
                            D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 );

        // begin scene
        m_d3dDevice->BeginScene();

        // render movie surface (just two triangles to which the movie is rendered)
        m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, false);
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR );  // bilinear filtering
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR );  // bilinear filtering
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );  // ignored
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
        m_d3dDevice->SetTexture( 0, m_movieTexture );
        m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex));
        m_d3dDevice->SetFVF(Vertex::FVF_Flags);
        m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

        // render sprites
        m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);

        // text drop shadow
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP,
                          m_playerFontColorDropShadow );

        // text
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage );

        // control object (draws a few objects like this)
        m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF );

        m_sprite->End();

        // end scene
        m_d3dDevice->EndScene();

    What did I forget to do here? Except for the control objects (play button, pause button, etc., which are placed on a "panel" about 440 pixels wide), everything seems fine: the objects are positioned where I expect them, but they are just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, reacting to a lost device, etc., works fine too). For experimenting, I added some code to take an identity matrix and scale it down on the x/y axis to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position), but I don't know why I would need to scale anything. My rendering code is so simple; I just wanted to draw my 2D objects at 1:1 the size they came from the file. I am really very inexperienced in D3D, so the answer might be very simple...

    Read the article

  • lwjgl custom icon

    - by melchor629
    I have a little problem with the icon in LWJGL: it doesn't work. I googled about it, but I haven't found anything that works for me yet. This is my code for now:

        PNGDecoder imageDecoder = new PNGDecoder(new FileInputStream("res/images/Icon.png"));
        ByteBuffer imageData = BufferUtils.createByteBuffer(4 * imageDecoder.getWidth() * imageDecoder.getHeight());
        imageDecoder.decode(imageData, imageDecoder.getWidth() * 4, PNGDecoder.Format.RGBA);
        imageData.flip();
        System.err.println(Display.setIcon(new ByteBuffer[]{imageData}) == 0
                ? "No se ha creado el icono"    // "the icon was not created"
                : "Se ha creado el icono");     // "the icon was created"

    The PNG file is 128x128 px with transparency. PNGDecoder is from the matthiasmann utility package (de.matthiasmann.twl.utils). I'm using Mac OS 10.8.4 with LWJGL 2.9.0. Thanks :)

    Read the article

  • How to set orthgraphic matrix for a 2d camera with zooming?

    - by MahanGM
    I'm using ID3DXSprite to draw my sprites and haven't set any kind of camera projection matrix. How do I set up an orthographic projection matrix for the camera in DirectX so that it supports zoom functionality?

        D3DXMATRIX orthographicMatrix;
        D3DXMATRIX identityMatrix;

        D3DXMatrixOrthoLH(&orthographicMatrix, nScreenWidth, nScreenHeight, 0.0f, 1.0f);
        D3DXMatrixIdentity(&identityMatrix);

        device->SetTransform(D3DTS_PROJECTION, &orthographicMatrix);
        device->SetTransform(D3DTS_WORLD, &identityMatrix);
        device->SetTransform(D3DTS_VIEW, &identityMatrix);

    This code is the initial setup. Then, for zooming, I multiply nScreenWidth and nScreenHeight by the zoom factor.

    Read the article

  • Strange Flash AS3 xml Socket behavior

    - by Rnd_d
    I have a problem which I can't understand. To understand it I wrote a socket client in AS3 and a server in Python/Twisted; you can see the code of both applications below. Let's launch two clients at the same time, arrange them so that you can see both windows, and press the connection button in both windows. Then press and hold any key.

    What I'm expecting: the client with the pressed key sends the message "some data" to the server, then the server sends this message to all the clients (including the original sender). Each client then moves the 'connectButton' to the right and prints a message to the log with the time in the format "min:secs:milliseconds".

    What is going wrong: the motion is smooth in the client that sends the message, but in all other clients the motion is jerky. This happens because messages arrive at those clients later than at the original sending client. And if we have three clients (let's name them A, B, C) and we send a message from A, the sending time logged by B and C will be the same. Why do the other clients receive these messages later than the original sender?

    By the way, on Ubuntu 10.04/Chrome all the motion is smooth. Two clients were launched in separate Chrome instances (Windows screenshot; I can't post a Linux screenshot, as I need more than 10 reputation to post more hyperlinks).

    Listing of the log, four clients simultaneously:

        [16:29:33.280858] 62.140.224.1 >> some data
        [16:29:33.280912] 87.249.9.98 << some data
        [16:29:33.280970] 87.249.9.98 << some data
        [16:29:33.281025] 87.249.9.98 << some data
        [16:29:33.281079] 62.140.224.1 << some data
        [16:29:33.323267] 62.140.224.1 >> some data
        [16:29:33.323326] 87.249.9.98 << some data
        [16:29:33.323386] 87.249.9.98 << some data
        [16:29:33.323440] 87.249.9.98 << some data
        [16:29:33.323493] 62.140.224.1 << some data
        [16:29:34.123435] 62.140.224.1 >> some data
        [16:29:34.123525] 87.249.9.98 << some data
        [16:29:34.123593] 87.249.9.98 << some data
        [16:29:34.123648] 87.249.9.98 << some data
        [16:29:34.123702] 62.140.224.1 << some data

    AS3 client code:

        package {
            import adobe.utils.CustomActions;
            import flash.display.Sprite;
            import flash.events.DataEvent;
            import flash.events.Event;
            import flash.events.IOErrorEvent;
            import flash.events.KeyboardEvent;
            import flash.events.MouseEvent;
            import flash.events.SecurityErrorEvent;
            import flash.net.XMLSocket;
            import flash.system.Security;
            import flash.text.TextField;

            public class Main extends Sprite {
                private var socket        :XMLSocket;
                private var textField     :TextField = new TextField;
                private var connectButton :TextField = new TextField;

                public function Main():void {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(event:Event = null):void {
                    socket = new XMLSocket();
                    socket.addEventListener(Event.CONNECT, connectHandler);
                    socket.addEventListener(DataEvent.DATA, dataHandler);

                    stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);

                    addChild(textField);
                    textField.y = 50;
                    textField.width = 780;
                    textField.height = 500;
                    textField.border = true;

                    connectButton.selectable = false;
                    connectButton.border = true;
                    connectButton.addEventListener(MouseEvent.MOUSE_DOWN, connectMouseDownHandler);
                    connectButton.width = 105;
                    connectButton.height = 20;
                    connectButton.text = "click here to connect";
                    addChild(connectButton);
                }

                private function connectHandler(event:Event):void {
                    textField.appendText("Connect\n");
                    textField.appendText("Press and hold any key\n");
                }

                private function dataHandler(event:DataEvent):void {
                    var now:Date = new Date();
                    textField.appendText(event.data + " time = " + now.getMinutes() + ":" +
                                         now.getSeconds() + ":" + now.getMilliseconds() + "\n");
                    connectButton.x += 2;
                }

                private function keyDownHandler(event:KeyboardEvent):void {
                    socket.send("some data");
                }

                private function connectMouseDownHandler(event:MouseEvent):void {
                    var connectAddress:String = "ep1c.org";
                    var connectPort:Number = 13250;
                    Security.loadPolicyFile("xmlsocket://" + connectAddress + ":" + String(connectPort));
                    socket.connect(connectAddress, connectPort);
                }
            }
        }

    Python server code:

        from twisted.internet import reactor
        from twisted.internet.protocol import ServerFactory
        from twisted.protocols.basic import LineOnlyReceiver
        import datetime

        class EchoProtocol(LineOnlyReceiver):
            #####
            name = ""
            id = 0
            delimiter = chr(0)
            #####

            def getName(self):
                return self.transport.getPeer().host

            def connectionMade(self):
                self.id = self.factory.getNextId()
                print "New connection from %s - id:%s" % (self.getName(), self.id)
                self.factory.clientProtocols[self.id] = self

            def connectionLost(self, reason):
                print "Lost connection from " + self.getName()
                del self.factory.clientProtocols[self.id]
                self.factory.sendMessageToAllClients(self.getName() + " has disconnected.")

            def lineReceived(self, line):
                print "[%s] %s >> %s" % (datetime.datetime.now().time(), self, line)
                if line == "<policy-file-request/>":
                    data = """<?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <!-- Policy file for xmlsocket://ep1c.org -->
        <cross-domain-policy>
            <allow-access-from domain="*" to-ports="%s" />
        </cross-domain-policy>""" % PORT
                    self.send(data)
                else:
                    self.factory.sendMessageToAllClients(line)

            def send(self, line):
                print "[%s] %s << %s" % (datetime.datetime.now().time(), self, line)
                if line:
                    self.transport.write(str(line) + chr(0))
                else:
                    print "Nothing to send"

            def __str__(self):
                return self.getName()

        class ChatProtocolFactory(ServerFactory):
            protocol = EchoProtocol

            def __init__(self):
                self.clientProtocols = {}
                self.nextId = 0

            def getNextId(self):
                id = self.nextId
                self.nextId += 1
                return id

            def sendMessageToAllClients(self, msg):
                for client in self.clientProtocols:
                    self.clientProtocols[client].send(msg)

            def sendMessageToClient(self, id, msg):
                self.clientProtocols[id].send(msg)

        PORT = 13250

        print "Starting Server"
        factory = ChatProtocolFactory()
        reactor.listenTCP(PORT, factory)
        reactor.run()

    Read the article
