Search Results

Search found 43935 results on 1758 pages for 'development process'.


  • How can player actions be "judged morally" in a measurable way?

    - by Sebastien Diot
    While measuring player "skill" and "effort" is usually easy, adding some "less objective" statistics can give the player supplementary goals, especially in a MUD/RPG context. What I mean is that apart from counting how many orcs were killed and gems collected, it would be interesting to have something along the lines of the traditional Good/Evil, Lawful/Chaotic rankings of paper-based RPGs, to add "dimension" to the game. But computers cannot differentiate good from evil effectively (nor can humans in many cases), and if you have a set of "laws" precise enough that you can tell exactly when the player breaks them, then it generally makes more sense to prevent that action in the first place. One example could be a creation/destruction axis (if players are allowed to create or build things at all), possibly in the form of the general effect of the player's actions on the "ecology". So what else is left that can be effectively measured and would provide a sense of "morality" for the player? The more axes I have to measure, the more goals the player can have, and therefore the longer the game can last. This also gives players more ways of "differentiating" themselves among hordes of other players of the same "class" with a similar "kit".
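
    One way such axes could be tracked, sketched in Java under the assumption that in-game actions are tagged with an axis and a signed weight; the axis names and deltas below are illustrative, not taken from any particular game:

        import java.util.EnumMap;
        import java.util.Map;

        // Hypothetical sketch: each trackable axis keeps a running score that
        // tagged in-game actions nudge up or down. Axis names are examples only.
        enum MoralAxis { GOOD_EVIL, LAWFUL_CHAOTIC, CREATION_DESTRUCTION }

        class MoralProfile {
            private final Map<MoralAxis, Double> scores = new EnumMap<>(MoralAxis.class);

            MoralProfile() {
                for (MoralAxis axis : MoralAxis.values()) {
                    scores.put(axis, 0.0); // neutral starting point
                }
            }

            // Called whenever an action with a moral weight happens, e.g.
            // recordAction(MoralAxis.CREATION_DESTRUCTION, -1.0) for felling a tree.
            void recordAction(MoralAxis axis, double delta) {
                scores.merge(axis, delta, Double::sum);
            }

            double score(MoralAxis axis) {
                return scores.get(axis);
            }
        }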

    Read the article

  • How do I implement collision detection with a sprite walking up a rocky-terrain hill?

    - by detectivecalcite
    I'm working in SDL and have bounding rectangles for collisions set up for each frame of the sprite's animation. However, I recently stumbled upon the issue of putting together collisions for characters walking up and down hills/slopes with irregularly curved or rocky terrain - what's a good way to do collisions for that type of situation? Per-pixel? Loading up the points of the incline and doing player-line collision checking? Should I use bounding rectangles in general or circle collision detection?
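
    One common alternative to per-pixel tests, sketched in Java under the assumption that the hill is stored as a polyline of ground points sorted by x; all names here are illustrative:

        // Hypothetical sketch: terrain stored as a polyline of (x, y) points sorted by x.
        // The player's feet are snapped to the interpolated ground height under them,
        // which avoids per-pixel tests for walkable slopes.
        class Terrain {
            private final float[] xs; // ascending x coordinates of the polyline
            private final float[] ys; // ground height (surface y) at each x

            Terrain(float[] xs, float[] ys) {
                this.xs = xs;
                this.ys = ys;
            }

            // Linear interpolation of the ground surface y at a given x.
            float heightAt(float x) {
                for (int i = 0; i < xs.length - 1; i++) {
                    if (x >= xs[i] && x <= xs[i + 1]) {
                        float t = (x - xs[i]) / (xs[i + 1] - xs[i]);
                        return ys[i] + t * (ys[i + 1] - ys[i]);
                    }
                }
                return Float.MAX_VALUE; // off the terrain: nothing to stand on
            }

            // Clamp the player's foot to the ground (assuming SDL-style y-down coordinates,
            // so "standing on the ground" means footY == heightAt(footX)).
            float resolveFootY(float footX, float footY) {
                return Math.min(footY, heightAt(footX));
            }
        }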

    Read the article

  • Is there any heuristic to polygonize a closed 2D raster shape with n triangles?

    - by Arthur Wulf White
    Let's say we have a 2D image, black on white, that shows a closed geometric shape. Is there any algorithm (not naive brute force) that approximates that shape as closely as possible with n triangles? If you want a formal definition of "as closely as possible": approximate the shape with a polygon that, when rendered into a new 2D image, will match the largest possible number of pixels in the original image.

    Read the article

  • How can I apply different actions to different parts of a 2D character?

    - by Praveen Sharath
    I am developing a 2D platform game in Java. The player has a gun in his hand at all times. He needs to walk and shoot with the gun (arrow keys to walk and the X key to shoot). The walk cycle takes 6 frames, and I am able to import the sprite sheet and animate the sequence when I press an arrow key. But I need to add the gun motion: the player holds the gun upwards, and when the X key is pressed he brings it straight and shoots. How do I implement the walk + shoot action?
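
    A minimal sketch of the usual split between lower-body and upper-body state, written in Java; the sprite sheets, shoot duration, and the drawSprite helper are illustrative assumptions:

        // Hypothetical sketch: legs and arms are animated independently and drawn
        // on top of each other, so walking and shooting can overlap freely.
        class Player {
            enum ArmState { GUN_UP, SHOOTING }

            int walkFrame;                       // cycles 0..5 while an arrow key is held
            ArmState armState = ArmState.GUN_UP;
            int shootTimer;                      // frames left in the shooting pose

            void update(boolean movePressed, boolean shootPressed) {
                if (movePressed) {
                    walkFrame = (walkFrame + 1) % 6;   // 6-frame walk cycle from the question
                }
                if (shootPressed && armState == ArmState.GUN_UP) {
                    armState = ArmState.SHOOTING;
                    shootTimer = 10;                   // illustrative duration
                    // spawnBullet();                  // hypothetical helper
                }
                if (armState == ArmState.SHOOTING && --shootTimer <= 0) {
                    armState = ArmState.GUN_UP;
                }
            }

            void draw(/* Graphics g */) {
                // drawSprite(legsSheet, walkFrame, x, y);           // lower body: walk cycle
                // drawSprite(armsSheet, armState.ordinal(), x, y);  // upper body: gun pose
                // drawSprite, legsSheet and armsSheet are hypothetical helpers/sheets.
            }
        }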

    Read the article

  • GLSL Shader Effects: How to do motion blur, etc?

    - by DevilWithin
    I am not sure how appropriate it is to ask this question, but here it goes. I have a fully 2D environment, with sprites going around as landscape, characters, etc. To make it look more state-of-the-art, I want to implement a motion blur effect, similar to the blur in modern FPSs (e.g. Crysis) when moving the camera fast. In a sidescroller, the desired effect is a slight blur that gives the idea of fast movement when the camera is moving. If anyone could give me some tips on doing this, I'm assuming in a pixel shader, I'd be grateful. Also, if anyone has other good tips on cool pixel shader effects for 2D games, that would be awesome, like some stylizing post FX, such as the earlier Prince of Persia's illustrative style. Thanks

    Read the article

  • Right way to create a [self-]respawning app in Python

    - by grapescan
    I am using a Jabber bot written in Python to log some MUC talks. Sometimes it dies due to network or XMPP problems, and in that case I have to start it again myself. The goal is to make it "self-respawning". I have a few ideas for how to do it: the bot is one process and another process monitors its activity and restarts it if the bot dies, or the main process spawns the bot as a subprocess and controls it. I also think daemonizing the bot process is useful here. The platform is Linux, as you could guess. What is the right way to solve this problem?

    Read the article

  • Where to store shaders

    - by Mark Ingram
    I have an OpenGL renderer which has a Scene member variable. The Scene object can contain N SceneObjects. I use these SceneObjects for storing the vertex position and any transforms. My question is, where should shaders be stored in this arrangement? I guess they need to be in a central location because multiple objects can use the same shader. But then each object needs access to the shader because it needs to set attributes into the shader. Does anyone have any advice?
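
    A minimal sketch of one common arrangement, in Java: a central cache owns the compiled programs, and each SceneObject only names the shader it wants and supplies its own uniforms. The class and method names are illustrative:

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical sketch: shaders live in one cache owned by the renderer;
        // scene objects refer to them by name and only provide per-object uniforms.
        class ShaderCache {
            private final Map<String, Integer> programs = new HashMap<>(); // name -> GL program id

            int get(String name) {
                return programs.computeIfAbsent(name, n -> compileAndLink(n)); // compile once, share
            }

            private int compileAndLink(String name) {
                // load sources, glCreateShader / glCompileShader / glLinkProgram ...
                return 0; // placeholder
            }
        }

        interface SceneObject {
            String shaderName();               // which shared shader this object wants
            void applyUniforms(int programId); // object-specific attributes/uniforms
        }

        class Renderer {
            private final ShaderCache shaders = new ShaderCache();

            void draw(Iterable<SceneObject> scene) {
                for (SceneObject obj : scene) {
                    int program = shaders.get(obj.shaderName()); // shared, compiled once
                    // glUseProgram(program);
                    obj.applyUniforms(program);
                    // issue the object's draw call here
                }
            }
        }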

    Read the article

  • How do I determine the draw order in an isometric view flash game?

    - by Gajet
    This is for a Flash game with an isometric view. I need to know how to sort objects so that there is no need for z-buffer checking when drawing. This might seem easy, but there is another restriction: a scene can have 10,000+ objects, so the algorithm needs to run in better than O(n^2) time. All objects are rectangular boxes, and there are 3-4 objects moving in the scene. What's the best way to do this? UPDATE: in each tile there is only one object (objects can stack on top of each other), and we have access both to the map of objects and to each object's own position.
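
    A sketch of the usual O(n log n) painter's sort for grid-aligned boxes, in Java; since only 3-4 objects move, the list can be kept sorted and only re-sorted when one of them changes tile. The field names are illustrative:

        import java.util.Comparator;
        import java.util.List;

        // Hypothetical sketch: for axis-aligned boxes on a grid, sorting by
        // (x + y) and then by stack height gives a back-to-front painter's
        // order in O(n log n), with no per-pixel z checks at draw time.
        class IsoObject {
            int tileX, tileY; // grid position
            int stackZ;       // how high the object sits on its tile
        }

        class IsoSorter {
            static void sortBackToFront(List<IsoObject> objects) {
                objects.sort(Comparator
                        .comparingInt((IsoObject o) -> o.tileX + o.tileY) // farther tiles first
                        .thenComparingInt(o -> o.stackZ));                // lower objects first
            }
        }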

    Read the article

  • Is there any simple game that involves psychological factors?

    - by Roman
    I need to find a simple game in which several people interact with each other. The game should be simple to analyse (it should be easy to describe what happens in the game and what the players did). For that reason, video games are not appropriate for my purposes. I am thinking of a simple, schematic, strategic game where people can make a limited set of simple moves. Moreover, the moves of the game should be conditioned not only by pure logic (as in chess or Go); the behaviour in the game should depend on psychological factors, on relations between people. In more detail, I think it should be a cooperation game where people make their decisions based on mutual trust. It would be nice if players could express punishment and forgiveness in the game. Does anybody know of a game that is close to what I have described above? ADDED: I need a game where the actions of the players are simple and easy to formalize. Because of that I cannot use verbal games (where communication between players is important). By simple actions I mean, for example, moves on a board from one position to another, or passing chips from one player to another, and so on.

    Read the article

  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection:

    Broad phase: loops over all objects that can interact; false positives are allowed if they speed up the loop.
    Narrow phase: determines whether two objects collide, and sometimes how; no false positives.
    Resolution: resolves the collision.

    The question I'm asking is about the narrow phase. There are multiple algorithms, differing in complexity and accuracy:

    Hitbox intersection: an a-posteriori algorithm; has the lowest complexity, but also isn't very accurate.
    Color intersection: hitbox intersection for each pixel; a-posteriori, pixel-perfect, not accurate with regard to time, higher complexity.
    Separating axis theorem: used more often, accurate for triangles; however a-posteriori, as it can't find the contact edge; when taking the last frame into account it's more stable.
    Linear raycasting: an a-priori algorithm, useful for semi-realistic-looking physics; finds the intersection point; even more accurate than SAT, but with more complexity.
    Spline interpolation: a-priori; even more accurate than linear rays, with even more complexity.

    There are probably many more that I've forgotten about. The question is: when is it better to use SAT, when rays, and when splines, and is there anything better?
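
    For reference, a minimal separating-axis test for two convex polygons, sketched in Java; the vertex-array representation and method names are illustrative:

        // Hypothetical sketch of a separating-axis test for two convex polygons,
        // each given as an array of (x, y) vertex pairs in winding order.
        class Sat {
            static boolean overlap(float[][] a, float[][] b) {
                return !hasSeparatingAxis(a, b) && !hasSeparatingAxis(b, a);
            }

            private static boolean hasSeparatingAxis(float[][] poly, float[][] other) {
                for (int i = 0; i < poly.length; i++) {
                    float[] p1 = poly[i];
                    float[] p2 = poly[(i + 1) % poly.length];
                    // axis = normal of the edge p1 -> p2
                    float axisX = -(p2[1] - p1[1]);
                    float axisY =   p2[0] - p1[0];
                    float[] projA = project(poly, axisX, axisY);
                    float[] projB = project(other, axisX, axisY);
                    if (projA[1] < projB[0] || projB[1] < projA[0]) {
                        return true; // gap found on this axis -> no collision
                    }
                }
                return false;
            }

            // Returns {min, max} of the polygon projected onto the axis.
            private static float[] project(float[][] poly, float axisX, float axisY) {
                float min = Float.POSITIVE_INFINITY, max = Float.NEGATIVE_INFINITY;
                for (float[] v : poly) {
                    float d = v[0] * axisX + v[1] * axisY;
                    min = Math.min(min, d);
                    max = Math.max(max, d);
                }
                return new float[]{min, max};
            }
        }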

    Read the article

  • Rotate an image in a scaled context

    - by nathan
    Here is my working piece of code to rotate an image toward a point (in my case, the mouse cursor):

        float dx = newx - ploc.x;
        float dy = newy - ploc.y;
        float angle = (float) Math.toDegrees(Math.atan2(dy, dx));

    where ploc is the location of the image I'm rotating. And here is the rendering code:

        g.rotate(loc.x + width / 2, loc.y + height / 2, angle);
        g.drawImage(frame, loc.x, loc.y);

    where loc is the location of the image and "width" and "height" are respectively the width and height of the image. What changes are needed to make it work in a scaled context, e.g. to make it work with something like g.scale(sx, sy)?
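
    One possible fix, sketched as a small Java helper: map the mouse position back into the unscaled coordinate space (divide by the factors passed to g.scale) before computing the angle, so the rotation matches what is drawn in the scaled context. The helper and its parameters are illustrative, not part of the original code:

        // Hypothetical sketch, assuming the same Slick2D-style coordinates as above:
        // undo the g.scale(sx, sy) factors on the mouse position, then compute the angle.
        final class AimHelper {
            static float angleToMouse(float mouseX, float mouseY,
                                      float sx, float sy,
                                      float px, float py) {
                float worldMouseX = mouseX / sx; // back into the unscaled (world) space
                float worldMouseY = mouseY / sy;
                float dx = worldMouseX - px;     // px, py = image position in world units
                float dy = worldMouseY - py;
                return (float) Math.toDegrees(Math.atan2(dy, dx));
            }
        }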

    Read the article

  • How can I create and animate 2D skeletons for HTML5 JavaScript games? [on hold]

    - by user414209
    I'm trying to make a 2D fighting game in HTML5 (somewhat like Street Fighter). So basically there are two players, one AI and one human. The players need to have animations for their body movements, and there also needs to be some collision detection system. I'm using CreateJS for coding, but to design models/objects/animations I need some other software. So I'm looking for software that can:
    1. easily make custom animations of 2D objects - the object's structure (skeleton etc.) stays the same once defined and only needs to be defined once;
    2. export the animations and models in a JS-readable format (preferably JSON);
    3. make collision detection easy once the exported format is loaded in a game engine.
    For point 1, I'm looking for some generic skeleton-based animation; sprite-sheet based animations would make collision detection difficult.

    Read the article

  • Tweaking AStar to find closest location to unreachable destination

    - by Shivan Dragon
    I've implemented A* in Java and it works OK for an area with obstacles where the chosen destination is reachable. However, when the destination is unreachable, the calculated "path" is not the path to the location closest to the unreachable target; it is instead some arbitrary path. Is there a feasible way to tweak A* into finding the path to the location closest to an unreachable destination?
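
    A sketch of the usual tweak, in Java: while expanding nodes, remember the one with the smallest heuristic value; if the open list empties without reaching the goal, return that node and rebuild the path to it through the parent pointers as normal. The skeleton below assumes a hypothetical Node type and heuristic:

        // Hypothetical sketch: fall back to the expanded node with the smallest
        // estimated distance to the goal when the goal itself is unreachable.
        class ClosestPathAStar {
            Node search(Node start, Node goal) {
                Node bestSoFar = start;
                double bestH = heuristic(start, goal);

                // ... standard A* loop ...
                // for each node popped from the open list:
                //     double h = heuristic(node, goal);
                //     if (h < bestH) { bestH = h; bestSoFar = node; }
                //     if (node.equals(goal)) return node;   // reachable: normal result
                // ... expand neighbours as usual ...

                return bestSoFar; // open list empty: node closest to the goal we ever reached;
                                  // the caller rebuilds the path to it via parent pointers
            }

            double heuristic(Node a, Node b) {
                return Math.hypot(a.x - b.x, a.y - b.y); // straight-line distance, for example
            }

            static class Node { int x, y; }
        }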

    Read the article

  • How to create a texture for a ModelMesh?

    - by Berend
    Is there a possibility to create a texture from a mesh part in XNA, by getting a flat version of the mesh, so I can create a texture for it and edit that texture (via a render target)? I need to get the texture (which is not yet a texture) so I can put another texture on it. I can create a texture and put it on a certain mesh, but I just can't figure out how to create a texture with the right size. I also already found out I can use tex2Dproj in HLSL, but when I do this I get a gray stripe in the result. Is there a better solution?

    Read the article

  • Sound not playing on Windows XP - SoundEffect or Song: MonoGame

    - by ashes999
    I'm trying to integrate sound into my MonoGame game. I don't have the content pipeline hack -- just straight MonoGame (Beta 3) at this point. (I tried adding the content pipeline, but ran into some issues.) I added a .wav file to my /Content directory, and I can create and instantiate both SoundEffect and Song classes. However, both show durations of 00:00:00 (on a ten-second-long file), and neither plays. I can call LoadContent without any issue, but when I call Play, nothing plays. I've tried a couple of different sounds and different formats (MP3 and WAV) to rule that out; only WAV even seems to load without crashing, but it doesn't play. There seems to be a GitHub issue that fixes this problem in 2.5.1. Downgrading to 2.5.1 doesn't fix it; it seems like it's fixed in 3.0 (_data is set in the SoundEffect instance). This issue only occurs on Windows XP. I tested it on a Windows 7 laptop, and the sound plays fine.

    Read the article

  • How to display a hierarchical skill tree in PHP

    - by user3587554
    If I have skill data set up in a tree format (where earlier skills are prerequisites for later ones), how would I display it as a tree using PHP? The parent would be on top and have 3 children, and each of these children can then have one more child, so its parent would be directly above it. I'm having trouble figuring out how to add the root element in the middle of the top div, and the children of those children below each child of the root. I'm not looking for code, but an explanation of how to do it. My data in array form is this:

        Array
        (
            [1] => Array
            (
                [id] => 1
                [title] => Jutsu
                [description] => Skill that makes you awesomer at using ninjutsu
                [tiers] => 1
                [prereq] =>
                [image] => images/skills/jutsu.png
                [children] => Array
                (
                    [2] => Array
                    (
                        [id] => 2
                        [title] => fireball
                        [description] => Increase your damage with fire jutsu and weapons
                        [tiers] => 5
                        [prereq] => 1
                        [image] => images/skills/fireball.png
                        [children] => Array
                        (
                            [5] => Array
                            (
                                [id] => 5
                                [title] => pin point
                                [description] => Increases jutsu accuracy
                                [tiers] => 5
                                [prereq] => 2
                                [image] => images/skills/pinpoint.png
                            )
                        )
                    )
                    [3] => Array
                    (
                        [id] => 3
                        [title] => synergy
                        [description] => Reduce the amount of chakra needed to use ninjutsu
                        [tiers] => 1
                        [prereq] => 1
                        [image] => images/skills/synergy.png
                    )
                    [4] => Array
                    (
                        [id] => 4
                        [title] => ebb & flow
                        [description] => Increase the damage of water jutsu, water weapons, and reduce the damage of jutsu and weapons that use water element
                        [tiers] => 5
                        [prereq] => 1
                        [image] => images/skills/ebbandflow.png
                        [children] => Array
                        (
                            [6] => Array
                            (
                                [id] => 6
                                [title] => IQ
                                [description] => Decrease the time it takes to learn a jutsu
                                [tiers] => 5
                                [prereq] => 4
                                [image] => images/skills/iq.png
                            )
                        )
                    )
                )
            )
        )

    An example would be this demo image minus the hover stuff.

    Read the article

  • What is the correct and most efficient approach to streaming vertex data?

    - by Martijn Courteaux
    Usually, I do this in my current OpenGL ES project (for iOS):

    Initialization:
    1. Create two VBOs and one index buffer (since I will use the same indices), both VBOs the same size.
    2. Create two VAOs and configure them, both bound to the same index buffer.

    Each frame:
    1. Choose a VBO/VAO pair (different from the previous frame, so I'm alternating).
    2. Bind that VBO.
    3. Upload the new data using glBufferSubData(GL_ARRAY_BUFFER, ...).
    4. Bind the VAO.
    5. Render my stuff using glDrawElements(GL_***, ...).
    6. Unbind the VAO.

    However, someone told me to avoid uploading data (step 3) and then rendering the new data immediately (step 5). I should avoid this because the glDrawElements call will stall until the buffer has effectively been uploaded to VRAM. So he suggested drawing all the geometry I uploaded the previous frame, and uploading in the current frame what will be drawn in the next frame; thus everything is rendered with a delay of one frame. Is this true, or is my approach to streaming vertex data the right one? (I do know that the pipeline will stall the other way around, i.e. when you draw and immediately try to change the buffer data. But I'm not doing that, since I implemented double buffering.)
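
    For illustration only, the alternating upload described above sketched with LWJGL-style Java bindings (the question's code is GL ES on iOS, but the pattern is the same); buffer creation and VAO setup are assumed to happen elsewhere:

        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL15.*;
        import static org.lwjgl.opengl.GL30.*;

        import java.nio.FloatBuffer;

        // Illustrative sketch: two VBO/VAO pairs are filled and drawn in alternation,
        // so the driver is never asked to draw from a buffer being written this frame.
        class DoubleBufferedStream {
            private final int[] vbos = new int[2]; // created with glGenBuffers / glBufferData(GL_STREAM_DRAW) in init (omitted)
            private final int[] vaos = new int[2]; // created and configured in init (omitted)
            private int current = 0;               // which pair is written this frame
            private int indexCount;

            void drawFrame(FloatBuffer newVertexData) {
                current = 1 - current;                               // alternate every frame

                glBindBuffer(GL_ARRAY_BUFFER, vbos[current]);
                glBufferSubData(GL_ARRAY_BUFFER, 0, newVertexData);  // upload this frame's data

                glBindVertexArray(vaos[current]);
                glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
                glBindVertexArray(0);
            }
        }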

    Read the article

  • Initializing OpenFeint for Android outside the main Application

    - by Ef Es
    I am trying to create a generic C++ bridge to use OpenFeint with Cocos2d-x, which is supposed to be just "add and run", but I am finding problems. OpenFeint is very exquisite when initializing: it requires a Context parameter that MUST be the main Application, in the onCreate method, never the constructor. Also, the main App's name must be edited into the manifest. I am trying to fix this. So far I have tried to create a new Application that calls my Application, to test if just the type is needed, but you really do need the main Android application. I also tried using a handler for a static initialization, but found pretty much the same problem. Has anybody been able to do it? This is my working-but-not-as-intended code snippet:

        public class DerpHurr extends Application {

            @Override
            public void onCreate() {
                super.onCreate();
                initializeOpenFeint("TestApp", "edthedthedthedth", "aeyaetyet", "65462");
            }

            public void initializeOpenFeint(String appname, String key, String secret, String id) {
                Map<String, Object> options = new HashMap<String, Object>();
                options.put(OpenFeintSettings.SettingCloudStorageCompressionStrategy,
                        OpenFeintSettings.CloudStorageCompressionStrategyDefault);
                OpenFeintSettings settings = new OpenFeintSettings(appname, key, secret, id, options);
                // RIGHT HERE
                OpenFeint.initialize(this, settings, new OpenFeintDelegate() { });
                System.out.println("OpenFeint Started");
            }
        }

    Manifest:

        <application android:debuggable="true" android:label="@string/app_name" android:name=".DerpHurr">

    Read the article

  • Trouble with touch events on iPhone

    - by MrDatabase
    I'm making a simple 2D game for iPhone. Think of the game as a ball on the screen that goes up while the user is touching the screen and falls down when the user stops touching the screen. The ball starts moving up in touchesBegan:withEvent and starts moving down in touchesEnded:withEvent. This works fine almost all the time. However, on occasion the ball will keep moving up after the user stops touching... or the ball will keep moving down while the user is touching. Why is this happening? Just FYI, the ball is drawn on a UIWindow. The taps are handled by a UIImageView subclass that's clearColor and takes up the entire screen. This "touchLayer" is also moved to the front of the window in the game loop. Any idea why this control scheme occasionally fails? Perhaps the touch events just aren't firing? Or they're fired out of order? Cheers!

    Read the article

  • Help needed throwing a ball in AS3

    - by Opoe
    I'm working on a Flash game, coding on the timeline. What I'm trying to accomplish is the following: with the mouse you swing and throw/release a ball, which bounces against the walls and eventually comes to a point where it lies still (like a real ball). I almost had it working, but now the ball sticks to the mouse instead of being released. My question to you is: can you help me make this work and explain to me what I did wrong? You can preview my code simply by making a movieclip named 'circle' on a 550x400 stage.

        stage.addEventListener(Event.ENTER_FRAME, circle_update);

        var previousPostionX:Number;
        var previousPostionY:Number;
        var throwSpeedX:Number;
        var throwSpeedY:Number;
        var isItDown:Boolean;
        var xSpeed:Number = 0;
        var ySpeed:Number = 0;
        var friction:Number = 0.96;
        var offsetX:Number = 0;
        var offsetY:Number = 0;
        var newY:Number = 0;
        var oldY:Number = 0;
        var newX:Number = 0;
        var oldX:Number = 0;
        var dragging:Boolean;

        circle.buttonMode = true;
        circle.addEventListener(MouseEvent.MOUSE_DOWN, mouseDownHandler);
        circle.addEventListener(Event.ENTER_FRAME, throwcircle);
        circle.addEventListener(MouseEvent.MOUSE_DOWN, clicked);
        circle.addEventListener(MouseEvent.MOUSE_UP, released);

        function mouseDownHandler(e:MouseEvent):void {
            dragging = true;
            stage.addEventListener(MouseEvent.MOUSE_UP, mouseUpHandler);
            offsetX = mouseX - circle.x;
            offsetY = mouseY - circle.y;
        }

        function mouseUpHandler(e:MouseEvent):void {
            dragging = false;
        }

        function throwcircle(e:Event) {
            circle.x += xSpeed;
            circle.y += ySpeed;
            xSpeed *= friction;
            ySpeed *= friction;
        }

        function changeFriction(e:Event):void {
            friction = e.target.value;
            trace(e.target.value);
        }

        function circle_update(e:Event) {
            if (dragging == true) {
                circle.x = mouseX - offsetX;
                circle.y = mouseY - offsetY;
            }
            if (circle.x + (circle.width * 0.50) >= 550) {
                circle.x = 550 - circle.width * 0.50;
            }
            if (circle.x - (circle.width * 0.50) <= 0) {
                circle.x = circle.width * 0.50;
            }
            if (circle.y + (circle.width * 0.50) >= 400) {
                circle.y = 400 - circle.height * 0.50;
            }
            if (circle.y - (circle.width * 0.50) <= 0) {
                circle.y = circle.height * 0.50;
            }
        }

        function clicked(theEvent:Event) {
            isItDown = true;
            addEventListener(Event.ENTER_FRAME, updateView);
        }

        function released(theEvent:Event) {
            isItDown = false;
        }

        function updateView(theEvent:Event) {
            if (isItDown == true) {
                throwSpeedX = mouseX - previousPostionX;
                throwSpeedY = mouseY - previousPostionY;
                circle.x = mouseX;
                circle.y = mouseY;
            } else {
                circle.x += throwSpeedX;
                circle.y += throwSpeedY;
                throwSpeedX *= 0.9;
                throwSpeedY *= 0.9;
            }
            previousPostionX = circle.x;
            previousPostionY = circle.y;
        }

    Read the article

  • Simple OpenGL program: major slowdown at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS, and that's it. The problem is that I see a big drop in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage in full-screen and it shows a GPU load of nearly 100% and a memory controller load of ~85%. At 600x600, these numbers are at about 45%; my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M, btw). Below are my shaders.

    First Pass

    #VS

        #version 330 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform float scale;

        layout(location = 0) in vec3 in_Position;
        layout(location = 1) in vec3 in_Normal;
        layout(location = 2) in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;

        void main(void){
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            pass_Normal = NormalMatrix * in_Normal;
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    #FS

        #version 330 core

        uniform sampler2D inSampler;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;

        layout(location = 0) out vec3 outPosition;
        layout(location = 1) out vec3 outDiffuse;
        layout(location = 2) out vec3 outNormal;

        void main(void){
            outPosition = pass_Position;
            outDiffuse = texture(inSampler, TexCoord).xyz;
            outNormal = pass_Normal;
        }

    Second Pass

    #VS

        #version 330 core

        uniform float scale;

        layout(location = 0) in vec3 in_Position;

        void main(void){
            gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
        }

    #FS

        #version 330 core

        struct Light{
            vec3 direction;
        };

        uniform ivec2 ScreenSize;
        uniform Light light;
        uniform sampler2D PositionMap;
        uniform sampler2D ColorMap;
        uniform sampler2D NormalMap;

        out vec4 out_Color;

        vec2 CalcTexCoord(void){
            return gl_FragCoord.xy / ScreenSize;
        }

        vec4 CalcLight(vec3 position, vec3 normal){
            vec4 DiffuseColor = vec4(0.0);
            vec4 SpecularColor = vec4(0.0);

            vec3 light_Direction = -normalize(light.direction);
            float diffuse = max(0.0, dot(normal, light_Direction));

            if(diffuse > 0.0){
                DiffuseColor = diffuse * vec4(1.0);

                vec3 camera_Direction = normalize(-position);
                vec3 half_vector = normalize(camera_Direction + light_Direction);

                float specular = max(0.0, dot(normal, half_vector));
                float fspecular = pow(specular, 128.0);
                SpecularColor = fspecular * vec4(1.0);
            }
            return DiffuseColor + SpecularColor + vec4(0.1);
        }

        void main(void){
            vec2 TexCoord = CalcTexCoord();
            vec3 Position = texture(PositionMap, TexCoord).xyz;
            vec3 Color = texture(ColorMap, TexCoord).xyz;
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);

            out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
        }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.

    Read the article

  • Android Touch Event Collision Detection

    - by chrissb
    I'm relatively new to both Java and Android, so hopefully the problem I'm having stems from something pretty minor that I've overlooked. I've got a (very early stage) game that I've started working on, for Android using Java. At this stage, when the user touches the screen, if they touched a point at which there is an enemy, the enemy's health is decreased and it becomes immobile (in the current implementation at least). The issue I'm having is that the touch detection doesn't always seem to work. I've got a testing sprite set up that goes to the eventX and eventY coordinates of the touch-down event, and it always seems to collide with the enemy object. Yet the enemy doesn't always register as being hit, and sometimes a hit is registered when the sprite indicates the touch coordinates were outside of the enemy's bounding box. I realise that this probably doesn't mean much without any code, so here's what I've got so far. Be gentle, as this is literally my first attempt at something more than basic movement etc.

    First off, the MainGamePanel class registers the touch event and informs the levelmanager class (which is what I set up to monitor/handle enemies):

        public boolean onTouchEvent(MotionEvent event) {
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                levelManager.handleActionDown((int) event.getX(), (int) event.getY());
                targetX = event.getX();
                targetY = event.getY();
            }
            if (event.getAction() == MotionEvent.ACTION_MOVE) {
                // the gestures
            }
            if (event.getAction() == MotionEvent.ACTION_UP) {
                // touch was released
            }
            return true;
        }

    From there, in the levelmanager class the touch event is passed on to all of the enemies within a list array:

        public static void handleActionDown(int eventX, int eventY) {
            hit = false;
            for (enemy1 en : enemy1array) {
                en.handleActionDown(eventX, eventY);
            }
        }

    The rest of the collision code is handled within the enemy's handleActionDown function:

        public void handleActionDown(int eventX, int eventY) {
            if (eventX > this.x - enemy1bitmap.getWidth()
                    && eventX < this.x + enemy1bitmap.getWidth()
                    && eventY > this.y - enemy1bitmap.getHeight()
                    && eventY < this.x + enemy1bitmap.getHeight()) { // note: this last bound compares eventY against this.x
                takeDamage(1);
                levelmanager.setHit();
            }
        }

    I should probably be using getWidth()/2 and getHeight()/2 for it to be more accurate, but I expanded the area to test this, although I've noticed no improvement. At this stage, the game's detection of whether or not the enemy is hit is spotty at best. Generally it takes two or three attempts before a collision is successfully registered, even though the sprite that is set to the eventX and eventY coordinates always indicates that the collision should have worked. Hopefully someone can steer me in the right direction here, and if more information is needed, ask away! Cheers, -Chris

    Read the article

  • Voice artist for a game for kids

    - by devmiles.com
    We're making a game for kids which should include about 50 spoken phrases, and I'm asking for help in finding the right voice artist or studio for this. I've tried searching the web but couldn't find anything that would make me confident it would work for us, or for games in general. So I'm looking for references from those of you who have had a successful collaboration with such artists or studios. Any help would be appreciated.

    Read the article

  • How to retain the animated position in OpenGL ES 2.0

    - by Arun AC
    I am doing frame based animation for 300 frames in opengl es 2.0 I want a rectangle to translate by +200 pixels in X axis and also scaled up by double (2 units) in the first 100 frames Then, the animated rectangle has to stay there for the next 100 frames. Then, I want the same animated rectangle to translate by +200 pixels in X axis and also scaled down by half (0.5 units) in the last 100 frames. I am using simple linear interpolation to calculate the delta-animation value for each frame. Pseudo code: The below drawFrame() is executed for 300 times (300 frames) in a loop. float RectMVMatrix[4][4] = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 }; // identity matrix int totalframes = 300; float translate-delta; // interpolated translation value for each frame float scale-delta; // interpolated scale value for each frame // The usual code for draw is: void drawFrame(int iCurrentFrame) { // mySetIdentity(RectMVMatrix); // comment this line to retain the animated position. mytranslate(RectMVMatrix, translate-delta, X_AXIS); // to translate the mv matrix in x axis by translate-delta value myscale(RectMVMatrix, scale-delta); // to scale the mv matrix by scale-delta value ... // opengl calls glDrawArrays(...); eglswapbuffers(...); } The above code will work fine for first 100 frames. in order to retain the animated rectangle during the frames 101 to 200, i removed the "mySetIdentity(RectMVMatrix);" in the above drawFrame(). Now on entering the drawFrame() for the 2nd frame, the RectMVMatrix will have the animated value of first frame e.g. RectMVMatrix[4][4] = { 1.01, 0, 0, 2, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };// 2 pixels translation and 1.01 units scaling after first frame This RectMVMatrix is used for mytranslate() in 2nd frame. The translate function will affect the value of "RectMVMatrix[0][0]". Thus translation affects the scaling values also. Eventually output is getting wrong. How to retain the animated position without affecting the current ModelView matrix? =========================================== I got the solution... Thanks to Sergio. I created separate matrices for translation and scaling. e.g.CurrentTranslateMatrix[4][4], CurrentScaleMatrix[4][4]. Then for every frame, I reset 'CurrentTranslateMatrix' to identity and call mytranslate( CurrentTranslateMatrix, translate-delta, X_AXIS) function. I reset 'CurrentScaleMatrix' to identity and call myscale(CurrentScaleMatrix, scale-delta) function. Then, I multiplied these 'CurrentTranslateMatrix' and 'CurrentScaleMatrix' to get the final 'RectMVMatrix' Matrix for the frame. 
Pseudo Code: float RectMVMatrix[4][4] = {0}; float CurrentTranslateMatrix[4][4] = {0}; float CurrentScaleMatrix[4][4] = {0}; int iTotalFrames = 300; int iAnimationFrames = 100; int iTranslate_X = 200.0f; // in pixels float fScale_X = 2.0f; float scaleDelta; float translateDelta_X; void DrawRect(int iTotalFrames) { mySetIdentity(RectMVMatrix); for (int i = 0; i< iTotalFrames; i++) { DrawFrame(int iCurrentFrame); } } void getInterpolatedValue(int iStartFrame, int iEndFrame, int iTotalFrame, int iCurrentFrame, float *scaleDelta, float *translateDelta_X) { float fDelta = float ( (iCurrentFrame - iStartFrame) / (iEndFrame - iStartFrame)) float fStartX = 0.0f; float fEndX = ConvertPixelsToOpenGLUnit(iTranslate_X); *translateDelta_X = fStartX + fDelta * (fEndX - fStartX); float fStartScaleX = 1.0f; float fEndScaleX = fScale_X; *scaleDelta = fStartScaleX + fDelta * (fEndScaleX - fStartScaleX); } void DrawFrame(int iCurrentFrame) { getInterpolatedValue(0, iAnimationFrames, iTotalFrames, iCurrentFrame, &scaleDelta, &translateDelta_X) mySetIdentity(CurrentTranslateMatrix); myTranslate(RectMVMatrix, translateDelta_X, X_AXIS); // to translate the mv matrix in x axis by translate-delta value mySetIdentity(CurrentScaleMatrix); myScale(RectMVMatrix, scaleDelta); // to scale the mv matrix by scale-delta value myMultiplyMatrix(RectMVMatrix, CurrentTranslateMatrix, CurrentScaleMatrix);// RectMVMatrix = CurrentTranslateMatrix*CurrentScaleMatrix; ... // opengl calls glDrawArrays(...); eglswapbuffers(...); } I maintained this 'RectMVMatrix' value, if there is no animation for the current frame (e.g. 101th frame onwards). Thanks, Arun AC

    Read the article

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything, really, but the main idea is to perform "ping-ponging": take the output of the first pass and then send it to the second pass. In my example I want to draw to the R channel and then draw to the G channel and produce a simple Venn diagram in the shader, but I need to detect overlap. I can currently detect one or the other but not the overlap. There are a red and a green circle overlapping, and I want to put a dynamic texture map in the overlap region; I can currently put it in either one or the other. Below is how it looks in the shader.

        --------------------------------
        Texture2D shaderTexture;
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0 : TEXCOORD0;
            float2 tex1 : TEXCOORD1;
            float4 color : COLOR;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            float4 textureColor0;
            float4 textureColor1;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // Requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());

    Read the article
