Search Results

Search found 14551 results on 583 pages for 'game history'.

Page 354/583 | < Previous Page | 350 351 352 353 354 355 356 357 358 359 360 361  | Next Page >

  • Portal View/Projection Matrix near plane

    - by melak47
    For RenderToTexture/Camera based portal rendering, the basics seem simple enough. However, with a free camera, most of the time it is going to be looking at such portals at an angle: Now a regular near clipping plane will not always work here: it will either intersect with the wall the portal is sitting on, or possibly with objects in front of the wall. The desired near clipping plane would be aligned like the portal, producing a view volume more like this: or this in 3D: So here is my question: how does one construct or "truncate" a view/projection matrix to achieve such an off-camera-normal (near) clipping plane?
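
    One widely used construction (not from the question itself, but the standard reference for this problem) is Eric Lengyel's oblique near-plane clipping: keep the camera's projection matrix M, express the portal plane in view space as C = (C_x, C_y, C_z, C_w) with the camera on its negative side, and replace the third row of M so that the near plane coincides with C. Assuming OpenGL-style clip space, with M_3 and M_4 the third and fourth rows of M:

        Q    = M^{-1}\,\big(\operatorname{sgn}(C_x),\ \operatorname{sgn}(C_y),\ 1,\ 1\big)^{T}

        M_3' = \frac{2\,(M_4 \cdot Q)}{C \cdot Q}\,C \;-\; M_4

    The far plane gets skewed as a side effect, which is usually acceptable for portal rendering; the exact details (row vs. column vectors, D3D's 0..1 depth range) depend on the API in use.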

    Read the article

  • XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4

    - by danbystrom
    Beginner question. Synopsis: my water effect does something that causes the drawing of my sky sphere to throw an exception when run in full screen. The exception is: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4." This happens whether I start in full screen directly or switch to full screen from windowed. It does NOT happen, however, if I comment out the drawing of my water. So, what in my water effect can possibly cause the drawing of my sky sphere to choke?
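
    A minimal sketch of one thing to check (an assumption on my part, since neither effect is shown): if the water pass leaves a Vector4-format texture or render target bound with a linear sampler, the sky sphere draw inherits that state and trips the HiDef restriction. Explicitly resetting the device state between the two passes rules this out. DrawWater and DrawSkySphere are hypothetical stand-ins, and sampler index 0 is an assumption; reset every slot the water effect touches.

        // Inside the Game subclass (XNA 4.0).
        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            DrawWater(gameTime);      // hypothetical water pass using Vector4 textures

            // HiDef only allows point filtering on Vector4 textures, so clear any
            // texture/sampler state the water pass left behind before the sky pass.
            GraphicsDevice.Textures[0] = null;
            GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;

            DrawSkySphere(gameTime);  // hypothetical sky sphere pass

            base.Draw(gameTime);
        }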

    Read the article

  • Instead of the specified Texture, black circles on a green background are getting rendered. Why?

    - by vinzBad
    I'm trying to render a Texture via OpenGL, but instead of the texture, black circles on a green background are rendered. (They scale, depending on what the rotation of the texture is.) Example: The texture I'm trying to render is the following: This is the code I use to render the texture; it's located in my Sprite class.

        public void Render()
        {
            Matrix4 matrix = Matrix4.CreateTranslation(-OriginX, -OriginY, 0) *
                             Matrix4.CreateRotationZ(Rotation) *
                             Matrix4.CreateTranslation(X, Y, 0);

            Vector2[] corners =
            {
                new Vector2(0, 0),          // top left
                new Vector2(Width, 0),      // top right
                new Vector2(Width, Height), // bottom right
                new Vector2(0, Height)      // bottom left
            };

            // copy the corners to the uv coordinates
            Vector2[] uv = corners.ToArray<Vector2>();

            // transform the coordinates
            for (int i = 0; i < 4; i++)
                corners[i] = new Vector2(Vector3.Transform(new Vector3(corners[i]), matrix));

            //GL.Color3(TintColor);
            GL.BindTexture(TextureTarget.Texture2D, _ID);
            GL.Begin(BeginMode.Quads);
            {
                for (int i = 0; i < 4; i++)
                {
                    GL.TexCoord2(uv[i]);
                    GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth);
                }
            }
            GL.End();

            if (EnableDebugDraw)
            {
                GL.Color3(Color.Violet);
                GL.PointSize(3);
                GL.Begin(BeginMode.Points);
                {
                    for (int i = 0; i < 4; i++)
                        GL.Vertex2(corners[i]);
                }
                GL.End();

                GL.Color3(Color.Green);
                GL.Begin(BeginMode.Points);
                GL.Vertex2(X, Y);
                GL.End();
            }
        }

    This is how I set up OpenGL:

        public static void SetupGL()
        {
            GL.Enable(EnableCap.AlphaTest);
            GL.AlphaFunc(AlphaFunction.Greater, 0.1f);
            GL.Enable(EnableCap.Texture2D);
            GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
        }

    With this function I load the texture:

        public static uint LoadTexture(string path)
        {
            uint id;
            GL.GenTextures(1, out id);
            GL.BindTexture(TextureTarget.Texture2D, id);

            Bitmap bitmap = new Bitmap(path);
            BitmapData data = bitmap.LockBits(
                new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height),
                ImageLockMode.ReadOnly,
                System.Drawing.Imaging.PixelFormat.Format32bppArgb);

            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                data.Width, data.Height, 0,
                OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
            bitmap.UnlockBits(data);

            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);

            return id;
        }

    And here I call Sprite.Render():

        protected override void OnRenderFrame(FrameEventArgs e)
        {
            GL.ClearColor(Color.MidnightBlue);
            GL.Clear(ClearBufferMask.ColorBufferBit);

            _sprite.Render();

            SwapBuffers();
            base.OnRenderFrame(e);
        }

    As I stole this code from the Textures example from OpenTK, I don't understand why this doesn't work.
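
    One detail worth checking (an assumption on my part about the cause, but the underlying fact is standard OpenGL): GL.TexCoord2 expects normalized texture coordinates in the range [0, 1], while the uv array above is copied from the pixel-space corners (0..Width, 0..Height), so with the default GL_REPEAT wrap mode the texture is repeated many times across the quad. A minimal sketch of normalized coordinates:

        // Normalized texture coordinates, independent of the sprite's size in pixels.
        Vector2[] uv =
        {
            new Vector2(0f, 0f), // top left
            new Vector2(1f, 0f), // top right
            new Vector2(1f, 1f), // bottom right
            new Vector2(0f, 1f)  // bottom left
        };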

    Read the article

  • Why isn't my data persisting with NSKeyedArchiver?

    - by aking63
    I'm just working on what should be the "finishing touches" of my first iPhone game. For some reason, when I save with NSKeyedArchiver/Unarchiver, the data seems to load once and then gets lost or something. Here's what I've been able to deduce: when I save in this viewController, pop to the previous one, and then push back into this one, the data is saved and prints as I want it to. But when I save in this viewController, then push a new one and pop back into this one, the data is lost. Any idea why this might be happening? Do I have this set up all wrong? I copied it from a book months ago. Here are the methods I use to save and load.

        - (void)saveGameData {
            NSLog(@"LS:saveGameData");

            // SAVE DATA IMMEDIATELY
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            NSString *gameStatePath = [documentsDirectory stringByAppendingPathComponent:@"gameState.dat"];

            NSMutableData *gameSave = [NSMutableData data];
            NSKeyedArchiver *encoder = [[NSKeyedArchiver alloc] initForWritingWithMutableData:gameSave];
            [encoder encodeObject:categoryLockStateArray forKey:kCategoryLockStateArray];
            [encoder encodeObject:self.levelsPlist forKey:@"levelsPlist"];
            [encoder finishEncoding];
            [gameSave writeToFile:gameStatePath atomically:YES];

            NSLog(@"encoded catLockState:%@", categoryLockStateArray);
        }

        - (void)loadGameData {
            NSLog(@"loadGameData");

            // If there is a saved file, perform the load
            NSMutableData *gameData = [NSData dataWithContentsOfFile:
                [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]
                    stringByAppendingPathComponent:@"gameState.dat"]];

            // LOAD GAME DATA
            if (gameData) {
                NSLog(@"-Loaded Game Data-");
                NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:gameData];
                self.levelsPlist = [unarchiver decodeObjectForKey:@"levelsPlist"];
                categoryLockStateArray = [unarchiver decodeObjectForKey:kCategoryLockStateArray];
                NSLog(@"decoded catLockState:%@", categoryLockStateArray);
            }
            // CREATE GAME DATA
            else {
                NSLog(@"-Created Game Data-");
                self.levelsPlist = [[NSMutableDictionary alloc] initWithContentsOfFile:
                    [[NSBundle mainBundle] pathForResource:kLevelsPlist ofType:@"plist"]];
            }

            if (!categoryLockStateArray) {
                NSLog(@"-Created categoryLockStateArray-");
                categoryLockStateArray = [[NSMutableArray alloc] initWithCapacity:[[self.levelsPlist allKeys] count]];
                for (int i = 0; i < [[self.levelsPlist allKeys] count]; i++) {
                    [categoryLockStateArray insertObject:[NSNumber numberWithBool:FALSE] atIndex:i];
                }
            }

            // set the properties of the categories
            self.categoryNames = [self.levelsPlist allKeys];
            NUM_CATEGORIES = [self.categoryNames count];
            thisCatCopy = [[NSMutableDictionary alloc] initWithDictionary:
                [[levelsPlist objectForKey:[self.categoryNames objectAtIndex:pageControl.currentPage]] mutableCopy]];
            NUM_FINISHED = [[thisCatCopy objectForKey:kNumLevelsBeatenInCategory] intValue];
        }

    Read the article

  • Assigning a colour to imported .obj files that are being used as the default material

    - by Salino
    I am having a problem with assigning a colour to the different meshes that I have on one object. The technique that I have used is the first approach on this site: Is it possible to export a simulation (animation) from Blender to Unity? So what I would like to do is the following. I have about 107 meshes that are different frames from my shape key animation of my Blender model. What I would like is for the first mesh to be bright green, with the colour fading to white/greyish by the 40th mesh. The best would be if I could assign every mesh a colour by hand; however, they are all default materials, and if I assign the object a colour, the whole "animation" is going to be in that colour.
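
    A minimal sketch of one way to do this in Unity (the field names are assumptions, since the scene setup isn't shown): give each frame's renderer its own material colour, fading from green to white over the first 40 frames. It assumes the default materials use a shader with a colour property.

        using UnityEngine;

        public class FrameColorizer : MonoBehaviour
        {
            // Hypothetical: the 107 imported frame objects, assigned in the Inspector.
            public MeshRenderer[] frames;

            void Start()
            {
                for (int i = 0; i < frames.Length; i++)
                {
                    // Fade from bright green to white over the first 40 frames,
                    // then stay white/greyish for the remaining ones.
                    float t = Mathf.Clamp01(i / 40f);

                    // Accessing .material instantiates a per-renderer copy,
                    // so every frame can carry its own colour.
                    frames[i].material.color = Color.Lerp(Color.green, Color.white, t);
                }
            }
        }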

    Read the article

  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors in texels of a texture when I'm accessing texels at coordinates that are not directly at the center of the texel, the catch being that the texels store 2 bands of spherical harmonics coefficients (= 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR with and without mip mapping) without any considerations? In other words: if I were to first convert the coefficients back to intensity representations, then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?
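
    A worked equation may help frame this (it only addresses the linearity question, not what the hardware does about precision, sRGB or mipmap generation): SH reconstruction is a weighted sum of fixed basis functions, so it is linear in the coefficients, and interpolating coefficients first or intensities first gives the same result:

        L(\omega) = \sum_i c_i\, Y_i(\omega)

        (1 - t)\, L_1(\omega) + t\, L_2(\omega) = \sum_i \big[(1 - t)\, c_{1,i} + t\, c_{2,i}\big]\, Y_i(\omega)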

    Read the article

  • XNA - positioning after rotation

    - by DijkeMark
    I have a turret with 2 gun barrels. The turret rotates towards my mouse. So far, no problem. It then creates a few bullets and positions them at the ends of the gun barrels. Here is the problem: it only works while the gun is pointing upwards. The moment it rotates, the ends of the gun barrels have of course moved, so the bullets don't spawn at the ends of the gun barrels, but at the place where the gun barrels are when the turret is pointing upwards. How can I find where the ends of the gun barrels are once the turret has rotated? Thanks in advance, Mark Dijkema. PS: If you need code, please let me know; I didn't post any yet because I didn't know what code you would need.
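
    A minimal sketch of the usual approach in XNA (the names are placeholders, since no code was posted): store each muzzle position as an offset from the turret's pivot while it points upwards, then rotate that offset by the turret's current angle and add the turret's world position when spawning a bullet.

        // barrelOffset: muzzle position relative to the turret pivot in the "pointing up" pose.
        // turretRotation: the same angle (in radians) used to draw the rotated turret.
        Vector2 MuzzleWorldPosition(Vector2 turretPosition, Vector2 barrelOffset, float turretRotation)
        {
            // Rotate the local offset into world space...
            Vector2 rotated = Vector2.Transform(barrelOffset, Matrix.CreateRotationZ(turretRotation));

            // ...and translate it to where the turret actually is.
            return turretPosition + rotated;
        }

        // Usage (hypothetical numbers): spawn a bullet at the left barrel's tip.
        Vector2 spawn = MuzzleWorldPosition(turretPosition, new Vector2(-6, -30), turretRotation);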

    Read the article

  • Rotate an image and get back to its original position - OpenGL ES GLKit

    - by Manoj
    I need to rotate an image in OpenGL ES with GLKit and get it back to its original position.

        rotation += 5;
        _modelViewMatrix = GLKMatrix4Rotate(_modelViewMatrix, GLKMathDegreesToRadians(5), 1, 0, 0);
        _modelViewMatrix = GLKMatrix4Rotate(_modelViewMatrix, GLKMathDegreesToRadians(rotation), 1, 0, 0);

    I also need to move it along the x axis by a certain amount and then bring it back to the original position it started from. How should I do it?

    Read the article

  • XNA: Retrieve texture file name during runtime

    - by townsean
    I'm trying to retrieve the names of the texture files (or their locations) on a mesh. I realize that the texture file name information is not preserved when the model is loaded. I've been doing tons of searching and some experimenting, but I've been met with no luck. I've gathered that I need to extend the content pipeline and store the file location somewhere like ModelMeshPart.Tag. My problem is, even when I'm trying to make my own custom processor, I still can't figure out where the texture file name is. :( Any thoughts? Thanks! UPDATE: Okay, so I found something kind of promising: NodeContent.Identity.SourceFilename, only that returns the location of my .X model, and as I go down the node tree it is always null. Then there's the ContentItem.Name property. It seems to have the names of my meshes, but not my actual texture file names. :(
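
    A sketch of the custom-processor route (hedged: the processor and field names are mine, and it stashes the paths in ModelContent.Tag rather than per ModelMeshPart): in the content pipeline the texture shows up on the materials, as an ExternalReference whose Filename is the source path the importer recorded. Collecting those while the materials are converted and storing the list in Tag makes it available at runtime through Model.Tag.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework.Content.Pipeline;
        using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
        using Microsoft.Xna.Framework.Content.Pipeline.Processors;

        [ContentProcessor(DisplayName = "Model Processor (keeps texture paths)")]
        public class TexturePathModelProcessor : ModelProcessor
        {
            // Source paths of every texture referenced by the model's materials.
            readonly List<string> texturePaths = new List<string>();

            public override ModelContent Process(NodeContent input, ContentProcessorContext context)
            {
                ModelContent model = base.Process(input, context);

                // Whatever is put in ModelContent.Tag is serialized into the .xnb
                // and comes back as Model.Tag at runtime.
                model.Tag = texturePaths;
                return model;
            }

            protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
            {
                var basic = material as BasicMaterialContent;
                if (basic != null && basic.Texture != null)
                    texturePaths.Add(basic.Texture.Filename); // path of the source image file

                return base.ConvertMaterial(material, context);
            }
        }

    At runtime the list can then be read back with something like var paths = (List<string>)myModel.Tag;.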

    Read the article

  • Is IDirectInput8::FindDevice totally broken on Windows 7?

    - by Noora
    I'm developing on Windows 7, and using DirectInput8 for my input needs. I'm tracking gamepad additions and removals (that is, GUID_DEVINTERFACE_HID devices) using the DBT_DEVICEARRIVAL and DBT_DEVICEREMOVECOMPLETE messages, which works fine. However, what I've come to find out is that no matter what I do, passing the received values from DBT_DEVICEARRIVAL to IDirectInput8's FindDevice method, it will always fail to identify the device, returning DIERR_DEVICENOTREG. DirectInput still clearly knows about the device, because I can enumerate and create it just fine. I've tried with three different gamepads, and the error persists, so it's not about that either. I also tried passing some alternative interface GUIDs for the RegisterDeviceNotification call, didn't help. So, has anyone else faced the same problem, and have you found a usable workaround? I'm afraid I'll soon have to stoop down to re-enumerating all devices when something is added or removed, but I'll first give this question one last shot here. EDIT: For the record, I've also tried pretty much every single HID API & SetupAPI function for alternative ways of figuring out the needed GUIDs, with zero success. So if you're facing the same problem as me, don't bother with that route. I'm pretty sure those GUIDs are made up by DirectInput itself somehow. Short of reverse engineering dinput8.dll, I'm truly out of ideas now.

    Read the article

  • How can I read from multiple textures in an OpenGL ES 2 shader?

    - by Peyman Tahghighi
    How can I enable more than one texture in OpenGL ES 2 so that I can sample from all of them in my shader? For example, I'm trying to read from two different textures in my shader for the player's car. This is how I'm currently dealing with the texture for my car:

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, this->texture2DObj);
        glUniform1i(1, 0);

        glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
        glEnableVertexAttribArray(0);
        int offset = 0;
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, this->vertexBufferSize, (const void *)offset);
        offset += 3 * sizeof(GLfloat);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, this->vertexBufferSize, (const void *)offset);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this->indexBuffer);
        glDrawElements(GL_TRIANGLES, this->indexBufferSize, GL_UNSIGNED_SHORT, 0);

        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
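
    For reference, the usual pattern is one texture unit per texture, with each sampler2D uniform set to the index of its unit. A minimal sketch, written with OpenTK's C# GL bindings to stay consistent with the other examples on this page (the ES 2 calls in C/C++ map one-to-one); the program handle, texture handles and uniform names are assumptions:

        // Bind one texture per unit and point each sampler uniform at its unit index.
        void BindCarTextures(int program, int baseTexture, int detailTexture)
        {
            int baseLoc   = GL.GetUniformLocation(program, "u_baseMap");   // assumed uniform name
            int detailLoc = GL.GetUniformLocation(program, "u_detailMap"); // assumed uniform name

            GL.ActiveTexture(TextureUnit.Texture0);                 // unit 0
            GL.BindTexture(TextureTarget.Texture2D, baseTexture);
            GL.Uniform1(baseLoc, 0);                                // u_baseMap samples unit 0

            GL.ActiveTexture(TextureUnit.Texture1);                 // unit 1
            GL.BindTexture(TextureTarget.Texture2D, detailTexture);
            GL.Uniform1(detailLoc, 1);                              // u_detailMap samples unit 1
        }

        // Fragment shader side (GLSL ES), where both samplers can then be read:
        //   uniform sampler2D u_baseMap;
        //   uniform sampler2D u_detailMap;
        //   gl_FragColor = texture2D(u_baseMap, v_uv) * texture2D(u_detailMap, v_uv);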

    Read the article

  • In what kind of variable type is the player position stored on a MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack quickly talk about it... How can software track a player's position so accurately on such a huge world, without loading screens between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer state, I can't imagine what works best.

    Read the article

  • Is there a standard way to track 2d tile positions both locally and on screen?

    - by Magicked
    I'm building a 2D engine based on 32x32 tiles with OpenGL. OpenGL draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different from a standard graph, where Y coordinates move up as they increase. I'm having trouble determining how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My brain wants to set the world position as the bottom left of the object and track every object this way. The problem with this is that I would have to translate it to an on-screen position when rendering. The positive is that I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways of doing this? Should I just suck it up and get used to positions beginning in the top left? Here are the OpenGL calls to start rendering:

        // enable textures since we're going to use these for our sprites
        glEnable(GL_TEXTURE_2D);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

        // enable alpha blending
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // disable the OpenGL depth test since we're rendering 2D graphics
        glDisable(GL_DEPTH_TEST);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);

        glMatrixMode(GL_MODELVIEW);

    I assume I need to change

        glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);

    to:

        glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
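
    If the projection is left as it is (top-left origin) but the world keeps a bottom-left origin, the conversion at render time is a single flip of the Y axis. A small sketch (shown in C#, though the formula is language-agnostic; screenHeight is the viewport height in pixels):

        // World Y grows upward from the bottom-left corner of an object;
        // screen Y grows downward from the top-left corner of the viewport.
        static float WorldToScreenY(float worldY, float objectHeight, float screenHeight)
        {
            return screenHeight - worldY - objectHeight;
        }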

    Read the article

  • What is the most serious limitation of Unity?

    - by ashes999
    Having read this heated question about Unity vs. UDK vs. ID something, I'm curious to know: what are the repeatedly hit, most crippling limitations of Unity? To keep this question from being purely subjective, again, I'm asking about Unity's top repeat offenders - the things that, as a Unity user, you really wish someone had told you about before you started using it. I have heard from someone that Unity does not deal well with version control, since it generates a lot of binary files (which are un-diffable). This, to me, is not really crippling as I work alone. Thoughts?

    Read the article

  • Getting the Center and Radius of an Irregular Object

    - by Moaz ELdeen
    I have drawn an asteroid object manually and would like to get its center/radius with a proper equation; right now I can only get them from calculated, hard-coded values. The code to draw the asteroid:

        void Asteroid::Draw()
        {
            float ratio = app::getWindowWidth() / app::getWindowHeight();

            gl::pushMatrices();
            gl::translate(m_Pos * ratio);
            gl::scale(3.5 * ratio, 3.5 * ratio, 3.5 * ratio);
            gl::color(ci::Color(1, 1, 1));

            gl::drawLine(Vec2f(-15, 0),  Vec2f(-10, -5));
            gl::drawLine(Vec2f(-10, -5), Vec2f(-5, -5));
            gl::drawLine(Vec2f(-5, -5),  Vec2f(-5, -8));
            gl::drawLine(Vec2f(-5, -8),  Vec2f(5, -8));
            gl::drawLine(Vec2f(5, -8),   Vec2f(5, -5));
            gl::drawLine(Vec2f(5, -5),   Vec2f(10, -5));
            gl::drawLine(Vec2f(10, -5),  Vec2f(15, 0));
            gl::drawLine(Vec2f(15, 0),   Vec2f(10, 5));
            gl::drawLine(Vec2f(10, 5),   Vec2f(-10, 5));
            gl::drawLine(Vec2f(-10, 5),  Vec2f(-10, 5));
            gl::drawLine(Vec2f(-15, 0),  Vec2f(-10, 5));

            gl::popMatrices();
        }

    Following the answer, I have written this code to calculate the radius; is it correct or not?

        cinder::Vec2f Asteroid::getCenter()
        {
            return ci::Vec2f(m_Pos.x, m_Pos.y);
        }

        double Asteroid::getRadius()
        {
            ci::Vec2f _vec = (getCenter() - Vec2f(15, 5));
            return _vec.length() * 0.3f;
        }
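
    For reference, a common way to get a bounding circle for a hand-drawn shape like this (a suggestion, not something from the question): take the farthest local-space vertex from the local origin and multiply by the draw scale,

        r = s \cdot \max_i \lVert v_i \rVert

    With the vertices above the farthest points are (-15, 0) and (15, 0), so max ||v_i|| = 15, and with s = 3.5 * ratio that gives r = 52.5 * ratio; the centre in world space is then m_Pos * ratio, matching the translation applied in Draw().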

    Read the article

  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position in which the shapes are getting drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1, 0, 0, 1) it makes my quad spin around. If I just do glRotatef(1, 0, 0, 0) it makes the quad smaller (further away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.
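
    For reference (paraphrasing the fixed-function definition, not something from the question): glRotatef(θ, x, y, z) multiplies the current matrix by a rotation of θ degrees about the normalized axis u = (x, y, z) / ||(x, y, z)||, i.e.

        R = \cos\theta\, I + (1 - \cos\theta)\, u u^{T} + \sin\theta\, [u]_{\times}

    So (0, 0, 1) spins the quad in the screen plane, while (0, 0, 0) cannot be normalized and the result is undefined. glTranslatef likewise multiplies the current (modelview) matrix by a translation; there is no separate camera, so both calls move whatever is drawn after them.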

    Read the article

  • Basic modelling of radar

    - by Hawk66
    I'm currently researching how to model/simulate radar for my naval simulation. Since the emphasis is on modelling ASW or submarines in general, I need only a basic radar model - at least for the beginning. So, does anybody know a resource for such a simple model? The model should take signal strength of the sensor, the size of the target and the terrain (height/ground clutter) into account. Thanks.
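
    Not a resource recommendation, but as a starting point the standard monostatic radar range equation already relates two of the three inputs mentioned - sensor signal strength (transmit power and antenna gain) and target size (radar cross-section):

        P_r = \frac{P_t\, G^2\, \lambda^2\, \sigma}{(4\pi)^3\, R^4}

    where P_t is the transmitted power, G the antenna gain, λ the wavelength, σ the target's radar cross-section and R the range; a detection can then be modelled as P_r exceeding a receiver threshold. Terrain (height and ground clutter) usually has to be layered on top, e.g. as a line-of-sight test against the height map plus a clutter penalty near the surface.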

    Read the article

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm having some trouble using the texture() method inside a beginShape()/endShape() clause. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object as follows:

        void display() {
            Vec2 pos = level.getLevel().getBodyPixelCoord(body);
            float a = body.getAngle(); // needed for rotation

            pushMatrix();
            translate(pos.x, pos.y);
            rotate(-a);

            fill(temp); // temp is a color defined in the constructor
            stroke(0);

            beginShape();
            vertex(-w/2, -h/2);
            vertex(w/2, -h/2);
            vertex(w/2, h - h/2);
            vertex(-w/2, h - h/2);
            endShape(CLOSE);

            popMatrix();
        }

    Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put in texture(img) (img is a PImage defined in the constructor), the stroke gets drawn, the bar isn't filled, and I get the warning:

        texture() is not available with this renderer

    What can I do in order to use textures anyway? I don't even understand the error message, since I don't know much about the different renderers.

    Read the article

  • How to calculate new velocities between resting objects (AABB) after accelerations?

    - by Tiedye
    Lately I have been trying to create a 2D platformer engine in C++ with Direct2D. The problem I am currently having is getting objects that are resting against each other to interact correctly after accelerations like gravity have been applied to them. Right now I can detect collisions and respond to them correctly (I think), and when objects collide they remember which other objects they're resting against, so objects can be pushed by other objects (note that there is no bounce in any collisions, so when objects collide they are guaranteed to become resting until something else happens). Every time the simulation advances, the acceleration for objects is applied to their velocities (for example vx += ax * t, where t is the time elapsed since the last advancement). After these accelerations are applied, I want to check if any objects that are resting against each other are moving at different speeds than their counterparts (as different objects can have different accelerations) and, depending on that difference, either unlink the two objects so they are no longer resting, or even out their velocities so they are moving at the same speed once again. I am having trouble creating an algorithm that can do this across many resting objects. Here's a diagram to help explain my problem.

    Read the article

  • Speed up content loading

    - by user1806687
    I am using the WinForms sample downloaded from the Microsoft website. The problem is that the model loading time is quite long, using:

        contentBuilder.Add(ModelPath, ModelName, null, "ModelProcessor");
        contentManager.Load<Model>(ModelName);

    Even a simple model, such as a cube with no textures, takes 4+ seconds to load. Now, I am no expert on this, but is there any way to decrease loading time? EDIT: I've gone through the code and found out that calling contentBuilder.Build(), which comes right after the contentBuilder.Add() call, takes up most of the time.

    Read the article

  • Cocos-2D asteroids style movement (iOS)

    - by bwheeler96
    So I have a CCSprite subclass, we'll call this Spaceship. Spaceship needs to move on a loop, until I say otherwise by calling a method. The method should look something like

        - (void)moveForeverAtVelocity {
            // method logic
        }

    The class Spaceship has two relevant iVars, resetPosition and targetPosition; the target is where we are headed, the reset is where we set to when we've hit our target. If they are both off-screen this creates a permanent looping effect. So for the logic, I have tried several things, such as

        CCMoveTo *move = [CCMoveTo actionWithDuration:2 position:ccp(100, 100)];
        CCCallBlockN *repeat = [CCCallBlockN actionWithBlock:^(CCNode *node) {
            [self moveForeverAtVelocity];
        }];
        [self runAction:[CCSequence actions:move, repeat, nil]];
        self.position = self.resetPosition;

    recursively calling the moveForeverAtVelocity method. This is pseudo-code, so it's not perfect; I have hard-coded some of the values for the sake of simplicity. Enough garble - the problem I am having: how can I make a method that loops forever, but can be called and reset at will? I'm running into issues with creating multiple instances of this method. If you can offer any assistance with creating this effect, that would be appreciated.

    Read the article

  • UDK - How to make sure a PhysicalMaterial mask actually works?

    - by tomacmuni
    Hello, I have been reading the documentation for UDK about physical materials and masks. I have my 1bit BMP mask, and the two physical material assets I want to shoot off in the black and white channels. I have applied my material to both a rigid body and to a skeletal mesh and neither apparently uses the mask. If I assign a regular physical material (one that doesn't use a mask) then it will work fine, but this defeats the point because it gives only one hit reaction. In the documentation it states that it is possible to extend a class on which we want to use a physical material based on the KActor class's usage. How to do that? Here is the quote: "The following properties [ie, ImpactEffect - Particle system to spawn at the point of impact + ImpactSound - Sound to play when an impact occurs] allow you to attach sounds and effects to physical collisions. These only work on classes which support them, which at the moment is only KActor. By looking at the implementation in KActor though, you can add this functionality to other classes (or you can subclass KActor)." Essentially, how to make sure a PhysicalMaterial mask actually works? What code could be added to a skeletal mesh class perhaps, to get it going? Any help appreciated.

    Read the article

  • Reflection velocity

    - by MindSeeker
    I'm trying to get a moving circular object to bounce (elastically) off of an immovable circular object. Am I doing this right? (The results look right, but I hate to trust that alone, and I can't find a tutorial that tackles this problem and includes the nitty-gritty math/code to verify what I'm doing.) If it is right, is there a better/faster/more elegant way to do this? Note that the object (this) is the moving circle, and the EntPointer object is the immovable circle.

        // take the vector separating the two centers <x, y>, and then get the unit vector of the result:
        MathVector2d unitnormal = MathVector2d(this->Retxpos() - EntPointer->Retxpos(),
                                               this->Retypos() - EntPointer->Retypos()).UnitVector();

        // take the tangent <-y, x> of the unitnormal:
        MathVector2d unittangent = MathVector2d(-unitnormal.ycomp, unitnormal.xcomp);

        MathVector2d V1 = MathVector2d(this->Retxvel(), this->Retyvel());

        // Calculate the normal and tangent vector lengths of the velocity:
        // (the normal changes, the tangent stays the same)
        double LengthNormal = DotProduct(unitnormal, V1);
        double LengthTangent = DotProduct(unittangent, V1);

        MathVector2d VelVecNewNormal = unitnormal.ScalarMultiplication(-LengthNormal);   // the negative of what it was before
        MathVector2d VelVecNewTangent = unittangent.ScalarMultiplication(LengthTangent); // this stays the same

        MathVector2d NewVel = VectorAddition(VelVecNewNormal, VelVecNewTangent); // combine them
        xvel = NewVel.xcomp; // and then apply them
        yvel = NewVel.ycomp;

    Note also that this question is just about velocity; the position code is handled elsewhere (in other words, assume that this code is implemented at the exact moment that the circles begin to overlap). Thanks in advance for your help and time!
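
    For reference, what the code above computes can be written as the standard reflection of the velocity about the contact normal (keep the tangential component, negate the normal one):

        \hat{n} = \frac{p_{\text{moving}} - p_{\text{fixed}}}{\lVert p_{\text{moving}} - p_{\text{fixed}} \rVert}
        \qquad
        v' = v - 2\,(v \cdot \hat{n})\,\hat{n}

    which expands to exactly the v' = v_tangent - v_normal decomposition used in the code, so the two forms can be checked against each other (and the second form needs one dot product rather than two).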

    Read the article

  • How do I render an animation where some frames appear twice?

    - by hustlerinc
    I am animating a sprite. The sprite has 7 different frames, but the animation is 10 frames long. This is because 3 of the original frames appear twice in the animation: 3 -> 4 -> 5 -> 6 -> 4 -> 3 -> 2 -> 1 -> 0 -> 2 Frames 2, 3 and 4 appear twice. This avoids having to store duplicate frames in the spritesheet. How can I render the animation in this sequence with repeated frames?
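
    A minimal sketch of one common approach (framework-agnostic C#; the timing constant is an assumption): keep the 10-step order as an array of indices into the 7 stored frames and advance through that array on a timer, so no frame data is ever duplicated.

        // The animation order from the question; each value indexes one of the 7 sprite frames.
        static readonly int[] Sequence = { 3, 4, 5, 6, 4, 3, 2, 1, 0, 2 };

        const float FrameDuration = 0.1f; // seconds per animation step (assumed)

        static int CurrentSpriteFrame(float elapsedSeconds)
        {
            // Which step of the 10-step sequence we are on, wrapping around forever...
            int step = (int)(elapsedSeconds / FrameDuration) % Sequence.Length;

            // ...and which of the 7 stored frames that step maps to.
            return Sequence[step];
        }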

    Read the article

  • Drawing two orthogonal strings in 3d space in Android Canvas?

    - by hasanghaforian
    I want to draw two strings on a canvas. The first string must be rotated around the Y axis, for example by 45 degrees. The second string must start at the end of the first string and must also be orthogonal to the first string. This is my code:

        String text = "In the";
        float textWidth = redPaint.measureText(text);

        Matrix m0 = new Matrix();
        Matrix m1 = new Matrix();
        Matrix m2 = new Matrix();

        mCamera = new Camera();
        canvas.setMatrix(null);
        canvas.save();

        mCamera.rotateY(45);
        mCamera.getMatrix(m0);
        m0.preTranslate(-100, -100);
        m0.postTranslate(100, 100);
        canvas.setMatrix(m0);
        canvas.drawText(text, 100, 100, redPaint);

        mCamera = new Camera();
        mCamera.rotateY(90);
        mCamera.getMatrix(m1);
        m1.preTranslate(-textWidth - 100, -100);
        m1.postTranslate(textWidth + 100, 100);
        m2.setConcat(m1, m0);
        canvas.setMatrix(m2);
        canvas.drawText(text, 100 + textWidth, 100, greenPaint);

    But in the result, only the first string (the red text) is visible. How can I draw two orthogonal strings in 3D space?

    Read the article
