Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.

Page 359/1022 | < Previous Page | 355 356 357 358 359 360 361 362 363 364 365 366  | Next Page >

  • How to set TextureFilter to Point to make example Bloom filter work?

    - by Mr Bell
    I have a simple app that renders some particles and now I am trying to apply the bloom shader from the XNA samples ( http://create.msdn.com/en-US/education/catalog/sample/bloom ) to it, but I am running into this exception: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4." It is thrown when the BloomComponent tries to end the sprite batch in the DrawFullscreenQuad method: spriteBatch.Begin(0, BlendState.Opaque, SamplerState.PointWrap, null, null, effect); spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White); spriteBatch.End(); //<------- Exception thrown here It seems to be related to the pixel shaders that I am using to animate the particles. In a nutshell, I have a Texture2D in Vector4 format that holds particle positions, and another one for velocities. Here is a snippet from that area: GraphicsDevice.SetRenderTarget(tempRenderTarget); animationEffect.CurrentTechnique = animationEffect.Techniques[technique]; spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointWrap, DepthStencilState.DepthRead, RasterizerState.CullNone, animationEffect); spriteBatch.Draw(randomValues, new Rectangle(0, 0, width, height), Color.White); spriteBatch.End(); When I comment out the code that calls the particle animation pixel shaders, the bloom component runs fine. Is there some state that I need to reset to make the bloom work?
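
    A hedged guess, not a confirmed fix: in the HiDef profile any bound Vector4 texture must be sampled with point filtering, and the particle-animation pass can leave its Vector4 textures (and non-point samplers) bound on the device after the render target is switched away. Resetting those slots before the bloom component draws is worth trying; the helper below is only a sketch, and the slot count and method name are assumptions.

      using Microsoft.Xna.Framework.Graphics;

      static void ResetSamplersForBloom(GraphicsDevice device)
      {
          // Unbind any leftover Vector4 particle textures and fall back to point
          // filtering, the only filter HiDef allows for that format.
          for (int i = 0; i < 2; i++)
          {
              device.Textures[i] = null;
              device.SamplerStates[i] = SamplerState.PointClamp;
          }
      }

    Calling something like this right after the particle pass (or explicitly passing SamplerState.PointClamp to every SpriteBatch.Begin in the bloom component) may keep the HiDef validation from tripping over leftover state.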

    Read the article

  • ContentManager in XNA can't find any XML

    - by user36385
    I'm making a game in XNA 4 and this is the first time I'm using the content loader to initialize a simple class from an XML file, but no matter how many guides I follow, or how simple or complicated my XML file is, the ContentManager can't find the file; the debugger keeps telling me: "A first chance exception of type 'Microsoft.Xna.Framework.Content.ContentLoadException' occurred in Microsoft.Xna.Framework.dll". I'm really confused because I can load SpriteFonts and Texture2Ds without a problem ... I created the following XML (the most basic XNA XML): <?xml version="1.0" encoding="utf-8" ?> <XnaContent> <Asset Type="System.String">Hello</Asset> </XnaContent> and I try to load it in the LoadContent method of my main class like this: System.String hello = Content.Load<System.String>("NewXmlFile"); Is there something I'm doing wrong? I really appreciate your help.
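
    A common cause of this exact symptom (only a guess from the description): Content.Load never reads the .xml itself, it looks for a compiled NewXmlFile.xnb, which only exists if the file was added to the Content project and built with the XML content importer. Under that assumption, a minimal sketch of the load call, with the exception caught so the asset name and RootDirectory can be double-checked:

      using Microsoft.Xna.Framework.Content;

      static string TryLoadGreeting(ContentManager content)
      {
          try
          {
              // Asset name only: no folder prefix unless the file sits in a subfolder,
              // and never the ".xml" extension.
              return content.Load<string>("NewXmlFile");
          }
          catch (ContentLoadException)
          {
              // Thrown when NewXmlFile.xnb was never built, or the name/RootDirectory
              // doesn't match what is on disk.
              return null;
          }
      }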

    Read the article

  • Timestep schemes for physics simulations

    - by ktodisco
    The operations used for stepping a physics simulation are most commonly: Integrate velocity and position Collision detection and resolution Contact resolution (in advanced cases) A while ago I came across this paper from Stanford that proposed an alternative scheme, which is as follows: Collision detection and resolution Integrate velocity Contact resolution Integrate position It's intriguing because it allows for robust solutions to the stacking problem. So it got me wondering... What, if any, alternative schemes are available, either simple or complex? What are their benefits, drawbacks, and performance considerations?
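
    To make the two orderings concrete, here is a rough sketch (not taken from the paper) using semi-implicit Euler for the integration steps; Body, PhysicsWorld, and the two resolution stubs are placeholders rather than a real engine API:

      using System.Collections.Generic;
      using Microsoft.Xna.Framework;

      // Placeholder types: Body and the two resolution stubs stand in for whatever
      // broadphase/solver the engine actually uses.
      class Body { public float Mass = 1f; public Vector2 Position, Velocity, Force; }

      class PhysicsWorld
      {
          public List<Body> Bodies = new List<Body>();
          public virtual void DetectAndResolveCollisions() { /* engine-specific */ }
          public virtual void ResolveContacts() { /* engine-specific */ }

          // Ordering 1 (common scheme): integrate everything, then fix up.
          public void ClassicStep(float dt)
          {
              foreach (var b in Bodies)
              {
                  b.Velocity += b.Force * (dt / b.Mass);  // integrate velocity
                  b.Position += b.Velocity * dt;          // integrate position
              }
              DetectAndResolveCollisions();
              ResolveContacts();
          }

          // Ordering 2 (the paper's scheme): positions only advance after contacts have
          // corrected the freshly integrated velocities, which is what helps stacks stay put.
          public void SplitStep(float dt)
          {
              DetectAndResolveCollisions();
              foreach (var b in Bodies) b.Velocity += b.Force * (dt / b.Mass);
              ResolveContacts();
              foreach (var b in Bodies) b.Position += b.Velocity * dt;
          }
      }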

    Read the article

  • ParticleSystem in Slick2d (with MarteEngine)

    - by Bro Kevin D.
    First of all, sorry if this sounds very newbie-ish. I'm stuck trying to get a ParticleSystem I made using Pedigree to work in my game. It's basically an explosion that I want to display whenever an enemy dies. The ParticleSystem has two emitters, smoke and explosion. I tried putting it in my Enemy class (which extends Entity): @Override public void update(GameContainer gc, int delta) throws SlickException { super.update(gc, delta); /** bunch of code */ explosionSystem.update(delta); } @Override public void render(GameContainer gc, Graphics gfx) throws SlickException { super.render(gc, gfx); if(isDestroyed) { explosionSystem.render(x,y); if(explosionSystem.getEmitter(1).completed()) { this.destroy(); } } } And it does not render. I'm not sure if this is the proper way of implementing it, as I've considered creating an Entity to serve as a controller for all the enemies. Right now, I'm just adding enemies every second. So how do I render the ParticleSystem when the enemy dies? If anyone can point me in the right direction, thank you for your time.

    Read the article

  • Pygame surfaces and their Rects

    - by Jaka Novak
    I am trying to understand how pygame surfaces work. I am confused about the Rect position of a Surface object. If I blit a surface onto the screen at some position, the surface is drawn at the right position, but the rect of the surface is still at position (0, 0)... I tried writing my own surface class with a new rect, but I am not sure if that is the right solution. My goal is to be able to move a surface like an image with rect.move() or something like that. If there is any solution to do that I would be happy to read it. Thanks for your answers and for taking the time to read this awful English. If it helps, I wrote some code for a better understanding of my problem (run it first, and then uncomment two lines of code and run again to see the difference): import pygame from pygame.locals import * class SurfaceR(pygame.Surface): def __init__(self, size, position): pygame.Surface.__init__(self, size) self.rect = pygame.Rect(position, size) self.position = position self.size = size def get_rect(self): return self.rect def main(): pygame.init() screen = pygame.display.set_mode((640, 480)) pygame.display.set_caption("Screen!?") clock = pygame.time.Clock() fps = 30 white = (255, 255, 255) red = (255, 0, 0) green = (0, 255, 0) blue = (0, 0, 255) surface = pygame.Surface((70,200)) surface.fill(red) surface_re = SurfaceR((300, 50), (100, 300)) surface_re.fill(blue) while True: for event in pygame.event.get(): if event.type == QUIT: return 0 screen.blit(surface, (100,50)) screen.blit(surface_re, surface_re.position) #pygame.draw.rect(screen, white, surface.get_rect()) #pygame.draw.rect(screen, white, surface_re.get_rect()) pygame.display.update() clock.tick(fps) if __name__ == "__main__": main()

    Read the article

  • How to code a 4x shader/filter which emulates arcade CRT display behavior?

    - by Arthur Wulf White
    I want to write a shader/filter, probably in Adobe Pixel Bender, that will do the best job possible of emulating the feel of an old-school monochromatic arcade CRT screen. Much like this here: http://filthypants.blogspot.com/2012/07/customizing-cgwgs-crt-pixel-shader.html Here are some attributes I know will exist in this filter: It will take in a low-res image (160 x 120) and return a medium-res image (640 x 480). It will add scanlines. It will blur the color channels to create that color-bleeding effect. It will distort the shape of the image from a perfect rectangle into a rounder shape. The question is, could you please provide any other attributes that are beneficial to emulating an arcade CRT feel, plus links and resources on coding these effects. Thanks

    Read the article

  • When "W" is held, the character moves forward, but when "W" and "A" are held, movement completely stops

    - by Vlad1k
    I am making a 2D game, and when I hold the key "w", the player goes forward, but when I hold both "w" and "a", the movement stops completely, when I want it to go forward, while shifting to the left. Here is my script: var speed = 4; function Update() { // Make the character walk forward if "w" is being held if(Input.GetKey("w")) { rigidbody.velocity = transform.forward * speed; } // Stop the movement if "w" is not being held if(Input.GetKeyUp("w")) { rigidbody.velocity = transform.forward * 0; } // Make the character walk forward if "s" is being held if(Input.GetKey("s")) { rigidbody.velocity = transform.forward * -speed; } // Stop the movement if "s" is not being held if(Input.GetKeyUp("s")) { rigidbody.velocity = transform.forward * 0; } // Make the character walk left if "a" is being held if(Input.GetKey("a")) { rigidbody.velocity = transform.right * -speed; } // Stop the movement if "a" is not being held if(Input.GetKeyUp("a")) { rigidbody.velocity = transform.right * 0; } //Make the character walk right if "d" is being held if(Input.GetKey("d")) { rigidbody.velocity = transform.right * speed; } // Stop the movement if "d" is not being held if(Input.GetKeyUp("d")) { rigidbody.velocity = transform.right * 0; } } PLEASE MAKE THE CODE BETTER! I AM NEW! EDIT: Here is a video to show my problem. http://www.screenr.com/3oxH Here is the newest code: var speed = 4f; function Update() { if(Input.GetKey("w")) { rigidbody.velocity = transform.forward * speed; } else if(Input.GetKey("s")) { rigidbody.velocity = transform.forward * -speed; } else if(Input.GetKey("a")) { rigidbody.velocity = transform.right * -speed; } else if(Input.GetKey("d")) { rigidbody.velocity = transform.right * speed; } if(Input.GetKeyUp("w")) { rigidbody.velocity = transform.forward * 0; } if(Input.GetKeyUp("s")) { rigidbody.velocity = transform.forward * 0; } if(Input.GetKeyUp("a")) { rigidbody.velocity = transform.right * 0; } if(Input.GetKey("d")) { rigidbody.velocity = transform.right * speed; } if(Input.GetKeyUp("d")) { rigidbody.velocity = transform.right * 0; } }
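
    One likely culprit in the script above is that each key's branch overwrites rigidbody.velocity, so when "w" and "a" are held together only the branch that runs last wins, and the GetKeyUp branches zero the velocity even while the other key is still down. A minimal C# sketch of the usual approach, building a single velocity vector per frame from both axes (the axis names assume Unity's default Input settings):

      using UnityEngine;

      public class SimpleMove : MonoBehaviour
      {
          public float speed = 4f;
          Rigidbody body;

          void Start() { body = GetComponent<Rigidbody>(); }

          void Update()
          {
              float forward = Input.GetAxisRaw("Vertical");   // W/S -> +1/-1, 0 when neither is held
              float strafe  = Input.GetAxisRaw("Horizontal"); // A/D -> -1/+1
              Vector3 move = transform.forward * forward + transform.right * strafe;
              if (move.sqrMagnitude > 1f) move.Normalize();   // keep diagonals from being faster
              body.velocity = move * speed;                   // one assignment per frame, no branches
          }
      }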

    Read the article

  • BlitzMax - generating 2D neon glowing line effect to png file

    - by zanlok
    Originally asked on StackOverflow, but it became tumbleweed. I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or laser beam. It doesn't have to be real-time, just rendered to TImage objects and then maybe saved to PNG for later use in animation. I'm happy to use 3D features, but it will be for use in a 2D game. Since it will be on a black/space background, my strategy is to draw a series of white blurred lines with color and high transparency, then eventually central lines that are less blurred and more white. What I want to draw is actually bezier curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser/neon effect because it comes out looking very segmented. So, I think it may be better to use a blur effect/shader on what does render well, which is a 1-pixel bezier curve. The problem I've been having is applying a shader to just a certain area of the screen where lines are drawn. If there's a way to draw lines to a texture and then blur that texture and save the PNG, that would be great to hear about. There's got to be a way to do this, but I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated. Using just 2D calls could be advantageous: simpler to understand and re-use. It would also be very nice to know how to save a PNG that preserves the transparency/alpha stuff. P.S. I've reviewed this post (and many, many others on the Blitz site), have samples working, and have even developed my own 5x5 frag shaders. But it's 3D and a scene-wide thing that doesn't seem to convert to 2D or just a certain area very well. I'd rather understand how to apply shading to a 2D scene, especially using the specifics of BlitzMax.

    Read the article

  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to do every frame for multiple calls (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check to see if any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles) after first screening for any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency). public static bool LOSCheck(Vector2 pos1, Vector2 pos2) { Vector2 currentPos = pos1; Vector2 perMove = (pos2 - pos1); perMove.Normalize(); HashSet<ClutterObject> clutter = new HashSet<ClutterObject>(); foreach (Room r in map.GetRooms()) { if (r != null) { foreach (ClutterObject c in r.GetClutter()) { if (c != null &&!(c.GetRectangle().X * perMove.X < 0) && !(c.GetRectangle().Y * perMove.Y < 0)) { Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y); if ((cVector - pos1).Length() < 1500) clutter.Add(c); } } } } while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500)) { Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0); foreach (ClutterObject c in clutter) { if (position.Intersects(c.GetRectangle())) return false; } currentPos += perMove; } return true; } I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient to determine which objects may be in front of the viewer with greater precision than the rather broad 90 degree window I've given myself?
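
    One direction worth considering (a sketch, not drop-in code): replace the point-stepping loop with an analytic segment-vs-rectangle test, so each clutter rectangle costs a constant-time check instead of up to 1500 stepped samples and as many temporary Rectangle allocations. A minimal slab test against an XNA Rectangle might look like this:

      using Microsoft.Xna.Framework;

      // Slab (ray vs. AABB) test clipped to the segment [a, b]; true if any part of
      // the segment lies inside the rectangle.
      static bool SegmentIntersectsRect(Vector2 a, Vector2 b, Rectangle r)
      {
          Vector2 d = b - a;
          float tMin = 0f, tMax = 1f;

          float[] mins  = { r.Left, r.Top };
          float[] maxs  = { r.Right, r.Bottom };
          float[] start = { a.X, a.Y };
          float[] dir   = { d.X, d.Y };

          for (int i = 0; i < 2; i++)   // X slab, then Y slab
          {
              if (System.Math.Abs(dir[i]) < 1e-6f)
              {
                  if (start[i] < mins[i] || start[i] > maxs[i]) return false; // parallel and outside
              }
              else
              {
                  float t1 = (mins[i] - start[i]) / dir[i];
                  float t2 = (maxs[i] - start[i]) / dir[i];
                  if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                  tMin = MathHelper.Max(tMin, t1);
                  tMax = MathHelper.Min(tMax, t2);
                  if (tMin > tMax) return false;   // slabs don't overlap within the segment
              }
          }
          return true;
      }

    LOSCheck could then return false as soon as this is true for any clutter rectangle, which removes both the long stepping walk and the per-step allocations.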

    Read the article

  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    So the problem I'm having is that when using Nick Gravelyn's tiledlib pipeline for reading and drawing tmx maps in XNA, the transparency color I set in Tiled's editor will work in the editor, but when I draw it the color that's supposed to become transparent still draws. The closest things to a solution that I've found are - 1) Change my sprite batch's BlendState to NonPremultiplied (found this in a buried Tweet). 2) Get the pixels that are supposed to be transparent at some point then Set them all to transparent. Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since it looks like the custom pipeline processor reads in the transparent color and sets it to the color key for transparency according to the code, just something is going wrong somewhere. At least that's what it looks like the code is doing. TileSetContent.cs if (imageNode.Attributes["trans"] != null) { string color = imageNode.Attributes["trans"].Value; string r = color.Substring(0, 2); string g = color.Substring(2, 2); string b = color.Substring(4, 2); this.ColorKey = new Color((byte)Convert.ToInt32(r, 16), (byte)Convert.ToInt32(g, 16), (byte)Convert.ToInt32(b, 16)); } ... TiledHelpers.cs // build the asset as an external reference OpaqueDataDictionary data = new OpaqueDataDictionary(); data.Add("GenerateMipMaps", false); data.Add("ResizetoPowerOfTwo", false); data.Add("TextureFormat", TextureProcessorOutputFormat.Color); data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue); data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta); tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>( new ExternalReference<Texture2DContent>(path), null, data, null, asset); ... I can share more code as well if it helps to understand my problem. Thank you.

    Read the article

  • How should I replan A*?

    - by Gregory Weir
    I've got a pathfinding boss enemy that seeks the player using the A* algorithm. It's a pretty complex environment, and I'm doing it in Flash, so the search can get a bit slow when it's searching over long distances. If the player was stationary, I could just search once, but at the moment I'm searching every frame. This takes long enough that my framerate is suffering. What's the usual solution to this? Is there a way to "replan" A* without redoing the entire search? Should I just search a little less often (every half-second or second) and accept that there will be a little inaccuracy in the path?
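
    The throttling option can be made fairly painless: keep the last path, keep following it, and only re-run A* when the player has drifted far enough from the goal that path was planned for, or when a timer expires. A minimal C# sketch (the FindPath delegate stands in for the real A* call, and the thresholds are made-up numbers to tune):

      using System;
      using System.Collections.Generic;
      using Microsoft.Xna.Framework;

      class PathFollower
      {
          const float ReplanDistance = 64f;   // how far the target may drift before replanning
          const float ReplanInterval = 0.5f;  // minimum seconds between searches

          public Func<Vector2, Vector2, List<Vector2>> FindPath; // plug the real A* in here

          List<Vector2> path = new List<Vector2>();
          Vector2 plannedGoal;
          float sinceReplan;

          public void Update(Vector2 bossPos, Vector2 playerPos, float dt)
          {
              sinceReplan += dt;
              bool goalMoved = Vector2.Distance(playerPos, plannedGoal) > ReplanDistance;
              if (path.Count == 0 || (goalMoved && sinceReplan >= ReplanInterval))
              {
                  path = FindPath(bossPos, playerPos);
                  plannedGoal = playerPos;
                  sinceReplan = 0f;
              }
              // ...follow the cached path here; the small inaccuracy is usually invisible
              // because each replan starts from wherever the boss actually is.
          }
      }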

    Read the article

  • Index out of bounds, Java Bukkit plugin

    - by Robby Duke
    I'm getting index out of bounds errors in my Bukkit plugin, and it's really beginning to piss me off... For the life of me, I can't figure this issue out! Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1 This is where I believe the code is erroring... for(int i = 0; i <= staffOnline.size(); i++) { if(i == staffOnline.size()) { staffList = staffList + staffOnline.get(i); } else { staffList = staffList + staffOnline.get(i) + ", "; } }

    Read the article

  • 2D OBB collision detection, resolving collisions?

    - by Milo
    I currently use OBBs and I have a vehicle that is a rigid body and some buildings. Here is my update() private void update() { camera.setPosition((vehicle.getPosition().x * camera.getScale()) - ((getWidth() ) / 2.0f), (vehicle.getPosition().y * camera.getScale()) - ((getHeight() ) / 2.0f)); //camera.move(input.getAnalogStick().getStickValueX() * 15.0f, input.getAnalogStick().getStickValueY() * 15.0f); if(input.isPressed(ControlButton.BUTTON_GAS)) { vehicle.setThrottle(1.0f, false); } if(input.isPressed(ControlButton.BUTTON_BRAKE)) { vehicle.setBrakes(1.0f); } vehicle.setSteering(input.getAnalogStick().getStickValueX()); vehicle.update(16.6666f / 1000.0f); ArrayList<Building> buildings = city.getBuildings(); for(Building b : buildings) { if(vehicle.getRect().overlaps(b.getRect())) { vehicle.update(-17.0f / 1000.0f); break; } } } The collision detection works well. What doesn't is how they are dealt with. My goal is simple. If the vehicle hits a building, it should stop, and never go into the building. When I apply negative torque to reverse the car should not feel buggy and move away from the building. I don't want this to look buggy. This is my rigid body class: class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private float mass; //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position, getWidth(), getHeight(), angle); } public Vector2D getPosition() { return getRect().getCenter(); } @Override public void update(float timeStep) { //integrate physics //linear Vector2D acceleration = Vector2D.scalarDivide(forces, mass); velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); c = Vector2D.add(getRect().getCenter(), Vector2D.scalarMultiply(velocity , timeStep)); setCenter(c.x, c.y); forces = new Vector2D(0,0); //clear forces //angular float angAcc = torque / inertia; angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { Matrix mat = new Matrix(); float[] Vector2Ds = new float[2]; Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { Matrix mat = new Matrix(); float[] Vectors = new float[2]; Vectors[0] = world.x; Vectors[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vectors); return new Vector2D(Vectors[0], 
Vectors[1]); } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { Vector2D tangent = new Vector2D(-worldOffset.y, worldOffset.x); return Vector2D.add( Vector2D.scalarMultiply(tangent, angularVelocity) , velocity); } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces = Vector2D.add(forces ,worldForce); //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } } Essentially, when any rigid body hits a building it should exhibit the same behavior. How is collision solving usually done? Thanks
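
    A sketch of the usual alternative to rewinding the update when an overlap is found: take a contact normal and penetration depth from the overlap test (for OBBs/SAT, the axis of least overlap), push the vehicle out along that normal, and remove only the velocity component pointing into the building so the car slides along the wall and can still reverse away cleanly. This is C#, and the normal/penetration inputs are assumed to come from the existing overlap test:

      using Microsoft.Xna.Framework;

      // 'position' and 'velocity' stand in for however the rigid body exposes its
      // center and linear velocity; 'normal' points from the building toward the vehicle.
      static void ResolveAgainstStatic(ref Vector2 position, ref Vector2 velocity,
                                       Vector2 normal, float penetration)
      {
          position += normal * penetration;    // positional correction: push out of the wall

          float vn = Vector2.Dot(velocity, normal);
          if (vn < 0f)                          // only if still moving into the wall
          {
              // Keep the tangential part so the car slides; use (1 + restitution) * vn
              // instead of vn for a bouncy response.
              velocity -= normal * vn;
          }
      }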

    Read the article

  • How do I consistently re-size my game window and elements?

    - by Milo
    In my 2D game, I have a flow layout. Inside the flow layout are tables. I have a slider that lets the user make the tables larger or smaller. This makes the background larger or smaller too. Everything should scale proportionally which means the background should stay at the same position when I make things larger, and it almost does. When the scrollbar is at 0, it does exactly this. As the scrollbar gets further down problems arise. I'll toggle the slider maybe 3 times and on the fourth time, the background jumps a little lower on the Y axis. In order to be efficient, I only start rendering the background near the parent of the flow layout. Here it is: void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect ) { int cx, cy, cw, ch; g->getClippingRect(cx,cy,cw,ch); g->setClippingRect(absRect.getX(),absRect.getY(),absRect.getWidth(),absRect.getHeight()); float scale = 0.35f; int w = m_bgSprite->getWidth() * getTableScale() * scale; int h = m_bgSprite->getHeight() * getTableScale() * scale; int numX = ceil(absRect.getWidth() / (float)w) + 2; int numY = ceil(absRect.getHeight() / (float)h) + 2; float offsetX = m_activeTables[0]->getLocation().getX() - w; float offsetY = m_activeTables[0]->getLocation().getY() - h; int startY = childRect.getY(); if(moo) { std::cout << "S=" << startY << ","; } int numAttempts = 0; while(startY + h < absRect.getY() && numAttempts < 1000) { startY += h; if(moo) { std::cout << startY << ","; } numAttempts++; } if(moo) { std::cout << "\n"; moo = false; } g->holdDrawing(); for(int i = 0; i < numX; ++i) { for(int j = 0; j < numY; ++j) { g->drawScaledSprite(m_bgSprite,0,0,m_bgSprite->getWidth(),m_bgSprite->getHeight(), absRect.getX() + (i * w) + (offsetX),absRect.getY() + (j * h) + startY,w,h,0); } } g->unholdDrawing(); g->setClippingRect(cx,cy,cw,ch); } The numeric problem seems to be in the value of startY. I outputted startY figuring out its value: As you can see here, this is me only zooming in, pay attention to the final number before the next s=. You'll notice that, what should happen is, the numbers should be linear, ex: -40, -38, -36, -34, -32, -30, etc. As you'll notice, the start numbers linearly correlate ex: 62k, 64k, 66k, 68k, 70k etc.. but the end result is wrong every third or 4th time. Here is most of the resize code: void LobbyTableManager::setTableScale( float scale ) { scale += 0.3f; scale *= 2.0f; agui::Gui* gotGui = getGui(); float scrollRel = m_vScroll->getRelativeValue(); setScale(scale); rescaleTables(); resizeFlow(); if(gotGui) { gotGui->toggleWidgetLocationChanged(false); } updateScrollBars(); float newVal = scrollRel * m_vScroll->getMaxValue(); if((int)(newVal + 0.5f) > (int)newVal) { newVal++; } m_vScroll->setValue(newVal); static int x = 0; x++; moo = true; //std::cout << m_vScroll->getValue() << std::endl; if(gotGui) { gotGui->toggleWidgetLocationChanged(true); } if(gotGui) { gotGui->_widgetLocationChanged(); } } void LobbyTableManager::valueChanged( agui::VScrollBar* source,int val ) { if(getGui()) { getGui()->toggleWidgetLocationChanged(false); } m_flow->setLocation(0,-val); if(getGui()) { getGui()->toggleWidgetLocationChanged(true); getGui()->_widgetLocationChanged(); } }

    Read the article

  • XNA Masking Mayhem

    - by TropicalFlesh
    I'd like to start by mentioning that I'm just an amateur programmer of the past 2 years with no formal training and know very little about maximizing the potential of graphics hardware. I can write shaders and manipulate a multi-layered drawing environment, but I've basically stuck to minimalist pixel shaders. I'm working on putting dynamic point light shadows in my 2D sidescroller, and have had it working to a reasonable degree. Just chucking it in without working on serious optimizations outside of basic culling, I can get 50 lights or so onscreen at once and still hover around 100 fps. The only issue is that I'm on a very high-end machine and would like to target the game at as many platforms as I can, low and high end. The way I'm doing shadows involves a lot of masking before I can finally draw the light to my light layer. Basically, my technique for achieving such shadows is as follows. See pics in this album: http://imgur.com/a/m2fWw#0 The dark gray represents the background tiles, the light gray represents the foreground tiles, and the yellow represents the shadow-emitting foreground tile. I'll draw the light using a radial gradient and a color of choice. I'll then exclude light from the mask by drawing some geometry extending through the tile from my point light (I don't actually mask the light yet at this point; I'm just illustrating the technique in this image). Finally, I'll re-include the foreground layer in my mask, as I only want shadows to collect on the background layer, and then multiply the light with its mask onto the light layer. My question is simple - how can I go about reducing the number of render target switches I need to do to achieve the following: a. Draw the mask that excludes shadows from the foreground to its own target once per frame. b. For each light that emits shadows: -Begin the light mask as full white -Render shadow geometry as transparent with an opaque blend mode to eliminate shadowed areas from the mask -Render the foreground mask back over the light mask to reintroduce light to the foreground. c. Multiply the light texture with its individual mask onto the main light layer.
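
    One idea that may cut the per-light target switches (a sketch, not tested against the setup above): keep a single light-layer render target and use its alpha channel as the scratch mask, building the mask with alpha-only writes and then drawing the light with a blend state that multiplies by destination alpha. In XNA 4 the two custom blend states would look roughly like this:

      using Microsoft.Xna.Framework.Graphics;

      // Writes only the alpha channel of the current target (used to build the mask
      // in place: full alpha first, then 0 over shadow geometry).
      static readonly BlendState WriteAlphaOnly = new BlendState
      {
          ColorWriteChannels = ColorWriteChannels.Alpha,
          AlphaSourceBlend = Blend.One,
          AlphaDestinationBlend = Blend.Zero,
      };

      // Adds the light to the color channels, scaled by the mask stored in dest alpha.
      static readonly BlendState AddLightThroughMask = new BlendState
      {
          ColorSourceBlend = Blend.DestinationAlpha,
          ColorDestinationBlend = Blend.One,
          AlphaSourceBlend = Blend.Zero,
          AlphaDestinationBlend = Blend.One,
      };

    With something along these lines the per-light work in steps b and c stays on the light layer itself instead of bouncing between mask targets; re-adding the foreground alpha needs an additive-alpha variant of the first state, and whether the whole thing actually beats the current approach on low-end hardware would need profiling.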

    Read the article

  • Why are my Unity procedural animations jerky?

    - by Phoenix Perry
    I'm working in Unity and getting some crazy weird motion behavior. I have a plane and I'm moving it. It's ever so slightly getting about 1 pixel bigger and smaller. It looks like the it's kind of getting squeezed sideways by a pixel. I'm moving a plane by cos and sin so it will spin on the x and z axes. If the planes are moving at Time.time, everything is fine. However, if I put in slower speed multiplier, I get an amazingly weird jerk in my animation. I get it with or without the lerp. How do I fix it? I want it to move very slowly. Is there some sort of invisible grid in unity? Some sort of minimum motion per frame? I put a visual sample of the behavior here. Here's the relevant code: public void spin() { for (int i = 0; i < numPlanes; i++ ) { GameObject g = planes[i] as GameObject; //alt method //currentRotation += speed * Time.deltaTime * 100; //rotation.eulerAngles = new Vector3(0, currentRotation, 0); //g.transform.position = rotation * rotationRadius; //sine method g.GetComponent<PlaneSetup>().pos.x = g.GetComponent<PlaneSetup>().radiusX * (Mathf.Cos((Time.time*speed) + g.GetComponent<PlaneSetup>().startAngle)); g.GetComponent<PlaneSetup>().pos.z = g.GetComponent<PlaneSetup>().radius * Mathf.Sin((Time.time*speed) + g.GetComponent<PlaneSetup>().startAngle); g.GetComponent<PlaneSetup>().pos.y = g.GetComponent<Transform>().position.y; ////offset g.GetComponent<PlaneSetup>().pos.z += 20; g.GetComponent<PlaneSetup>().posLerp.x = Mathf.Lerp(g.transform.position.x,g.GetComponent<PlaneSetup>().pos.x, .5f); g.GetComponent<PlaneSetup>().posLerp.z = Mathf.Lerp(g.transform.position.z, g.GetComponent<PlaneSetup>().pos.z, .5f); g.GetComponent<PlaneSetup>().posLerp.y = g.GetComponent<Transform>().position.y; g.transform.position = g.GetComponent<PlaneSetup>().posLerp; } Invoke("spin",0.0f); } The full code is on github. There is literally nothing else going on. I've turned off all other game objects so it's only the 40 planes with a texture2D shader. I removed it from Invoke and tried it in Update -- still happens. With a set frame rate or not, the same problem occurs. Tested it in Fixed Update. Same issue. The script on the individual plane doesn't even have an update function in it. The data on it could functionally live in a struct. I'm getting between 90 and 123 fps. Going to investigate and test further. I put this in an invoke function to see if I could get around it just occurring in update. There are no physics on these shapes. It's a straight procedural animation. Limited it to 1 plane - still happens. Thoughts? Removed the shader - still happening.
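
    Not a definitive answer, but one thing worth ruling out before blaming Unity: derive the angle from a locally accumulated value instead of Time.time * speed (whose float precision degrades as Time.time grows), and drop the extra Lerp toward the freshly computed position, since lerping halfway to a moving target every frame can add its own wobble. A C# sketch, with radiusX/radius/startAngle mirroring the PlaneSetup fields:

      using UnityEngine;

      public class OrbitPlane : MonoBehaviour
      {
          public float speed = 0.05f;
          public float radiusX = 10f, radius = 10f, startAngle = 0f, zOffset = 20f;

          float angle;   // accumulated locally instead of derived from Time.time

          void Update()
          {
              angle += speed * Time.deltaTime;   // stays precise however long the game runs
              float a = angle + startAngle;
              Vector3 p = transform.position;
              p.x = radiusX * Mathf.Cos(a);
              p.z = radius * Mathf.Sin(a) + zOffset;
              transform.position = p;            // no intermediate Lerp toward the new target
          }
      }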

    Read the article

  • Best way to go about sorting 2D sprites in an "RPG Maker"-styled RPG

    - by Aaron Stewart
    I am trying to come up with the best way to create overlapping sprites without having any issues. I was thinking of having a SortedDictionary and setting the Entity's key to its Y position relative to the max bound of the simulation, aka the Z value. I'd update the "Z" value in the update method each frame, if the entity's position has changed at all. For those who don't know what I mean, I want characters who are standing closer in front of another character to be drawn on top, and if they are behind the character, to be drawn behind. I'm leery of using SpriteBatch back-to-front or front-to-back sorting; I've been doing some searching and am under the impression they are a bad idea, and I want to know exactly how other people are dealing with their depth sorting. Ultimately I'm just trying to come up with the best sorting method, for good practice, before I get too far in to refactor the system effectively.
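
    For what it's worth, a plain list sorted once per frame is usually enough at this scale and avoids both SortedDictionary key churn and SpriteBatch's depth-sorting modes. A minimal C# sketch (Entity is a stand-in for the real class; sorting on the Y of the feet keeps tall sprites layered sensibly):

      using System.Collections.Generic;
      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      // 'Entity' is hypothetical: only a texture, a position and a height are needed here.
      class Entity
      {
          public Texture2D Texture;
          public Vector2 Position;   // top-left corner
          public int Height;
          public float Feet { get { return Position.Y + Height; } }
      }

      static void DrawSorted(SpriteBatch batch, List<Entity> entities)
      {
          entities.Sort((a, b) => a.Feet.CompareTo(b.Feet)); // lower on screen = drawn later = on top
          batch.Begin();                                     // plain Deferred mode, no layerDepth needed
          foreach (var e in entities)
              batch.Draw(e.Texture, e.Position, Color.White);
          batch.End();
      }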

    Read the article

  • MD5 vertex skinning problem extending to multi-jointed skeleton (GPU Skinning)

    - by Soapy
    Currently I'm trying to implement GPU skinning in my project. So far I have achieved single joint translation and rotation, and multi-jointed translation. The problem arises when I try to rotate a multi-jointed skeleton. The image above shows the current progress. The left image shows how the model should deform. The middle image shows how it deforms in my project. The right shows a better deform (still not right) inverting a certain value, which I will explain below. The way I get my animation data is by exporting it to the MD5 format (MD5mesh for mesh data and MD5anim for animation data). When I come to parse the animation data, for each frame, I check if the bone has a parent, if not, the data is passed in as is from the MD5anim file. If it does have a parent, I transform the bones position by the parents orientation, and the add this with the parents translation. Then the parent and child orientations get concatenated. This is covered at this website. if (Parent < 0){ ... // Save this data without editing it } else { Math3::vec3 rpos; Math3::quat pq = Parent.Quaternion; Math3::quat pqi(pq); pqi.InvertUnitQuat(); pqi.Normalise(); Math3::quat::RotateVector3(rpos, pq, jv); Math3::vec3 npos(rpos + Parent.Pos); this->Translation = npos; Math3::quat nq = pq * jq; nq.Normalise(); this->Quaternion = nq; } And to achieve the image to the right, all I need to do is to change Math3::quat::RotateVector3(rpos, pq, jv); to Math3::quat::RotateVector3(rpos, pqi, jv);, why is that? And this is my skinning shader. SkinningShader.vert #version 330 core smooth out vec2 vVaryingTexCoords; smooth out vec3 vVaryingNormals; smooth out vec4 vWeightColor; uniform mat4 MV; uniform mat4 MVP; uniform mat4 Pallete[55]; uniform mat4 invBindPose[55]; layout(location = 0) in vec3 vPos; layout(location = 1) in vec2 vTexCoords; layout(location = 2) in vec3 vNormals; layout(location = 3) in int vSkeleton[4]; layout(location = 4) in vec3 vWeight; void main() { vec4 wpos = vec4(vPos, 1.0); vec4 norm = vec4(vNormals, 0.0); vec4 weight = vec4(vWeight, (1.0f-(vWeight[0] + vWeight[1] + vWeight[2]))); normalize(weight); mat4 BoneTransform; for(int i = 0; i < 4; i++) { if(vSkeleton[i] != -1) { if(i == 0) { // These are interchangable for some reason // BoneTransform = ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]); BoneTransform = ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]); } else { // These are interchangable for some reason // BoneTransform += ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]); BoneTransform += ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]); } } } wpos = BoneTransform * wpos; vWeightColor = weight; vVaryingTexCoords = vTexCoords; vVaryingNormals = normalize(vec3(vec4(vNormals, 0.0) * MV)); gl_Position = wpos * MVP; } The Pallete matrices are the matrices calculated using the above code (a rotation and translation matrix get created from the translation and quaternion). The invBindPose matrices are simply the inverted matrices created from the joints in the MD5mesh file. Update 1 I looked at GLM to compare the values I get with my own implementation. They turn out to be exactly the same. So now i'm checking if there's a problem with matrix creation... Update 2 Looked at GLM again to compare matrix creation using quaternions. Turns out that's not the problem either.

    Read the article

  • C++ problem with assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a seperate projection matrix. I have looked over my code over and over again, but I probably keep on missing the obvious reason why this is happening. Here is an image of my game: It's simply a 6 sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen: void C_MediaLoader::display(void) { float tmp; glTranslatef(0,0,0); // rotate it around the y axis glRotatef(angle,0.f,0.f,1.f); glColor4f(1,1,1,1); // scale the whole asset to fit into our view frustum tmp = scene_max.x-scene_min.x; tmp = aisgl_max(scene_max.y - scene_min.y,tmp); tmp = aisgl_max(scene_max.z - scene_min.z,tmp); tmp = (1.f / tmp); glScalef(tmp/5, tmp/5, tmp/5); // center the model //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z ); // if the display list has not been made yet, create a new one and // fill it with scene contents if(scene_list == 0) { scene_list = glGenLists(1); glNewList(scene_list, GL_COMPILE); // now begin at the root node of the imported data and traverse // the scenegraph by multiplying subsequent local transforms // together on GL's matrix stack. recursive_render(scene, scene->mRootNode); glEndList(); } glCallList(scene_list); } void C_MediaLoader::recursive_render (const struct aiScene *sc, const struct aiNode* nd) { unsigned int i; unsigned int n = 0, t; struct aiMatrix4x4 m = nd->mTransformation; // update transform aiTransposeMatrix4(&m); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++) { int index = face->mIndices[i]; if(mesh->mColors[0] != NULL) glColor4fv((GLfloat*)&mesh->mColors[0][index]); if(mesh->mNormals != NULL) glNormal3fv(&mesh->mNormals[index].x); glVertex3fv(&mesh->mVertices[index].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n]); } glPopMatrix(); } Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

    Read the article

  • Stats on Screen Size for Flash Games

    - by ashes999
    I'm working on a Flash game after many, many years. I'm trying to figure out what size to make my application (e.g. 600x800). Because it's a tall (not wide) game, I'm confused. I know about (and love) the Steam hardware stats. However, for Flash gaming, I have two nit-picks with their survey sample: 1) It caters to more hardcore gamers with better hardware (overall). 2) It captures only a subset of Flash gamers; it doesn't capture people who play at school, at work, etc., or on netbooks and lighter machines. Are there any statistics I can use to determine which size to use? Ideally, I would like to know something like: 800x600 will fit 94% of users' screens, 1024x768 will fit 74% of users' screens, 1200x960 will fit 53% of users' screens, etc.

    Read the article

  • How do I move the camera sideways in Libgdx?

    - by Bubblewrap
    I want to move the camera sideways (strafe). I had the following in mind, but it doesn't look like there are standard methods to achieve this in Libgdx. If I want to move the camera sideways by x, I think I need to do the following: Create a Matrix4 mat Determine the orthogonal vector v between camera.direction and camera.up Translate mat by v*x Multiply camera.position by mat Will this approach do what I think it does, and is it a good way to do it? And how can I do this in libgdx? I get "stuck" at step 2, as I have not found any standard method in Libgdx to calculate an orthogonal vector. EDIT: I think I can use camera.direction.crs(camera.up) to find v. I'll try this approach tonight and see if it works. EDIT2: I got it working and didn't need the matrix after all: Vector3 right = camera.direction.cpy().crs(camera.up).nor(); camera.position.add(right.mul(x));

    Read the article

  • RenderTarget2D behavior in XNA

    - by Utkarsh Sinha
    I've been dabbling with XNA for a couple of days now. This chunk of code doesn't work as I expect. The goal is to render sprites individually and composite them on another rendertarget. P = RenderTarget2D(with RenderTargetUsage.PreserveContents) D = RenderTarget2D(with RenderTargetUsage.DiscardContents) for all sprites: graphicsDevice.SetRenderTarget(D); <draw sprite i> graphicsDevice.SetRenderTarget(P); <Draw D> graphicsDevice.SetRenderTarget(null); <Draw P> The result I get is - only the last sprite is visible. I'm sure I'm missing some piece of information about RenderTarget2D. Any hints on what that might be? Cross posted from - http://stackoverflow.com/questions/9970349/weird-rendertarget2d-behaviour
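
    Hard to say what is wrong from the snippet alone, but a sketch of the compositing loop with two details made explicit may help: D's contents are undefined every time it is re-bound (DiscardContents), so it needs a Clear, and it has to be drawn into P with alpha blending, otherwise each full-screen draw of D replaces whatever P already holds. Parameter names here are made up:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      static void Composite(GraphicsDevice device, SpriteBatch batch,
                            RenderTarget2D d, RenderTarget2D p,
                            Texture2D[] sprites, Vector2[] positions)
      {
          for (int i = 0; i < sprites.Length; i++)
          {
              device.SetRenderTarget(d);
              device.Clear(Color.Transparent);            // D is undefined after re-binding
              batch.Begin();
              batch.Draw(sprites[i], positions[i], Color.White);
              batch.End();

              device.SetRenderTarget(p);                  // P keeps its pixels (PreserveContents)
              batch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
              batch.Draw(d, Vector2.Zero, Color.White);   // only sprite i's pixels land on P
              batch.End();
          }

          device.SetRenderTarget(null);
          batch.Begin();
          batch.Draw(p, Vector2.Zero, Color.White);       // final composite to the backbuffer
          batch.End();
      }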

    Read the article

  • Create edges in Blender

    - by Mikey
    I've worked with 3DS Max at uni and am trying to learn Blender. My problem is that I know a lot of simple techniques from 3DS Max that I'm having trouble translating into Blender. So my question is: say I have a poly in the middle of a mesh and I want to split it in two, simply adding an edge between two edges. This would create two 5-gons, one on either side. It's a simple technique I use every now and then when I want to modify geometry. It's called "Edge Connect" in 3DS Max. In Blender the only edge-connect method I can find is to create edge loops, which is not helpful when aiming at low-poly iPhone games. Is there an equivalent in Blender?

    Read the article

  • How does this game loop actually work?

    - by Nicolai
    I read this PlayfulJS post about ray-casting: http://www.playfuljs.com/a-first-person-engine-in-265-lines/ It looks really interesting, so I decided to look at the JavaScript. I am no expert in JavaScript, so I quickly got lost. It's the game loop "object" that really gets me. I simply don't understand how it works. From the code: function GameLoop() { this.frame = this.frame.bind(this); this.lastTime = 0; this.callback = function() {}; } GameLoop.prototype.start = function(callback) { this.callback = callback; requestAnimationFrame(this.frame); }; GameLoop.prototype.frame = function(time) { var seconds = (time - this.lastTime) / 1000; this.lastTime = time; if (seconds < 0.2) this.callback(seconds); requestAnimationFrame(this.frame); }; var loop = new GameLoop(); loop.start(function frame(seconds) { map.update(seconds); player.update(controls.states, map, seconds); camera.render(player, map); }); Now, what really confuses me here is this bind stuff and how this actually loops. I am guessing that if less than 0.2 seconds have passed since the last time the loop was run, it simply goes back to re-check the time. If more than 0.2 seconds have passed, it leaves the frame function and executes the 3 lines in the loop. But if this is true, then how does loop.start() get called again? And what on earth is the meaning of this.frame = this.frame.bind(this);? I've looked up prototype's bind() but I really don't understand it.

    Read the article

  • Best memory allocation strategy for iOS?

    - by Mr.Gando
    Hey guys, I'm debating with myself about memory allocation on iOS. I write most of my code in C++ and I really like using ObjectPools, FreeLists, etc. in order to pre-allocate a lot of the stuff that I'll be constantly allocating and deallocating during the course of my game (like particles, game entities, etc.). Still, on iOS it's not like we are developing for a console like the PSP, where I know for a fact that I'll get a fixed amount of memory; iOS will issue "memory warnings" when the system needs memory. Does anyone have some suggestions about this? Is it less of an issue now that the new iPod touch/iPhone 4 carry more RAM, or is it still a big concern? Thanks!

    Read the article

< Previous Page | 355 356 357 358 359 360 361 362 363 364 365 366  | Next Page >