Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 293/657 | < Previous Page | 289 290 291 292 293 294 295 296 297 298 299 300  | Next Page >

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash MovieClips and timelines. Then I, the programmer, would like to read the raw Flash data and re-create it in my engine of choice (which happens to be Torque2D). The data I'd want is:

    - the bitmap images that were used in Flash, like the head and the body
    - the links between the images, like where the head connects to the body
    - the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever

    Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). That would take too much space in my case, so I'd like to avoid it; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format, but NOT the individual components like the MovieClips. So does anyone know of a library I can use, or how I can learn more about the MovieClip components and whatnot? (better tags: transform, export, convert)

    Read the article

  • Rotation of bitmap using a frame by frame animation

    - by pengume
    Hey everyone, I know this has probably been asked a ton of times, but I just wanted to check whether I am approaching this correctly, since I ran into some problems rotating a bitmap. Basically I have one large bitmap that holds four frames, and I draw one frame at a time, stepping through the bitmap in increments to animate walking. I can get the bitmap to rotate correctly when it is not moving, but once the animation starts it cuts off a lot of the image and sometimes becomes very fuzzy. I have tried:

        public void draw(Canvas canvas, int pointerX, int pointerY) {
            Matrix m;
            if (setRotation) {
                // canvas.save();
                m = new Matrix();
                m.reset();
                // spriteWidth and spriteHeight are for just the current frame shown
                m.setTranslate(spriteWidth / 2, spriteHeight / 2);
                // get and set rotation for ninja based off of joystick
                m.preRotate((float) GameControls.getRotation());
                // create the rotated bitmap
                flipedSprite = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), m, true);
                // set new bitmap to rotated ninja
                setBitmap(flipedSprite);
                // canvas.restore();
                Log.d("Ninja View", "angle of rotation= " + (float) GameControls.getRotation());
                setRotation = false;
            }
        }

    And then the draw call itself:

        // create the destination rectangle for the ninja's current animation frame;
        // pointerX and pointerY are from the joystick moving the ninja around
        destRect = new Rect(pointerX, pointerY, pointerX + spriteWidth, pointerY + spriteHeight);
        canvas.drawBitmap(bitmap, getSourceRect(), destRect, null);

    The animation is four frames long; the source rectangle is incremented by 66 (the width of one frame on the bitmap) every frame, and reset to 0 at the end of the loop.

    Read the article

  • How to get GameElements (RigidBody) size in Unity

    - by Shivan Dragon
    I've made a prefab consisting of a cube which I've scaled to more resemble a brick. There's also a Rigidbody added to the cube (in the prefab). Now I want to use that prefab in a C# script to build a wall out of multiple bricks. My question is: how can I access the dimensions of my brick (width, height, and the z-axis size), so that in my script I can place bricks one next to the other (and then one on top of the other)? I've looked at the documentation for GameObject and Rigidbody but I can't find anything helpful. Just for reference, my script so far is:

        public GameObject brick;

        void Start () {
            Instantiate(this.brick, new Vector3(0.01326297f, -30.07855f, 100f), Quaternion.identity);
            // int brickWidth = this.brick.????;
        }
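    One hedged way to answer this, sketched in C#: in Unity, an instance's world-space size is exposed through its Renderer (or Collider) bounds, not through GameObject or Rigidbody. The brick count and wall layout below are made-up values for illustration.

        // Minimal sketch, assuming the prefab has a Renderer; Collider.bounds
        // would work the same way. Bounds are world-space and axis-aligned.
        using UnityEngine;

        public class Wall : MonoBehaviour
        {
            public GameObject brick;

            void Start()
            {
                GameObject first = (GameObject)Instantiate(brick, Vector3.zero, Quaternion.identity);
                Vector3 size = first.GetComponent<Renderer>().bounds.size;

                for (int i = 1; i < 10; i++) // 10 bricks: an arbitrary example count
                {
                    // each brick starts where the previous one ends
                    Instantiate(brick, new Vector3(i * size.x, 0f, 0f), Quaternion.identity);
                }
            }
        }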

    Read the article

  • Finding the contact point with SAT

    - by Kai
    The Separating Axis Theorem (SAT) makes it simple to determine the Minimum Translation Vector (MTV), i.e. the shortest vector that can separate two colliding objects. However, what I need is the vector that separates the objects along the direction the penetrating object is moving (i.e. back to the contact point). I drew a picture to help clarify: there is one box, moving from the before to the after position; in its after position, it intersects the grey polygon. SAT can easily return the MTV, which is the red vector. I am looking to calculate the blue vector. My current solution performs a binary search between the before and after positions until the length of the blue vector is known to within a certain threshold. It works, but it's a very expensive calculation, since the collision between the shapes has to be retested on every iteration. Is there a simpler and/or more efficient way to find the contact point vector?
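    A hedged alternative to the binary search, assuming the MTV's normal belongs to the face the box actually entered through (when it doesn't, the result still makes a good first guess for a single refinement step): project the penetration depth onto the motion direction.

        // Sketch in C#: how far to pull the box back along its (normalized)
        // motion direction so the penetration along the MTV normal vanishes.
        // Vec2 is a stand-in type; any vector library works.
        using System;

        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
            public static Vec2 operator *(Vec2 v, float s) { return new Vec2(v.X * s, v.Y * s); }
            public float Dot(Vec2 o) { return X * o.X + Y * o.Y; }
            public float Length() { return (float)Math.Sqrt(X * X + Y * Y); }
        }

        static class ContactSolver
        {
            public static float BackoffDistance(Vec2 mtv, Vec2 motionDir)
            {
                float depth = mtv.Length();
                if (depth < 1e-6f) return 0f;
                Vec2 n = mtv * (1f / depth);           // unit MTV normal
                float cos = motionDir.Dot(n);          // negative when moving into the face
                if (Math.Abs(cos) < 1e-6f) return 0f;  // grazing contact: caller should clamp
                return depth / -cos;                   // t such that t * (d . n) cancels the depth
            }
        }

    Moving the box back by BackoffDistance(...) along the reversed motion direction gives the blue vector in one shot, with no repeated SAT tests.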

    Read the article

  • Extract derived 3D scaling from a 3D Sprite to set to a 2D billboard

    - by Bill Kotsias
    I am trying to get the derived position and scaling of a 3D Sprite and apply them to a 2D Sprite. I have managed to do the first part like this:

        var p:Point = sprite3d.local3DToGlobal(new Vector3D(0, 0, 0));
        billboard.x = p.x;
        billboard.y = p.y;

    But I can't get the scaling part right. I am trying this:

        var mat:Matrix3D = sprite3d.transform.getRelativeMatrix3D(stage); // get derived matrix(?)
        var scaleV:Vector3D = mat.decompose()[2]; // get scaling vector from derived matrix
        var scale:Number = scaleV.length;
        billboard.scaleX = scale;
        billboard.scaleY = scale;

    ...but the result is apparently wrong. PS: one might ask what I am trying to achieve. I am trying to create "billboard" 3D sprites, i.e. sprites which are affected by all 3D transformations except rotations, so that they always face the "camera".

    Read the article

  • extrapolating object state based on updates

    - by user494461
    I have a networked multi-user collaborative application. To maintain a consistent virtual world, I send updates for objects from a master peer to a guest peer. The update state contains the x, y, z coordinates of the object's center and its rotation matrix (the CHAI3D API uses a 3x3 matrix), sent at 30 Hz. I want to reduce this update rate by running a predictor on both peers: when the predicted value drifts outside an error bound, say 10%, compared to the master peer's actual object state, the master peer triggers a state update. For position I send velocity along with position, so that the guest peer can extrapolate position. Like velocity for position, what parameter should I use for rotation extrapolation?
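    The rotational analogue of velocity is angular velocity: send the orientation together with an angular velocity vector (rotation axis scaled by rad/s) and let the guest integrate it forward. A hedged sketch in C#, using quaternions for compactness rather than the 3x3 matrices the question mentions:

        // Dead reckoning for orientation: q(t + dt) = delta(w, dt) * q(t).
        // The master resends when the predicted orientation drifts past the
        // error bound (e.g. measured by the angle between quaternions).
        using System;
        using System.Numerics;

        static class RotationExtrapolation
        {
            public static Quaternion Extrapolate(Quaternion q0, Vector3 angularVelocity, float dt)
            {
                float angle = angularVelocity.Length() * dt;   // radians turned during dt
                if (angle < 1e-8f) return q0;
                Vector3 axis = Vector3.Normalize(angularVelocity);
                Quaternion delta = Quaternion.CreateFromAxisAngle(axis, angle);
                return Quaternion.Normalize(delta * q0);       // renormalize against drift
            }
        }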

    Read the article

  • Geometry shader for multiple primitives

    - by Byte56
    How can I create a geometry shader that can handle multiple primitives? For example, when creating a geometry shader for triangles, I define a layout like so:

        layout(triangles) in;
        layout(triangle_strip, max_vertices=3) out;

    But if I use this shader then lines or points won't show up. So adding:

        layout(triangles) in;
        layout(triangle_strip, max_vertices=3) out;
        layout(lines) in;
        layout(line_strip, max_vertices=2) out;

    The shader will compile and run, but will only render lines (or whatever the last primitive defined is). So how do I define a single geometry shader that will handle multiple types of primitives? Or is that not possible, and do I need to create multiple shader programs and switch between them before drawing each primitive type?

    Read the article

  • How do I identify mouse clicks versus mouse down in games?

    - by Tristan
    What is the most common way of handling mouse clicks in games, given that all you have for detecting mouse input is whether a button is up or down? I currently just treat mouse-down as a click, but this simple approach limits me greatly in what I want to achieve. For example, I have some code that should run only once per click, but using mouse-down as the click can cause the code to run more than once, depending on how long the button is held down. So I need to act on an actual click! But what is the best way to define one? Is a click when the button goes from up to down, or from down to up? Or is it a click only if the button was down for less than x frames/milliseconds, and if so, is it a mouse-down and then a click when it stays down for x frames/milliseconds, or a click and then a mouse-down? I can see that each of these approaches has its uses, but which is the most common in games? And, more specifically, which is the most common in RTS games?
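    The common pattern is edge detection: remember last frame's button state and fire on the transition, which turns a level ("is down") into an event ("was clicked"). A hedged XNA-style C# sketch:

        // A click is reported exactly once, on the frame the left button
        // goes from Pressed to Released. RTS-style UIs often additionally
        // check that press and release happened over the same unit/region.
        using Microsoft.Xna.Framework.Input;

        class MouseClickDetector
        {
            private MouseState previous;

            public bool LeftClicked() // call once per update tick
            {
                MouseState current = Mouse.GetState();
                bool clicked = previous.LeftButton == ButtonState.Pressed
                            && current.LeftButton == ButtonState.Released;
                previous = current;
                return clicked;
            }
        }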

    Read the article

  • Off center projection

    - by N0xus
    I'm trying to implement the code that was freely given by a very kind developer at the following link: http://forum.unity3d.com/threads/142383-Code-sample-Off-Center-Projection-Code-for-VR-CAVE-or-just-for-fun Right now, all I'm trying to do is bring it in on one camera, but I have a few issues. My class looks as follows:

        using UnityEngine;
        using System.Collections;

        public class PerspectiveOffCenter : MonoBehaviour {

            void Start () { }

            void Update () { }

            public static Matrix4x4 GeneralizedPerspectiveProjection(Vector3 pa, Vector3 pb, Vector3 pc, Vector3 pe, float near, float far)
            {
                Vector3 va, vb, vc;
                Vector3 vr, vu, vn;
                float left, right, bottom, top, eyedistance;
                Matrix4x4 transformMatrix;
                Matrix4x4 projectionM;
                Matrix4x4 eyeTranslateM;
                Matrix4x4 finalProjection;

                // Calculate the orthonormal basis for the screen (the screen coordinate system)
                vr = pb - pa; vr.Normalize();
                vu = pc - pa; vu.Normalize();
                vn = Vector3.Cross(vr, vu); vn.Normalize();

                // Calculate the vectors from the eye (pe) to the screen corners (pa, pb, pc)
                va = pa - pe;
                vb = pb - pe;
                vc = pc - pe;

                // Get the distance from the eye to the screen plane
                eyedistance = -(Vector3.Dot(va, vn));

                // Get the variables for the off-center projection
                left = (Vector3.Dot(vr, va) * near) / eyedistance;
                right = (Vector3.Dot(vr, vb) * near) / eyedistance;
                bottom = (Vector3.Dot(vu, va) * near) / eyedistance;
                top = (Vector3.Dot(vu, vc) * near) / eyedistance;

                // Get this projection
                projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far);

                // Fill in the transform matrix
                transformMatrix = new Matrix4x4();
                transformMatrix[0, 0] = vr.x; transformMatrix[0, 1] = vr.y; transformMatrix[0, 2] = vr.z; transformMatrix[0, 3] = 0;
                transformMatrix[1, 0] = vu.x; transformMatrix[1, 1] = vu.y; transformMatrix[1, 2] = vu.z; transformMatrix[1, 3] = 0;
                transformMatrix[2, 0] = vn.x; transformMatrix[2, 1] = vn.y; transformMatrix[2, 2] = vn.z; transformMatrix[2, 3] = 0;
                transformMatrix[3, 0] = 0; transformMatrix[3, 1] = 0; transformMatrix[3, 2] = 0; transformMatrix[3, 3] = 1;

                // Now for the eye transform
                eyeTranslateM = new Matrix4x4();
                eyeTranslateM[0, 0] = 1; eyeTranslateM[0, 1] = 0; eyeTranslateM[0, 2] = 0; eyeTranslateM[0, 3] = -pe.x;
                eyeTranslateM[1, 0] = 0; eyeTranslateM[1, 1] = 1; eyeTranslateM[1, 2] = 0; eyeTranslateM[1, 3] = -pe.y;
                eyeTranslateM[2, 0] = 0; eyeTranslateM[2, 1] = 0; eyeTranslateM[2, 2] = 1; eyeTranslateM[2, 3] = -pe.z;
                eyeTranslateM[3, 0] = 0; eyeTranslateM[3, 1] = 0; eyeTranslateM[3, 2] = 0; eyeTranslateM[3, 3] = 1f;

                // Multiply all together
                finalProjection = Matrix4x4.identity * projectionM * transformMatrix * eyeTranslateM;

                // Finally return
                return finalProjection;
            }

            public void FixedUpdate ()
            {
                Camera cam = camera;

                // Calculate projection
                Matrix4x4 genProjection = GeneralizedPerspectiveProjection(
                    new Vector3(0, 1, 0), new Vector3(1, 1, 0),
                    new Vector3(0, 0, 0), new Vector3(0, 0, 0),
                    cam.nearClipPlane, cam.farClipPlane);
                //(BottomLeftCorner, BottomRightCorner, TopLeftCorner, trackerPosition, cam.nearClipPlane, cam.farClipPlane);

                cam.projectionMatrix = genProjection;
            }
        }

    My error lies in:

        projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far);

    The compiler states: Expression denotes a `type', where a `variable', `value' or `method group' was expected. Thus, I changed the line to read:

        projectionM = new PerspectiveOffCenter(left, right, bottom, top, near, far);

    But then the error changes to: The type `PerspectiveOffCenter' does not contain a constructor that takes `6' arguments. For reasons that are obvious. So, finally, I changed the line to read:

        projectionM = new GeneralizedPerspectiveProjection(left, right, bottom, top, near, far);

    And the error I get is: ...is a `method' but a `type' was expected. With this last error, I'm not sure what it is I should do or what I'm missing. Can anyone see what it is that I'm missing to fix this error?
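    For what it's worth, all three compile errors stem from one thing: the forum snippet calls a helper named PerspectiveOffCenter(left, right, bottom, top, near, far), which collides with the class name here and, in this excerpt, was never copied in. A hedged sketch of that helper, using the standard glFrustum-style off-center matrix (the same form as Unity's own projection-matrix documentation example); the name is changed to avoid the clash:

        // Builds an off-center perspective projection from frustum extents on
        // the near plane. Replace the PerspectiveOffCenter(...) call inside
        // GeneralizedPerspectiveProjection with a call to this method.
        static Matrix4x4 BuildOffCenterProjection(float left, float right,
                                                  float bottom, float top,
                                                  float near, float far)
        {
            float x = (2.0f * near) / (right - left);
            float y = (2.0f * near) / (top - bottom);
            float a = (right + left) / (right - left);
            float b = (top + bottom) / (top - bottom);
            float c = -(far + near) / (far - near);
            float d = -(2.0f * far * near) / (far - near);

            Matrix4x4 m = new Matrix4x4();
            m[0, 0] = x;  m[0, 1] = 0f; m[0, 2] = a;   m[0, 3] = 0f;
            m[1, 0] = 0f; m[1, 1] = y;  m[1, 2] = b;   m[1, 3] = 0f;
            m[2, 0] = 0f; m[2, 1] = 0f; m[2, 2] = c;   m[2, 3] = d;
            m[3, 0] = 0f; m[3, 1] = 0f; m[3, 2] = -1f; m[3, 3] = 0f;
            return m;
        }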

    Read the article

  • Flash framerate reliability

    - by Tim Cooper
    I am working in Flash and a few things have been brought to my attention. Below is some code I have questions about:

        addEventListener(Event.ENTER_FRAME, function(e:Event):void {
            if (KEY_RIGHT) {
                // Move character right
            }
            // Etc.
        });

        stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
            // Report key which is down
        });

        stage.addEventListener(KeyboardEvent.KEY_UP, function(e:KeyboardEvent):void {
            // Report key which is up
        });

    I have the project configured with a frame rate of 60 FPS. The two questions I have on this are: What happens when the player's machine is unable to call that function every 1/60 of a second? And is this a reasonable way to process events that need to be limited by time (for example, a ball which needs to travel from the left of the screen to the right in X seconds), or should it be done a different way?

    Read the article

  • Rotate sprite to face 3D camera

    - by omikun
    I am trying to rotate a sprite so it is always facing a 3D camera.

        shaders->setUniform("camera", gCamera.matrix());
        glm::mat4 scale = glm::scale(glm::mat4(), glm::vec3(5e5, 5e5, 5e5));
        glm::vec3 look = gCamera.position();
        glm::vec3 right = glm::cross(gCamera.up(), look);
        glm::vec3 up = glm::cross(look, right);
        glm::mat4 newTransform = glm::lookAt(glm::vec3(0), gCamera.position(), up) * scale;
        shaders->setUniform("model", newTransform);

    In the vertex shader:

        gl_Position = camera * model * vec4(vert, 1);

    The object tracks the camera if I move the camera up or down, but if I orbit the camera around it, the sprite rotates in the opposite direction, so I end up seeing its front twice and its back twice over a full 360-degree orbit. What am I doing wrong?

    Read the article

  • SlimDX and Parsing .X Files

    - by P. Avery
    I'm trying to parse a .x file using SlimDX. I can create the XFile object and register templates, but I'm having problems with the enumeration object: it has a child count of 0 for a file I know to contain valid data. Here is the code that creates the file, enumeration, and data objects:

        public void Parse(string filename, string templates, ref Frame aParam)
        {
            XFile xfile = null;
            XFileEnumerationObject enumObj = null;
            XFileData dataObj = null;

            // create file object
            xfile = new XFile();

            // register templates
            if (xfile.RegisterTemplates(XFile.DefaultTemplates).IsFailure)
            {
                Console.WriteLine(Result.Last);
                xfile.Dispose();
                return;
            }

            // create enumeration object
            enumObj = xfile.CreateEnumerationObject(filename, System.Runtime.InteropServices.CharSet.Auto);
            if (enumObj == null)
            {
                xfile.Dispose();
                return;
            }

            // get child count (returns 0 here)
            long ncElements = enumObj.ChildCount;
            for (int i = 0; i < ncElements; ++i)
            {
                // never reached...
                dataObj = enumObj.GetChild(i);
                if (dataObj.IsReference)
                    continue;
                try
                {
                    Parse(dataObj, ref aParam);
                }
                catch (Exception e)
                {
                    e.Write();
                }
                finally
                {
                    dataObj.Dispose();
                }
            }
            enumObj.Dispose();
            xfile.Dispose();
        }

    No exceptions are thrown by this function; the child count is 0, so the loop is skipped right away, the file objects are disposed of, and the function returns. Here is the .x file, a simple cube:

        xof 0303txt 0032
        Frame Root {
            FrameTransformMatrix {
                1.000000, 0.000000, 0.000000, 0.000000,
                0.000000, 1.000000, 0.000000, 0.000000,
                0.000000, 0.000000, 1.000000, 0.000000,
                0.000000, 0.000000, 0.000000, 1.000000;;
            }
            Frame Cube {
                FrameTransformMatrix {
                    1.000000, 0.000000, 0.000000, 0.000000,
                    0.000000, 1.000000, 0.000000, 0.000000,
                    0.000000, 0.000000, 1.000000, 0.000000,
                    0.000000, 0.000000, 0.000000, 1.000000;;
                }
                Mesh Cube { //Cube Mesh
                    36;
                    -1.000000; 1.000000; 1.000000;, -1.000000;-1.000000; 1.000000;, 0.999999;-1.000001; 1.000000;,
                    -1.000000;-1.000000;-1.000000;, 1.000000;-1.000000;-1.000000;, 0.999999;-1.000001; 1.000000;,
                    1.000000; 0.999999; 1.000000;, -1.000000; 1.000000; 1.000000;, 0.999999;-1.000001; 1.000000;,
                    -1.000000; 1.000000;-1.000000;, -1.000000;-1.000000;-1.000000;, -1.000000; 1.000000; 1.000000;,
                    -1.000000; 1.000000; 1.000000;, 1.000000; 0.999999; 1.000000;, 1.000000; 1.000000;-1.000000;,
                    1.000000; 0.999999; 1.000000;, 0.999999;-1.000001; 1.000000;, 1.000000;-1.000000;-1.000000;,
                    -1.000000;-1.000000;-1.000000;, -1.000000;-1.000000; 1.000000;, -1.000000; 1.000000; 1.000000;,
                    1.000000; 1.000000;-1.000000;, 1.000000;-1.000000;-1.000000;, -1.000000; 1.000000;-1.000000;,
                    1.000000; 1.000000;-1.000000;, 1.000000; 0.999999; 1.000000;, 1.000000;-1.000000;-1.000000;,
                    -1.000000; 1.000000;-1.000000;, -1.000000; 1.000000; 1.000000;, 1.000000; 1.000000;-1.000000;,
                    -1.000000;-1.000000; 1.000000;, -1.000000;-1.000000;-1.000000;, 0.999999;-1.000001; 1.000000;,
                    1.000000;-1.000000;-1.000000;, -1.000000;-1.000000;-1.000000;, -1.000000; 1.000000;-1.000000;;
                    12;
                    3;0;1;2;, 3;3;4;5;, 3;6;7;8;, 3;9;10;11;, 3;12;13;14;, 3;15;16;17;,
                    3;18;19;20;, 3;21;22;23;, 3;24;25;26;, 3;27;28;29;, 3;30;31;32;, 3;33;34;35;;
                    MeshNormals { //Mesh Normals
                        36;
                        0.000000;-0.000000; 1.000000;, 0.000000;-0.000000; 1.000000;, 0.000000;-0.000000; 1.000000;,
                        -0.000000;-1.000000;-0.000000;, -0.000000;-1.000000;-0.000000;, -0.000000;-1.000000;-0.000000;,
                        -0.000000;-0.000000; 1.000000;, -0.000000;-0.000000; 1.000000;, -0.000000;-0.000000; 1.000000;,
                        -1.000000; 0.000000;-0.000000;, -1.000000; 0.000000;-0.000000;, -1.000000; 0.000000;-0.000000;,
                        0.000000; 1.000000; 0.000000;, 0.000000; 1.000000; 0.000000;, 0.000000; 1.000000; 0.000000;,
                        1.000000;-0.000001; 0.000000;, 1.000000;-0.000001; 0.000000;, 1.000000;-0.000001; 0.000000;,
                        -1.000000; 0.000000;-0.000000;, -1.000000; 0.000000;-0.000000;, -1.000000; 0.000000;-0.000000;,
                        0.000000; 0.000000;-1.000000;, 0.000000; 0.000000;-1.000000;, 0.000000; 0.000000;-1.000000;,
                        1.000000; 0.000000;-0.000000;, 1.000000; 0.000000;-0.000000;, 1.000000; 0.000000;-0.000000;,
                        0.000000; 1.000000; 0.000000;, 0.000000; 1.000000; 0.000000;, 0.000000; 1.000000; 0.000000;,
                        -0.000000;-1.000000; 0.000000;, -0.000000;-1.000000; 0.000000;, -0.000000;-1.000000; 0.000000;,
                        0.000000;-0.000000;-1.000000;, 0.000000;-0.000000;-1.000000;, 0.000000;-0.000000;-1.000000;;
                        12;
                        3;0;1;2;, 3;3;4;5;, 3;6;7;8;, 3;9;10;11;, 3;12;13;14;, 3;15;16;17;,
                        3;18;19;20;, 3;21;22;23;, 3;24;25;26;, 3;27;28;29;, 3;30;31;32;, 3;33;34;35;;
                    } //End of Mesh Normals
                    MeshMaterialList { //Mesh Material List
                        1;
                        12;
                        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;;
                        Material Material {
                            0.640000; 0.640000; 0.640000; 1.000000;;
                            96.078431;
                            0.500000; 0.500000; 0.500000;;
                            0.000000; 0.000000; 0.000000;;
                            TextureFilename {"Yellow.jpg";}
                        }
                    } //End of Mesh Material List
                    MeshTextureCoords UVMap { //Mesh UV Coordinates
                        36;
                        0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;,
                        0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 0.000000; 0.000000;,
                        0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;,
                        1.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;,
                        0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;,
                        0.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 0.000000;, 0.000000; 1.000000;, 1.000000; 1.000000;, 1.000000; 0.000000;;
                    } //End of Mesh UV Coordinates
                } //End of Mesh
            } //End of Cube
        } //End of Root Frame

    Read the article

  • Phone crashes when trying to use vibration on Android

    - by Diego Unanue
    I'm developing an app where, when you click a button, the phone has to vibrate; the issue is that the phone just crashes, saying that I need permission to vibrate. I've already set this permission in build.settings (the Android manifest). Here is the build.settings code:

        settings = {
            orientation = {
                default = "portrait",
                supported = { "portrait", }
            },
            iphone = {
                plist = {
                    CoronaUseIOS7LandscapeOnlyWorkaround = true,
                    CoronaUseIOS7IPadPhotoPickerLandscapeOnlyWorkaround = true,
                    CoronaUseIOS6LandscapeOnlyWorkaround = true,
                    CoronaUseIOS6IPadPhotoPickerLandscapeOnlyWorkaround = true,
                    UIApplicationExitsOnSuspend = false,
                    UIPrerenderedIcon = true,
                    UIStatusBarHidden = false,
                    CFBundleIconFile = "Icon.png",
                    CFBundleIconFiles = {
                        "Icon.png", "Icon@2x.png", "Icon-60.png", "Icon-60@2x.png",
                        "Icon-72.png", "Icon-72@2x.png", "Icon-76.png", "Icon-76@2x.png",
                        "Icon-Small.png", "Icon-Small@2x.png", "Icon-Small-40.png", "Icon-Small-40@2x.png",
                        "Icon-Small-50.png", "Icon-Small-50@2x.png",
                    },
                },
            },
            android = {
                permissions = {
                    { name = ".permission.C2D_MESSAGE", protectionLevel = "signature" },
                },
                usesPermissions = {
                    "android.permission.INTERNET",
                    "android.permission.VIBRATE",
                },
            },
        }

    The file that uses the vibration is:

        local onButtonEvent = function(event)
            system.vibrate()
        end

    I have read all the posts on the Corona page without success. Can I see the generated Android manifest, to check whether the permissions are actually there? I've read that this is a Corona issue, but I'm not sure.

    Read the article

  • Pygame surfaces and their Rects

    - by Jaka Novak
    I am trying to understand how pygame surfaces work. I am confused about the Rect position of a Surface object: if I blit a surface onto the screen at some position, the surface is drawn at the right position, but the Rect of the surface is still at position (0, 0)... I tried writing my own surface class with a new rect, but I am not sure that is the right solution. My goal is to be able to move a surface like an image, with rect.move() or something like that. If there is any solution to do that I would be happy to read it. Thanks for the answer and for the time spent reading this awful English. In case it helps, here is some code for better understanding my problem (run it first, then uncomment the two lines and run it again to see the difference):

        import pygame
        from pygame.locals import *

        class SurfaceR(pygame.Surface):
            def __init__(self, size, position):
                pygame.Surface.__init__(self, size)
                self.rect = pygame.Rect(position, size)
                self.position = position
                self.size = size

            def get_rect(self):
                return self.rect

        def main():
            pygame.init()
            screen = pygame.display.set_mode((640, 480))
            pygame.display.set_caption("Screen!?")
            clock = pygame.time.Clock()
            fps = 30

            white = (255, 255, 255)
            red = (255, 0, 0)
            green = (0, 255, 0)
            blue = (0, 0, 255)

            surface = pygame.Surface((70, 200))
            surface.fill(red)
            surface_re = SurfaceR((300, 50), (100, 300))
            surface_re.fill(blue)

            while True:
                for event in pygame.event.get():
                    if event.type == QUIT:
                        return 0
                screen.blit(surface, (100, 50))
                screen.blit(surface_re, surface_re.position)
                #pygame.draw.rect(screen, white, surface.get_rect())
                #pygame.draw.rect(screen, white, surface_re.get_rect())
                pygame.display.update()
                clock.tick(fps)

        if __name__ == "__main__":
            main()

    Read the article

  • Rotating a view of a chunked 2d tilemap

    - by Danie Clawson
    I'm working on a top-down (oblique) tile-based engine. I would like the tiles to have a definable height in the world, with characters being occluded by them, etc. This has led to a desire to "rotate" the view of the world, even though I'm using all hand-drawn graphics and blitting. Therefore I need to rotate either the actual world itself or the way the Camera traverses these arrays. How can, or should, I implement individual rotations of 90 degrees when I have multi-dimensional arrays? Is it faster to actually rotate the array, to access it differently, or to create pre-computed accessor arrays, something like how my chunks work? How can I rotate an individual chunk, or a set of chunks? Currently I establish my tile grid like this (tile height not included):

        function Surface(WIDTH, HEIGHT) {
            WIDTH = Math.max(WIDTH - (WIDTH % TPC), TPC);
            HEIGHT = Math.max(HEIGHT - (HEIGHT % TPC), TPC);
            this.tiles = [];
            this.chunks = [];
            // Establish tiles
            for (var x = 0; x < WIDTH; x++) {
                var col = [], ch_x = Math.floor(x / TPC);
                if (!this.chunks[ch_x]) this.chunks.push([]);
                for (var y = 0; y < HEIGHT; y++) {
                    var tile = new Tile(x, y), ch_y = Math.floor(y / TPC);
                    if (!this.chunks[ch_x][ch_y]) this.chunks[ch_x].push([]);
                    this.chunks[ch_x][ch_y].push(tile);
                    col.push(tile);
                }
                this.tiles.push(col);
            }
        };

    Even some basic advice on my data structure would be much appreciated.
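    One hedged option, shown in C# for concreteness since the idea is language-agnostic: rotate the view rather than the data. Leave the tile array untouched and remap (x, y) through the current quarter-turn on every read; the same accessor works per chunk.

        // A rotated *view* over an [x, y] grid; the backing array is never copied.
        // Assumes callers swap width/height on odd rotations (or a square grid).
        class RotatedGrid<T>
        {
            private readonly T[,] tiles;
            public int Rotation; // 0..3 quarter-turns clockwise

            public RotatedGrid(T[,] tiles) { this.tiles = tiles; }

            public T Get(int x, int y)
            {
                int w = tiles.GetLength(0), h = tiles.GetLength(1);
                switch (Rotation & 3)
                {
                    case 1:  return tiles[y, h - 1 - x];         // 90 degrees
                    case 2:  return tiles[w - 1 - x, h - 1 - y]; // 180 degrees
                    case 3:  return tiles[w - 1 - y, x];         // 270 degrees
                    default: return tiles[x, y];
                }
            }
        }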

    Read the article

  • Creating blur with an alpha channel, incorrect inclusion of black

    - by edA-qa mort-ora-y
    I'm trying to do a blur on a texture with an alpha channel. Using a typical approach (two-pass, Gaussian weighting) I end up with a very dark blur. The reason is that the blurring does not properly account for the alpha channel: it happily blurs in the invisible part of the image, which happens to be black, and thus produces a very dark result. Is there a technique to blur that properly accounts for the alpha channel?
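    The usual answer is premultiplied alpha: weight each sample's color by its alpha before averaging, then divide the result by the accumulated alpha, so fully transparent (black) texels contribute nothing to the color. A hedged C# sketch of one kernel's worth of taps; the same arithmetic applies inside a shader:

        // samples/weights hold one Gaussian kernel's taps for one output pixel.
        static (float r, float g, float b, float a) BlurTap(
            (float r, float g, float b, float a)[] samples, float[] weights)
        {
            float r = 0, g = 0, b = 0, a = 0;
            for (int i = 0; i < samples.Length; i++)
            {
                float w = weights[i];
                r += samples[i].r * samples[i].a * w; // premultiplied color
                g += samples[i].g * samples[i].a * w;
                b += samples[i].b * samples[i].a * w;
                a += samples[i].a * w;
            }
            if (a > 1e-6f) { r /= a; g /= a; b /= a; } // un-premultiply
            return (r, g, b, a);
        }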

    Read the article

  • OpenGL - Calculating camera view matrix

    - by Karle
    Problem: I am calculating the model, view and projection matrices independently, to be used in my shader as follows:

        gl_Position = projection * view * model * vec4(in_Position, 1.0);

    When I try to calculate my camera's view matrix, the Z axis is flipped and my camera seems to be looking backwards. My program is written in C# using the OpenTK library.

    Translation (working): I've created a test scene and, from my understanding of the OpenGL coordinate system, its objects are positioned correctly. The model matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(modelPosition);
        Matrix4 model = translation;

    The view matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(-cameraPosition);
        Matrix4 view = translation;

    Rotation (not working): I now want to create the camera's rotation matrix. To do this I use the camera's right, up and forward vectors:

        // Hard-coded example orientation:
        // normally calculated from up and forward,
        // similar to a look-at camera.
        Vector3 r = Vector3.UnitX;
        Vector3 u = Vector3.UnitY;
        Vector3 f = -Vector3.UnitZ;

        Matrix4 rot = new Matrix4(
            r.X, r.Y, r.Z, 0,
            u.X, u.Y, u.Z, 0,
            f.X, f.Y, f.Z, 0,
            0.0f, 0.0f, 0.0f, 1.0f);

    This results in the following matrix being created:

        1  0  0  0
        0  1  0  0
        0  0 -1  0
        0  0  0  1

    I know that multiplying by the identity matrix would produce no rotation. This is clearly not the identity matrix, and therefore it will apply some rotation. I thought that because this is aligned with the OpenGL coordinate system it should produce no rotation. Is this the wrong way to calculate the rotation matrix? I then create my view matrix as:

        // OpenTK is row-major, so the order of operations is reversed:
        Matrix4 view = translation * rot;

    Rotation almost works now, but the -Z/+Z axis has been flipped, with the green cube now appearing closer to the camera. It seems like the camera is looking backwards, especially if I move it around. My goal is to store the position and orientation of all objects (including the camera) as:

        Vector3 position;
        Vector3 up;
        Vector3 forward;

    Apologies for writing such a long question, and thank you in advance. I've tried following tutorials/guides from many sites but I keep ending up with something wrong.

    Edit: projection matrix set-up:

        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
            (float)(0.5 * Math.PI),
            (float)display.Width / display.Height,
            0.1f, 1000.0f);
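    A hedged note on the likely cause: a view matrix is the inverse of the camera's world transform, so the camera's basis vectors belong in the transposed slots relative to the matrix built above, and the forward axis must be negated so the camera looks down -Z. A sketch under OpenTK's row-vector convention (translate first, then rotate):

        // right/up/-forward go into the COLUMNS of the rotation part.
        // With forward = -UnitZ this yields the identity, i.e. no rotation.
        Matrix4 ViewMatrix(Vector3 position, Vector3 up, Vector3 forward)
        {
            Vector3 f = Vector3.Normalize(forward);
            Vector3 r = Vector3.Normalize(Vector3.Cross(f, up)); // camera right
            Vector3 u = Vector3.Cross(r, f);                     // re-orthogonalized up

            Matrix4 rot = new Matrix4(
                r.X, u.X, -f.X, 0f,
                r.Y, u.Y, -f.Y, 0f,
                r.Z, u.Z, -f.Z, 0f,
                0f,  0f,  0f,   1f);

            return Matrix4.CreateTranslation(-position) * rot;
        }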

    Read the article

  • How do I draw a scene with 2 nested frames

    - by Guido Granobles
    I have been trying for a long time to figure this out: I have loaded a model from a DirectX file (I am using OpenGL and Java). The model has a hierarchical system of nested reference frames (there are no bones). There are just two frames: one of them is called x3ds_Torso, and it has a child frame called x3ds_Arm_01. Each of them has a mesh. The thing is that I can't draw the arm connected to the body. Sometimes the body is in the center of the screen and the arm is at the top; sometimes they are both in the center. I know that I have to multiply the transformation matrix of every frame by its parent's, starting from the top and going down, and that after that I have to multiply every vertex of every mesh by its final transformation matrix. So I have this:

        public void calculeFinalMatrixPosition(Bone boneParent, Bone bone) {
            System.out.println("-->" + bone.name);
            if (boneParent != null) {
                bone.matrixCombined = bone.matrixTransform.multiply(boneParent.matrixCombined);
            } else {
                bone.matrixCombined = bone.matrixTransform;
            }
            bone.matrixFinal = bone.matrixCombined;
            for (Bone childBone : bone.boneChilds) {
                calculeFinalMatrixPosition(bone, childBone);
            }
        }

    Then I have to multiply every vertex of the mesh:

        public void transformVertex(Bone bone) {
            for (Iterator<Mesh> iterator = meshes.iterator(); iterator.hasNext();) {
                Mesh mesh = iterator.next();
                if (mesh.boneName.equals(bone.name)) {
                    float[] vertex = new float[4];
                    double[] newVertex = new double[3];
                    if (mesh.skinnedVertexBuffer == null) {
                        mesh.skinnedVertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                    }
                    mesh.vertexBuffer.buffer.rewind();
                    while (mesh.vertexBuffer.buffer.hasRemaining()) {
                        vertex[0] = mesh.vertexBuffer.buffer.get();
                        vertex[1] = mesh.vertexBuffer.buffer.get();
                        vertex[2] = mesh.vertexBuffer.buffer.get();
                        vertex[3] = 1;
                        newVertex = bone.matrixFinal.transpose().multiply(vertex);
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[0]));
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[1]));
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[2]));
                    }
                    mesh.vertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                    mesh.skinnedVertexBuffer.buffer.rewind();
                    mesh.vertexBuffer.buffer.put(mesh.skinnedVertexBuffer.buffer);
                }
            }
            for (Bone childBone : bone.boneChilds) {
                transformVertex(childBone);
            }
        }

    I know this is not the most efficient code, but for now I just want to understand exactly how a hierarchical model is organized and how I can draw it on the screen. Thanks in advance for your help.

    Read the article

  • Alpha blend 3D png texture in XNA

    - by ProgrammerAtWork
    I'm trying to draw a partly transparent texture onto a plane, but the problem is that it incorrectly displays what is behind that texture. Pseudo-code:

        vertices1 basiceffect1 // the vertices of vertices1 are located BEHIND vertices2
        vertices2 basiceffect2 // the vertices of vertices2 are located IN FRONT of vertices1

        GraphicsDevice.Clear(Blue);
        PrimitiveBatch.Begin();

        // If I draw like this:
        PrimitiveBatch.Draw(vertices1, trianglestrip, basiceffect1);
        PrimitiveBatch.Draw(vertices2, trianglestrip, basiceffect2);
        // everything gets drawn correctly: I can see the texture of vertices2
        // through the transparent parts of vertices1.

        // But if I draw like this:
        PrimitiveBatch.Draw(vertices2, trianglestrip, basiceffect2);
        PrimitiveBatch.Draw(vertices1, trianglestrip, basiceffect1);
        // I cannot see the texture of vertices1 behind the texture of vertices2.
        // Instead, the texture of vertices2 gets drawn, and the transparent
        // parts are blue: the clear color.

        PrimitiveBatch.End();

    My question is: why does the order in which I call Draw matter?
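    A hedged explanation plus sketch: the order matters because alpha blending is not order-independent. Drawing the front plane first writes its depth values, and the depth test then rejects the farther plane's pixels even where the front texture is transparent, leaving the clear color. The standard recipe is: draw opaque geometry first, then draw transparent geometry back to front with depth writes off. The batch list and camera position below are hypothetical stand-ins:

        // XNA 4 states: blending on, depth TEST on but depth WRITES off.
        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;

        // Sort transparent surfaces farthest-first, then draw in that order.
        transparentBatches.Sort((p, q) =>
            Vector3.DistanceSquared(cameraPos, q.Center)
                .CompareTo(Vector3.DistanceSquared(cameraPos, p.Center)));

        foreach (var batch in transparentBatches)
            batch.Draw(); // e.g. vertices1 (behind) before vertices2 (in front)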

    Read the article

  • Car engine sound simulation

    - by Petteri Hietavirta
    I have been thinking about how to create realistic sound for a car. The main sound is the engine, then all kinds of wind, road and suspension sounds. Are there any open source projects for engine sound simulation? Simply pitching up a sample does not sound too great. The ideal would be something that allows me to pick the type of engine (i.e. inline-4 vs. V8), add extras like turbo/supercharger whine, and finally set the load and RPM. Edit: something like http://www.sonory.org/examples.html

    Read the article

  • Why doesn't Unity's OnCollisionEnter give me surface normals, and what's the most reliable way to get them?

    - by michael.bartnett
    Unity's on-collision event gives you a Collision object that provides some information about the collision that happened (including a list of ContactPoints with hit normals). But what you don't get is the surface normal of the collider that you hit. Here's a screenshot to illustrate: the red line is from ContactPoint.normal and the blue line is from RaycastHit.normal. Is this an instance of Unity hiding information to provide a simplified API? Or do standard real-time 3D collision detection techniques simply not collect this information? And for the second part of the question: what's a surefire and relatively efficient way to get a surface normal for a collision? I know that raycasting gives you surface normals, but it seems I need to do several raycasts to cover all scenarios (maybe a contact point/normal combination misses the collider on the first cast, or maybe you need to average all the contact points' normals to get the best result). My current method:

    1. Back up Collision.contacts[0].point along its hit normal.
    2. Raycast down the negated hit normal for float.MaxValue, on Collision.collider.
    3. If that fails, repeat steps 1 and 2 with the non-negated normal.
    4. If that fails, try steps 1 to 3 with Collision.contacts[1].
    5. Repeat step 4 until successful or until all contact points are exhausted.
    6. Give up, return Vector3.zero.

    This seems to catch everything, but all those raycasts make me queasy, and I'm not sure how to test that this works in enough cases. Is there a better way?
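    For reference, the first steps of the method above can be wrapped in a short helper; a hedged Unity C# sketch (the 0.05/0.2 offsets are arbitrary illustration values):

        using UnityEngine;

        static class SurfaceNormals
        {
            // Returns the hit surface normal for the first contact point
            // whose back-off raycast lands on the touched collider.
            public static bool TryGet(Collision collision, out Vector3 normal)
            {
                foreach (ContactPoint contact in collision.contacts)
                {
                    // back up along the contact normal, then cast back down it
                    Ray ray = new Ray(contact.point + contact.normal * 0.05f, -contact.normal);
                    RaycastHit hit;
                    if (collision.collider.Raycast(ray, out hit, 0.2f))
                    {
                        normal = hit.normal; // surface normal, not the contact normal
                        return true;
                    }
                }
                normal = Vector3.zero; // step 6: give up
                return false;
            }
        }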

    Read the article

  • How To Build An Enterprise Application - Introduction

    - by Tuan Nguyen
    An enterprise application is software that fulfills four core quality attributes: reliability, flexibility, reusability, and maintainability.

    Reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time. Because testing is the only way to make sure a system is reliable, we can effectively exchange the term reliability for the term testability.

    Flexibility is the ability to change a system's core features without breaking unrelated features or components. Flexibility can help us achieve interoperability, but the opposite is not true. For example, a program might run on multiple platforms and contain logic for many scenarios, but that wouldn't make it flexible if it forced us to rewrite code in all components whenever we wanted to change some aspect of one feature.

    Reusability is the ability to share one or more of a system's components with another system. We should only expose a component's reusability in the context in which it is used. For example, we write classes that implement UI logic and deliver them only to classes that implement UI.

    Maintainability is the ability to add or remove features after a system has been released. Maintainability consists of many factors, such as readability, analyzability, and extensibility, of which extensibility is critical. Maintainability may require us to write code that is longer and more complex than normal, but that doesn't mean we should introduce unnecessarily complex code; we always try to make our code clear and transparent to everyone.

    An enterprise application is built on an enterprise design, which consists of two parts: low-level design and high-level design.

    At the low level, the design focuses on building loosely coupled classes and components. In particular, it recommends: each class or component undertakes only a single responsibility (design based on unit testing); classes and components implement and work through interfaces (design based on contract); dependency relationships between classes and components can be injected at run time (design based on dependency). A sketch of these three rules in code follows below.

    At the high level, the design focuses on architecting the system into tiers and layers. In particular, it recommends: divide the system into subsystems for deployment, where each subsystem is called a tier (typically, an enterprise application would have 3 tiers, as illustrated in the figure in the original article); and arrange classes and components into logical containers called layers (typically, an enterprise application would have 5 layers, as illustrated in the original article's second figure).
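    A hedged illustration of the three low-level rules in C# (all names invented): one responsibility per class, collaboration through an interface, and the concrete dependency supplied at run time by constructor injection, which is also what makes the class testable.

        public class Order { }

        public interface IOrderRepository
        {
            void Save(Order order);
        }

        public class SqlOrderRepository : IOrderRepository
        {
            public void Save(Order order) { /* persist to SQL */ }
        }

        public class OrderService
        {
            private readonly IOrderRepository repository;

            // Tests can inject a fake repository: reliability (testability)
            // and flexibility fall out of the same design decision.
            public OrderService(IOrderRepository repository)
            {
                this.repository = repository;
            }

            public void PlaceOrder(Order order)
            {
                repository.Save(order);
            }
        }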

    Read the article

  • Bad FPS for smaller size (OpenGL ES with SDL)

    - by ber4444
    If you saw my other question, well, there is still a little problem: Click here to watch on youtube. Basically, the frame rate is very bad on the actual device, where for some reason the animation is scaled (it looks like the left side in the video). It is quite fast in the simulator, where it is not scaled (right side). For a test, I submitted a new changeset that hard-codes the smaller size (plus increases the point size for HII regions to make the dust clouds more visible), and as you see in the video, now it is slow even in the simulator (left side shows the small size, right side shows the original size; otherwise the code is the same). I'm clueless as to why it's so slow with a smaller galaxy; in fact, it should be FASTER. As for general speed optimization (which is not strictly part of my question but is closely related to it, especially if we need a workaround to speed things up), some initial ideas:

    - reducing the number of items drawn may affect the appearance negatively
    - screen resolution could be reduced
    - there are too many glBegin(GL_POINTS)/glEnd() blocks; we could draw more than just a single star at once

    Read the article

  • creating bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes, one for each cube drawn, so I can intersect the boxes with a ray cast from my mouse position, but I have no idea how. I've created a list that stores the boxes, but how do I get the values for each box?

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass);
                boxList.Add(something here);
            }
        }

        public Cube(GraphicsDevice graphicsDevice)
        {
            device = graphicsDevice;
            var vertices = new List<VertexPositionTexture>();
            BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
            BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
            BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
            BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
            BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
            BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));
            cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly);
            cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        }

    There aren't any clearly defined variables for the bounds of each cube created, so what do I create the bounding box from?
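    A hedged sketch of what could go in that Add call: the BuildFace calls above span local coordinates (0,0,0) to (1,1,1), so a cube placed at position occupies [position, position + 1] in world space, which is exactly a BoundingBox's two corners (assuming the cubes really are these unit cubes):

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                Vector3 position = new Vector3(x, map[x, z], z);
                cubes.Add(position, Matrix.Identity, grass);
                // min corner = position, max corner = position + (1, 1, 1)
                boxList.Add(new BoundingBox(position, position + Vector3.One));
            }
        }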

    Read the article

  • Changing Palette for Day/Night Mode using GIMP

    - by J.C.
    Hello. Suppose I have a picture with which I want to achieve a day/night mode by changing its 8bpp color palette, and I want the pixel indices of the picture to stay fixed for both day mode and night mode. For example, if the first pixel's index is 100, I can look up index 100 in the day-mode palette and in the night-mode palette. How can I use GIMP to do this? My goal is to avoid updating the pixel indices of my picture. Also, as you can see in the two palettes, they are not a one-to-one mapping; that is, index 1 of the day-mode palette and index 1 of the night-mode palette may not be used at the same pixel of the picture. How can I tackle this problem? Actually, my use case is as follows: I want to use one 8bpp picture to achieve day/night mode by updating only the color palette (without updating the pixel indices). The advantage is that I only have to prepare two 256-byte palettes rather than saving two big pictures in my limited data RAM. Thanks a lot.

    Read the article
