Search Results

  • How do I calculate the boundary of the game window after transforming the view?

    - by Cypher
    My Camera class handles zoom, rotation, and of course panning. It's invoked through SpriteBatch.Begin, like so many other XNA 2D camera classes. It calculates the view Matrix like so:

        public Matrix GetViewMatrix()
        {
            return Matrix.Identity
                * Matrix.CreateTranslation(new Vector3(-this.Spatial.Position, 0.0f))
                * Matrix.CreateTranslation(-(this.viewport.Width / 2), -(this.viewport.Height / 2), 0.0f)
                * Matrix.CreateRotationZ(this.Rotation)
                * Matrix.CreateScale(this.Scale, this.Scale, 1.0f)
                * Matrix.CreateTranslation(this.viewport.Width * 0.5f, this.viewport.Height * 0.5f, 0.0f);
        }

    I was having a minor issue with performance which, after some profiling, led me to add a culling feature to my rendering system. Before I implemented the camera's zoom feature, it simply grabbed the camera's boundaries and culled any game objects that did not intersect with them. After giving the camera the ability to zoom, however, that no longer works. The reason why is visible in the screenshot below. The navy blue rectangle represents the camera's boundaries when zoomed out all the way (Camera.Scale = 0.5f). So, when zoomed out, game objects are culled before they reach the boundaries of the window. The camera's width and height are determined by the Viewport properties of the same name (maybe this is my mistake? I wasn't expecting the camera to "resize" like this). What I'm trying to calculate is a Rectangle that defines the boundaries of the screen, as indicated by my awesome blue arrows, even after the camera is rotated, scaled, or panned. Here is how I've more recently found out how not to do it:

        public Rectangle CullingRegion
        {
            get
            {
                Rectangle region = Rectangle.Empty;
                Vector2 size = this.Spatial.Size;
                size *= 1 / this.Scale;
                Vector2 position = this.Spatial.Position;
                position = Vector2.Transform(position, this.Inverse);
                region.X = (int)position.X;
                region.Y = (int)position.Y;
                region.Width = (int)size.X;
                region.Height = (int)size.Y;
                return region;
            }
        }

    It seems to calculate the right size, but when I render this region, it moves around, which will obviously cause problems. It needs to be "static", so to speak. It's also obscenely slow, which causes more of a problem than it solves. What am I missing?
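    A minimal sketch of one way to get such a rectangle, assuming the XNA camera above (ComputeCullingRegion is a hypothetical helper, not part of the original class): transform the four viewport corners by the inverse of the view matrix and take the axis-aligned bounds of the result, which stays valid under pan, zoom, and rotation.

        // Hypothetical helper: world-space bounds of the visible screen area.
        public Rectangle ComputeCullingRegion()
        {
            Matrix inverseView = Matrix.Invert(GetViewMatrix());
            Vector2[] corners =
            {
                Vector2.Transform(Vector2.Zero, inverseView),
                Vector2.Transform(new Vector2(viewport.Width, 0), inverseView),
                Vector2.Transform(new Vector2(0, viewport.Height), inverseView),
                Vector2.Transform(new Vector2(viewport.Width, viewport.Height), inverseView)
            };
            float minX = corners[0].X, maxX = minX, minY = corners[0].Y, maxY = minY;
            foreach (Vector2 c in corners)
            {
                minX = Math.Min(minX, c.X); maxX = Math.Max(maxX, c.X);
                minY = Math.Min(minY, c.Y); maxY = Math.Max(maxY, c.Y);
            }
            return new Rectangle((int)minX, (int)minY, (int)(maxX - minX), (int)(maxY - minY));
        }

    Inverting the matrix once per frame (or only when the camera actually changes), rather than inside a property getter, should also take care of the performance complaint.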

  • What are the cons of using DrawableGameComponent for every instance of a game object?

    - by Kensai
    I've read in many places that DrawableGameComponents should be saved for things like "levels" or some kind of managers, instead of being used, for example, for characters or tiles (like this guy says here). But I don't understand why this is so. I read this post and it made a lot of sense to me, but posts like it are in the minority. I usually wouldn't pay too much attention to things like this, but in this case I would like to know why the apparent majority believes this is not the way to go. Maybe I'm missing something.

  • Why does the order of uniforms get changed by the compiler?

    - by Aybe
    I have the following shader. Everything works fine when setting the value of one of the matrices, but I've discovered that getting a value back is incorrect for View and Projection: they are in reverse order.

        #version 430
        precision highp float;

        layout (location = 0) uniform mat4 Model;
        layout (location = 1) uniform mat4 View;
        layout (location = 2) uniform mat4 Projection;

        layout (location = 0) in vec3 in_position;
        layout (location = 1) in vec4 in_color;

        out vec4 out_color;

        void main(void)
        {
            gl_Position = Projection * View * Model * vec4(in_position, 1.0);
            out_color = in_color;
        }

    When querying their locations, they are effectively reversed. I did a small test of renaming View to Piew, which puts it before Projection if sorted alphabetically, and then the order is correct. Now if I remove layout (location = ...) from the uniforms, the problem disappears!? I am starting to think that this is a driver bug, as explained in the wiki. Do you know why the order of the uniforms is changed whenever the shader is compiled? (Using an AMD HD7850.)
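    Until the driver question is settled, a defensive pattern (a sketch using OpenTK-style C# bindings, which is an assumption; the post doesn't say which bindings are in use) is to stop depending on any ordering at all: resolve every uniform by name after linking, or use the explicitly declared locations as constants.

        // Sketch: resolve uniform locations by name rather than by order.
        using OpenTK.Graphics.OpenGL4;

        public sealed class MatrixUniforms
        {
            public readonly int Model, View, Projection;

            public MatrixUniforms(int program)
            {
                // GetUniformLocation returns -1 if a name is unknown or optimized out.
                Model      = GL.GetUniformLocation(program, "Model");
                View       = GL.GetUniformLocation(program, "View");
                Projection = GL.GetUniformLocation(program, "Projection");
            }
        }

    Querying by name is defined to agree with the layout(location = ...) qualifiers, so if the two disagree on this driver, that really is a bug worth reporting.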

  • Maintain proper symbol order when applying an armature in Flash

    - by Michael Taufen
    I am trying to animate a character's leg in Flash CS 5.5 for a game I am working on. I decided to use the bone tool because it's awesome. The problem I am having, however, is that for my character to be animated properly, the symbols that make up his leg (upper leg, lower leg, and shoe) need to be on top of each other in a specific way (otherwise the shoe looks like it's next to the leg, etc.). Applying the bones results in the following problem: the first symbol I apply the tool to is placed in the rear on the armature layer, the next on top of it, and so on, until the final symbol ends up on top. I need them to be in the opposite order, but Arrange > Send to Back does nothing on the armature layer. How can I fix this? tl;dr: The bone tool is not maintaining the stacking order of my objects, please help. Thanks for helping :).

  • Setting up collision using a tilemap and cocos2d

    - by James
    I'm building my first platformer using cocos2d and a tilemap. I'm having trouble coming up with a decent way of determining whether the character is colliding with an object, and more specifically, in which direction the collision is occurring. Following the tutorial here, I have made a separate "meta" layer of collidable tiles. The problem is that unless the character is in the tile, you can't detect the collision. Also, there's no way of telling WHERE the collision is occurring. The best solution would be one that could tell me if a character is up against a wall, or walking on top of a platform. However, I can't seem to figure out a good technique for this.
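    A common way out (a sketch in C#, with the tile data and sizes as assumptions; the idea ports directly to Objective-C/cocos2d) is to resolve movement one axis at a time: whichever axis had to be corrected tells you whether the character hit a wall or is standing on a platform.

        // Sketch: axis-separated collision against a grid of solid flags.
        public sealed class TileCollider
        {
            private readonly bool[,] solid;  // solid[col, row], filled from the "meta" layer
            private readonly int tileSize;   // tile edge length in pixels

            public TileCollider(bool[,] solid, int tileSize)
            {
                this.solid = solid;
                this.tileSize = tileSize;
            }

            private bool Overlaps(float x, float y, float w, float h)
            {
                int left = (int)(x / tileSize), right = (int)((x + w - 1) / tileSize);
                int top = (int)(y / tileSize), bottom = (int)((y + h - 1) / tileSize);
                for (int c = left; c <= right; c++)
                    for (int r = top; r <= bottom; r++)
                        if (solid[c, r]) return true;
                return false;
            }

            // Moves the bounds one axis at a time; the out flags report
            // which kind of contact blocked the movement.
            public void Move(ref float x, ref float y, float w, float h,
                             float dx, float dy, out bool hitWall, out bool onGround)
            {
                hitWall = false; onGround = false;
                x += dx;
                if (Overlaps(x, y, w, h)) { x -= dx; hitWall = true; }
                y += dy;
                if (Overlaps(x, y, w, h)) { y -= dy; onGround = dy > 0; }
            }
        }

    (This assumes y grows downwards; flip the onGround test if not. Snapping to the tile edge instead of undoing the whole step gives smoother contacts, but the structure stays the same.)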

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to check depth against, to create shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap);

    This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them using the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
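    One route around the clamping (a sketch using OpenTK-style C# bindings as an assumption; the post doesn't say which bindings it uses) is a single-channel floating-point texture: an R32F internal format stores arbitrary unclamped floats, so you can upload your own depth metric and compare fragments against it in the shader.

        // Sketch: store an arbitrary 512x512 float array in an R32F texture.
        using OpenTK.Graphics.OpenGL4;

        public static int CreateFloatTexture(float[] data, int width, int height)
        {
            int tex = GL.GenTexture();
            GL.BindTexture(TextureTarget.Texture2D, tex);
            // R32f keeps full float precision; values are not clamped to [0, 1].
            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.R32f,
                          width, height, 0, PixelFormat.Red, PixelType.Float, data);
            // Nearest filtering for exact lookups of individual array elements.
            GL.TexParameter(TextureTarget.Texture2D,
                            TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
            GL.TexParameter(TextureTarget.Texture2D,
                            TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
            return tex;
        }

    In GLSL the values come back via texture(sampler, uv).r. As for what to store, the conventional choice is exactly the quantity the post circles around: each vertex transformed by the light's MVP matrix yields a depth, and that depth (however you scale it) is what fragments get compared against.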

  • Can't use the hardware scissor any more, should I use the stencil buffer or manually clip sprites?

    - by Alex Ames
    I wrote a simple UI system for my game. There is a clip flag on my widgets that you can use to tell a widget to clip any children that try to draw outside their parent's box (for scrollboxes, for example). The clip flag uses glScissor, which is fed an axis-aligned rectangle. I just added arbitrary rotation and transformations to my widgets, so I can rotate or scale them however I want. Unfortunately, this breaks the scissoring I was using, as my clip rectangle may no longer be axis-aligned. There are two ways I can think of to fix this: either use the stencil buffer to define the drawable area, or put a wrapper around my sprite drawing function that adjusts the vertices and texture coords of the sprites being drawn, based on the clipper at the top of a clipper stack. Of course, there may also be other options I can't think of (something fancy with shaders, possibly?). I'm not sure which way to go at the moment. Changing the implementation of my scissor functions to use the stencil buffer probably requires the smallest change, but I'm not sure how much overhead that has compared to the coordinate adjusting, or if the performance difference is even worth considering.
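    For reference, the stencil route is a fairly small change; a sketch of the two-pass pattern (again assuming OpenTK-style C# bindings): draw the transformed clip rectangle into the stencil buffer with color writes off, then draw children only where the stencil matches.

        // Sketch: clip children to an arbitrarily transformed rectangle.
        using System;
        using OpenTK.Graphics.OpenGL4;

        static void DrawClipped(Action drawClipShape, Action drawChildren)
        {
            GL.Enable(EnableCap.StencilTest);
            GL.Clear(ClearBufferMask.StencilBufferBit);

            // Pass 1: mark the pixels covered by the rotated/scaled clip rect.
            GL.ColorMask(false, false, false, false);
            GL.StencilFunc(StencilFunction.Always, 1, 0xFF);
            GL.StencilOp(StencilOp.Keep, StencilOp.Keep, StencilOp.Replace);
            drawClipShape();

            // Pass 2: draw children only where the mark is present.
            GL.ColorMask(true, true, true, true);
            GL.StencilFunc(StencilFunction.Equal, 1, 0xFF);
            GL.StencilOp(StencilOp.Keep, StencilOp.Keep, StencilOp.Keep);
            drawChildren();

            GL.Disable(EnableCap.StencilTest);
        }

    Nested clippers need an increment/decrement scheme rather than a fixed reference value. The stencil overhead is typically modest next to CPU-side vertex clipping, but as always it's worth measuring both.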

  • How to do partial updates in OpenGL?

    - by Will
    It is general wisdom that you redraw the entire viewport on each frame. I would like to use partial updates instead; what are the various ways to do that, and what are their pros, cons, and relative performance? (Using textures, FBOs, the accumulation buffer, any kind of scissoring that can affect swapbuffers, etc.?) A scenario: a scene with a fair few thousand visible trees; although the textures are mipmapped and they are drawn via VBOs, roughly front-to-back and so on, it's still a lot of polys. Would streaming a single screen-sized texture be better than throwing them at the screen every frame? You'd have to redraw and recapture them only on camera movement, or as often as your wind model updates, which need not be every frame.
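    As a concrete shape for the texture-streaming idea (a sketch in XNA terms, purely as an illustration; DrawTrees is a hypothetical stand-in for the expensive pass): render the scene into a cached render target only when something invalidates it, and blit the cached texture on every other frame.

        // Sketch: cache a costly scene in a render target, redraw only when dirty.
        RenderTarget2D cache;        // created at screen size
        bool sceneDirty = true;      // set on camera movement, wind updates, etc.

        void Draw(GraphicsDevice device, SpriteBatch batch)
        {
            if (sceneDirty)
            {
                device.SetRenderTarget(cache);
                device.Clear(Color.Transparent);
                DrawTrees();                     // hypothetical expensive pass
                device.SetRenderTarget(null);
                sceneDirty = false;
            }
            batch.Begin();
            batch.Draw(cache, Vector2.Zero, Color.White);  // cheap cached pass
            batch.End();
        }

    The trade-off is exactly the one in the question: one full-screen texture draw per frame plus occasional expensive refreshes, versus thousands of polys every frame; which wins depends on how often the cache gets invalidated.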

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, the models will be caught/split between two shadow map cascades, like below.

    [Screenshot: first depth split]
    [Screenshot: second depth split - here you can see the model is caught between the splits]

    How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustum computation erroneous?

        CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio,
                                               const float minDist, const float maxDist,
                                               const Mat4& cameraViewMatrix, Mat4& outFrustrumMat)
        {
            CameraFrustrum ret = {
                Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f),
                Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f),
                Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f),
                Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f),
            };

            const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist);
            const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix);
            outFrustrumMat = invMVP;

            for (Vec4& corner : ret)
            {
                corner = invMVP * corner;
                corner /= corner.w;
            }

            return ret;
        }

        Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir)
        {
            Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f));

            Vec4 transf = lightViewMatrix * cameraFrustrum[0];
            float maxZ = transf.z, minZ = transf.z;
            float maxX = transf.x, minX = transf.x;
            float maxY = transf.y, minY = transf.y;
            for (uint32_t i = 1; i < 8; i++)
            {
                transf = lightViewMatrix * cameraFrustrum[i];
                if (transf.z > maxZ) maxZ = transf.z;
                if (transf.z < minZ) minZ = transf.z;
                if (transf.x > maxX) maxX = transf.x;
                if (transf.x < minX) minX = transf.x;
                if (transf.y > maxY) maxY = transf.y;
                if (transf.y < minY) minY = transf.y;
            }

            Mat4 viewMatrix(lightViewMatrix);
            viewMatrix[3][0] = -(minX + maxX) * 0.5f;
            viewMatrix[3][1] = -(minY + maxY) * 0.5f;
            viewMatrix[3][2] = -(minZ + maxZ) * 0.5f;
            viewMatrix[0][3] = 0.0f;
            viewMatrix[1][3] = 0.0f;
            viewMatrix[2][3] = 0.0f;
            viewMatrix[3][3] = 1.0f;

            Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5);

            return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y,
                                      halfExtents.z, -halfExtents.z) * viewMatrix;
        }
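    On the first guess: yes, overlapping the cascades slightly is the usual mitigation, so geometry near a split boundary is fully rendered into both maps and the per-fragment cascade choice can never land in a gap. A sketch of split computation with an overlap factor (in C# with illustrative names; the log/linear mix is the common "practical split scheme"):

        // Sketch: per-cascade near/far ranges with a small overlap.
        public struct CascadeRange { public float Near, Far; }

        static CascadeRange[] ComputeCascadeRanges(float near, float far, int cascades,
                                                   float lambda, float overlap)
        {
            var bounds = new float[cascades + 1];
            bounds[0] = near;
            bounds[cascades] = far;
            for (int i = 1; i < cascades; i++)
            {
                float f = (float)i / cascades;
                float linear = near + (far - near) * f;
                float log = near * (float)Math.Pow(far / near, f);
                bounds[i] = lambda * log + (1 - lambda) * linear;  // blended split
            }

            var ranges = new CascadeRange[cascades];
            for (int i = 0; i < cascades; i++)
            {
                // Push each slice's far plane slightly past the next slice's
                // near plane so a mesh on the boundary lands in both maps.
                float sliceFar = i < cascades - 1 ? bounds[i + 1] * overlap : far;
                ranges[i] = new CascadeRange { Near = bounds[i], Far = Math.Min(sliceFar, far) };
            }
            return ranges;
        }

    With lambda around 0.75 and overlap around 1.05, the shader picks the first cascade whose range contains the fragment's view depth; thanks to the overlap, a model straddling a boundary is complete in whichever map gets chosen.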

  • Raycasting mouse coordinates to rotated object?

    - by SPL
    I am trying to cast a ray from my mouse to a plane at a specified position, with a known width, length, and height. I know that you can use NDC (Normalized Device Coordinates) to cast a ray, but I don't know how I can detect whether the ray actually hit the plane, and where it did. The plane is translated -100 on the Y and rotated 60 on the X, then translated again -100. Can anyone please give me a good tutorial on this? For a complete noob! I am almost new to matrix and vector transformations.
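    For what it's worth, here is the standard recipe, sketched in XNA terms (an assumption - the post doesn't name a framework): unproject the mouse point at the near and far planes to build a world-space ray, then intersect that ray with the plane analytically.

        // Sketch: mouse position -> world-space ray -> ray/plane intersection.
        Ray BuildMouseRay(Viewport viewport, Vector2 mouse, Matrix view, Matrix projection)
        {
            Vector3 near = viewport.Unproject(new Vector3(mouse, 0f), projection, view, Matrix.Identity);
            Vector3 far  = viewport.Unproject(new Vector3(mouse, 1f), projection, view, Matrix.Identity);
            return new Ray(near, Vector3.Normalize(far - near));
        }

        // The plane's normal and D encode the rotated, translated surface;
        // Intersects returns the distance along the ray, or null on a miss.
        Vector3? HitPoint(Ray ray, Plane plane)
        {
            float? t = ray.Intersects(plane);
            return t.HasValue ? ray.Position + ray.Direction * t.Value : (Vector3?)null;
        }

    To respect the plane's finite width and length, transform the hit point into the plane's local space (multiply by the inverse of its world matrix) and check it against the half-extents.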

  • Isometric drawing order with larger-than-single-tile images - drawing order algorithm?

    - by Roger Smith
    I have an isometric map over which I place various images. Most images will fit over a single tile, but some images are slightly larger. For example, I have a bed of size 2x3 tiles. This creates a problem when drawing my objects to the screen, as I get some tiles erroneously overlapping other tiles. The two solutions that I know of are either splitting the image into 1x1 tile segments or implementing my own draw order algorithm, for example by assigning each image a number. The image with number 1 is drawn first, then 2, 3, etc. Does anyone have advice on what I should do? It seems to me like splitting an isometric image is very non-obvious. How do you decide which parts of the image are 'in' a particular tile? I can't afford to split up all of my images manually, either. The draw order algorithm seems like a nicer choice, but I am not sure if it's going to be easy to implement. I can't solve, in my head, how to deal with situations whereby changing the index of one image causes a knock-on effect on many other images. If anyone has any resources/tutorials on this, I would be most grateful.
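    For the algorithm route, a common first cut (a sketch with illustrative names; it assumes axis-aligned rectangular footprints that don't interlock) is to derive the draw key from each object's footprint instead of assigning numbers by hand - recomputing the key each frame is what removes the knock-on renumbering problem:

        // Sketch: painter's ordering keyed on the front corner of the footprint.
        struct IsoObject
        {
            public int TileX, TileY;    // footprint origin (back corner)
            public int TilesW, TilesH;  // footprint size in tiles, e.g. 2x3 for the bed
        }

        static int CompareDepth(IsoObject a, IsoObject b)
        {
            // The front corner (max tile on each axis) decides which object
            // is nearer the camera and therefore drawn later.
            int frontA = (a.TileX + a.TilesW - 1) + (a.TileY + a.TilesH - 1);
            int frontB = (b.TileX + b.TilesW - 1) + (b.TileY + b.TilesH - 1);
            return frontA.CompareTo(frontB);
        }

        // Usage: objects.Sort(CompareDepth); then draw in list order (back to front).

    Footprints that wrap around each other (L-shapes, interlocking rectangles) defeat any single sort key and need a topological sort on "A is behind B" constraints, but for rectangular furniture a comparer like this tends to be enough.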

  • How do you calculate UVW coordinates?

    - by Jenko
    I'm working on a 3D engine and I'm calculating UVT coordinates, where U and V represent pixels on the texture measured from 0-1, and T is:

        T = perspective / Z

    But I'm trying to use this perspective-correct triangle rasteriser, which requires a W per vertex. How do I calculate the W for each vertex for the drawPerspectiveTexturedPolygon() function? Hint: the code comments refer to W as the "homogeneous coordinate"... does that mean anything?
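    For context, this is general rasterisation math rather than anything specific to that rasteriser: W is the homogeneous coordinate a projection produces, and for a simple perspective divide it is just the view-space depth, so W = Z fits the T = perspective / Z scheme above. Perspective-correct interpolation then works on u/w, v/w, and 1/w, dividing back per pixel; a sketch:

        // Sketch: perspective-correct UV interpolation between two vertices.
        static Vector2 InterpolateUV(float u0, float v0, float w0,   // vertex A
                                     float u1, float v1, float w1,   // vertex B
                                     float t)                        // screen-space factor
        {
            float invW   = (1 - t) * (1 / w0) + t * (1 / w1);    // lerp 1/w
            float uOverW = (1 - t) * (u0 / w0) + t * (u1 / w1);  // lerp u/w
            float vOverW = (1 - t) * (v0 / w0) + t * (v1 / w1);  // lerp v/w
            return new Vector2(uOverW / invW, vOverW / invW);    // divide back
        }

    So the W to hand to drawPerspectiveTexturedPolygon() is, under that assumption, each vertex's depth after the view transform (or the fourth component of the position after a full projection matrix, if one is used).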

  • Box2D Bicycle Wheels Motor Problem - Flash 2.1a

    - by Craig
    I have made a bicycle with Box2D using several polygons for the frame at different angles, connected using weld joints, and I have revolute joints on the wheels with a motor. I have made some basic terrain (straight ground and a small ramp) and added keyboard input to control the bicycle with torque to balance it. All of this is done with Box2D's Debug Draw. When the bicycle is on its back wheel but tilted diagonally forward (kinda like this position: /), the motors just cause it to go spinning backwards over, when in reality it should either stay on its back wheel or come down onto both wheels. Here's my code for the revolute joints:

        // Front Wheel Joint
        var frontWheelJointDef:b2RevoluteJointDef = new b2RevoluteJointDef();
        frontWheelJointDef.Initialize(frontWheelBody, secondFrameBody, frontWheelBody.GetWorldCenter());
        frontWheelJointDef.enableMotor = true;
        frontWheelJointDef.maxMotorTorque = 10000;
        frontWheelJoint = _world.CreateJoint(frontWheelJointDef) as b2RevoluteJoint;

        // Rear Wheel Joint
        var rearWheelJointDef:b2RevoluteJointDef = new b2RevoluteJointDef();
        rearWheelJointDef.Initialize(rearWheelBody, firstFrameBody, rearWheelBody.GetWorldCenter());
        rearWheelJointDef.enableMotor = true;
        rearWheelJointDef.maxMotorTorque = 10000;
        rearWheelJoint = _world.CreateJoint(rearWheelJointDef) as b2RevoluteJoint;

    And here's the relevant part of my update function:

        // up and down control the wheel motors
        if (up) {
            motorSpeed -= 0.5;
        }
        if (down) {
            motorSpeed += 0.5;
        }

        // left and right control cart torque
        if (left) {
            middleCentreFrameBody.ApplyTorque(-3);
            gearBody.ApplyTorque(-3);
            firstFrameBody.ApplyTorque(-3);
            secondFrameBody.ApplyTorque(-3);
            rearWheelToChainBody.ApplyTorque(-3);
            chainToFrontFrameBody.ApplyTorque(-3);
            topMiddleFrameBody.ApplyTorque(-3);
        }
        if (right) {
            middleCentreFrameBody.ApplyTorque(3);
            gearBody.ApplyTorque(3);
            firstFrameBody.ApplyTorque(3);
            secondFrameBody.ApplyTorque(3);
            rearWheelToChainBody.ApplyTorque(3);
            chainToFrontFrameBody.ApplyTorque(3);
            topMiddleFrameBody.ApplyTorque(3);
        }

        // motor friction
        motorSpeed *= 0.99;
        // motor max speed
        if (motorSpeed > 100) {
            motorSpeed = 100;
        }
        rearWheelJoint.SetMotorSpeed(motorSpeed);
        frontWheelJoint.SetMotorSpeed(motorSpeed);

    Any ideas what might be causing this? Thanks.

  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using the render-to-texture concept with glFramebufferTexture1D. I am drawing a cube on a non-default FBO, with all the vertices at -1/1 (maximum) in the X, Y, and Z directions. I am setting a small viewport (WIDTH x HEIGHT, see glViewport below) while rendering to the non-default FBO. My background is blue, with the cube in white. For the default FBO, I have created a 1D texture and attached it to the above FBO as the color attachment. I am setting the width of the texture equal to width*height of the FBO viewport. Now, when I render this texture onto another cube, I can see continuous white color at the start or end of each face of the cube. That means part of the face is white and the rest is blue. I am not sure whether this behavior is correct. I expect all the texels to be white, as I am using -1 and 1 coordinates for the cube rendered on the non-default FBO. Code:

        #define WIDTH 3
        #define HEIGHT 3

        GLfloat vertices8[] = {
            1.0f,1.0f,1.0f,   -1.0f,1.0f,1.0f,   -1.0f,-1.0f,1.0f,  1.0f,-1.0f,1.0f,   // face 1
            1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f,  1.0f,1.0f,-1.0f,   // face 2
            1.0f,1.0f,1.0f,   1.0f,-1.0f,1.0f,   1.0f,-1.0f,-1.0f,  1.0f,1.0f,-1.0f,   // face 3
            -1.0f,1.0f,1.0f,  -1.0f,1.0f,-1.0f,  -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f,  // face 4
            1.0f,1.0f,1.0f,   1.0f,1.0f,-1.0f,   -1.0f,1.0f,-1.0f,  -1.0f,1.0f,1.0f,   // face 5
            -1.0f,-1.0f,1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f,  1.0f,-1.0f,1.0f    // face 6
        };

        GLfloat vertices[] = {
            0.5f,0.5f,0.5f,   -0.5f,0.5f,0.5f,   -0.5f,-0.5f,0.5f,  0.5f,-0.5f,0.5f,   // face 1
            0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f,  0.5f,0.5f,-0.5f,   // face 2
            0.5f,0.5f,0.5f,   0.5f,-0.5f,0.5f,   0.5f,-0.5f,-0.5f,  0.5f,0.5f,-0.5f,   // face 3
            -0.5f,0.5f,0.5f,  -0.5f,0.5f,-0.5f,  -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f,  // face 4
            0.5f,0.5f,0.5f,   0.5f,0.5f,-0.5f,   -0.5f,0.5f,-0.5f,  -0.5f,0.5f,0.5f,   // face 5
            -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f,  0.5f,-0.5f,0.5f    // face 6
        };

        GLuint indices[] = {
            0, 2, 1,    0, 3, 2,
            4, 5, 6,    4, 6, 7,
            8, 9, 10,   8, 10, 11,
            12, 15, 14, 12, 14, 13,
            16, 17, 18, 16, 18, 19,
            20, 23, 22, 20, 22, 21
        };

        GLfloat texcoord[] = {
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0
        };

        glGenTextures(1, &id1);
        glBindTexture(GL_TEXTURE_1D, id1);
        glGenFramebuffers(1, &Fboid);

        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        glBindFramebuffer(GL_FRAMEBUFFER, Fboid);
        glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_1D, id1, 0);
        draw_cube();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        draw();

        draw_cube()
        {
            glViewport(0, 0, WIDTH, HEIGHT);
            glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);

            glEnableVertexAttribArray(glGetAttribLocation(temp.psId, "position"));
            glVertexAttribPointer(glGetAttribLocation(temp.psId, "position"), 3, GL_FLOAT, GL_FALSE, 0, vertices8);
            glDrawArrays(GL_TRIANGLE_FAN, 0, 24);
        }

        draw()
        {
            glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
            glClearDepth(1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId, "tk_position"));
            glVertexAttribPointer(glGetAttribLocation(shader_data.psId, "tk_position"), 3, GL_FLOAT, GL_FALSE, 0, vertices);
            nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0, vertices);"));

            glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId, "inputtexcoord"));
            glVertexAttribPointer(glGetAttribLocation(shader_data.psId, "inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0, texcoord);

            glBindTexture(*target11, id1);
            glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, indices);
        }

    When I change WIDTH and HEIGHT to 2 and call glReadPixels with width and height equal to 4 in draw_cube(), I can see the first two pixels in white, the next two in blue (the glClearColor color), the next two white, then blue, and so on. Now, when I change the width parameter in glTexImage1D to 16, ideally I should see alternating patches of white and blue, right? But that's not the case here. Why so?

  • Cocos2d sprite's parent not reflecting true scale value

    - by Paul Renton
    I am encountering issues determining a CCSprite's parent node's scale value. In my game I have a class that extends CCLayer and scales itself based on game triggers. Certain child sprites of this CCLayer have mathematical calculations that become inaccurate once I scale the parent CCLayer. For instance, I have a tank sprite that needs to determine its firing point within the parent node. Whenever I scale the layer and ask the layer for its scale values, they are accurate. However, when I poll the sprites contained within the layer for their parent's scale values, they always appear as one.

        // From within the sprite (outputs 1.0, 1.0):
        CCLOG(@"ChildSprite-> Parent's scale values are scaleX: %f, scaleY: %f",
              self.parent.scaleX, self.parent.scaleY);

        // From within the layer (outputs 0.80, 0.80):
        CCLOG(@"Layer-> ScaleX : %f, ScaleY: %f , SCALE: %f",
              self.scaleX, self.scaleY, self.scale);

    Could anyone explain to me why this is the case? I don't understand why these values are different. Maybe I don't fully understand the inner design of Cocos2d. Any help is appreciated.

  • How to make an object stay relative to another object

    - by Nick
    In the following example there is a guy and a boat. They both have a position, orientation, and velocity. The guy is standing on the shore and would like to board. He changes his position so he is now standing on the boat. The boat changes velocity and orientation and heads off. My character, however, has a velocity of 0,0,0, but I would like him to stay onboard. When I move my character around, I would like him to move as if the boat were the ground he was standing on. How do I keep my character aligned properly with the boat? It is exactly like in World of Warcraft, when you board a boat or zeppelin. This is my physics code for the guy and the boat:

        this.velocity.addSelf(acceleration.multiplyScalar(dTime));
        this.position.addSelf(this.velocity.clone().multiplyScalar(dTime));

    The guy already has a reference to the boat he's standing on, and thus knows the boat's position, velocity, and orientation (even matrices or quaternions can be used).
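    The usual trick (a sketch with XNA-style types purely as an illustration; the idea is framework-agnostic) is to store the character's position in the boat's local space while boarded, and convert back to world space after the boat has moved, so the boat effectively becomes the ground:

        // Sketch: keep a boarded character glued to the boat's frame of reference.
        public class Rider
        {
            public Vector3 LocalPosition;  // position in boat space, set on boarding

            public void Board(Matrix boatWorld, Vector3 worldPosition)
            {
                // Convert the current world position into the boat's local frame.
                LocalPosition = Vector3.Transform(worldPosition, Matrix.Invert(boatWorld));
            }

            public Vector3 GetWorldPosition(Matrix boatWorld)
            {
                // Re-derive the world position from the boat's current transform;
                // walking around on deck only ever modifies LocalPosition.
                return Vector3.Transform(LocalPosition, boatWorld);
            }
        }

    Orientation is handled the same way: compose the character's local rotation with the boat's. On disembarking, convert the local position back to world space once and resume normal physics.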

  • Obtaining a world point from a screen point with an orthographic projection

    - by vargonian
    I assumed this was a straightforward problem but it has been plaguing me for days. I am creating a 2D game with an orthographic camera. I am using a 3D camera rather than just hacking it because I want to support rotating, panning, and zooming. Unfortunately the math overwhelms me when I'm trying to figure out how to determine if a clicked point intersects a bounds (let's say rectangular) in the game. I was under the impression that I could simply transform the screen point (the clicked point) by the inverse of the camera's View * Projection matrix to obtain the world coordinates of the clicked point. Unfortunately this is not the case at all; I get some point that seems to be in some completely different coordinate system. So then as a sanity check I tried taking an arbitrary world point and transforming it by the camera's View*Projection matrices. Surely this should get me the corresponding screen point, but even that didn't work, and it is quickly shattering any illusion I had that I understood 3D coordinate systems and the math involved. So, if I could form this into a question: How would I use my camera's state information (view and projection matrices, for instance) to transform a world point to a screen point, and vice versa? I hope the problem will be simpler since I'm using an orthographic camera and can make several assumptions from that. I very much appreciate any help. If it makes a difference, I'm using XNA Game Studio.
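    For the record, XNA can do both directions for you, and comparing against these is a quick way to find where a hand-rolled version diverges (forgetting the viewport scaling step is a common cause of the "completely different coordinate system" symptom):

        // Sketch: world <-> screen with the built-in viewport helpers.
        Vector3 WorldToScreen(Viewport vp, Vector3 world, Matrix view, Matrix projection)
        {
            return vp.Project(world, projection, view, Matrix.Identity);
        }

        Vector3 ScreenToWorld(Viewport vp, Vector2 screen, Matrix view, Matrix projection)
        {
            // z = 0 unprojects onto the near plane; with an orthographic
            // camera any depth yields the same XY, which is all 2D picking needs.
            return vp.Unproject(new Vector3(screen, 0f), projection, view, Matrix.Identity);
        }

    With the unprojected point in hand, hit-testing a rectangular bounds becomes a plain containment check in world space.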

  • Only draw visible objects to the camera in 2D

    - by Deukalion
    I have a Map; each map has an array of Ground, and each Ground consists of an array of VertexPositionTexture plus a texture name reference, so it renders a texture at those points (as a shape, through triangulation). Now when I render my map I only want to get a list of the objects that are visible in the camera (so I won't loop through more than I have to). Structs:

        public struct Map
        {
            public Ground[] Ground { get; set; }
        }

        public struct Ground
        {
            public int[] Indexes { get; set; }
            public VertexPositionNormalTexture[] Points { get; set; }
            public Vector3 TopLeft { get; set; }
            public Vector3 TopRight { get; set; }
            public Vector3 BottomLeft { get; set; }
            public Vector3 BottomRight { get; set; }
        }

        public struct RenderBoundaries<T>
        {
            public BoundingBox Box;
            public T Items;
        }

    When I load a map:

        foreach (Ground ground in CurrentMap.Ground)
        {
            Boundaries.Add(new RenderBoundaries<Ground>()
            {
                Box = BoundingBox.CreateFromPoints(new Vector3[]
                {
                    ground.TopLeft, ground.TopRight, ground.BottomLeft, ground.BottomRight
                }),
                Items = ground
            });
        }

    TopLeft, TopRight, BottomLeft, and BottomRight are simply the locations of each corner that the shape makes - a rectangle. When I try to loop through only the visible objects, I do this in my Draw method:

        public int Draw(GraphicsDevice device, ICamera camera)
        {
            BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection);

            // Visible count
            int count = 0;

            EffectTexture.World = camera.World;
            EffectTexture.View = camera.View;
            EffectTexture.Projection = camera.Projection;

            foreach (EffectPass pass in EffectTexture.CurrentTechnique.Passes)
            {
                pass.Apply();
                foreach (RenderBoundaries<Ground> render in Boundaries.Where(m => frustum.Contains(m.Box) != ContainmentType.Disjoint))
                {
                    // Draw ground
                    count++;
                }
            }

            return count;
        }

    When I try adding just one ground and then moving the camera so the ground is out of frame, it still returns 1, which means it still gets drawn even though it's not within the camera's view. Am I doing something wrong, or could it be because of my camera? Any ideas why it doesn't work?

  • Simulating an object floating on water

    - by Aaron M
    I'm working on a top-down fishing game. I want to implement some physics and collision detection for the boat moving around the lake. I would like to be able to implement thrust from either the main motor or the trolling motor, the effect of wind on the boat, and the drag of the water on it. I've been looking at the Farseer physics engine, but not having any experience using a physics engine, I am not quite sure that Farseer is suitable for this type of thing (most of the demos seem to be the application of gravity to a vertical top/down type model). Would the Farseer engine be suitable, or would a different engine be more appropriate?
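    Whichever engine is chosen, the three forces described reduce to a few lines per update; a sketch with toy types and made-up coefficients (top-down view, so gravity never enters):

        // Sketch: per-update forces on a top-down boat (unit mass assumed).
        void UpdateBoat(ref Vector2 velocity, Vector2 thrustDir, float throttle,
                        Vector2 wind, float dt)
        {
            const float thrustPower = 40f;  // motor strength, tune to taste
            const float windFactor  = 0.2f; // how strongly wind pushes the hull
            const float waterDrag   = 1.5f; // linear drag coefficient

            Vector2 force = thrustDir * throttle * thrustPower  // main or trolling motor
                          + wind * windFactor                   // wind on the hull
                          - velocity * waterDrag;               // water resistance

            velocity += force * dt;
        }

    Farseer (a Box2D port) can express the same thing: set the world gravity to zero for a top-down view, apply thrust and wind with ApplyForce, and let the body's LinearDamping stand in for water drag. So it is usable here even though most of its demos are side-on.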

  • Is there any existing (old) game that released its graphics as free or open source?

    - by Alexey Petrushin
    I'd like to (re)create an online version (HTML5/JS) of some old game, for example something like HoM&M 2. Maybe some old games were released as free or open source (I'm interested in the graphical assets only)? I heard something about Red Alert being released as free, but I'm not sure if it's permitted to reuse its graphical assets in such a manner. Do you know of such games? Another question: can you please share your thoughts, as a rough estimate, on how much it would cost to pay an artist to create graphics similar to HoM&M 2?

  • Adding interactive graphical elements to text-based browser game with HTML5

    - by st9
    I'm re-writing an old virtual world/browser-based game. It is text and HTML form based, with some static graphics. The client is HTML and JS. I want to introduce some interactive graphical elements to certain parts of the game, for example a 'customise character' page, with hooks to server-side and local data storage. I want to use HTML5/JS; what is the best approach to designing the website? For example, could I use Boilerplate and then embed these interactive elements in the page? Thanks

  • libgdx ActorGestureListener.pan() parameters not moving actor in smooth line

    - by Roar Skullestad
    I override the pan method in ActorGestureListener to implement dragging actors in libgdx (scene2d). When I move individual pieces on a board they move smoothly, but when moving the whole board, the x and y coordinates sent to pan are "jumping", increasingly so the longer the board is dragged. This is an example of the deltaY values sent to pan when dragging smoothly downwards:

        1.1156368
        -0.13125038
        -1.0500145
        0.98439217
        -1.0500202
        0.91877174
        -0.984396
        0.9187679
        -0.98439026
        0.9187641
        -0.13125038

    This is how I move the camera:

        public void pan(InputEvent event, float x, float y, float deltaX, float deltaY) {
            cam.translate(-deltaX, -deltaY);
        }

    I have been using both the delta values sent to pan and the real position values, with similar results. And since it is the coordinates that are wrong, it doesn't matter whether I move the board itself or the camera. What could the cause be, and what is the solution? When I move the camera by only half the delta values, it moves smoothly, but only at half the speed of the mouse pointer:

        cam.translate(-deltaX / 2, -deltaY / 2);

    It seems like moving the camera or board affects the mouse input coordinates. How can I drag at "mouse speed" and still get smooth movements? (This question was also posted on Stack Overflow: http://stackoverflow.com/questions/20693020/libgdx-actorgesturelistener-pan-parameters-not-moving-actor-in-smooth-line)

  • Migration from XNA to SharpDX

    - by Wouter
    My fear is that XNA has reached the end of the road. To keep up with the latest technology a shift to another game framework might be needed. We have many games in a large codebase, all based on XNA. My question is, how much work would it be to migrate to SharpDX and are there other possibilities? Our code base mainly uses basic 3D rendering and the SpriteBatch, no fancy shader stuff. Update: I should have mentioned we only use 2.5D, we have a simple engine that builds textured quads to render text and animated sprites. Also for sound we use XACT (what else..) with some effects.

  • Identify which CCSprite is touched in Cocos2d

    - by PeterK
    I am trying to learn Cocos2d and am experimenting with Ray Wenderlich's whack-a-mole tutorial: www.raywenderlich.com/2560/how-to-create-a-mole-whacking-game-with-cocos2d-part-1. In the tutorial, three CCSprites pop up and you should click on them. However, I am trying to identify which mole (a rat, in my case) is popping up and place a CCSprite above that one. Initially this looked like an easy task, but I am failing. I am trying to NSLog "LEFT HIT". I would guess the problem is in the if-statement and the last "227" height parameter. The left rat's boundingBox = {{99.5, 146.5}, {165, 227}} (from NSLog). The key code is in the ccTouchBegan function:

        -(BOOL) ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
            for (CCSprite *rat in rats) {
                if (rat.userData == FALSE) continue;
                if (CGRectContainsPoint(rat.boundingBox, touchLocation)) {
                    // left:  rat boundingBox = {{99.5, 146.5}, {165, 227}}
                    // mid:   rat boundingBox = {{349.5, 146.5}, {165, 227}}
                    // right: rat boundingBox = {{599.5, 146.5}, {165, 227}}

                    // >>>> Here is where I try to get a hit <<<<
                    if (CGRectContainsPoint(CGRectMake(99.5, 146.55, 165, 227), touchLocation)) {
                        NSLog(@">>>>HIT LEFT<<<<<");
                    }

    I would really appreciate a few ideas on how to get this to work.

  • In a tower defense game, how to do buffs/debuffs?

    - by Gabe
    The question is at the very bottom. If you understand buffs/debuffs in tower defense games, then you should skip the bulk of this question and go to the bottom (separated by the long line). I plan on making an iPhone TD game. The fact that it's an iPhone game isn't relevant, but I am coding in Objective-C with Cocos2D. I am relatively inexperienced in the field of game design, so I'm looking for some advice from someone experienced in this field. In tower defense, there are two things that are relevant to my question: towers and enemies (both have their own classes/children). They each have stats like hp, damage, speed, etc. I want to add buffs/debuffs. For instance: towers A, B, and C each have 15 base damage. Tower D would be a buff tower with no damage of its own, with an AOE (area of effect) aura that gives 10% damage to all towers in range. Tower E might slow enemies in its AOE - a debuff. Stuff like that. The same could go for enemies. Enemy A is a boss that has a slow aura that affects towers and slows their base attack speed, or something along those lines.

    --------------------------------------------------------------------------------

    So the question is: what would be the most effective way to implement this? If it were just towers, then I would just mess around with the tower classes, but since tower classes and enemy classes are both affected, should I make a buff class? TD games can consume quite a bit of memory with large numbers of creeps and towers, and I feel like buffs would also consume quite a bit, so I'm trying to be as efficient as possible.
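    A sketch of the buff-class route (one common design, with illustrative names; not the only way to slice it): give towers and enemies a shared stats base, and make a buff a small reversible modifier object, so an aura just applies it to units entering range and removes it from units leaving.

        // Sketch: shared stat base for towers/enemies, plus a reversible buff.
        public class Unit
        {
            public float BaseDamage, BaseSpeed;
            public float DamageMultiplier = 1f, SpeedMultiplier = 1f;

            public float Damage { get { return BaseDamage * DamageMultiplier; } }
            public float Speed  { get { return BaseSpeed  * SpeedMultiplier; } }
        }

        public class Buff
        {
            private readonly float damageFactor, speedFactor;

            public Buff(float damageFactor, float speedFactor)
            {
                this.damageFactor = damageFactor;
                this.speedFactor  = speedFactor;
            }

            // Multiplicative apply/remove keeps stacking from several auras simple.
            public void Apply(Unit u)  { u.DamageMultiplier *= damageFactor; u.SpeedMultiplier *= speedFactor; }
            public void Remove(Unit u) { u.DamageMultiplier /= damageFactor; u.SpeedMultiplier /= speedFactor; }
        }

        // Usage: tower D's aura is new Buff(1.10f, 1f); enemy A's slow aura on
        // towers is new Buff(1f, 0.8f).

    Memory stays modest because each unit stores only a couple of extra floats, and the Buff instances themselves can be shared between every unit an aura touches.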
