Search Results

Search found 13494 results on 540 pages for 'board game'.


  • Vector vs Scalar velocity?

    - by Serguei Fedorov
    I am revamping an engine I have been working on, on and off, for the last few weeks to use a directional vector to dictate direction, so that I can derive displacement from a direction. The issue I am trying to overcome is that the speed along X and the speed along Y are unrelated to one another: if gravity pulls the object down with increasing velocity, my velocity along X should not change. This is very easy to implement if my speed is broken into a Vector datatype, where Vector.X dictates one direction and Vector.Y dictates the other (assuming we are not concerned with the Z axis). However, this defeats the purpose of the directional vector, because:

        SpeedX = 10
        SpeedY = 15
        [1, 1] normalized = ~[0.7, 0.7]
        [0.7, 0.7] * [10, 15] = [7, 10.5]

    As you can see, my direction is now "scaled" to my speed, which is no longer the direction I want to be moving in. I am very new to vector math and this is a learning project for me. I looked around a little on the internet, but I still want to figure things out on my own (not just look at an example and copy it). Is there a way around this? Using a directional vector is extremely useful, but I am a little stumped by this problem. I am sorry if my mathematical understanding is completely wrong.
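
    One common resolution, as a hedged sketch (Vec2 and Body here are illustrative, not the poster's engine types): store the velocity itself as a per-axis vector, let accelerations such as gravity modify single axes, and derive the unit direction from the velocity only when it is needed, instead of storing a direction and scaling it by per-axis speeds.

        #include <cmath>

        struct Vec2 {
            float x = 0.0f, y = 0.0f;
        };

        Vec2 normalized(Vec2 v) {
            float len = std::sqrt(v.x * v.x + v.y * v.y);
            return len > 0.0f ? Vec2{v.x / len, v.y / len} : Vec2{};
        }

        struct Body {
            Vec2 position;
            Vec2 velocity;  // the "directional vector", already scaled by speed

            void update(float dt) {
                velocity.y += -9.8f * dt;       // gravity touches Y only; X is unaffected
                position.x += velocity.x * dt;
                position.y += velocity.y * dt;
            }

            Vec2 direction() const { return normalized(velocity); }  // derived, not stored
        };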

    Read the article

  • Check if an object is facing another based on angles

    - by Isaiah
    I already have something that calculates the bearing angle to get one object to face another: you give it the positions and it returns the angle for one to face the other. Now I need to figure out how to tell if one object is facing toward another within a specified field, and I can't find any information about how to do this. The objects are obj1 and obj2. Their angles are at obj1.angle and obj2.angle. Their positions are at obj1.pos and obj2.pos, in the format [x,y]. The angle to have one face directly at the other is found with direction(obj1.pos, obj2.pos). I want to set the function up like isfacing(obj1, obj2, area) {...} and return true/false depending on whether it's within the specified field area around the angle needed to directly see it. I've got a base like this:

        var isfacing = function (obj1, obj2, area) {
            var toface = direction(obj1.pos, obj2.pos);
            if (toface + area >= obj1.angle && obj1.angle >= toface - area) {
                return true;
            }
            return false;
        };

    But my problem is that the angles are in 360 degrees, never above 360 and never below 0. How can I account for that here? If the first object's angle is, say, at 0 and I subtract a field area of 20 or so, it'll check whether it's less than -20! If I fix the -20 it becomes 340, but x < 340 isn't what I want; I'd have to test x > 340 in that case. Is there someone out there with more sleep than I that can help a new dev pulling an all-nighter just to get enemies to know if they're attacking in the right direction? I hope I'm making this harder than it seems. I'd just make them always face the main char if the producer didn't want attacks from behind to work while blocking, in which case I'll need the function above anyway. I've tried to give as much info as I can think would help. Also, this is in 2D.
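
    The usual fix, sketched below in C++ for concreteness (the poster's code is JavaScript, but the math is identical): compare the smallest signed difference between the two angles rather than the raw values, so the 0/360 seam disappears.

        #include <cmath>

        // Smallest signed difference between two angles in degrees, in [-180, 180).
        float angleDelta(float a, float b) {
            float d = std::fmod(a - b, 360.0f);
            if (d < -180.0f) d += 360.0f;
            if (d >= 180.0f) d -= 360.0f;
            return d;
        }

        // True if an object facing `facing` degrees sees a target at bearing
        // `toTarget` within +/- `area` degrees.
        bool isFacing(float facing, float toTarget, float area) {
            return std::abs(angleDelta(facing, toTarget)) <= area;
        }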

    Read the article

  • How to implement an intelligent enemy in a shoot-em-up?

    - by bummzack
    Imagine a very simple shoot-em-up, something we all know: you're the player (green), and your movement is restricted to the X axis. Our enemy (or enemies) is at the top of the screen, his movement also restricted to the X axis. The player fires bullets (yellow) at the enemy. I'd like to implement an AI for the enemy that should be really good at avoiding the player's bullets. My first idea was to divide the screen into discrete sections (lanes) and assign weights to them. There are two weights: the "bullet weight" (grey) is the danger imposed by a bullet; the closer the bullet is to the enemy, the higher the bullet weight (0..1, where 1 is highest danger), and lanes without a bullet have a weight of 0. The second weight is the "distance weight" (lime-green): for every lane I add 0.2 movement cost (this value is kind of arbitrary and could be tweaked). Then I simply add the weights (white) and go to the lane with the lowest weight (red). But this approach has an obvious flaw: it can easily miss local minima, as the optimal place to go could simply be between two incoming bullets (as denoted by the white arrow). So here's what I'm looking for:

    - It should find a way through a bullet storm, even when there's no place that doesn't impose the threat of a bullet.
    - The enemy can reliably dodge bullets by picking an optimal (or almost optimal) solution.
    - The algorithm should be able to factor in bullet movement speed (as bullets might move with different velocities).
    - There should be ways to tweak the algorithm so that different levels of difficulty can be applied (dumb to super-intelligent enemies).
    - The algorithm should allow different goals: the enemy doesn't only want to evade bullets, he should also be able to shoot the player. That means positions where the enemy can fire at the player should be preferred when dodging bullets.

    So how would you tackle this? Contrary to other games of this genre, I'd like to have only a few, but very "skilled" enemies instead of masses of dumb ones.
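
    As a hedged starting point (all names illustrative), the poster's weighting can be written down directly, with bullet speed folded in as time-to-impact; the local-minima problem then becomes a matter of using finer lanes or searching over (lane, time) pairs rather than a flaw in the scoring itself.

        #include <cstdlib>
        #include <vector>

        struct Bullet { int lane; float y; float speedY; };  // y increases toward the enemy

        // Lower is better: movement cost plus time-discounted bullet danger.
        float laneScore(int lane, int enemyLane, float enemyY,
                        const std::vector<Bullet>& bullets, float moveCostPerLane) {
            float score = std::abs(lane - enemyLane) * moveCostPerLane;  // distance weight
            for (const Bullet& b : bullets) {
                if (b.lane != lane || b.speedY <= 0.0f) continue;
                float timeToImpact = (enemyY - b.y) / b.speedY;  // faster bullet = sooner
                if (timeToImpact >= 0.0f)
                    score += 1.0f / (1.0f + timeToImpact);       // bullet weight in (0, 1]
            }
            return score;
        }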

    Read the article

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, and having a single renderer that renders everything. I'm currently aiming for the second, for the following reasons:

    - The list can be sorted so each shader is bound only once; otherwise each object would have to bind its shader itself, since it can't know whether it's already active. The objects could likewise be sorted and grouped.
    - It's easier to swap APIs: with a few macro lines it can be easy to switch between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - Rendering code is easier to manage.

    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.

    First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I'm worried that this may end up as a lot of extra pointer weight, though I can sort the list of pointers every so often.

    Second idea: the entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and takes what it wants. That's a lot of passing and/or sorting, however.

    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
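
    A minimal sketch of the first idea, with illustrative names (one plausible shape, not the only one): components register with the renderer, which keeps them sorted by a shader/texture key and re-sorts only when the set changes, so state changes happen once per batch rather than once per object.

        #include <algorithm>
        #include <cstdint>
        #include <tuple>
        #include <vector>

        struct RenderComponent {
            std::uint32_t shaderId = 0;   // primary sort key: batch by shader
            std::uint32_t textureId = 0;  // secondary key
            // ... geometry handles ...
        };

        class Renderer {
        public:
            void add(RenderComponent* rc) { m_items.push_back(rc); m_dirty = true; }
            void remove(RenderComponent* rc) {
                m_items.erase(std::remove(m_items.begin(), m_items.end(), rc), m_items.end());
            }

            void renderAll() {
                if (m_dirty) {  // re-sort only when registrations change, not every frame
                    std::sort(m_items.begin(), m_items.end(),
                              [](const RenderComponent* a, const RenderComponent* b) {
                                  return std::tie(a->shaderId, a->textureId)
                                       < std::tie(b->shaderId, b->textureId);
                              });
                    m_dirty = false;
                }
                // walk m_items, binding shader/texture only when the key changes
            }

        private:
            std::vector<RenderComponent*> m_items;
            bool m_dirty = false;
        };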

    Read the article

  • How to get a point to the left/right of a vector

    - by MulletDevil
    I have a position vector of a point in space and a quaternion for its rotation. What I'm trying to calculate is a point to the left and a point to the right. I have the position and rotation (quaternion) of the red dot; what I want is the position of the green dots. I have a float value for the distance I want these points to be at. With only the position and rotation, is it possible to get a unit direction vector pointing left/right which I can multiply by my float value? Edit: I also know the original direction vector.
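
    Yes, and a hedged sketch follows (plain structs, quaternion stored as x, y, z, w and assumed unit length): rotate the local +X axis by the quaternion to get the "right" direction, then offset the position by plus/minus distance along it.

        struct Vec3 { float x, y, z; };
        struct Quat { float x, y, z, w; };

        // Rotate v by unit quaternion q: v' = v + 2w(u x v) + 2(u x (u x v)), u = (q.x, q.y, q.z)
        Vec3 rotate(const Quat& q, const Vec3& v) {
            Vec3 u{q.x, q.y, q.z};
            Vec3 c1{u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x};
            Vec3 c2{u.y * c1.z - u.z * c1.y, u.z * c1.x - u.x * c1.z, u.x * c1.y - u.y * c1.x};
            return {v.x + 2.0f * (q.w * c1.x + c2.x),
                    v.y + 2.0f * (q.w * c1.y + c2.y),
                    v.z + 2.0f * (q.w * c1.z + c2.z)};
        }

        Vec3 sidePoint(Vec3 pos, const Quat& rot, float distance) {
            Vec3 right = rotate(rot, {1.0f, 0.0f, 0.0f});  // local +X; pass -distance for left
            return {pos.x + right.x * distance,
                    pos.y + right.y * distance,
                    pos.z + right.z * distance};
        }

    Which local axis counts as "right" depends on the engine's conventions; if +X is not it, rotate whichever basis vector is.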

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library in use (OpenGL/DirectX).

    First question: the model should know nothing about the renderer implementation, but in that case, where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    Second question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with a texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    with Renderer::DrawModel checking each flag. But it looks like that will become unwieldy as the number of options grows...
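
    For the first question, one common answer, sketched with illustrative names: the renderer owns the API-specific data and caches it per model, so the Model class stays API-agnostic.

        #include <unordered_map>

        class Model;     // API-agnostic, as in the question
        struct GpuMesh;  // DX11/GL-specific buffers hide behind this

        class Dx11Renderer /* : public IRenderer */ {
        public:
            void DrawObject(Model* model) {
                GpuMesh*& mesh = m_cache[model];
                if (!mesh)
                    mesh = Upload(model);  // build the vertex buffer on first use
                Draw(mesh);
            }
        private:
            GpuMesh* Upload(Model* model);  // fills ID3D11Buffer etc. from GetVertices()
            void Draw(GpuMesh* mesh);
            std::unordered_map<Model*, GpuMesh*> m_cache;  // renderer owns GPU data
        };

    For the second question, a material or sort-key struct per model tends to scale better than an ever-growing flag word, but that is a design preference rather than a rule.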

    Read the article

  • Incorrect results for frustum cull

    - by DeadMG
    Previously, I had a problem with my frustum culling producing too-optimistic results, that is, including many objects that were not in the view volume. Now I have refactored that code and produced a cull that should be accurate to the actual frustum, instead of an axis-aligned box approximation. The problem is that now it never returns anything as being in the view volume. As the mathematical support library I'm using does not provide plane support functions, I had to code much of this functionality myself, and I'm not really the mathematical type, so it's likely that I've made some silly error somewhere. The relevant code follows:

        class Plane {
        public:
            Plane() {
                r0 = Math::Vector(0,0,0);
                normal = Math::Vector(0,1,0);
            }
            Plane(Math::Vector p1, Math::Vector p2, Math::Vector p3) {
                r0 = p1;
                normal = Math::Cross((p2 - p1), (p3 - p1));
            }
            Math::Vector r0;
            Math::Vector normal;
        };

    This class represents one plane as a point and a normal vector.

        class Frustum {
        public:
            Frustum(const std::array<Math::Vector, 8>& points) {
                planes[0] = Plane(points[0], points[1], points[2]);
                planes[1] = Plane(points[4], points[5], points[6]);
                planes[2] = Plane(points[0], points[1], points[4]);
                planes[3] = Plane(points[2], points[3], points[6]);
                planes[4] = Plane(points[0], points[2], points[4]);
                planes[5] = Plane(points[1], points[3], points[5]);
            }
            Plane planes[6];
        };

    The points are passed in an order where (the inverse of) each bit of a point's index indicates whether it's at the left, top, and back of the frustum, respectively. As such, I just picked any three points that all shared one bit in common to define each plane. My intersection test is as follows (based on this):

        bool Intersects(Math::AABB lhs, const Frustum& rhs) const {
            for (int i = 0; i < 6; i++) {
                Math::Vector pvertex = lhs.TopRightFurthest;
                Math::Vector nvertex = lhs.BottomLeftClosest;
                if (rhs.planes[i].normal.x <= -0.0f) {
                    std::swap(pvertex.x, nvertex.x);
                }
                if (rhs.planes[i].normal.y <= -0.0f) {
                    std::swap(pvertex.y, nvertex.y);
                }
                if (rhs.planes[i].normal.z <= -0.0f) {
                    std::swap(pvertex.z, nvertex.z);
                }
                if (Math::Dot(rhs.planes[i].r0, nvertex) < 0.0f) {
                    return false;
                }
            }
            return true;
        }

    Also of note is that because I'm using a left-handed coordinate system, I wrote my Cross function to return the negative of the formula given on Wikipedia. Any suggestions as to where I've made a mistake?
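
    A hedged editorial aside: the final test dots the plane's anchor point r0 with the box corner, which is not a signed-distance test against the plane at all. The usual positive-vertex test, under the assumption that all six normals consistently point into the frustum, looks like this sketch (using the question's own Math names):

        // Box is entirely outside the plane if its most-positive corner is behind it.
        bool BoxOutsidePlane(const Plane& plane, Math::Vector pvertex) {
            return Math::Dot(plane.normal, pvertex - plane.r0) < 0.0f;
        }

    Note it is the p-vertex (the corner furthest along the normal), not the n-vertex, that decides whether the box is wholly outside, and the plane-winding choices in the Frustum constructor need checking for consistent orientation.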

    Read the article

  • 3d trajectory - calculate initial velocity

    - by Skoder
    Hey, I've got a 2D projectile code sample working, but would like to extend it to 3D. How would I calculate the initial velocity along the Z axis? At the moment, I've got:

        initVel.X = (float)Math.Cos(45.0);
        initVel.Y = (float)Math.Sin(45.0);

    How would I convert this to work in 3D (and add the initial velocity for the Z axis)? In my example, X is across, Y is up/down and Z goes into the screen. I also normalize the vector and multiply it by the speed. Thanks
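
    A hedged sketch of the standard extension: use two angles, a pitch (elevation) and a yaw (heading in the XZ plane), matching the axis convention described above. Angles are in radians here; note that Math.Cos in the original snippet also expects radians, so 45.0 there is 45 radians, not 45 degrees.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Result is already unit length; multiply by speed for the initial velocity.
        Vec3 launchDirection(float pitch, float yaw) {
            return {
                std::cos(pitch) * std::cos(yaw),  // X: across
                std::sin(pitch),                  // Y: up/down
                std::cos(pitch) * std::sin(yaw)   // Z: into the screen
            };
        }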

    Read the article

  • Resolving a collision between point and moving line

    - by Conundrumer
    I am designing a 2D physics engine that uses Verlet integration for moving points (the velocities mentioned below can be derived), constraints to represent moving line segments, and continuous collision detection to resolve collisions between moving points and static lines, and between moving/static points and moving lines. I already know how to calculate the time of impact for both types of collision events, and how to resolve moving-point vs. static-line collisions. However, I can't figure out how to resolve moving/static-point vs. moving-line collisions.

    Here are the initial conditions in a point and moving line collision event. We have a line segment joined by two points, A and B. At this instant, point P is touching/colliding with line AB. These points have unit mass and some might have an initial velocity, unless point P is static. The line is massless and has no explicit rotational component, since points A and B can move freely, extending or contracting the line as a result (which will be fixed later by the constraint solver). The collision is inelastic. What are the final velocities of the points after the collision?
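
    One standard approach, offered as a hedged sketch rather than a definitive answer: treat it as a point-vs-edge contact at parameter u (P touches AB at (1-u)A + uB), and cancel the relative normal velocity with an impulse split between P and the endpoints, weighted by (1-u) and u. Setting an inverse mass to zero makes that body static.

        struct Vec2 { float x, y; };
        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // n: unit normal of AB pointing toward P; u: contact parameter along AB.
        void resolveInelastic(Vec2& vP, Vec2& vA, Vec2& vB, Vec2 n, float u,
                              float invMassP, float invMassAB) {
            Vec2 vEdge{(1 - u) * vA.x + u * vB.x, (1 - u) * vA.y + u * vB.y};
            float vrel = dot({vP.x - vEdge.x, vP.y - vEdge.y}, n);
            if (vrel >= 0.0f) return;  // already separating
            float denom = invMassP + ((1 - u) * (1 - u) + u * u) * invMassAB;
            float j = -vrel / denom;   // restitution 0: perfectly inelastic
            vP.x += j * invMassP * n.x;            vP.y += j * invMassP * n.y;
            vA.x -= j * (1 - u) * invMassAB * n.x; vA.y -= j * (1 - u) * invMassAB * n.y;
            vB.x -= j * u * invMassAB * n.x;       vB.y -= j * u * invMassAB * n.y;
        }

    In a Verlet engine the same impulse is usually applied by adjusting previous positions rather than explicit velocities, but the weights are identical.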

    Read the article

  • Android: how to get OpenGL 3D coordinates in an onTouch event

    - by Sandy
    I created a cube in OpenGL and it rotates in an onTouch event. To do this I created a CustomSurfaceView as follows:

        public class CustomSurfaceView extends GLSurfaceView {
            @Override
            public boolean onTouchEvent(MotionEvent e) {
                float x = e.getX();
                float y = e.getY();
                return true;
            }
        }

    Here x and y are screen coordinates. How can I get 3D coordinates from this? I have already looked at gluProject and NeHe, but I don't know how to implement this in my project; it reports that there are no GLdouble or GLfloat classes.
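
    A hedged sketch of the underlying technique, shown with the classic C GLU call that Android's GLU class mirrors (GLdouble and GLfloat are plain typedefs in C, not classes, which likely explains the missing-class errors). Window Y must be flipped because GL's window origin is bottom-left.

        #include <GL/glu.h>

        // winZ: 0.0 = near plane, 1.0 = far plane.
        bool screenToWorld(double winX, double winY, double winZ,
                           const GLdouble model[16], const GLdouble proj[16],
                           const GLint viewport[4], GLdouble out[3]) {
            GLdouble flippedY = viewport[3] - winY;  // touch coords use a top-left origin
            return gluUnProject(winX, flippedY, winZ, model, proj, viewport,
                                &out[0], &out[1], &out[2]) == GL_TRUE;
        }

    Unprojecting the same touch point at winZ = 0 and winZ = 1 yields a ray through the scene, which can then be intersected with the cube.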

    Read the article

  • Rotating a cube using jBullet collisions

    - by Kenneth Bray
    How would one go about rotating/flipping a cube with the physics of jBullet? Here is my Draw method for my cube object:

        public void Draw() {
            // center point posX, posY, posZ
            float radius = .25f; // size / 2;
            glPushMatrix();
            glBegin(GL_QUADS);
            // top
            {
                glColor3f(5.0f, 1.0f, 5.0f); // white
                glVertex3f(posX + radius, posY + radius, posZ - radius);
                glVertex3f(posX - radius, posY + radius, posZ - radius);
                glVertex3f(posX - radius, posY + radius, posZ + radius);
                glVertex3f(posX + radius, posY + radius, posZ + radius);
            }
            // bottom
            {
                glColor3f(1.0f, 1.0f, 0.0f); // ?? color
                glVertex3f(posX + radius, posY - radius, posZ + radius);
                glVertex3f(posX - radius, posY - radius, posZ + radius);
                glVertex3f(posX - radius, posY - radius, posZ - radius);
                glVertex3f(posX + radius, posY - radius, posZ - radius);
            }
            // right side
            {
                glColor3f(1.0f, 0.0f, 1.0f); // ?? color
                glVertex3f(posX + radius, posY + radius, posZ + radius);
                glVertex3f(posX + radius, posY - radius, posZ + radius);
                glVertex3f(posX + radius, posY - radius, posZ - radius);
                glVertex3f(posX + radius, posY + radius, posZ - radius);
            }
            // left side
            {
                glColor3f(0.0f, 1.0f, 1.0f); // ?? color
                glVertex3f(posX - radius, posY + radius, posZ - radius);
                glVertex3f(posX - radius, posY - radius, posZ - radius);
                glVertex3f(posX - radius, posY - radius, posZ + radius);
                glVertex3f(posX - radius, posY + radius, posZ + radius);
            }
            // front side
            {
                glColor3f(0.0f, 0.0f, 1.0f); // blue
                glVertex3f(posX + radius, posY + radius, posZ + radius);
                glVertex3f(posX - radius, posY + radius, posZ + radius);
                glVertex3f(posX - radius, posY - radius, posZ + radius);
                glVertex3f(posX + radius, posY - radius, posZ + radius);
            }
            // back side
            {
                glColor3f(0.0f, 1.0f, 0.0f); // green
                glVertex3f(posX + radius, posY - radius, posZ - radius);
                glVertex3f(posX - radius, posY - radius, posZ - radius);
                glVertex3f(posX - radius, posY + radius, posZ - radius);
                glVertex3f(posX + radius, posY + radius, posZ - radius);
            }
            glEnd();
            glPopMatrix();
            Update();
        }

    This is my update method for the cube position:

        public void Update() {
            Transform trans = new Transform();
            cubeRigidBody.getMotionState().getWorldTransform(trans);
            posX = trans.origin.x;
            posY = trans.origin.y;
            posZ = trans.origin.z;
            Quat4f outRot = new Quat4f();
            trans.getRotation(outRot);
            rotX = outRot.x;
            rotY = outRot.y;
            rotZ = outRot.z;
            rotW = outRot.w;
        }

    I am assuming I need to use glRotatef, but it does not seem to work at all when I try it. This is how I have tried to rotate the cubes:

        GL11.glRotatef(rotW, rotX, 0.0f, 0.0f);
        GL11.glRotatef(rotW, 0.0f, rotY, 0.0f);
        GL11.glRotatef(rotW, 0.0f, 0.0f, rotZ);
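
    A hedged sketch of the usual fix, written as C-style OpenGL for brevity (the question uses LWJGL, whose GL11 calls mirror these one-to-one): glRotatef applies a single rotation about a single axis, so feeding it quaternion components axis-by-axis cannot reproduce the rigid body's orientation. Convert the quaternion to axis-angle (or to a matrix for glMultMatrixf) instead.

        #include <GL/gl.h>
        #include <math.h>

        void applyQuaternion(float x, float y, float z, float w) {
            float s = sqrtf(fmaxf(0.0f, 1.0f - w * w));  // length of the axis part
            if (s < 1e-6f) return;                       // angle ~ 0: nothing to rotate
            float angleDeg = 2.0f * acosf(w) * 180.0f / 3.14159265f;
            glRotatef(angleDeg, x / s, y / s, z / s);    // one call, true axis + angle
        }

    This belongs between glPushMatrix() and the quads, after a glTranslatef(posX, posY, posZ); the vertices would then be specified around the origin instead of baking posX/posY/posZ into every glVertex3f, otherwise the cube orbits the world origin rather than spinning in place.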

    Read the article

  • 3D Modeling Software for Programmer [closed]

    - by Pathachiever11
    I've recently learned how to make games with Unity3d, and now I want to start making them! However, before I can make 3D games, I need to learn 3D modeling for character design, level design, and some animation. What is the easiest 3D modeling software that is compatible with Unity3d? I do not want to spend too much time learning the software. From what I've heard, Blender is a bit complicated to use, while Maya and 3ds Max seem very powerful. Could someone point me in the right direction? I don't want to spend a lot of time learning; I know it's not that easy, but you have the experience and probably know which of these is easiest while still being powerful. Could you recommend one? Many thanks!

    Read the article

  • what's wrong with this Lua code (creating text inside listener in Corona)

    - by Greg
    If you double/triple-click on myObject here, the text does NOT disappear. Why is this not working when there are multiple events being fired? That is, are there actually multiple text objects, with some still existing but no longer having a reference to them held by the local myText variable? Do I have to manually call removeSelf() on the local myText field before assigning it another display.newText(...)?

        display.setStatusBar(display.HiddenStatusBar)

        local myText

        local function hideMyText(event)
            print("hideMyText")
            myText.isVisible = false
        end

        local function showTextListener(event)
            if event.phase == "began" then
                print("showTextListener")
                myText = display.newText("Hello World!", 0, 0, native.systemFont, 30)
                timer.performWithDelay(1000, hideMyText, 1)
            end
        end

        -- Display object to press to show text
        local myObject = display.newImage("inventory_button.png",
            display.contentWidth / 2, display.contentHeight / 2)
        myObject:addEventListener("touch", showTextListener)

    Question 2: why is it that if I add a line BEFORE the "myText = ..." assignment of:

        a) if myText then myText:removeSelf() end   -- THIS FIXES THINGS
        b) if myText then myText = nil end          -- THIS DOES NOT

    I'm interested in hearing how Lua works here as part of the answer...

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts on how to implement them correctly using a scheduler. Just a quick recap.

    Standard behavior tree:
    - Each execution tick, the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure. So in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven BT:
    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first-ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status), the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree, there will be at most one task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic-priority implementations) will always be ordered depth-first (is this right?).

    Now, from my understanding of a possible implementation, there are two requirements I think must be guaranteed by a correct implementation (I'm not sure, though):
    - The result of the traversal should be independent of which implementation strategy is used.
    - The traversal result must be deterministic.

    I'm struggling to guarantee both in the case of parallel nodes. Here's an example:

        Parallel_1
        -->Sequence_1
        ---->leaf_A
        ---->leaf_B
        -->leaf_C

    Considering a FIFO policy in the scheduler, before the leaf_A node terminates, the tasks in the scheduler are:

        P1(suspended), S1(suspended), leaf_A(running), leaf_C(running)

    When leaf_A terminates, leaf_B is scheduled (at the end of the queue), so the queue becomes:

        P1(suspended), S1(suspended), leaf_C(running), leaf_B(running)

    In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: have I understood correctly how event-driven BTs work? How can I guarantee that depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?
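
    One hedged sketch of a common remedy (names illustrative): assign every node a fixed depth-first index on the first traversal and keep the scheduler ordered by that index instead of appending FIFO. Re-scheduling leaf_B then inserts it before leaf_C, matching the classic root-driven traversal; insertions and removals that happen mid-tick would need to be deferred to the end of the update in a real implementation.

        #include <map>

        struct Task {
            int dfsIndex = 0;           // fixed depth-first order from the first traversal
            virtual void update() = 0;
            virtual ~Task() = default;
        };

        class Scheduler {
        public:
            void schedule(Task* t) { m_running.emplace(t->dfsIndex, t); }
            void suspend(Task* t)  { m_running.erase(t->dfsIndex); }

            void tick() {
                for (auto& [index, task] : m_running)  // always in depth-first order
                    task->update();
            }

        private:
            std::map<int, Task*> m_running;  // ordered container restores determinism
        };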

    Read the article

  • Lighting with VBO

    - by nkint
    I'm using a Java JOGL wrapper called Processing (processing.org). I have coded an environment in it and I'm quite proud of it, even though it includes some ready-made features I knew nothing about (namely lights). Then, for some geometry, I decided to use a VBO. I had to go the hard way and recode all the lights, but I can't achieve the same result. The first screenshot showed the original light system, the second the VBO version, produced with this code:

        Vec3D l;
        gl.glEnable(GL.GL_LIGHTING);
        gl.glEnable(GL.GL_LIGHT0);
        gl.glEnable(GL.GL_COLOR_MATERIAL);
        gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT, new float[]{0.8f, 0f, 0f}, 0);
        l = new Vec3D(0, 0, -10);
        gl.glColor3f(0.8f, 0f, 0f);
        gl.glLightfv(GL.GL_LIGHT0, GL.GL_POSITION, new float[]{l.x, l.y, l.z, 0}, 0);
        gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPOT_DIRECTION, new float[]{1, 1, 1, 1}, 0);

    I can't achieve the same light, the same color material, or the same wireframe effects. If needed I can also post the code I use for the VBO, but it is a fairly standard vertex-array setup grabbed off the net that uses glDrawArrays.
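
    A hedged guess at the most common culprit in exactly this situation: fixed-function lighting needs a normal per vertex, and a glDrawArrays path only gets normals if a normal array is supplied and enabled alongside the vertex array. A C-style sketch of the classic client-state setup (JOGL's gl.* calls mirror it directly):

        #include <GL/gl.h>

        void drawLitMesh(const float* vertices, const float* normals, int vertexCount) {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);   // without this, lighting goes wrong
            glVertexPointer(3, GL_FLOAT, 0, vertices);
            glNormalPointer(GL_FLOAT, 0, normals);  // one normal per vertex
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }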

    Read the article

  • What is the purpose of bitdepth for the several components of the framebuffer in glfwWindowHint function of GLFW3?

    - by Rui d'Orey
    I would like to know what the following "framebuffer related hints" of the GLFW3 function glfwWindowHint do:

        GLFW_RED_BITS
        GLFW_GREEN_BITS
        GLFW_BLUE_BITS
        GLFW_ALPHA_BITS
        GLFW_DEPTH_BITS
        GLFW_STENCIL_BITS

    What is the purpose of these? Are their default values usually enough? Where are those bits stored, in a buffer on the GPU? What do they affect, and in what way? Thank you in advance!
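
    For concreteness, a short sketch of how the hints are used (real GLFW3 API; the values shown are the common 32-bit-color, 24-bit-depth layout). They describe the bit depth of each component of the window's default framebuffer, which the driver allocates, typically in GPU memory.

        #include <GLFW/glfw3.h>

        GLFWwindow* createWindow(void) {
            glfwWindowHint(GLFW_RED_BITS,     8);  // bits per color channel
            glfwWindowHint(GLFW_GREEN_BITS,   8);
            glfwWindowHint(GLFW_BLUE_BITS,    8);
            glfwWindowHint(GLFW_ALPHA_BITS,   8);
            glfwWindowHint(GLFW_DEPTH_BITS,  24);  // depth-test precision
            glfwWindowHint(GLFW_STENCIL_BITS, 8);  // stencil-test resolution
            return glfwCreateWindow(640, 480, "demo", NULL, NULL);
        }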

    Read the article

  • Java keyboard input [on hold]

    - by dØd
    I'm trying to implement an input system that can detect whether a certain key was held or was only pressed briefly. So far I have this:

        private static final long KEY_INTERACTION_THRESHOLD = 400; // ms

        // inside a constructor:
        shouldMeasure = true;

        @Override
        public void keyPressed(KeyEvent e) {
            if (shouldMeasure) {
                startTime = System.currentTimeMillis();
                shouldMeasure = false;
                return;
            }
            System.out.println("Button is held down");
            e.consume();
        }

        @Override
        public void keyReleased(KeyEvent e) {
            if (System.currentTimeMillis() - startTime < KEY_INTERACTION_THRESHOLD) {
                System.out.println("Button was only pressed briefly");
            }
            startTime = 0;
            shouldMeasure = true;
            e.consume();
        }

    Now this works, but the problem is that there is a delay between when I press a key to hold and when the message "Button is held down" gets displayed. I understand why this delay occurs (for example, when you press and hold a letter there is a similar delay between the first and the second letter printed out), but I would like to somehow avoid it. I'm using only the Java API.

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: "Time based movement vs frame rate based movement?" and "Fixed time step vs variable time step", but I think I'm lacking a basic understanding of frame-independent movement, because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson (http://lazyfoo.net/SDL_tutorials/lesson32/index.php). I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel". But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a slightly more detailed explanation of how frame-independent movement works. Thank you.
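
    A hedged clarification with a sketch: the 1/1000 factor only converts a timer reading from milliseconds to seconds; the quantity that multiplies the velocity is the time elapsed since the last frame, in seconds. At 200 frames per second that elapsed time happens to be 1/200 s, so a 200 px/s dot moves 1 px per frame; at 50 fps it would move 4 px per frame, ending up in the same place after one second either way.

        #include <chrono>

        int main() {
            using clock = std::chrono::steady_clock;
            auto last = clock::now();
            float x = 0.0f;
            const float speed = 200.0f;  // pixels per second

            while (true) {  // game loop
                auto now = clock::now();
                float dt = std::chrono::duration<float>(now - last).count();  // seconds
                last = now;

                x += speed * dt;  // frame-independent: same px/s at any frame rate
                // render(x); break on quit...
            }
        }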

    Read the article

  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, the graph nodes are labeled RED and BLACK based on a certain criterion. (A) A subgraph containing all the RED nodes is formed, with edges only between directly connected neighbours (note: although the figure shows a tree forming, the subgraph may well contain loops). (B) The problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that I have to keep the already-present edges in the final result.

    Question: is there a fast way of achieving this, given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side remarks:

    - It would be nice for the triangulation algorithm to take a certain vertex priority into account when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first; I can implement such a routine iteratively, but it's O(n^2)).
    - It would also be nice to somehow reflect the "hop distance" when adding edges: add edges first between nodes that were "closer" to each other in the starting topology.

    Nevertheless, disregarding the remarks: is there an already-known scenario similar to this one, where a triangulation is built upon a partially given set of triangles/edges?

    Read the article

  • Continuous Collision Detection Techniques

    - by Griffin
    I know there are quite a few continuous collision detection algorithms out there, but I can't find a list or summary of different 2D techniques; only tutorials on specific algorithms. What techniques are out there for calculating when different 2D bodies will collide, and what are the advantages/disadvantages of each? I say techniques and not algorithms because I have not yet decided how I will store different polygons, which might be concave or even have holes. I plan to make that decision based on what the algorithm requires (for instance, if an algorithm breaks a polygon down into triangles or convex shapes, I will simply store the polygon data in that form).
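
    For a concrete taste of the genre, a hedged sketch of one of the simplest such techniques, swept circle-vs-circle, which poses "when do the shapes first touch during this frame?" as a quadratic in time. Its advantage is an exact closed form; its disadvantage is that polygons need per-feature sweeps or iterative methods (e.g. conservative advancement) instead.

        #include <cmath>
        #include <optional>

        struct Vec2 { float x, y; };
        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // p1, p2: centers at frame start; v: displacement of body 2 relative to
        // body 1 over the frame. Returns t in [0, 1], the first moment of contact.
        std::optional<float> timeOfImpact(Vec2 p1, Vec2 p2, Vec2 v, float r1, float r2) {
            Vec2 d{p2.x - p1.x, p2.y - p1.y};
            float r = r1 + r2;
            float a = dot(v, v), b = 2.0f * dot(d, v), c = dot(d, d) - r * r;
            float disc = b * b - 4.0f * a * c;
            if (a == 0.0f || disc < 0.0f) return std::nullopt;  // no relative motion, or a miss
            float t = (-b - std::sqrt(disc)) / (2.0f * a);      // earlier root = first contact
            if (t < 0.0f || t > 1.0f) return std::nullopt;      // not within this frame
            return t;
        }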

    Read the article

  • How Do I Do Alpha Transparency Properly In XNA 4.0?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique which doesn't solve my problem. I need the ability to create semi-transparent sprites (texture2D's really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further I'll go ahead and paste my example code. public void Draw(SpriteBatch batch, Camera camera, float alpha) { int tileMapWidth = Width; int tileMapHeight = Height; batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, SamplerState.PointWrap, DepthStencilState.Default, RasterizerState.CullNone, null, camera.TransformMatrix); for (int x = 0; x < tileMapWidth; x++) { for (int y = 0; y < tileMapHeight; y++) { int tileIndex = _map[y, x]; if (tileIndex != -1) { Texture2D texture = _tileTextures[tileIndex]; batch.Draw( texture, new Rectangle( (x * Engine.TileWidth), (y * Engine.TileHeight), Engine.TileWidth, Engine.TileHeight), new Color(new Vector4(1f, 1f, 1f, alpha ))); } } } batch.End(); } As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0 but it still renders both tiles, with the lower z ordered sprite showing through, discolored because of the blending. This is not a desired effect, I want the higher z-ordered sprite to fade out and not effect the color beneath it in such a manner. I might be way off here as I'm fairly new to XNA development so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA

    Read the article

  • Moving while doing loop animation in RPGMaker

    - by AzDesign
    I made a custom class to display a character portrait in RPG Maker XP. Here is the class:

        class Poser
          attr_accessor :images
          def initialize
            @images = Sprite.new
            @images.bitmap = RPG::Cache.picture('Character.png') # 100x300
            @images.x = 540 # place it in the bottom right corner of the screen
            @images.y = 180
          end
        end

    I create an instance as a global variable from an event on the map, and tada! The image pops up. OK, nice. But I'm not satisfied: now I want it to have a bobbing-head animation (just like when we breathe, we sometimes bob our head up and down), so I added more methods:

        def move(x, y)
          @images.x += x
          @images.y += y
        end

        def animate(x, y, step, delay)
          forward = true
          2.times {
            step.times {
              wait(delay)
              if forward
                move(x / step, y / step)
              else
                move(-x / step, -y / step)
              end
            }
            wait(delay * 3)
            forward = false
          }
        end

        def wait(time)
          while time > 0
            time -= 1
            Graphics.update
          end
        end

    I called the method to run the animation and it works. So far so good. But the problem is that WHILE the portrait goes up and down, I cannot move my character until the animation is finished. That's it: I'm stuck in the loop block. What I want is to watch the portrait move up and down while I walk around the village, talk to NPCs, etc. Can anyone solve my problem, or suggest a better solution? Thanks in advance.

    Read the article

  • Make objects slide across the screen in random positions

    - by user3475907
    I want to make an object appear randomly at the right-hand side of the screen, then slide across the screen and disappear at the left-hand side. I am working with libgdx. I have this bit of code, but it makes items fall from the top down. Please help.

        public EntityManager(int amount, OrthoCamera camera) {
            player = new Player(new Vector2(15, 230), new Vector2(0, 0), this, camera);
            for (int i = 0; i < amount; i++) {
                float x = MathUtils.random(0, MainGame.HEIGHT - TextureManager.ENEMY.getHeight());
                float y = MathUtils.random(MainGame.WIDTH, MainGame.WIDTH * 10);
                float speed = MathUtils.random(2, 10);
                addEntity(new Enemy(new Vector2(x, y), new Vector2(-0, -speed)));
            }
        }

    Read the article

  • How do I find which isometric tiles are inside the cameras current view?

    - by Steve
    I'm putting together an isometric engine and need to cull the tiles that aren't in the camera's current view. My tile coordinates go from left to right on X and top to bottom on Y, with (0,0) being the top left corner. If I have access to, say, the top-left, top-right, bottom-left and bottom-right corner coordinates, is there a formula or something I could use to determine which tiles fall in range? (A screenshot of the tile layout was attached for reference.) If there isn't one, or there's a better way to determine which tiles are on screen and which to cull, I'm all ears and grateful for any ideas. I've got a few other methods I may be able to try, such as checking the position of each tile against a rectangle, but I pretty much just need something quick. Thanks for giving this a read =)
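
    A hedged sketch, assuming the common 2:1 diamond projection (screenX = (tx - ty) * tileW/2, screenY = (tx + ty) * tileH/2; the engine's actual transform may differ): invert the projection for each camera corner, then take the min/max of the resulting tile coordinates, padded by one, as a conservative draw range.

        #include <algorithm>
        #include <cmath>

        struct TileRange { int minX, minY, maxX, maxY; };

        static void screenToTile(float sx, float sy, float tileW, float tileH,
                                 float& tx, float& ty) {
            float a = sx / (tileW * 0.5f);  // = tx - ty
            float b = sy / (tileH * 0.5f);  // = tx + ty
            tx = (a + b) * 0.5f;
            ty = (b - a) * 0.5f;
        }

        TileRange visibleTiles(float left, float top, float right, float bottom,
                               float tileW, float tileH, int mapW, int mapH) {
            const float cx[4] = {left, right, left, right};
            const float cy[4] = {top, top, bottom, bottom};
            float txs[4], tys[4];
            for (int i = 0; i < 4; ++i)
                screenToTile(cx[i], cy[i], tileW, tileH, txs[i], tys[i]);
            TileRange r;
            r.minX = std::max(0, (int)std::floor(*std::min_element(txs, txs + 4)) - 1);
            r.maxX = std::min(mapW - 1, (int)std::ceil(*std::max_element(txs, txs + 4)) + 1);
            r.minY = std::max(0, (int)std::floor(*std::min_element(tys, tys + 4)) - 1);
            r.maxY = std::min(mapH - 1, (int)std::ceil(*std::max_element(tys, tys + 4)) + 1);
            return r;  // draw only tiles with minX <= x <= maxX, minY <= y <= maxY
        }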

    Read the article

  • backface culling error

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;

        for (Triangle &t : m_triangles) {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }

        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t),
                              m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason?
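
    A hedged editorial guess at the cause: with a perspective camera, the view direction is not a constant (0, 0, 1); a face is back-facing when its normal points away from the vector running from the camera to the face itself. A self-contained sketch of that test (plain structs rather than the poster's Math types; the winding-order/cross-product sign convention still has to match the mesh):

        struct V3 { float x, y, z; };
        static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // faceNormal: the face's normal; anyVertex: any vertex of the triangle in
        // camera space (camera at the origin, so this IS the per-face view vector).
        bool isBackFacing(V3 faceNormal, V3 anyVertex) {
            return dot3(faceNormal, anyVertex) >= 0.0f;
        }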

    Read the article
