Search Results

Search found 12087 results on 484 pages for 'game mechanics'.

Page 300/484 | < Previous Page | 296 297 298 299 300 301 302 303 304 305 306 307  | Next Page >

  • AABB - AABB Collision, which face do I hit?

    - by PeeS
    To allow my objects to slide when they collide, I need to: know which face of the AABB they collide with, calculate the normal to that face, then return the normal and calculate the impulse to apply to the player's velocity. Question: how can I calculate which face of the AABB I collided with, knowing that I have two AABBs colliding? One is the player and the other is a world object. Here's what that looks like (problem collision circled in white). Thank you for your help.
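    For reference, one common way to answer "which face did I hit" (a sketch only, not taken from the question) is to compare the overlap of the two boxes on each axis: the axis with the smallest overlap is the axis of collision, and the sign of the centre offset along that axis gives the outward face normal. A minimal C#/XNA-style sketch, assuming the boxes already intersect:
    using System;
    using Microsoft.Xna.Framework; // XNA-style Vector3 / BoundingBox

    public static class AabbCollision
    {
        // Returns the outward normal of the face of 'obstacle' that 'player' penetrates least,
        // i.e. the face the player most likely hit. Assumes the boxes already overlap.
        public static Vector3 HitFaceNormal(BoundingBox player, BoundingBox obstacle)
        {
            // Penetration depth along each axis.
            float overlapX = Math.Min(player.Max.X, obstacle.Max.X) - Math.Max(player.Min.X, obstacle.Min.X);
            float overlapY = Math.Min(player.Max.Y, obstacle.Max.Y) - Math.Max(player.Min.Y, obstacle.Min.Y);
            float overlapZ = Math.Min(player.Max.Z, obstacle.Max.Z) - Math.Max(player.Min.Z, obstacle.Min.Z);

            Vector3 playerCentre = (player.Min + player.Max) * 0.5f;
            Vector3 obstacleCentre = (obstacle.Min + obstacle.Max) * 0.5f;
            Vector3 offset = playerCentre - obstacleCentre;

            // Smallest overlap picks the axis; the normal points from the obstacle towards the player.
            if (overlapX <= overlapY && overlapX <= overlapZ)
                return new Vector3(Math.Sign(offset.X), 0, 0);
            if (overlapY <= overlapZ)
                return new Vector3(0, Math.Sign(offset.Y), 0);
            return new Vector3(0, 0, Math.Sign(offset.Z));
        }
    }
    With that normal, sliding amounts to removing the velocity component along it, e.g. velocity -= Vector3.Dot(velocity, normal) * normal.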

    Read the article

  • fragment shader directional light positioning with camera

    - by meWantToLearn
    I'm trying to set up directional lighting in the fragment shader, so that the direction of my light moves with the camera position.
    #version 150 core
    uniform sampler2D diffuseTex;
    uniform vec4 lightColour;
    uniform vec3 lightDirection;
    vec3 LNorm = normalize(lightDirection);
    vec3 normal = normalize(IN.normal);
    vec3 calColour = lightColour[i].rgb * intensity;
    gl_FragColor = vec4(diffuse.rbg * calColour, diffuse.a);
    It lights the entire scene.

    Read the article

  • openGL Camera setup for Zoom in/out centered at point under cursor

    - by user3228921
    I am trying to implement a zoom in/out navigation mode in an OpenGL 3D viewer. I was able to implement zoom centered on the screen center just by moving the eye towards the center in perspective mode. Now I am trying to zoom centered on an arbitrary position under the cursor. I am unable to figure out how I should move my camera forward and backward such that the point under the cursor remains at the same screen coordinates after zooming in/out. Any help would be appreciated. Below are the images which show the desired effect. Just to mention, I am working in perspective mode with eye, target and up vectors to control the camera. The same effect can be found in Google SketchUp and in the 'zoom to mouse position' setting in Blender.
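    For what it's worth, the usual trick (a sketch of the idea, not the asker's code) is to dolly the camera along the ray that passes through the cursor: if the camera only translates along that ray, the ray through that pixel is the same line before and after the move, so every point on it stays under the cursor. Assuming you can already build the world-space cursor ray from an unproject (gluUnProject or equivalent), a minimal C#-style sketch:
    using Microsoft.Xna.Framework; // XNA-style Vector3, used only for the vector math

    public class EyeTargetCamera
    {
        public Vector3 Eye;
        public Vector3 Target;

        // rayDirection: normalized world-space direction of the ray through the cursor.
        // amount: positive to zoom in, negative to zoom out (e.g. scaled from the mouse wheel).
        public void ZoomTowardsCursor(Vector3 rayDirection, float amount)
        {
            Vector3 move = rayDirection * amount;
            // Translating eye and target by the same offset keeps the view direction unchanged,
            // so the point under the cursor keeps projecting to the same pixel.
            Eye += move;
            Target += move;
        }
    }
    You may also want to scale amount by the distance to the picked point so the zoom slows down as it approaches the geometry, which is roughly how the SketchUp/Blender behaviour feels.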

    Read the article

  • Strange Flash AS3 xml Socket behavior

    - by Rnd_d
    I have a problem which I can't understand. To understand it, I wrote a socket client in AS3 and a server in Python/Twisted; you can see the code of both applications below. Launch two clients at the same time, arrange them so that you can see both windows, and press the connection button in both windows. Then press and hold any key. What I'm expecting: the client with the pressed key sends the message "some data" to the server, then the server sends this message to all the clients (including the original sender). Then each client moves the 'connectButton' button to the right and prints a message to the log with the time in the format "min:secs:milliseconds". What is going wrong: the motion is smooth in the client that sends the message, but in all other clients the motion is jerky. This happens because messages to those clients arrive later than to the original sending client. And if we have three clients (let's name them A, B, C) and we send a message from A, the logged send times for B and C will be the same. Why do the other clients receive these messages later than the original sender? By the way, on Ubuntu 10.04/Chrome all the motion is smooth, with the two clients launched in separate Chrome windows. Windows screenshot: can't post the Linux screenshot, I need more than 10 reputation to post more hyperlinks.
    Listing of log, four clients simultaneously:
    [16:29:33.280858] 62.140.224.1 >> some data
    [16:29:33.280912] 87.249.9.98 << some data
    [16:29:33.280970] 87.249.9.98 << some data
    [16:29:33.281025] 87.249.9.98 << some data
    [16:29:33.281079] 62.140.224.1 << some data
    [16:29:33.323267] 62.140.224.1 >> some data
    [16:29:33.323326] 87.249.9.98 << some data
    [16:29:33.323386] 87.249.9.98 << some data
    [16:29:33.323440] 87.249.9.98 << some data
    [16:29:33.323493] 62.140.224.1 << some data
    [16:29:34.123435] 62.140.224.1 >> some data
    [16:29:34.123525] 87.249.9.98 << some data
    [16:29:34.123593] 87.249.9.98 << some data
    [16:29:34.123648] 87.249.9.98 << some data
    [16:29:34.123702] 62.140.224.1 << some data
    AS3 client code:
    package {
        import adobe.utils.CustomActions;
        import flash.display.Sprite;
        import flash.events.DataEvent;
        import flash.events.Event;
        import flash.events.IOErrorEvent;
        import flash.events.KeyboardEvent;
        import flash.events.MouseEvent;
        import flash.events.SecurityErrorEvent;
        import flash.net.XMLSocket;
        import flash.system.Security;
        import flash.text.TextField;

        public class Main extends Sprite {
            private var socket :XMLSocket;
            private var textField :TextField = new TextField;
            private var connectButton :TextField = new TextField;

            public function Main():void {
                if (stage) init();
                else addEventListener(Event.ADDED_TO_STAGE, init);
            }

            private function init(event:Event = null):void {
                socket = new XMLSocket();
                socket.addEventListener(Event.CONNECT, connectHandler);
                socket.addEventListener(DataEvent.DATA, dataHandler);
                stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);
                addChild(textField);
                textField.y = 50;
                textField.width = 780;
                textField.height = 500;
                textField.border = true;
                connectButton.selectable = false;
                connectButton.border = true;
                connectButton.addEventListener(MouseEvent.MOUSE_DOWN, connectMouseDownHandler);
                connectButton.width = 105;
                connectButton.height = 20;
                connectButton.text = "click here to connect";
                addChild(connectButton);
            }

            private function connectHandler(event:Event):void {
                textField.appendText("Connect\n");
                textField.appendText("Press and hold any key\n");
            }

            private function dataHandler(event:DataEvent):void {
                var now:Date = new Date();
                textField.appendText(event.data + " time = " + now.getMinutes() + ":" + now.getSeconds() + ":" + now.getMilliseconds() + "\n");
                connectButton.x += 2;
            }

            private function keyDownHandler(event:KeyboardEvent):void {
                socket.send("some data");
            }

            private function connectMouseDownHandler(event:MouseEvent):void {
                var connectAddress:String = "ep1c.org";
                var connectPort:Number = 13250;
                Security.loadPolicyFile("xmlsocket://" + connectAddress + ":" + String(connectPort));
                socket.connect(connectAddress, connectPort);
            }
        }
    }
    Python server code:
    from twisted.internet import reactor
    from twisted.internet.protocol import ServerFactory
    from twisted.protocols.basic import LineOnlyReceiver
    import datetime

    class EchoProtocol(LineOnlyReceiver):
        #####
        name = ""
        id = 0
        delimiter = chr(0)
        #####

        def getName(self):
            return self.transport.getPeer().host

        def connectionMade(self):
            self.id = self.factory.getNextId()
            print "New connection from %s - id:%s" % (self.getName(), self.id)
            self.factory.clientProtocols[self.id] = self

        def connectionLost(self, reason):
            print "Lost connection from " + self.getName()
            del self.factory.clientProtocols[self.id]
            self.factory.sendMessageToAllClients(self.getName() + " has disconnected.")

        def lineReceived(self, line):
            print "[%s] %s >> %s" % (datetime.datetime.now().time(), self, line)
            if line == "<policy-file-request/>":
                data = """<?xml version="1.0"?>
    <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
    <!-- Policy file for xmlsocket://ep1c.org -->
    <cross-domain-policy>
    <allow-access-from domain="*" to-ports="%s" />
    </cross-domain-policy>""" % PORT
                self.send(data)
            else:
                self.factory.sendMessageToAllClients(line)

        def send(self, line):
            print "[%s] %s << %s" % (datetime.datetime.now().time(), self, line)
            if line:
                self.transport.write(str(line) + chr(0))
            else:
                print "Nothing to send"

        def __str__(self):
            return self.getName()

    class ChatProtocolFactory(ServerFactory):
        protocol = EchoProtocol

        def __init__(self):
            self.clientProtocols = {}
            self.nextId = 0

        def getNextId(self):
            id = self.nextId
            self.nextId += 1
            return id

        def sendMessageToAllClients(self, msg):
            for client in self.clientProtocols:
                self.clientProtocols[client].send(msg)

        def sendMessageToClient(self, id, msg):
            self.clientProtocols[id].send(msg)

    PORT = 13250
    print "Starting Server"
    factory = ChatProtocolFactory()
    reactor.listenTCP(PORT, factory)
    reactor.run()

    Read the article

  • Why are trees shining in background?

    - by Kinected
    Currently I am creating a forest scene in the dark, and the trees shine when they are far away, but when I get close they are fine. I have the shaders set to "Nature/Tree Soft Occlusion [bark/leaves]", but they still render strangely at a distance while looking fine up close. I tried placing the trees in a folder named "Ambient-Occlusion" as suggested here, but no luck. Fog is also turned off. Thanks in advance.

    Read the article

  • Random Position between ranges.

    - by blakey87
    Does anyone have a good algorithm for generating a random y position for spawning a block, one that takes into account a minimum and maximum height and allows the player to jump onto the block? Blocks will be spawned continually, so the player must always be able to jump onto the next block, bearing in mind the minimum position, which would be the ground, and the maximum, which would be the player's jump height, while also respecting the ceiling.
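    A minimal sketch of one way to do this (the names and parameters are made up for illustration): clamp the candidate range to what is reachable from the previous block, then pick uniformly inside it.
    using System;

    public static class BlockSpawner
    {
        private static readonly Random Rng = new Random();

        // Assumes y increases upwards, the player can fall any distance,
        // and can only jump 'jumpHeight' above the block they are standing on.
        public static float NextBlockY(float groundY, float ceilingY,
                                       float previousBlockY, float jumpHeight)
        {
            float min = groundY;                                         // never below the ground
            float max = Math.Min(ceilingY, previousBlockY + jumpHeight); // within jump reach, below the ceiling
            return min + (float)Rng.NextDouble() * (max - min);          // uniform in [min, max]
        }
    }
    If consecutive blocks should not sit at almost the same height, you can additionally reject candidates that are within some minimum vertical distance of the previous block and re-roll.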

    Read the article

  • Why is my shadowmap all white?

    - by Berend
    I was trying out a shadow map, but my shadow map is all white. I think there is some problem with my homogeneous component. Can anybody help me? The rest of my code is written in XNA. Here is the HLSL code I used:
    float4x4 xWorld;
    float4x4 xView;
    float4x4 xProjection;
    struct VertexToPixel
    {
        float4 Position : POSITION;
        float4 ScreenPos : TEXCOORD1;
        float Depth : TEXCOORD2;
    };
    struct PixelToFrame
    {
        float4 Color : COLOR0;
    };
    //------- Technique: ShadowMap --------
    VertexToPixel MyVertexShader(float4 inPos: POSITION0, float3 inNormal: NORMAL0)
    {
        VertexToPixel Output = (VertexToPixel)0;
        float4x4 preViewProjection = mul(xView, xProjection);
        float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
        Output.Position = mul(inPos, mul(xWorld, preViewProjection));
        Output.Depth = Output.Position.z / Output.Position.w;
        Output.ScreenPos = Output.Position;
        return Output;
    }
    float4 MyPixelShader(VertexToPixel PSIn) : COLOR0
    {
        PixelToFrame Output = (PixelToFrame)0;
        Output.Color = PSIn.ScreenPos.z / PSIn.ScreenPos.w;
        return Output.Color;
    }
    technique ShadowMap
    {
        pass Pass0
        {
            VertexShader = compile vs_2_0 MyVertexShader();
            PixelShader = compile ps_2_0 MyPixelShader();
        }
    }

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs to. This means the entire process is effectively:
    for each slice
        for each texel in depth texture
            process color textures and render to slice
    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least until the discard;) for all texels in it. It would be much faster if I could rearrange the process to:
    for each texel in depth texture
        figure out what slice it should end up in
        process color textures and render to slice
    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • How do I prevent a KActor from changing the orientation of its Z-Axis?

    - by Almo
    So I have an object that inherits from KActor which I would like to behave as a dynamic physics object, except that I want its Z-axis to remain upright, and very stiffly so. I've tried bStayUpright, which enables the "Stay Upright Spring". The problem is that it's a spring, so the object oscillates into position, when I want it to remain oriented properly without wobbling. In the image above, the yellow block has fallen onto the gray box, and it is currently pivoting about the contact point as it tries to right itself. Should I be tweaking the StayUprightMaxTorque and StayUprightTorqueFactor parameters, or should I be using a Constraint of some sort?

    Read the article

  • Triangle Picking is Picking Back Faces

    - by Tangeleno
    I'm having a bit of trouble with 3D picking. At first I thought my ray was inaccurate, but it turns out that the picking is happening both on faces facing the camera and on faces facing away from the camera, which I'm currently culling. Here's my ray creation code; I'm pretty sure the problem isn't here, but I've been wrong before.
    private uint Pick()
    {
        Ray cursorRay = CalculateCursorRay();
        Vector3? point = Control.Mesh.RayCast(cursorRay);
        if (point != null)
        {
            Tile hitTile = Control.TileMesh.GetTileAtPoint(point);
            return hitTile == null ? uint.MaxValue : (uint)(hitTile.X + hitTile.Y * Control.Generator.TilesWide);
        }
        return uint.MaxValue;
    }

    private Ray CalculateCursorRay()
    {
        Vector3 nearPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 0f));
        Vector3 farPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 1f));
        Vector3 direction = farPoint - nearPoint;
        direction.Normalize();
        return new Ray(nearPoint, direction);
    }

    public Vector3 Camera.Unproject(Vector3 source)
    {
        Vector4 result;
        result.X = (source.X - _control.ClientRectangle.X) * 2 / _control.ClientRectangle.Width - 1;
        result.Y = (source.Y - _control.ClientRectangle.Y) * 2 / _control.ClientRectangle.Height - 1;
        result.Z = source.Z - 1;
        if (_farPlane - 1 == 0) result.Z = 0;
        else result.Z = result.Z / (_farPlane - 1);
        result.W = 1f;
        result = Vector4.Transform(result, Matrix4.Invert(ProjectionMatrix));
        result = Vector4.Transform(result, Matrix4.Invert(ViewMatrix));
        result = Vector4.Transform(result, Matrix4.Invert(_world));
        result = Vector4.Divide(result, result.W);
        return new Vector3(result.X, result.Y, result.Z);
    }
    And my triangle intersection code, ripped mainly from the XNA picking sample:
    public float? Intersects(Ray ray)
    {
        float? closestHit = Bounds.Intersects(ray);
        if (closestHit != null && Vertices.Length == 3)
        {
            Vector3 e1, e2;
            Vector3.Subtract(ref Vertices[1].Position, ref Vertices[0].Position, out e1);
            Vector3.Subtract(ref Vertices[2].Position, ref Vertices[0].Position, out e2);
            Vector3 directionCrossEdge2;
            Vector3.Cross(ref ray.Direction, ref e2, out directionCrossEdge2);
            float determinant;
            Vector3.Dot(ref e1, ref directionCrossEdge2, out determinant);
            if (determinant > -float.Epsilon && determinant < float.Epsilon) return null;
            float inverseDeterminant = 1.0f / determinant;
            Vector3 distanceVector;
            Vector3.Subtract(ref ray.Position, ref Vertices[0].Position, out distanceVector);
            float triangleU;
            Vector3.Dot(ref distanceVector, ref directionCrossEdge2, out triangleU);
            triangleU *= inverseDeterminant;
            if (triangleU < 0 || triangleU > 1) return null;
            Vector3 distanceCrossEdge1;
            Vector3.Cross(ref distanceVector, ref e1, out distanceCrossEdge1);
            float triangleV;
            Vector3.Dot(ref ray.Direction, ref distanceCrossEdge1, out triangleV);
            triangleV *= inverseDeterminant;
            if (triangleV < 0 || triangleU + triangleV > 1) return null;
            float rayDistance;
            Vector3.Dot(ref e2, ref distanceCrossEdge1, out rayDistance);
            rayDistance *= inverseDeterminant;
            if (rayDistance < 0) return null;
            return rayDistance;
        }
        return closestHit;
    }
    I'll admit I don't fully understand all of the math behind the intersection, and that is something I'm working on, but my understanding was that if rayDistance was less than 0 the face was facing away from the camera and shouldn't be counted as a hit.
    So my question is: is there an issue with my intersection or ray creation code, or is there another check I need to perform to tell whether the face is facing away from the camera? If so, any hints on what that check might contain would be appreciated.
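    For reference, one check that is commonly added to this kind of test (a sketch of the general idea, not a diagnosis of the asker's exact bug): a triangle faces away from the ray when the dot product of its geometric normal and the ray direction is non-negative, and such triangles can be rejected before the rest of the test. A small C#/XNA-style helper, assuming counter-clockwise winding:
    using Microsoft.Xna.Framework; // XNA-style Vector3 and Ray, matching the question's code

    public static class BackfaceTest
    {
        // Returns true when the triangle (v0, v1, v2, counter-clockwise winding)
        // faces away from the ray and should be ignored by the picking test.
        public static bool FacesAway(Ray ray, Vector3 v0, Vector3 v1, Vector3 v2)
        {
            Vector3 e1 = v1 - v0;
            Vector3 e2 = v2 - v0;
            Vector3 normal = Vector3.Cross(e1, e2); // geometric (unnormalised) face normal
            // If the normal points roughly along the ray direction, the ray sees the back of the triangle.
            return Vector3.Dot(normal, ray.Direction) >= 0f;
        }
    }
    Equivalently, in a Möller-Trumbore style test the determinant already carries this information: with the winding above, a positive determinant means front-facing, so (assuming the winding matches) replacing the |determinant| < epsilon rejection with determinant < epsilon culls back faces as well. rayDistance < 0 only means the hit lies behind the ray origin, not that the face is back-facing.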

    Read the article

  • C# Collision test of a ship and asteriod, angle confusion

    - by Cherry
    We are trying to do collision detection for the ship and an asteroid. If successful, it should detect the collision before N turns. However, it gets confused between angles 350 and 15 and is not really working: sometimes the ship moves, but sometimes it does not move at all, and it also does not shoot at the right time. How can I make the collision detection work, and how do I solve the angle confusion problem?
    // Get velocities of asteroid
    Console.WriteLine("lol");
    // IF equation is between -2 and -3
    if (equation1a <= -2)
    {
        // Calculate no. turns till asteroid hits
        float turns_till_hit = dx / vx;
        // Calculate angle of asteroid
        float asteroid_angle_rad = (float)Math.Atan(Math.Abs(dy / dx));
        float asteroid_angle_deg = (float)(asteroid_angle_rad * 180 / Math.PI);
        float asteroid_angle = 0;
        // Calculate angle if asteroid is in certain positions
        if (asteroid.Y > ship.Y && asteroid.X > ship.X) { asteroid_angle = asteroid_angle_deg; }
        else if (asteroid.Y < ship.Y && asteroid.X > ship.X) { asteroid_angle = (360 - asteroid_angle_deg); }
        else if (asteroid.Y < ship.Y && asteroid.X < ship.X) { asteroid_angle = (180 + asteroid_angle_deg); }
        else if (asteroid.Y > ship.Y && asteroid.X < ship.X) { asteroid_angle = (180 - asteroid_angle_deg); }
        // IF turns till asteroid hits are less than 35
        if (turns_till_hit < 50)
        {
            float angle_between = 0;
            // Calculate angle between if asteroid is in certain positions
            if (asteroid.Y > ship.Y && asteroid.X > ship.X) { angle_between = ship_angle - asteroid_angle; }
            else if (asteroid.Y < ship.Y && asteroid.X > ship.X) { angle_between = (360 - Math.Abs(ship_angle - asteroid_angle)); }
            else if (asteroid.Y < ship.Y && asteroid.X < ship.X) { angle_between = ship_angle - asteroid_angle; }
            else if (asteroid.Y > ship.Y && asteroid.X < ship.X) { angle_between = ship_angle - asteroid_angle; }
            // If angle less than 0, add 360
            if (angle_between < 0)
            {
                //angle_between %= 360;
                angle_between = Math.Abs(angle_between);
            }
            // Calculate no. of turns to face asteroid
            float turns_to_face = angle_between / 25;
            if (turns_to_face < turns_till_hit)
            {
                float ship_angle_left = ShipAngle(ship_angle, "leftKey", 1);
                float ship_angle_right = ShipAngle(ship_angle, "rightKey", 1);
                float angle_between_left = Math.Abs(ship_angle_left - asteroid_angle);
                float angle_between_right = Math.Abs(ship_angle_right - asteroid_angle);
                if (angle_between_left < angle_between_right) { leftKey = true; }
                else if (angle_between_right < angle_between_left) { rightKey = true; }
            }
            if (angle_between > 0 && angle_between < 25) { spaceKey = true; }
        }
    }
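    For reference, the 350-versus-15 confusion is usually handled by wrapping the signed difference between two headings into the range (-180, 180] before comparing them, rather than subtracting raw angles. A small hedged helper (not taken from the question's code):
    using System;

    public static class AngleMath
    {
        // Signed shortest difference between two headings in degrees, in the range (-180, 180].
        // E.g. Difference(350f, 15f) == 25f and Difference(15f, 350f) == -25f,
        // so 350 and 15 are treated as 25 degrees apart rather than 335.
        public static float Difference(float fromDegrees, float toDegrees)
        {
            float d = (toDegrees - fromDegrees) % 360f;
            if (d > 180f) d -= 360f;
            if (d <= -180f) d += 360f;
            return d;
        }
    }
    With that, the ship can turn left or right based on the sign of the difference between its heading and the asteroid's bearing, and fire when the absolute value drops below the per-turn rotation step.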

    Read the article

  • Strategy to prevent players from seeing through walls in an online FPS?

    - by geneotech
    Why do we still moan about wallhackers in multiplayer first-person shooters? Isn't it possible to perform occlusion culling for all players server-side? For example, send a player's xyz information to a client only when that player is visible in the client's frustum and not occluded by any object. Even if the collision geometry is very simplified, most of the time a cheater won't receive the tactical information. Why not do this?
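    As a sketch of the idea being proposed (with made-up types and a hypothetical SegmentBlocked helper, not any particular engine's API): before broadcasting a player's state, the server checks whether the receiver could plausibly see them, e.g. a cheap distance test plus a line-of-sight ray against simplified collision geometry.
    using Microsoft.Xna.Framework; // XNA-style Vector3, used here only for the math

    public class PlayerState
    {
        public Vector3 Position;
        public Vector3 EyePosition;
    }

    public interface IWorldGeometry
    {
        // Hypothetical helper: true if a straight segment between the points is blocked by level geometry.
        bool SegmentBlocked(Vector3 from, Vector3 to);
    }

    public class VisibilityFilter
    {
        private const float MaxRelevanceDistance = 100f;
        private readonly IWorldGeometry world;

        public VisibilityFilter(IWorldGeometry world) { this.world = world; }

        // Decide whether 'receiver' should be told where 'subject' is this tick.
        public bool ShouldSend(PlayerState receiver, PlayerState subject)
        {
            // Cheap rejection first: too far away to matter.
            if (Vector3.Distance(receiver.Position, subject.Position) > MaxRelevanceDistance)
                return false;

            // Conservative line-of-sight test against simplified occluders.
            // Erring on the side of sending avoids players popping in a frame late.
            return !world.SegmentBlocked(receiver.EyePosition, subject.Position);
        }
    }
    In practice the test has to be padded for latency and fast movement, which is part of the cost the question is asking about.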

    Read the article

  • how to properly implement alpha blending in a complex 3d scene

    - by Gajet
    I know this question might sound easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far (a sketch of one of them follows below):
    First, I thought about sorting objects by depth. This simply fails because objects are not simple shapes; they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera.
    Then I thought about sorting triangles, but this might also fail. Though I'm not sure how I would implement it, there is a rare case that might again cause a problem, in which two triangles pass through each other. Again, no one can tell which one is nearer.
    The next thing was using the depth buffer; after all, the main reason we have a depth buffer is the sorting problems I mentioned. But now we get another problem: since objects might be transparent, in a single pixel there might be more than one object visible. So for which object should I store the pixel depth? I then thought maybe I could store only the frontmost object's depth and use that to determine how to blend the next draw calls at that pixel. But again there was a problem: think about two semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one can see the most distant plane. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I can't use sorting methods either, for the same reasons I've explained above.
    Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and display the final output. But this time I don't know how I can implement this algorithm.
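    For reference, the first method mentioned (sorting whole objects by depth) usually looks like the hedged C# sketch below; it is exactly this per-object back-to-front sort that breaks down for curved, interlocking or mutually overlapping geometry, which is what the rest of the question is about.
    using System.Collections.Generic;
    using Microsoft.Xna.Framework; // XNA-style Vector3, only used for the distance math

    public class TransparentObject
    {
        public Vector3 Position;
        public void Draw() { /* issue the draw call for this object */ }
    }

    public static class TransparencyPass
    {
        // Draw opaque geometry first (depth write on), then call this with depth writes off.
        public static void DrawBackToFront(List<TransparentObject> objects, Vector3 cameraPosition)
        {
            // Farthest first, so nearer transparent objects blend over the ones behind them.
            objects.Sort((a, b) =>
                Vector3.DistanceSquared(cameraPosition, b.Position)
                    .CompareTo(Vector3.DistanceSquared(cameraPosition, a.Position)));

            foreach (TransparentObject obj in objects)
                obj.Draw();
        }
    }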

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: Time based movement Vs Frame rate based movement?, and Fixed time step vs Variable time step, but I think I'm lacking a basic understanding of frame independent movement because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame independent lesson. http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say but I think it's this (please correct me if I'm wrong): In order to have frame independent movement, we need to find out how far an object (ex. sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel" But...I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little bit more detailed explanation as to how frame independent movement works. Thank you.
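    For reference: the quantity to multiply by is the time the last frame actually took, expressed in seconds. The 1/1000 in the tutorial only converts a millisecond timer into seconds; the 1/200 is simply that per-frame time for a program that happens to run at 200 fps. A tiny hedged C# sketch of the same idea (the tutorial itself is C++/SDL):
    public class Dot
    {
        public float X;
        private const float SpeedPixelsPerSecond = 200f;

        // elapsedSeconds = time since the previous frame,
        // e.g. 0.005 s at 200 fps or about 0.016 s at 60 fps.
        public void Update(float elapsedSeconds)
        {
            // 200 px/s * 0.005 s = 1 px per frame at 200 fps,
            // 200 px/s * 0.016 s = about 3.3 px per frame at 60 fps --
            // either way the dot covers 200 pixels per second of real time.
            X += SpeedPixelsPerSecond * elapsedSeconds;
        }
    }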

    Read the article

  • Frame Buffer Objects vs calling TexCoord2f?

    - by sensae
    I'm learning the basics of OpenGL with lwjgl currently, and following a guide I've got textured quads that can move around a scene. I've been reading about Frame Buffer Objects, and I'm not really clear on their purpose and their benefit. My understanding is that I'll create a FBO with the texture I'd like, load the FBO, draw a quad, then unload the FBO. What would the technique I'm currently doing for texture management be called, and how does it differ from using FBOs? What are the benefits to using FBOs? How does it fit into the grand rendering scheme of things?

    Read the article

  • 3D architecture app for Android or iPhone

    - by Manixate
    I want to make an app for 3D modeling on iPhone/Android. I cannot get the basic idea of how to get started. I have various options such as learning OpenGL ES, UDK or Unity3d but I want to create models(e.g architecture etc) in my app and then render them when user is finished modeling. I do not know if I am able to design models and then render them in the same app with various effects on the iPhone/Android using UDK or Unity3d. (Note: If you find this question unclear please ask, I may have skipped some vital information).

    Read the article

  • Any way to edit Warcraft MDX or MDL Animated models?

    - by Aralox
    I have been searching for a while for a way to get an animated mdl or mdx model into any 3D animating software (such as Blender), but so far have not had any success. I found a few methods of getting textured static mdx or mdl models into Blender/Milkshape/Hexagon, but no one seems to have written an importer that deals with the mdl/mdx model's keyframe animation. On that note, if anyone knows of a way of importing a keyframe-animated 3DS model into Blender, a lot of people and I would appreciate it if you could let us know. Thanks for any help! :) PS: For anyone curious about static MDL or MDX - Blender, see here: http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Import-Export/WarCraft_MDL

    Read the article

  • Creating Rectangle-based buttons with OnClick events

    - by Djentleman
    As the title implies, I want a Button class with an OnClick event handler. It should fire off connected events when it is clicked. This is as far as I've made it:
    public class Button
    {
        public event EventHandler OnClick;
        public Rectangle Rec { get; set; }
        public string Text { get; set; }

        public Button(Rectangle rec, string text)
        {
            this.Rec = rec;
            this.Text = text;
        }
    }
    I have no clue what I'm doing with regards to events. I know how to use them but creating them myself is another matter entirely. I've also made buttons without using events that work on a case-by-case basis. So basically, I want to be able to attach methods to the OnClick EventHandler that will fire when the Button is clicked (i.e., the mouse intersects Rec and the left mouse button is clicked).
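    A hedged sketch of the missing half (everything beyond Button, OnClick, Rec and Text is made up): the class keeps the event, and an Update method raises it when the left button goes down while the cursor is inside Rec.
    using System;
    using Microsoft.Xna.Framework;        // Rectangle
    using Microsoft.Xna.Framework.Input;  // MouseState, ButtonState, Mouse

    public class Button
    {
        public event EventHandler OnClick;

        public Rectangle Rec { get; set; }
        public string Text { get; set; }

        private ButtonState previousLeftButton = ButtonState.Released;

        public Button(Rectangle rec, string text)
        {
            Rec = rec;
            Text = text;
        }

        // Call once per frame, e.g. from Game.Update, passing Mouse.GetState().
        public void Update(MouseState mouse)
        {
            bool pressedThisFrame = mouse.LeftButton == ButtonState.Pressed
                                    && previousLeftButton == ButtonState.Released;

            if (pressedThisFrame && Rec.Contains(mouse.X, mouse.Y))
            {
                EventHandler handler = OnClick;     // copy to avoid a race with unsubscription
                if (handler != null)
                    handler(this, EventArgs.Empty); // fire all attached handlers
            }

            previousLeftButton = mouse.LeftButton;
        }
    }
    Usage would then look like button.OnClick += (sender, e) => StartNewGame(); with StartNewGame being whatever method you want to attach.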

    Read the article

  • Rendering output to an arbitrary quadrilateral

    - by Trainee4Life
    I want to render output on a device to an arbitrary quadrilateral, i.e. project a texture onto a quad. What are the possible ways I could implement it? So far, I have investigated:
    Drawing a textured quadrilateral - the quads look odd as they are composed of triangles, and the distortion looks wrong. The issue I'm facing has been discussed here and here as well.
    Setting a transformation on the device - need help in getting this implemented.
    Pixel shaders - not able to implement the desired effect.
    Any help would be much appreciated.

    Read the article

  • Javascript A* path finding

    - by Veyha
    I am trying to learn A* path finding. I am using this library - https://github.com/qiao/PathFinding.js But there is one thing I don't understand how to do. To find a path from player.x/player.y (player.x and player.y are both 0) to 10/10 I use this code var path = finder.findPath(player.x, player.y, 10, 10, grid); This gives an array of where I need to move, but how do I apply this array to my player.x and player.y? The path structure looks like this path = [[0, 0], [1, 0], [1, 1], ..., [10, 10]]
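    For reference, the returned array is just a list of grid waypoints: each update you move the player a little way towards the next waypoint and advance to the following one when you reach it. A hedged sketch of that loop, written in C# here for consistency with the other examples (the structure is the same in JavaScript, and the names are made up):
    using System;
    using System.Collections.Generic;

    public class PathFollower
    {
        public float X;                       // player position, in grid units
        public float Y;
        public float SpeedCellsPerSecond = 4f;

        private readonly Queue<int[]> waypoints = new Queue<int[]>();

        // path is the finder output, e.g. [[0,0], [1,0], [1,1], ..., [10,10]].
        public void SetPath(IEnumerable<int[]> path)
        {
            waypoints.Clear();
            foreach (int[] node in path)
                waypoints.Enqueue(node);
        }

        public void Update(float elapsedSeconds)
        {
            if (waypoints.Count == 0)
                return;

            int[] target = waypoints.Peek();
            float dx = target[0] - X;
            float dy = target[1] - Y;
            float distance = (float)Math.Sqrt(dx * dx + dy * dy);
            float step = SpeedCellsPerSecond * elapsedSeconds;

            if (distance <= step)
            {
                // Close enough: snap to the waypoint and aim for the next one.
                X = target[0];
                Y = target[1];
                waypoints.Dequeue();
            }
            else
            {
                // Move a frame's worth of distance towards the waypoint.
                X += dx / distance * step;
                Y += dy / distance * step;
            }
        }
    }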

    Read the article

  • Repairing back-facing triangles without user input

    - by LTR
    My 3D application works with user-imported 3D models. Frequently, those models have a few faces pointing in the wrong direction (for example, a 3D roof where a few triangles face the inside of the building). I want to repair those automatically. We can make several assumptions about these 3D models: they are completely closed without holes, and the camera is always on the outside. My idea: shoot 500 rays from every triangle outwards in all directions. From the back side of the triangle, all rays will hit another part of the model; from the front side, at least one ray will hit nothing. Is there a better algorithm? Are there any papers about something like this?
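    To make the proposed test concrete, here is a hedged sketch of that exact idea (the IMeshQueries.RayHitsMesh helper is hypothetical; any ray-versus-mesh query would do): sample directions on the hemisphere around the triangle's normal, and if no ray escapes the closed mesh, the triangle is probably flipped.
    using System;
    using Microsoft.Xna.Framework; // XNA-style Vector3; any vector type would do

    public interface IMeshQueries
    {
        // Hypothetical helper: true if a ray from 'origin' in 'direction' hits the mesh anywhere.
        bool RayHitsMesh(Vector3 origin, Vector3 direction);
    }

    public static class FaceOrientationCheck
    {
        // Returns true when the triangle looks flipped (i.e. it cannot "see" the outside).
        public static bool LooksFlipped(IMeshQueries mesh, Vector3 centroid, Vector3 normal, int samples = 500)
        {
            var rng = new Random(1234);
            Vector3 n = Vector3.Normalize(normal);
            Vector3 origin = centroid + n * 0.001f;   // nudge off the surface to avoid self-hits

            for (int i = 0; i < samples; i++)
            {
                // Random direction, folded onto the hemisphere around the normal.
                Vector3 d = Vector3.Normalize(new Vector3(
                    (float)(rng.NextDouble() * 2 - 1),
                    (float)(rng.NextDouble() * 2 - 1),
                    (float)(rng.NextDouble() * 2 - 1)));
                if (Vector3.Dot(d, n) < 0) d = -d;

                if (!mesh.RayHitsMesh(origin, d))
                    return false;   // a ray escaped: the face can see the outside, so it is fine
            }
            return true;            // every sampled ray was blocked: probably facing inwards
        }
    }
    Note this is only the heuristic described in the question; deep concave pockets can still block every sample from a correctly oriented face.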

    Read the article

  • Exporting UV coords from Blender

    - by Soapy
    So I have searched on Google and various other websites but have not found an answer; the only ones I did find did not work. My question is: how do I get UV coords out of Blender (2.63)? Currently I'm writing my own custom file exporter, and so far I have managed to export vertices and their normals. Is there a way to export the UV coords? N.B. I'm currently trying to figure it out using a simple cube that is unwrapped and has a texture applied to it.

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. Just a quick recap:
    Standard Behavior Tree
    Each execution tick the tree is traversed from the root in depth-first order. The execution order is implicitly expressed by the tree structure, so in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.
    Event Driven BT
    During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update. The first traversal implicitly produces a depth-first ordered queue in the scheduler. Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler. Without parallel nodes in the tree there will be at most 1 task running in the scheduler, and the tasks in the queue (excluding dynamic priority implementations) will always be ordered in depth-first order (is this right?).
    Now, some requirements I think need to be guaranteed by a correct implementation (I'm not sure though) are: the result of the traversal should be independent of which implementation strategy is used, and the traversal result must be deterministic. I'm struggling to guarantee both in the case of parallel nodes. Here's an example:
    Parallel_1
    -->Sequence_1
    ---->leaf_A
    ---->leaf_B
    -->leaf_C
    Considering a FIFO policy in the scheduler, before the leaf_A node terminates the tasks in the scheduler are: P1(suspended), S1(suspended), leaf_A(running), leaf_C(running). When leaf_A terminates, leaf_B will be scheduled (at the end of the queue), so the queue becomes: P1(suspended), S1(suspended), leaf_C(running), leaf_B(running). In this case leaf_B will be executed after leaf_C at every update, whereas with a non event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C.
    So I have a couple of questions: have I understood correctly how event driven BTs work? How can I guarantee that the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?

    Read the article

  • Well-tested libraries for player ratings?

    - by Lucky
    It's common in games to implement some sort of numerical ranking system -- the ELO system is usually used in chess. I could implement this system naively using Wikipedia's descriptions, but I suspect that this would open up a whole box of problems that have already been solved: rating inflation, etc -- for instance, the ELO system has a K constant that's 'fudged' according to rating, duration, pairings, statistics, ... What are some libraries (I'm looking at Python, but anything is okay) that implements rating systems? It also doesn't have to be ELO.
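    For context, the naive version the question wants to avoid hand-rolling is only a few lines (a hedged C# sketch using the standard Elo formulas is below); the value of a library is everything around it, such as provisional ratings, K-factor schedules and inflation control.
    using System;

    public static class Elo
    {
        // Expected score of player A against player B (0..1), per the standard Elo formula.
        public static double ExpectedScore(double ratingA, double ratingB)
        {
            return 1.0 / (1.0 + Math.Pow(10.0, (ratingB - ratingA) / 400.0));
        }

        // score: 1 for a win, 0.5 for a draw, 0 for a loss (from A's point of view).
        // k: the K-factor; the "fudging" the question mentions is largely about how k is chosen.
        public static double UpdatedRating(double ratingA, double ratingB, double score, double k = 32.0)
        {
            return ratingA + k * (score - ExpectedScore(ratingA, ratingB));
        }
    }

    // Example: a 1500-rated player beating a 1700-rated player with K = 32
    // gains about 24 points: Elo.UpdatedRating(1500, 1700, 1.0) is roughly 1524.3.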

    Read the article

  • How Do I Do Alpha Transparency Properly In XNA 4.0?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further I'll go ahead and paste my example code.
    public void Draw(SpriteBatch batch, Camera camera, float alpha)
    {
        int tileMapWidth = Width;
        int tileMapHeight = Height;
        batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, SamplerState.PointWrap,
                    DepthStencilState.Default, RasterizerState.CullNone, null, camera.TransformMatrix);
        for (int x = 0; x < tileMapWidth; x++)
        {
            for (int y = 0; y < tileMapHeight; y++)
            {
                int tileIndex = _map[y, x];
                if (tileIndex != -1)
                {
                    Texture2D texture = _tileTextures[tileIndex];
                    batch.Draw(
                        texture,
                        new Rectangle(
                            (x * Engine.TileWidth), (y * Engine.TileHeight),
                            Engine.TileWidth, Engine.TileHeight),
                        new Color(new Vector4(1f, 1f, 1f, alpha)));
                }
            }
        }
        batch.End();
    }
    As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0 but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not a desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA
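    Not a definitive diagnosis, but for reference: with XNA 4.0's default (premultiplied-alpha) content pipeline, the usual way to fade a sprite is to scale the whole colour rather than only the alpha channel, i.e. Color.White * alpha together with BlendState.AlphaBlend. A minimal hedged sketch, assuming the textures were imported with the default content processor settings:
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public static class FadedTileRenderer
    {
        // Draws one tile faded by 'alpha' (0 = invisible, 1 = fully opaque).
        public static void DrawFadedTile(SpriteBatch batch, Texture2D texture,
                                         Rectangle destination, float alpha)
        {
            batch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
            // Color.White * alpha scales R, G, B and A together, which is what
            // premultiplied alpha blending expects; at alpha == 0 the tile draws nothing visible.
            batch.Draw(texture, destination, Color.White * alpha);
            batch.End();
        }
    }
    If the textures are instead processed without premultiplication, the equivalent is BlendState.NonPremultiplied with new Color(1f, 1f, 1f, alpha).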

    Read the article

< Previous Page | 296 297 298 299 300 301 302 303 304 305 306 307  | Next Page >