Search Results

Search found 26124 results on 1045 pages for 'unreal development kit'.


  • Xna GS 4 Animation Sample bone transforms not copying correctly

    - by annonymously
    I have a person model that is animated and a suitcase model that is not. The person can pick up the suitcase, and it should move to the location of the hand bone of the person model. Unfortunately, the suitcase doesn't follow the animation correctly. It moves with the hand's animation, but its position is under the ground and way too far to the right. I haven't scaled any of the models myself. Thank you. The source code (forgive the rough prototype code):

        Matrix[] tran = new Matrix[man.model.Bones.Count]; // the absolute transforms from the animation player
        man.model.CopyAbsoluteBoneTransformsTo(tran);

        // Placeholders for the Matrix.Decompose
        Vector3 suitcasePos, suitcaseScale, tempSuitcasePos = new Vector3();
        Quaternion suitcaseRot = new Quaternion();

        // The transformation of the right hand bone is decomposed
        tran[man.model.Bones["HPF_RightHand"].Index].Decompose(out suitcaseScale, out suitcaseRot, out tempSuitcasePos);

        suitcasePos = new Vector3();
        suitcasePos.X = tempSuitcasePos.Z;  // the axes are inverted for some reason
        suitcasePos.Y = -tempSuitcasePos.Y;
        suitcasePos.Z = -tempSuitcasePos.X;

        // The actual suitcase properties
        suitcase.Position = man.Position + suitcasePos;
        suitcase.Rotation = man.Rotation + new Vector3(suitcaseRot.X, suitcaseRot.Y, suitcaseRot.Z);

    I am also copying the bone transforms from the animation player in the Person class, like so:

        // The transformations from the AnimationPlayer
        Matrix[] skinTrans = new Matrix[model.Bones.Count];
        skinTrans = player.GetBoneTransforms();

        // Copy each transformation to its corresponding bone
        for (int i = 0; i < skinTrans.Length; i++)
        {
            model.Bones[i].Transform = skinTrans[i];
        }
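
    A hedged sketch of the approach that usually avoids axis flips like these: compose the full world transform instead of decomposing the bone matrix and rearranging components by hand. man.World (the character's world matrix) and suitcase.World are assumed names, not from the question:

        // Bone space -> model space -> world space in one multiplication chain.
        Matrix[] tran = new Matrix[man.model.Bones.Count];
        man.model.CopyAbsoluteBoneTransformsTo(tran);
        Matrix handWorld = tran[man.model.Bones["HPF_RightHand"].Index] * man.World;

        // Draw the suitcase with this matrix directly, or decompose it *after*
        // composing, so position and rotation stay consistent with the animation.
        suitcase.World = handWorld;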

  • Using Appendbuffers in unity for terrain generation

    - by Wardy
    Like many others, I figured I would try to make the most of the monster processing power of the GPU, but I'm having trouble getting the basics in place.

    CPU code:

        using UnityEngine;
        using System.Collections;

        public class Test : MonoBehaviour
        {
            public ComputeShader Generator;
            public MeshTopology Topology;

            void OnEnable()
            {
                var computedMeshPoints = ComputeMesh();
                CreateMeshFrom(computedMeshPoints);
            }

            private Vector3[] ComputeMesh()
            {
                var size = (32*32) * 4; // 4 points added for each x,z pos
                var buffer = new ComputeBuffer(size, 12, ComputeBufferType.Append);
                Generator.SetBuffer(0, "vertexBuffer", buffer);
                Generator.Dispatch(0, 1, 1, 1);
                var results = new Vector3[size];
                buffer.GetData(results);
                buffer.Dispose();
                return results;
            }

            private void CreateMeshFrom(Vector3[] generatedPoints)
            {
                var filter = GetComponent<MeshFilter>();
                var renderer = GetComponent<MeshRenderer>();
                if (generatedPoints.Length > 0)
                {
                    var mesh = new Mesh { vertices = generatedPoints };
                    var colors = new Color[generatedPoints.Length];
                    var indices = new int[generatedPoints.Length];
                    // TODO: build this differently based on the topology of the mesh being generated
                    for (int i = 0; i < indices.Length; i++)
                    {
                        indices[i] = i;
                        colors[i] = Color.blue;
                    }
                    mesh.SetIndices(indices, Topology, 0);
                    mesh.colors = colors;
                    mesh.RecalculateNormals();
                    mesh.Optimize();
                    mesh.RecalculateBounds();
                    filter.sharedMesh = mesh;
                }
                else
                {
                    filter.sharedMesh = null;
                }
            }
        }

    GPU code:

        #pragma kernel Generate

        AppendStructuredBuffer<float3> vertexBuffer : register(u0);

        void genVertsAt(uint2 xzPos)
        {
            // TODO: put some height generation code here.
            // Could even run marching cubes / dual contouring code.
            float3 corner1 = float3( xzPos[0],     0, xzPos[1]     );
            float3 corner2 = float3( xzPos[0] + 1, 0, xzPos[1]     );
            float3 corner3 = float3( xzPos[0],     0, xzPos[1] + 1 );
            float3 corner4 = float3( xzPos[0] + 1, 0, xzPos[1] + 1 );
            vertexBuffer.Append(corner1);
            vertexBuffer.Append(corner2);
            vertexBuffer.Append(corner3);
            vertexBuffer.Append(corner4);
        }

        [numthreads(32, 1, 32)]
        void Generate (uint3 threadId : SV_GroupThreadID, uint3 groupId : SV_GroupID)
        {
            uint2 currentXZ = unint2( groupId.x * 32 + threadId.x, groupId.z * 32 + threadId.z );
            genVertsAt(currentXZ);
        }

    Can anyone explain why, when I call buffer.GetData(results) on the CPU after the compute dispatch call, my buffer is full of Vector3(0,0,0)? I'm not expecting any y values yet, but I would expect a bunch of thread indexes in the x and z values of the Vector3 array. I'm not getting any errors in any of this code, which suggests it's correct syntax-wise, but maybe the issue is a logical bug. Also: yes, I know I'm generating 4,096 Vector3s and then basically round-tripping them. However, the purpose of this code is purely to learn how round-tripping works between the CPU and GPU in Unity.
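
    One hedged debugging step worth showing here: an append buffer keeps an internal element counter, so you can check how many elements the kernel actually appended before trusting GetData. This uses standard Unity API (ComputeBuffer.SetCounterValue and ComputeBuffer.CopyCount); the buffer names match the question's code:

        buffer.SetCounterValue(0);      // reset the append counter before dispatch
        Generator.Dispatch(0, 1, 1, 1);

        // Copy the hidden counter into a tiny raw buffer and read it back.
        var countBuffer = new ComputeBuffer(1, sizeof(int), ComputeBufferType.Raw);
        ComputeBuffer.CopyCount(buffer, countBuffer, 0);
        int[] count = new int[1];
        countBuffer.GetData(count);
        Debug.Log("Appended elements: " + count[0]); // 0 means the kernel never appended anything
        countBuffer.Release();

    If the count is zero, the problem is on the GPU side (for example, the kernel failing to compile, which Unity reports in the console rather than at runtime); if it is 4096, the data is there and the problem is on the readback side.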

  • Using Behavior Trees and Events together

    - by weichsem
    I am beginning to work with behavior trees and am unsure how events should be handled within the tree. Let's say we have a space game where the player is dogfighting with a handful of other ships, some friendly, some not. The player destroys a ship, and the rest of the hostile ships should then start to retreat. How should the shipWasDestroyed event affect the other ships' behavior trees so that they start running the retreat behavior? One way I could think of doing this is to have all the conditions I care about be high-level nodes that effectively state-change the ship. This would mean I'd have to check all of these state-change conditions on every frame the behavior tree runs, even if they are very rare occurrences. I'd prefer not to do this, for performance and complexity reasons. From looking at the Halo papers on behavior trees, it seems that they handled this by dynamically placing nodes into the tree when the event occurred. It seems like calculating where the new node should go could be problematic depending on the current state of the running behavior. How is this normally handled?
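
    A minimal sketch of one common middle ground, in C#: the event writes a flag to a blackboard once, and a cheap high-priority branch of the tree polls that flag each tick. All class and event names here are illustrative, not from any particular engine:

        public class Ship
        {
            public bool IsHostile;
        }

        public class Blackboard
        {
            public bool RetreatOrdered; // written once by the event, read cheaply every tick
        }

        public class ShipAI
        {
            private readonly Blackboard blackboard = new Blackboard();

            // Subscribed to the game's shipWasDestroyed event; runs only when it
            // fires, not on every frame.
            public void OnShipDestroyed(Ship destroyed)
            {
                if (destroyed.IsHostile)
                    blackboard.RetreatOrdered = true;
            }

            public void TickTree()
            {
                // The top-level selector checks the flag first. The per-frame cost is
                // a single bool read, so rare events stay cheap without dynamically
                // restructuring the tree.
                if (blackboard.RetreatOrdered) { RunRetreatBehavior(); return; }
                RunNormalBehavior();
            }

            private void RunRetreatBehavior() { /* retreat subtree */ }
            private void RunNormalBehavior() { /* ordinary dogfighting subtree */ }
        }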

  • Convex Hull for Concave Objects

    - by Lighthink
    I want to implement GJK and I want it to handle concave shapes too (almost all my shapes are concave). I've thought of decomposing the concave shape into convex shapes and then building a hierarchical tree out of the convex shapes, but I do not know how to do it. Nothing I could find on the Internet satisfied my needs, so maybe someone can point me in the right direction or give a full explanation.
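
    A rough sketch of the structure being described, assuming the concave mesh has already been decomposed offline into convex pieces (the convex decomposition itself is a separate step). The types are illustrative, using XNA-style math types for concreteness:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public class ConvexPiece
        {
            public Vector3[] Vertices;   // convex hull vertices, directly usable by GJK
            public BoundingBox Bounds;   // AABB used by the broad phase
        }

        public class BvhNode
        {
            public BoundingBox Bounds;   // union of the children's bounds
            public BvhNode Left, Right;  // null at leaves
            public ConvexPiece Piece;    // non-null only at leaves

            // Collect the convex pieces whose AABB overlaps the query region;
            // only those pairs need the full GJK test.
            public void Query(BoundingBox region, List<ConvexPiece> hits)
            {
                if (!Bounds.Intersects(region)) return;
                if (Piece != null) { hits.Add(Piece); return; }
                Left.Query(region, hits);
                Right.Query(region, hits);
            }
        }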

  • What are good JS libraries for game dev?

    - by acidzombie24
    If I decide to write a simple game, both text and graphical (2D), what libraries would I use? (Assume we are using an HTML5-compatible browser.) The main things I can think of:

    - Rendering text on screen
    - Animating sprites (using images/CSS)
    - Input (capturing the arrow keys and getting relative mouse positions)
    - Perhaps preloading resources, or dynamically loading resources and choosing the order
    - Sound (though I am unsure how important this will be to me at first), perhaps with mixing and chaining sounds, or looping forever until stopped
    - Networking (low priority), to connect one user to another or to continuously GET data without multiple requests (I know this exists, but I don't know how easy it is to set up or use; it isn't important to me, it's for the question)

  • Is there a way to prevent users from adjusting their gamma correction to 'cheat' their way out of a 'dark' area?

    - by Athix
    In almost every game I've come across that includes a dark area designed to change the way a user interacts with the environment, there are always some players who turn up their monitor's gamma correction to negate the desired effect. Is there a way to prevent users from adjusting their gamma correction to 'cheat' their way out of a challenge (the darkness)? I'd imagine that if you could reliably retrieve the current gamma correction of the user's monitor, you could use that to more or less prevent the advantage it would otherwise grant, without causing normal users any inconvenience.

  • Saving an interface instance into a Bundle

    - by user22241
    I have an interface that allows me to switch between different scenes in my Android game. When the home key is pressed, I save all of my state (score, sprite positions, etc.) into a Bundle. When re-launching, I restore all my state and all is OK; however, I can't figure out how to save my 'Scene', so when I return, it always starts at the default screen, which is the 'Main Menu'. How would I go about saving my 'Scene' (into a Bundle)? Code:

        import android.view.MotionEvent;

        public interface Scene {
            void render();
            void updateLogic();
            boolean onTouchEvent(MotionEvent event);
        }

    I assume the interface is the relevant piece of code, which is why I've posted that snippet. I set my scene like so ('options' is an object of my Options class, which extends MainMenu (another custom class), which in turn implements the interface 'Scene'):

        SceneManager.getInstance().setCurrentScene(options); // current scene is the options screen

  • Java applet game design no keyboard focus

    - by Sri Harsha Chilakapati
    (This is probably the wrong place; I posted it on Stack Overflow as well.) I'm making an applet game, and it is rendering, the game loop is running, and the animations are updating, but the keyboard input is not working. Here's an SSCCE:

        public class Game extends JApplet implements Runnable {

            public void init() {
                // Initialize the game when called by the browser
                setFocusable(true);
                requestFocus();
                requestFocusInWindow(); // always returning false
                GInput.install(this);   // install the input manager for this class
                new Thread(this).start();
            }

            public void run() {
                startGameLoop();
            }
        }

    And here's the GInput class:

        public class GInput implements KeyListener {

            public static void install(Component c) {
                new GInput(c);
            }

            public GInput(Component c) {
                c.addKeyListener(this);
            }

            public void keyPressed(KeyEvent e) {
                System.out.println("A key has been pressed");
            }

            ......
        }

    When run as an applet it doesn't work, but when I add the Game class to a frame, it works properly. Thanks

  • Data structure for bubble shooter game

    - by SundayMonday
    I'm starting to make a bubble shooter game for a mobile OS. Assume this is just the basic game: three or more same-color bubbles that touch pop, and all bubbles that are separated from their group fall/pop. What data structures are common for storing the bubbles? I've considered using an undirected, connected graph where each node is a bubble. This seems like it could help answer the question "which bubbles (if any) should fall now?" after some arbitrary bubbles are popped and the corresponding nodes are removed from the graph. I think the answer is that all bubbles that were just disconnected from the graph should fall. However, the graph approach might be overkill, so I'm not sure. Another consideration for the data structure is collision detection. Perhaps being able to grab a list of neighboring bubbles in constant time for a particular "bubble slot" is useful. So the collision detection would be something like "the moving bubble is closest to slot ij, the neighbors of slot ij are bubbles a, b, c, and the moving bubble is sufficiently close to bubble b, hence the moving bubble should come to rest in slot ij". A game like this could probably be made with a relatively crude grid structure as the primary data structure. However, it seems like answering "which bubbles (if any) should fall now?" would be trickier with this data structure.
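
    For what it's worth, the grid can answer "which bubbles fall?" with a flood fill from the top row: anything the fill never reaches is disconnected and falls. A hedged C# sketch, where grid, rows, and cols are the hex-offset bubble grid, and NeighborOffsets is an assumed helper returning the six (row, column) offsets for odd/even rows:

        var visited = new bool[rows, cols];
        var queue = new Queue<(int r, int c)>();

        // Seed the fill with every bubble still attached to the ceiling.
        for (int c = 0; c < cols; c++)
            if (grid[0, c] != null) { visited[0, c] = true; queue.Enqueue((0, c)); }

        while (queue.Count > 0)
        {
            var (r, c) = queue.Dequeue();
            foreach (var (dr, dc) in NeighborOffsets(r)) // hex neighbors differ for odd/even rows
            {
                int nr = r + dr, nc = c + dc;
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                if (grid[nr, nc] == null || visited[nr, nc]) continue;
                visited[nr, nc] = true;
                queue.Enqueue((nr, nc));
            }
        }

        // Every occupied cell the fill never visited is disconnected and should fall.

    The same NeighborOffsets table doubles as the constant-time neighbor lookup the collision test wants, since it enumerates exactly the cells adjacent to slot ij.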

  • Unity3D : Retry Menu (Scene Management)

    - by user3666251
    I'm making a simple 2D game for Android using the Unity3D game engine. I created all the levels and everything, but I'm stuck at making the game over/retry menu. So far I've been using new scenes as a game over menu. I used this simple script:

        #pragma strict

        var level = Application.LoadLevel;

        function OnCollisionEnter(Collision : Collision)
        {
            if (Collision.collider.tag == "Player")
            {
                Application.LoadLevel("GameOver");
            }
        }

    And this as a 'menu':

        #pragma strict

        var myGUISkin : GUISkin;
        var btnTexture : Texture;

        function OnGUI()
        {
            GUI.skin = myGUISkin;
            if (GUI.Button(Rect(Screen.width/2-60, Screen.height/2+30, 100, 40), "Retry"))
                Application.LoadLevel("Easy1");
            if (GUI.Button(Rect(Screen.width/2-90, Screen.height/2+100, 170, 40), "Main Menu"))
                Application.LoadLevel("MainMenu");
        }

    The problem is that this way I would have to create over 200 game over scenes and obstacles (the objects that kill the player), and recreate the same script over 200 times, once for each level. Is there any other way to make this faster and less painful?
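
    A hedged C# sketch of the usual fix: keep one GameOver scene and remember which level to retry, instead of creating a scene per level. It uses the same old Application.LoadLevel API as the question; LevelMemory and KillZone are invented names:

        using UnityEngine;

        public static class LevelMemory
        {
            public static string LastLevel; // survives scene loads because it's static
        }

        public class KillZone : MonoBehaviour
        {
            void OnCollisionEnter(Collision collision)
            {
                if (collision.collider.tag == "Player")
                {
                    LevelMemory.LastLevel = Application.loadedLevelName; // remember the current level
                    Application.LoadLevel("GameOver");
                }
            }
        }

    In the single GameOver scene, the Retry button then becomes Application.LoadLevel(LevelMemory.LastLevel), so one scene and one script serve all 200 levels.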

  • What are the reasons for MMOs to have level caps [on hold]

    - by SamStephens
    In many MMOs, a player's character progression is artificially capped, e.g. at level 60 or 90 or 100 or whatever. Why do MMOs have these level caps in the first place? Why not just allow characters to continue to arbitrary levels, with a mathematically designed leveling system that keeps the leveling experience interesting and endless? Answers to this question may help us see the reason behind the feature and decide if and how it should be implemented in our MMOs.

  • Box2d world width and height ratio with screen width and height

    - by Sujith
    I have a view, for example GameView, which extends SurfaceView, and I have integrated Box2D physics into GameView. That gives me two sets of dimensions: the GameView width and height, and the Box2D physics world width and height. I need to get a position in the Box2D world from GameView co-ordinates. For example:

        Total width of screen  = 240
        Total height of screen = 320
        Screen point to be mapped onto Box2D co-ordinates: (x, y) = (127, 139)

    For this I need the max width and height of the Box2D physics world. Is there any way to get the max width and height of a Box2D world? Or can I limit the width and height of the Box2D world to the screen resolution?
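
    Box2D itself has no maximum world width or height; the world is effectively unbounded, and you choose the mapping between pixels and physics meters. A minimal C# sketch of the usual fixed-scale conversion, where PixelsPerMeter is a tuning constant you pick (32 is just an example):

        public static class Units
        {
            public const float PixelsPerMeter = 32f;

            // Screen -> world. Screen y usually grows downward while Box2D y grows
            // upward, so the y conversion flips around the screen height.
            public static float WorldX(float screenX) => screenX / PixelsPerMeter;
            public static float WorldY(float screenY, float screenHeight) =>
                (screenHeight - screenY) / PixelsPerMeter;

            // World -> screen.
            public static float ScreenX(float worldX) => worldX * PixelsPerMeter;
            public static float ScreenY(float worldY, float screenHeight) =>
                screenHeight - worldY * PixelsPerMeter;
        }

    With this scale the question's 240x320 screen spans 7.5 x 10 meters, and the screen point (127, 139) maps to roughly (3.97, 5.66) in world co-ordinates.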

  • Resources on expected behaviour when manipulating 3D objects with the mouse

    - by sebf
    Hello. In my animation editor, I have a 3D gizmo that sits on the origin of a bone; the user drags the mesh around to rotate the bone. I've found that translating the 2D movements of the mouse into sensible 3D transforms is not nearly as simple as I'd hoped. For example, what is intuitively 'up' or 'down'? How should the magnitude of rotations change with respect to dX/dY? How should this be implemented? What happens when the gizmo changes position or orientation with respect to the camera? Etc. So far, with trial and error, I've written something (very) simple that works 70% of the time. I could probably continue to hack at it until I made something that works 99% of the time, but there must be someone who needed the same thing and spent the time coming up with a much more elegant solution. Does anyone know of one?
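
    The classic answer to exactly these questions is the arcball (virtual trackball): project the mouse onto a sphere around the gizmo and rotate by the arc between successive points, which settles 'up/down' and the dX/dY magnitude at once. A hedged sketch with XNA-style types, where center and radius describe the gizmo's screen-space bounds:

        using System;
        using Microsoft.Xna.Framework;

        public static class Arcball
        {
            static Vector3 MapToSphere(Vector2 mouse, Vector2 center, float radius)
            {
                // Normalize into [-1, 1] around the gizmo center (y flipped so up is +y).
                float x = (mouse.X - center.X) / radius;
                float y = (center.Y - mouse.Y) / radius;
                float d2 = x * x + y * y;
                // Inside the sphere: lift onto it. Outside: clamp to the silhouette edge.
                return d2 <= 1f
                    ? new Vector3(x, y, (float)Math.Sqrt(1f - d2))
                    : Vector3.Normalize(new Vector3(x, y, 0f));
            }

            public static Quaternion DragRotation(Vector2 from, Vector2 to, Vector2 center, float radius)
            {
                Vector3 a = MapToSphere(from, center, radius);
                Vector3 b = MapToSphere(to, center, radius);
                Vector3 axis = Vector3.Cross(a, b); // rotation axis from the two arc points
                if (axis.LengthSquared() < 1e-6f) return Quaternion.Identity;
                float angle = (float)Math.Acos(MathHelper.Clamp(Vector3.Dot(a, b), -1f, 1f));
                return Quaternion.CreateFromAxisAngle(Vector3.Normalize(axis), angle);
            }
        }

    Because the sphere is anchored to the gizmo, the behavior stays consistent when the gizmo moves or the camera changes, which covers the last of the questions above.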

  • OpenGL ES rotate texture

    - by 0xSina
    I just got started with OpenGL ES... I have a fragment shader:

        const char * sFragment = _STRINGIFY(
            varying highp vec2 coordinate;
            precision mediump float;
            uniform vec4 maskC;
            uniform float threshold;
            uniform sampler2D videoframe;
            uniform sampler2D videosprite;
            uniform vec4 mask;
            uniform vec4 maskB;
            uniform int recording;

            vec3 normalize(vec3 color, float meanr)
            {
                return color * vec3(0.75 + meanr, 1., 1. - meanr);
            }

            void main()
            {
                float d;
                float dB;
                float dC;
                float meanr;
                float meanrB;
                float meanrC;
                float minD;
                vec4 pixelColor;
                vec4 spriteColor;

                pixelColor = texture2D(videoframe, coordinate);
                spriteColor = texture2D(videosprite, coordinate);

                meanr = (pixelColor.r + mask.r)/8.;
                meanrB = (pixelColor.r + maskB.r)/8.;
                meanrC = (pixelColor.r + maskC.r)/8.;

                d = distance(normalize(pixelColor.rgb, meanr), normalize(mask.rgb, meanr));
                dB = distance(normalize(pixelColor.rgb, meanrB), normalize(maskB.rgb, meanrB));
                dC = distance(normalize(pixelColor.rgb, meanrC), normalize(maskC.rgb, meanrC));

                minD = min(d, dB);
                minD = min(minD, dC);

                gl_FragColor = spriteColor;
                if (minD > threshold) {
                    gl_FragColor = pixelColor;
                }
            }

    Now, depending on whether recording is 0 or 1, I want to rotate the uniform sampler2D videosprite 180 degrees (a reflection in the x-axis, i.e. flip vertically). How can I do that? I found the function glRotatef(), but how do I specify that I want to rotate videosprite and not videoframe? Thanks

  • Error X3650 when compiling shader in XNA

    - by Saikai
    I'm attempting to convert the XBDEV.NET Mosaic Shader for use in my XNA project and am having trouble. The compiler errors out because of the half globals. At first I tried replacing the globals and just writing the values explicitly in the code, but that garbles the output. Next I tried replacing all the halfs with float vars, but that still garbles the resulting image. I call the effect file from SpriteBatch.Begin(). Is there a way to convert this shader to the new pixel shader conventions? Are there any good tutorials on this topic? Here is the shader file for reference:

        /*****************************************************************************/
        /*
        File: tiles.fx
        Details: Modified version of the NVIDIA Composer FX Demo Program 2004.
                 Produces a tiled mosaic effect on the output.
        Requires: Vertex Shader 1.1, Pixel Shader 2.0
        Modified by: [email protected] (www.xbdev.net)
        */
        /*****************************************************************************/

        float4 ClearColor : DIFFUSE = { 0.0f, 0.0f, 0.0f, 1.0f };
        float ClearDepth = 1.0f;

        /******************************** TWEAKABLES ********************************/
        half NumTiles = 40.0;
        half Threshhold = 0.15;
        half3 EdgeColor = {0.7f, 0.7f, 0.7f};

        /*****************************************************************************/
        texture SceneMap : RENDERCOLORTARGET
        <
            float2 ViewportRatio = { 1.0f, 1.0f };
            int MIPLEVELS = 1;
            string format = "X8R8G8B8";
            string UIWidget = "None";
        >;

        sampler SceneSampler = sampler_state
        {
            texture = <SceneMap>;
            AddressU = CLAMP;
            AddressV = CLAMP;
            MIPFILTER = NONE;
            MINFILTER = LINEAR;
            MAGFILTER = LINEAR;
        };

        /***************************** DATA STRUCTS *********************************/
        struct vertexInput
        {
            half3 Position : POSITION;
            half3 TexCoord : TEXCOORD0;
        };

        /* data passed from vertex shader to pixel shader */
        struct vertexOutput
        {
            half4 HPosition : POSITION;
            half2 UV : TEXCOORD0;
        };

        /******************************* Vertex shader ******************************/
        vertexOutput VS_Quad(vertexInput IN)
        {
            vertexOutput OUT = (vertexOutput)0;
            OUT.HPosition = half4(IN.Position, 1);
            OUT.UV = IN.TexCoord.xy;
            return OUT;
        }

        /******************************** Pixel shader ******************************/
        half4 tilesPS(vertexOutput IN) : COLOR
        {
            half size = 1.0/NumTiles;
            half2 Pbase = IN.UV - fmod(IN.UV, size.xx);
            half2 PCenter = Pbase + (size/2.0).xx;
            half2 st = (IN.UV - Pbase)/size;
            half4 c1 = (half4)0;
            half4 c2 = (half4)0;
            half4 invOff = half4((1-EdgeColor), 1);
            if (st.x > st.y) { c1 = invOff; }
            half threshholdB = 1.0 - Threshhold;
            if (st.x > threshholdB) { c2 = c1; }
            if (st.y > threshholdB) { c2 = c1; }
            half4 cBottom = c2;
            c1 = (half4)0;
            c2 = (half4)0;
            if (st.x > st.y) { c1 = invOff; }
            if (st.x < Threshhold) { c2 = c1; }
            if (st.y < Threshhold) { c2 = c1; }
            half4 cTop = c2;
            half4 tileColor = tex2D(SceneSampler, PCenter);
            half4 result = tileColor + cTop - cBottom;
            return result;
        }

        /*****************************************************************************/
        technique tiles
        {
            pass p0
            {
                VertexShader = compile vs_1_1 VS_Quad();
                ZEnable = false;
                ZWriteEnable = false;
                CullMode = None;
                PixelShader = compile ps_2_0 tilesPS();
            }
        }

  • Units issue when exporting from 3DS Max to XNA

    - by miguelSantirso
    I am working on an XNA game where we have defined that 1 XNA unit equals 1 meter. I then set meters as the system unit in 3DS Max and set the units to meters in the FBX exporter. However, when I export my models, they are much bigger in the game. Am I missing something? What should I do to avoid problems with my units? Investigating the FBX file, I noticed that it has two values called UnitScaleFactor and OriginalUnitScaleFactor. They are both 100 when I export the files... and if I manually change UnitScaleFactor to 1, it works fine :S

  • Masking OpenGL texture by a pattern

    - by user1304844
    Tiled terrain. The user wants to build a structure. He presses build, and for each tile an "allow" or "disallow" tile sprite is added to the scene. FPS drops right away, since there are 600+ tiles added to the screen. Since the map equals the screen, there is no scrolling. I came to the idea of making an allow grid covering the whole map and masking the disallowed fields.

    Approach 1: Create allow and disallow grid textures. Draw a polygon on screen. Pass both textures to the fragment shader. Determine the position inside the polygon, and use the color from the allow texture if the fragment belongs to an allow field, the disallow color otherwise.

    Problem: How do I know if I'm on a field that isn't allowed, if I cannot pass the matrix representing the map (enum FieldStatus[][] (Allow / Disallow)) to the shader? Inside the shader I therefore don't know which fragments should be masked.

    Approach 2: Create an allow texture. Create an empty texture buffer the same size as the allow texture. Memset the pixels of the empty texture to the desired color for each pixel that doesn't allow building. Draw a polygon on screen. Pass both textures to the fragment shader. Use texture2's color if its alpha is greater than 0, and texture1's color otherwise.

    Problem: I'm not sure what the right way to manipulate pixels on a texture is. Do I just make a buffer of width*height*4 size and memcpy the color[] to the desired coordinates, or is there anything else to it? Would I have to call glTexImage2D after every change to the texture? Another problem with this approach is that it takes a lot more work to get a prettier effect, since I'm manipulating the color pixels instead of just masking two textures.

        varying vec2 TexCoordOut;
        uniform sampler2D Texture1;
        uniform sampler2D Texture2;

        void main(void)
        {
            vec4 allowColor = texture2D(Texture1, TexCoordOut);
            vec4 disallowColor = texture2D(Texture2, TexCoordOut);
            if (disallowColor.a > 0.0) {
                gl_FragColor = disallowColor;
            } else {
                gl_FragColor = allowColor;
            }
        }

    I'm working with OpenGL on Windows. Any other suggestions are welcome.

  • OnTriggerEnter not called

    - by Lautaro
    I am working on a fighting game with 3D models that plays like a 2D game, so the player characters have swords. The player GameObject has several body parts that are collider triggers, and the sword is a rigidbody collider. I've had some problems with collisions not being detected. I've added some Debug.Log calls and slowed down the animations, and what I can see is this: when the players are close to each other, the sword connects from a different angle; OnTriggerStay is called several times BEFORE OnTriggerEnter is called if the players are too close; and sometimes, if they are too close, OnTriggerStay is called several times but OnTriggerEnter is NEVER called. Any ideas on why this is?
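
    A hedged guess at the usual culprit: a fast-moving rigidbody (the sword) tunnels through thin triggers between physics steps, so the Enter event is missed while a deep overlap still produces Stay events. Unity's continuous collision detection reduces this; a minimal sketch using standard Rigidbody settings:

        using UnityEngine;

        public class SwordSetup : MonoBehaviour
        {
            void Start()
            {
                var body = GetComponent<Rigidbody>();
                // Sweep the collider between steps instead of teleporting it.
                body.collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic;
                // Interpolation smooths the rendered motion; it doesn't affect triggers.
                body.interpolation = RigidbodyInterpolation.Interpolate;
            }
        }

    If the sword is driven directly by animation rather than by physics, sweeping may not apply, and thickening the trigger colliders is the usual fallback.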

  • How can I improve my Animation

    - by sharethis
    My first approaches to animation in my game relied mostly on sine and cosine functions, with time as the parameter. Here is an example of a very basic jump I implemented:

        if (jumping)
        {
            height = sin(time);
            if (height < 0)
                jumping = false; // player landed
            player.position.z = height;
        }

        if (keydown(SPACE) && !jumping)
        {
            jumping = true;
            time = now(); // store the starting time
        }

    So my player jumps in a perfect sine curve. That seems quite natural, because he slows down when he reaches the top position, and speeds up again during the fall. But patching every animation together out of sines and cosines is soon stretched to its limits. So how can I improve my animation and provide a more abstract layer?
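
    The usual "more abstract layer" for motion like this is to integrate velocity and acceleration each frame instead of baking the motion into a trig function; jumps, falls, and knockbacks then all come from the same code. A minimal C# sketch, where player is the question's object, dt is the frame's elapsed seconds, and the constants are tuning assumptions:

        const float Gravity = -30f;   // units per second^2 (assumed tuning value)
        const float JumpSpeed = 12f;  // initial upward velocity (assumed tuning value)

        float height = 0f;
        float verticalSpeed = 0f;
        bool jumping = false;

        void Update(float dt, bool spacePressed)
        {
            if (spacePressed && !jumping)
            {
                jumping = true;
                verticalSpeed = JumpSpeed;
            }
            if (jumping)
            {
                verticalSpeed += Gravity * dt; // acceleration -> velocity
                height += verticalSpeed * dt;  // velocity -> position
                if (height <= 0f)              // landed
                {
                    height = 0f;
                    jumping = false;
                }
            }
            player.position.z = height;
        }

    The resulting arc is a parabola rather than a sine, which is the physically correct shape for a jump, and the same integrator extends naturally to horizontal motion.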

  • 2D Platform Game Jumping

    - by Bradley Kreuger
    I'm currently writing a game in XNA for fun, using C#. I have my sprites loaded, and when the character moves right he looks like he is running right, and when he moves left he looks like he is running left. I've been looking everywhere for a good coding example of how to create a jumping ability. I have read all the physics stuff I can stand, and it doesn't help when I can't figure out how to use, say, the space bar to jump, and can't keep players from pressing space to jump again until they land.
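
    A hedged XNA sketch of the two missing pieces: detect the space bar only on the frame it is first pressed (by remembering the previous KeyboardState), and gate jumping on a grounded flag so mid-air presses do nothing. The fields, groundY, and the tuning constants are assumptions to make the sketch self-contained:

        KeyboardState previousKeys;
        Vector2 position;
        Vector2 velocity;
        bool isOnGround;
        float groundY = 400f; // assumed floor height; replace with real collision

        protected override void Update(GameTime gameTime)
        {
            KeyboardState keys = Keyboard.GetState();
            float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

            // Edge detection: pressed this frame, not held from last frame, and grounded.
            if (keys.IsKeyDown(Keys.Space) && previousKeys.IsKeyUp(Keys.Space) && isOnGround)
            {
                velocity.Y = -400f; // up is negative in screen coordinates (tuning value)
                isOnGround = false;
            }

            velocity.Y += 900f * dt; // gravity (tuning value)
            position += velocity * dt;

            if (position.Y >= groundY) // crude ground check
            {
                position.Y = groundY;
                velocity.Y = 0f;
                isOnGround = true;
            }

            previousKeys = keys;
            base.Update(gameTime);
        }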

  • Understanding Box2d Restitution & Bouncing

    - by layzrr
    I'm currently trying to implement basketball bouncing in my game using Box2D (jBox2D, technically), but I'm a bit confused about restitution. While trying to create the ball in the testbed first, I've run into infinite bouncing, as described in this question, though obviously not using my own implementation. The Box2D manual describes restitution as follows:

        Restitution is used to make objects bounce. The restitution value is usually
        set to be between 0 and 1. Consider dropping a ball on a table. A value of
        zero means the ball won't bounce. This is called an inelastic collision. A
        value of one means the ball's velocity will be exactly reflected. This is
        called a perfectly elastic collision.

    My confusion lies in the fact that I am still getting infinite bouncing with restitution values of 0.75/0.8. The same behavior can be seen in the testbed under Collision Watching - Varying Restitution, on the 6th and 7th balls. I believe the last one has a restitution of 1, which makes sense, but I don't understand why the second-to-last ball bounces infinitely (as is happening with the working basketball I've created). I am looking to understand the restitution concept more fully, as well as looking for a solution to infinite bouncing within the Box2D framework. My instinct was to sleep objects that appear to be moving in very small increments, but this seems like a misuse of the engine. Should I just work with lower restitution values altogether?

  • Obtain rectangle indicating 2D world space camera can see

    - by Gareth
    I have a 2D tile-based game in XNA, with a moveable camera that can scroll around and zoom. I'm trying to obtain a rectangle which indicates the area, in world space, that my camera is looking at, so I can render only what this rectangle intersects (currently, everything is rendered). So, I'm drawing the world like this:

        _SpriteBatch.Begin(
            SpriteSortMode.FrontToBack,
            null,
            SamplerState.PointClamp, // don't smooth
            null, null, null,
            _Camera.GetTransformation());

    The GetTransformation() method on my Camera object does this:

        public Matrix GetTransformation()
        {
            _transform = Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) *
                         Matrix.CreateRotationZ(Rotation) *
                         Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
                         Matrix.CreateTranslation(new Vector3(_viewportWidth * 0.5f, _viewportHeight * 0.5f, 0));
            return _transform;
        }

    The camera properties in the method above should be self-explanatory. How can I get a rectangle indicating what the camera is looking at in world space?
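
    A minimal sketch of the standard answer: invert the camera matrix and push the viewport corners through it, using the question's own GetTransformation(). With rotation the corners form a rotated quad, so the method returns their axis-aligned bounds:

        public Rectangle GetVisibleArea()
        {
            Matrix inverse = Matrix.Invert(GetTransformation());

            // Screen-space corners -> world space.
            Vector2 tl = Vector2.Transform(Vector2.Zero, inverse);
            Vector2 tr = Vector2.Transform(new Vector2(_viewportWidth, 0), inverse);
            Vector2 bl = Vector2.Transform(new Vector2(0, _viewportHeight), inverse);
            Vector2 br = Vector2.Transform(new Vector2(_viewportWidth, _viewportHeight), inverse);

            // Axis-aligned bounds of the (possibly rotated) view quad.
            float minX = Math.Min(Math.Min(tl.X, tr.X), Math.Min(bl.X, br.X));
            float minY = Math.Min(Math.Min(tl.Y, tr.Y), Math.Min(bl.Y, br.Y));
            float maxX = Math.Max(Math.Max(tl.X, tr.X), Math.Max(bl.X, br.X));
            float maxY = Math.Max(Math.Max(tl.Y, tr.Y), Math.Max(bl.Y, br.Y));

            return new Rectangle((int)minX, (int)minY, (int)(maxX - minX), (int)(maxY - minY));
        }

    Anything whose world-space bounds intersect this rectangle is potentially visible; when rotation is in use, the rectangle over-approximates slightly, which is safe for culling.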

  • 3D rotation matrices deform object while rotating

    - by Kevin
    I'm writing a small 3D renderer (using an orthographic projection right now). I've run into some trouble with my 3D rotation matrices: they seem to squeeze my 3D object (a box primitive) at certain angles. Here's a live demo (only tested in Google Chrome): http://dl.dropbox.com/u/109400107/3D/index.html

    The box is viewed from the top along the Y axis and is rotating around the X and Z axes. These are my 3 rotation matrices (only rX and rZ are being used):

        var rX = new Matrix([
            [1, 0, 0],
            [0, Math.cos(radiants), -Math.sin(radiants)],
            [0, Math.sin(radiants), Math.cos(radiants)]
        ]);

        var rY = new Matrix([
            [Math.cos(radiants), 0, Math.sin(radiants)],
            [0, 1, 0],
            [-Math.sin(radiants), 0, Math.cos(radiants)]
        ]);

        var rZ = new Matrix([
            [Math.cos(radiants), -Math.sin(radiants), 0],
            [Math.sin(radiants), Math.cos(radiants), 0],
            [0, 0, 1]
        ]);

    Before projecting the vertices I multiply them by rZ and rX, like so:

        vert1.multiply(rZ);
        vert1.multiply(rX);
        vert2.multiply(rZ);
        vert2.multiply(rX);
        vert3.multiply(rZ);
        vert3.multiply(rX);

    The projection itself looks like this:

        bX = (pos.x + (vert1.x*scale));
        bY = (pos.y + (vert1.z*scale));

    where pos.x and pos.y are an offset for centering the box on the screen. I just can't seem to find a solution to this, and I'm still relatively new to working with matrices. You can view the source code of the demo page if you want to see the whole thing.

  • Drawing an outline around an arbitrary group of hexagons

    - by Perky
    Is there an algorithm for drawing an outline around an arbitrary group of hexagons? The polygon outline drawn may be concave. See the images below; the green line is what I am trying to achieve. The hexagons are stored as vertices and drawn as polygons. Edit: I've uploaded images that should explain more. I want to favour convex hulls because a convex hull conveys an area of control more quickly. Each hexagon is stored in a multidimensional array, so they all have x and y coordinates; I can easily find adjacent hexagons and the opposite vertex, i.e.

        adjacentHexagon = getAdjacentHexagon(someHexagon, NORTHWEST)

    If there isn't a hexagon immediately adjacent, it will continue to search in that direction until it finds one or hits the map edges.
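
    One common way to get such an outline is edge cancellation: every hexagon contributes its six edges, and any edge shared by two hexagons in the group cancels out, leaving exactly the boundary (concave groups and holes included). A hedged C# sketch, where Vertex(hex, i) is an assumed helper returning corner i of a hexagon, quantized so shared corners compare equal:

        var outline = new HashSet<(Point a, Point b)>();

        foreach (var hex in selectedHexagons)
        {
            for (int i = 0; i < 6; i++)
            {
                Point a = Vertex(hex, i);
                Point b = Vertex(hex, (i + 1) % 6);
                // Normalize the endpoint order so the same edge hashes identically
                // no matter which hexagon contributed it.
                var edge = (a.X < b.X || (a.X == b.X && a.Y < b.Y)) ? (a, b) : (b, a);
                if (!outline.Add(edge))
                    outline.Remove(edge); // seen twice -> interior edge, discard
            }
        }

        // 'outline' now holds the boundary segments; chain them end-to-end by
        // matching shared endpoints to draw the outline as a closed loop.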

  • How should I plan the inheritance structure for my game?

    - by Eric Thoma
    I am trying to write a platform shooter in C++ with a really good class structure for robustness. The game itself is secondary; the learning process of writing it is primary. I am implementing an inheritance tree for all of the objects in my game, but I find myself unsure about some decisions.

    One specific issue that is bugging me is this: I have an Actor that is simply defined as anything in the game world. Under Actor is Character. Both of these classes are abstract. Under Character is the Philosopher, who is the main character that the user commands. Also under Character is NPC, which uses an AI module with stock routines for friendly, enemy and (maybe) neutral alignments. So under NPC I want to have three subclasses: FriendlyNPC, EnemyNPC and NeutralNPC. These classes are not abstract, and will often be subclassed in order to make different types of NPCs, like Engineer, Scientist and the most evil Programmer.

    Still, if I want to implement a generic NPC named Kevin, it would be nice to be able to put him in without making a new class for him. I could just instantiate a FriendlyNPC and pass some values for the AI machine and for the dialogue; that would be ideal. But what if Kevin is the one benevolent Programmer in the whole world? Now we must make a class for him (but what should it be called?). Now we have a character that should inherit from Programmer (as Kevin has all the same abilities but just uses the friendly AI functions) but should also inherit from FriendlyNPC. Programmer and FriendlyNPC branched away from each other on the inheritance tree, so inheriting from both of them would have conflicts, because some of the same functions have been implemented in different ways on the two of them.

    1) Is there a better way to order these classes to avoid these conflicts? Having three subclasses (Friendly, Enemy and Neutral) of each type of NPC (Engineer, Scientist and Programmer) would amount to a huge number of classes. I would share specific implementation details, but I am writing the game slowly, piece by piece, and so I haven't implemented past Character yet.

    2) Is there a place where I can learn these programming paradigms? I am already trying to take advantage of some good design patterns, like MVC architecture and Mediator objects. The whole point of this project is to write something in good style. It is difficult to tell what should become a subclass and what should become a state (i.e. a Friendly boolean v. a Friendly class). Having many states slows down code with if statements and makes classes long and unwieldy. On the other hand, having a class for everything isn't practical.

    3) Are there good rules of thumb or resources to learn more about this?

    4) Finally, where does templating come into this? How should I coordinate templates into my class structure? I have never actually taken advantage of templating, honestly, but I hear that it increases modularity, which means good code.
