Search Results



  • How is this lighting effect done?

    - by Mike
    This is the most beautiful 2D lighting I have ever seen. Does anyone know how he went about doing it?
    http://www.youtube.com/watch?v=BIQRhOFkvQY
    http://www.youtube.com/watch?v=tnTYXPuecMs
    http://www.youtube.com/watch?v=rhC_jVM8IYU
    http://www.youtube.com/watch?v=_Aw5BdjWqqU
    Or download it here: http://grantkot.com/PollutedPlanet/publish.htm
    Edit: I am not asking how the particles are simulated; I don't care about the physics.


  • Should I always be checking every neighbor when building voxel meshes?

    - by Raven Dreamer
    I've been playing around with Unity3D, seeing if I can make a voxel-based engine out of it (a la Castle Story or Minecraft). I've dynamically built a mesh from a volume of cubes, and now I'm looking into reducing the number of vertices built into each mesh, as right now I'm "rendering" vertices and triangles for cubes that are fully hidden within the larger voxel volume. The simple solution is to check each of the 6 directions for each cube, and only add the face to the mesh if the neighboring voxel in that direction is "empty". Parsing a voxel volume is O(N^3), and checking the 6 neighbors makes it O(7*N^3), which is still O(N^3). The one thing this results in is a lot of redundant calls, as the same voxel will be polled up to 7 times just to build the mesh. My question, then, is: Is there a way to parse a cubic volume (and find which faces have neighbors) with fewer redundant calls? And perhaps more importantly, does it matter (as the Big O complexity is the same in both cases)?
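    A minimal sketch of that neighbor-culling pass, assuming a plain bool[,,] occupancy array (the array name and the face-counting shape are illustrative, not from the original post). Out-of-range neighbors count as empty, so boundary faces are kept:

        using System;

        public static class VoxelMesher
        {
            static readonly int[,] Dirs =
            {
                { 1, 0, 0 }, { -1, 0, 0 },
                { 0, 1, 0 }, { 0, -1, 0 },
                { 0, 0, 1 }, { 0, 0, -1 },
            };

            // Counts the faces that actually need meshing: solid voxels whose
            // neighbor in a given direction is empty or out of bounds.
            public static int CountVisibleFaces(bool[,,] solid)
            {
                int nx = solid.GetLength(0), ny = solid.GetLength(1), nz = solid.GetLength(2);
                int faces = 0;
                for (int x = 0; x < nx; x++)
                for (int y = 0; y < ny; y++)
                for (int z = 0; z < nz; z++)
                {
                    if (!solid[x, y, z]) continue;
                    for (int d = 0; d < 6; d++)
                    {
                        int px = x + Dirs[d, 0], py = y + Dirs[d, 1], pz = z + Dirs[d, 2];
                        bool covered = px >= 0 && px < nx && py >= 0 && py < ny
                                    && pz >= 0 && pz < nz && solid[px, py, pz];
                        if (!covered) faces++;   // exposed face: emit a quad here
                    }
                }
                return faces;
            }
        }

    Since the redundant reads are cheap array lookups, in practice the constant factor tends to matter more than the repeated polling itself.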


  • Fitting a rectangle into screen with XNA

    - by alecnash
    I am drawing a rectangle with primitives in XNA. The width is width = GraphicsDevice.Viewport.Width and the height is height = GraphicsDevice.Viewport.Height. I am trying to fit this rectangle to the screen (across different screens and devices) but I am not sure where to put the camera on the Z-axis. Sometimes the camera is too close and sometimes too far. This is what I am using to get the camera distance:

        // Height of pyramid
        float alpha = 0;
        float beta = 0;
        float gamma = 0;
        alpha = (float)Math.Sqrt((width / 2 * width / 2) + (height / 2 * height / 2));
        beta = height / ((float)Math.Cos(MathHelper.ToRadians(67.5f)) * 2);
        gamma = (float)Math.Sqrt(beta * beta - alpha * alpha);
        position = new Vector3(0, 0, gamma);

    Any idea where to put the camera on the Z-axis?
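    For a perspective camera looking straight down the Z-axis at a plane, the textbook fit is distance = (height / 2) / tan(fov / 2), using the vertical field of view the projection matrix was built with. A sketch (the MathHelper.PiOver4 value below is an assumption; substitute your own FOV):

        using System;

        static class CameraFit
        {
            // Distance at which a plane of the given height exactly fills the
            // view of a camera with vertical field of view fovRadians:
            //   distance = (height / 2) / tan(fov / 2)
            public static float FitDistance(float height, float fovRadians)
            {
                return (height / 2f) / (float)Math.Tan(fovRadians / 2f);
            }
        }

        // e.g. position = new Vector3(0, 0,
        //     CameraFit.FitDistance(GraphicsDevice.Viewport.Height, MathHelper.PiOver4));

    The width then takes care of itself as long as the projection's aspect ratio is Viewport.Width / Viewport.Height.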


  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.
    Setup: 512x448 window, 64x64 grid, gl_Position = projection * world * position; where projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f). This is a textbook orthographic projection function. world is defined by a fixed camera position at (0, 0), and position is defined by the sprite's position.
    Problem: In the screenshot below (1:1 scaling) the grid spacing is 64x64 and I am drawing the unit at (64, 64), however the unit draws roughly ~10px from the correct position. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost as to the proper way to get a 1:1 pixel-to-world-unit projection. Anyhow, here are some quick images to aid in describing the problem. I decided to superimpose a bunch of the sprites at what the engine believes are 64px offsets. When this seemed out of place, I went back to the base case of 1 unit, which seemed to line up as expected. The yellow shows a 1px difference in the movement.
    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data looks like this in the VBO:

                 x     y           x     y
        ------------------------------------
        tl |   0.0  24.0   ->   64.0  24.0
        bl |   0.0   0.0   ->   64.0   0.0
        tr |  16.0   0.0   ->   80.0   0.0
        br |  16.0  24.0   ->   80.0  24.0

    With that said, all I am left to believe is that I am munging up my actual projection. So, I am looking for any insight into maintaining a 1:1 pixel-to-world-unit projection.


  • Marketing: Angry Birds - How it's done

    - by John
    Why do some apps, like Angry Birds, dominate the market while other cool/fun/addicting apps are never heard of? I'm trying to figure out the best marketing strategy, or the best way to sell an app to the mass market. Has anybody noticed anything about the marketing of major blockbuster apps like Angry Birds that explains why they get so popular and stay at the top of the charts? Thanks for any ideas and comments.


  • How was collision detection handled in The Legend of Zelda: A Link to the Past?

    - by Restart
    I would like to know how collision detection was done in The Legend of Zelda: A Link to the Past. The game is 16x16 tile based, so how did they handle tiles where only a quarter or half of the tile is occupied? Did they use a smaller grid for collision detection, like 8x8 tiles, so that four of them make up one 16x16 tile of the texture grid? But then, they also have true half tiles which are diagonally cut, and the corners of tiles seem to be rounded: if Link walks into a tile's corner he can keep on walking and automatically moves around the corner. How is that done? I hope someone can help me out here.
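    One plausible reconstruction (an assumption about the technique, not confirmed for the actual game) is to keep collision on a finer grid than the graphics: each 16x16 graphics tile owns four 8x8 collision cells, and cells can carry a shape flag for the diagonal cuts. A minimal sketch:

        enum Cell : byte { Empty, Solid, DiagonalUL, DiagonalUR }

        class CollisionMap
        {
            // Twice the resolution of the 16x16 graphics grid: one Cell per 8x8 block.
            readonly Cell[,] cells;

            public CollisionMap(int widthTiles, int heightTiles)
            {
                cells = new Cell[widthTiles * 2, heightTiles * 2];
            }

            // Point query in pixel coordinates.
            public bool IsSolidAt(int px, int py)
            {
                int lx = px % 8, ly = py % 8;    // position inside the 8x8 cell
                switch (cells[px / 8, py / 8])
                {
                    case Cell.Solid:      return true;
                    case Cell.DiagonalUL: return lx + ly < 8;  // solid above the / cut
                    case Cell.DiagonalUR: return lx > ly;      // solid above the \ cut
                    default:              return false;
                }
            }
        }

    The corner-rounding behaviour can then be layered on top: when movement is blocked but only one corner sensor of Link's bounding box hits a solid cell, nudge him sideways along the wall instead of stopping him, which produces the "slides around the corner" feel.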


  • What tool should I use for drawing 2D OpenGL shapes?

    - by Kenny Winker
    I'm working on a very simple OpenGL ES 2.0 game, and I'm not sure what tool to use to create the vertex data I need. My first thought was Adobe Illustrator, but I can't seem to find any info on how to convert an .ai file to vertices. I'm only going to be using very simple 2D shapes, so I wonder if I need to use a 3D modelling program? How is this typically done when you are working with 2D non-sprite shapes?


  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think, because it may be a material system... or both!). The effect system follows the common (e.g. COLLADA, DirectX) effect framework abstraction: effects have techniques, techniques have passes, passes have states & shader programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.) or to in-game situation (e.g. night/day, power-up mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, & passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect, so pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, & settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, & selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not simply an instance of an effect? Ergo, if I had effect objects stored in a collection, each effect instance with its own parameter settings, then there would be no need for the concept of a material... or am I interpreting it wrong? Please help by contributing your interpretations, as I want to be clear on the distinction (if any), & don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework & COLLADA definitions closely.
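    A minimal sketch of the COLLADA-style reading, where a material is nothing more than an effect instance plus parameter values and a technique selection (all class and member names here are illustrative, not from any particular engine):

        using System.Collections.Generic;

        class Pass { /* render states + shader program bindings */ }

        class Technique
        {
            public string Name;                       // e.g. "highLOD", "night"
            public List<Pass> Passes = new List<Pass>();
        }

        class Effect
        {
            public List<Technique> Techniques = new List<Technique>();
        }

        // "Instantiates an effect, fills its parameters, selects a technique."
        class Material
        {
            public Effect Effect;
            public Technique Selected;
            public Dictionary<string, object> Parameters = new Dictionary<string, object>();
        }

    Under this reading, "a collection of effect instances, each with its own parameter settings" and "a material system" describe the same design; Material is just the conventional name for that instance object.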


  • How can I create and animate 2D skeletons for HTML5 Javascript games? [on hold]

    - by user414209
    I'm trying to make a 2D fighting game in HTML5 (somewhat like Street Fighter). So basically there are two players, one AI and one human. The players need animations for their body movements, and there also needs to be some collision detection system. I'm using CreateJS for coding, but to design models/objects/animations I need some other software. So I'm looking for software that can:
    1. Easily make custom animations of 2D objects. The object structure (skeleton etc.) stays the same and only needs to be defined once.
    2. Export the animations and models in a JS-readable format (preferably JSON).
    3. Make collision detection easy once the exported format is loaded into a game engine.
    For point 1, I'm looking for some generic skeleton-based animation; sprite-sheet-based animations would make collision detection difficult.


  • Random World Generation

    - by Alex Larsen
    I'm making a game like Minecraft (although a different idea), but I need a random world generator for a map 1024 blocks wide and 256 blocks tall. Basically, so far I have a multidimensional array holding the blocks (a total of 262,144 blocks). This is the code I have now:

        Block[,] BlocksInMap = new Block[1024, 256];
        public bool IsWorldGenerated = false;
        Random r = new Random();

        private void RunThread()
        {
            for (int BH = 0; BH < 256; BH++)      // < rather than <= keeps the indices in bounds
            {
                for (int BW = 0; BW < 1024; BW++)
                {
                    Block b = new Block();
                    if (BH >= 192)
                    {
                    }
                    BlocksInMap[BW, BH] = b;
                }
            }
            IsWorldGenerated = true;
        }

        public void GenWorld()
        {
            new Thread(new ThreadStart(RunThread)).Start();
        }

    I want to make tunnels and water, but blocks are set like this:

        Block MyBlock = new Block();
        MyBlock.BlockType = Block.BlockTypes.Air;

    How would I manage to connect blocks so the land is not a bunch of floating dirt and stone?
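    A common way to keep the land connected (a sketch, not the only approach): generate one surface height per column first, fill everything below that height, and carve tunnels as a second pass. This assumes y = 0 is the bottom of the map and that Block.BlockTypes has a Dirt member (only Air appears in the post):

        private void GenerateTerrain()
        {
            int[] heights = new int[1024];
            int h = 192;                            // starting surface height
            for (int x = 0; x < 1024; x++)
            {
                h += r.Next(-1, 2);                 // random walk: -1, 0 or +1 per column
                h = Math.Max(160, Math.Min(224, h));
                heights[x] = h;
            }

            for (int x = 0; x < 1024; x++)
            {
                for (int y = 0; y < 256; y++)
                {
                    Block b = new Block();
                    b.BlockType = y < heights[x]
                        ? Block.BlockTypes.Dirt     // assumed enum member
                        : Block.BlockTypes.Air;
                    BlocksInMap[x, y] = b;
                }
            }
        }

    Tunnels then become random walks through the filled region that flip BlockType back to Air; since every column starts attached to the surface, nothing is left floating.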


  • Splitting Graph into distinct polygons in O(E) complexity

    - by Arthur Wulf White
    If you have seen my last question ("Trapped inside a graph: find paths along edges that do not cross any edges"): how do you split an entire graph into the distinct shapes 'trapped' inside it (like the ones described in that question) with good complexity? What I am doing now is iterating over all edges and then traversing from each one while always taking the rightmost turn. This does split the graph into distinct shapes. Then I eliminate all the excess shapes (repeats of previous shapes) and return the result. The complexity of this algorithm is O(E^2). I am wondering if I could do it in O(E) by removing edges I have already traversed. My current implementation of that returns unexpected results.
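    The standard fix (a sketch, assuming a planar embedding with known vertex positions) is to treat each undirected edge as two directed half-edges and mark each half-edge when its face is traced. Every face is then produced exactly once and each half-edge is visited once, so the traversal is O(E) after a one-off sort of each vertex's neighbors by angle:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class FaceFinder
        {
            public (float X, float Y)[] Pos;   // vertex positions
            public List<int>[] Adj;            // undirected adjacency lists

            public List<List<int>> Faces()
            {
                // Sort each adjacency list counter-clockwise by angle (once).
                for (int v = 0; v < Adj.Length; v++)
                    Adj[v] = Adj[v].OrderBy(u =>
                        Math.Atan2(Pos[u].Y - Pos[v].Y, Pos[u].X - Pos[v].X)).ToList();

                var faces = new List<List<int>>();
                var used = new HashSet<(int, int)>();   // directed half-edges already traced

                for (int u = 0; u < Adj.Length; u++)
                    foreach (int v in Adj[u])
                    {
                        if (used.Contains((u, v))) continue;
                        var face = new List<int>();
                        int a = u, b = v;
                        while (used.Add((a, b)))        // stops when the loop closes
                        {
                            face.Add(a);
                            // Rightmost turn: the neighbor just before 'a'
                            // in b's counter-clockwise ordering.
                            int i = Adj[b].IndexOf(a);  // a precomputed index map makes this O(1)
                            int next = Adj[b][(i - 1 + Adj[b].Count) % Adj[b].Count];
                            a = b;
                            b = next;
                        }
                        faces.Add(face);
                    }
                return faces;   // note: one of these is the unbounded outer face
            }
        }

    This is the same rightmost-turn traversal already described in the question; the only change is that the visited flag lives on directed half-edges rather than whole edges, which is what removing undirected edges gets wrong (each edge borders two faces, one per direction).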


  • Procedural Mesh: UV mapping

    - by Esa
    I made a procedural mesh and now I want to apply a texture to it. The problem is, I cannot get the texture to stick the way I want it to. The idea is to have the texture painted only once over the whole mesh, so that there is no repeating. How should I map the UVs to make that happen? My mesh is a simple plane consisting of 56 triangles. I'd add pictures to clear things up, but I cannot since my reputation is below 10 points. Any help is appreciated.
    EDIT (kind people gave me upvotes, thank you): Meet my mesh: [image] And when textured (I tried to repeat the texture): [image] And my texture: [image]
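    For a flat plane the usual approach is to normalize each vertex position by the mesh bounds, so u and v each run 0..1 exactly once across the whole mesh. A sketch using Unity's Mesh API for concreteness (the post does not name an engine), for a plane lying in the XZ plane:

        using UnityEngine;

        public static class PlanarUv
        {
            // UV = vertex position normalized into the mesh bounds, so the
            // texture is stretched exactly once over the plane (no tiling).
            public static void Apply(Mesh mesh)
            {
                Vector3[] verts = mesh.vertices;
                Bounds b = mesh.bounds;
                var uvs = new Vector2[verts.Length];
                for (int i = 0; i < verts.Length; i++)
                {
                    uvs[i] = new Vector2(
                        (verts[i].x - b.min.x) / b.size.x,
                        (verts[i].z - b.min.z) / b.size.z);
                }
                mesh.uv = uvs;
            }
        }

    Swap .z for .y if the plane lies in the XY plane instead.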


  • Calculating 3D camera positions from a video

    - by Geotarget
    I need to calculate the 3D camera position and rotation for each frame in a given video. This is typically used for motion tracking, and to insert 3D objects into a video. I'm currently using VideoTrace to calculate this for me, and I'm getting the data exported as a 3ds Max MAXScript file. However, when I try to use the 3D camera rotations, I'm getting strange errors in my 3D calculations, as if there is an error with the 3x3 rotation matrices. Can you spot any error with the data itself? Or is it my other calculations that are erroneous?

        frame 1
        rotation = (matrix3 [-0.011938,  0.756018, -0.654442]
                            [-0.382040, -0.608284, -0.695727]
                            [-0.924068,  0.241718,  0.296091]
                            [ 0, 0, 0 ]).rotationpart
        position = [-0.767177, 0.308723, -0.232722]
        fov = 57.352135

        frame 2
        rotation = (matrix3 [-0.460922, -0.726580, -0.509541]
                            [-0.200163,  0.644491, -0.737947]
                            [ 0.864572, -0.238145, -0.442495]
                            [ 0, 0, 0 ]).rotationpart
        position = [-0.856630, 0.198654, -0.243853]
        fov = 57.352135
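    A quick sanity check for data like this (a sketch, independent of VideoTrace): a proper rotation matrix must have unit-length, mutually orthogonal rows and determinant +1. A determinant of -1 indicates a reflection, i.e. a handedness flip between the tracker's and your engine's coordinate systems, which is a classic cause of "almost right" rotations:

        using System;

        static class RotationCheck
        {
            // True if the 3x3 matrix m (three rows of three) is a proper rotation.
            public static bool IsRotation(double[][] m, double eps = 1e-4)
            {
                for (int i = 0; i < 3; i++)
                {
                    if (Math.Abs(Dot(m[i], m[i]) - 1.0) > eps) return false;  // unit rows
                    for (int j = i + 1; j < 3; j++)
                        if (Math.Abs(Dot(m[i], m[j])) > eps) return false;    // orthogonal rows
                }
                double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
                return Math.Abs(det - 1.0) <= eps;  // -1 would mean a reflection
            }

            static double Dot(double[] a, double[] b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
        }

    Also remember that 3ds Max is Z-up and treats matrix rows as basis vectors, while many engines are Y-up and column-major, so valid data may still need a transpose plus an axis swap.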


  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading:
    - Duplication & enlargement of the model with flipped normals (not an option for me)
    - Sobel filter / fragment shader approaches to edge detection
    - Stencil buffer approaches to edge detection
    - Geometry (or vertex) shader approaches that calculate face and edge normals
    Am I correct in assuming the geometry-centric approach gives the greatest amount of control over lighting and line thickness, as well as, e.g., for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need pixel lighting on my terrain surfaces? (And I probably won't, as I plan to use cell-based vertex- or texturemap-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or going for a screen-space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline)? Lastly, is it possible to cheaply emulate the flipped-normals approach using a geometry shader? Is that exactly what the GS approaches do?
    What I want: varying line thickness with intrusive lines inside the silhouette... [image] What I don't want... [image]


  • GetContactList stops reporting collisions on welded bodies

    - by Henrique Jung
    I have a strange problem with my game, which uses Box2D as its physics engine, and I'm out of ideas on how to solve it. My game is a class assignment where I need to build a simple game in which the main character moves in a 2D environment while square blocks come from below him. Each time a collision occurs, that block is attached to the character using a weld joint; when three blocks of the same color are together, they annihilate themselves (an effect similar to Bejeweled). I'm using a recursive function to iterate through all the attached blocks of a given block to see if there are enough blocks for them to be deleted, and I'm using the GetContactList function to iterate through the list of contacts to see which blocks are adjacent to each other. The results are quite disappointing: the blocks only get annihilated in a few cases. After a lot of debugging, I found the issue, but I still don't know how to solve it. My issue is: after some time, GetContactList STOPS returning contacts (returns NULL) for blocks that were already attached for some time. I spent some time reading the Box2D manual as well as some tutorials and still didn't find any clue of what is happening. Below is a simplified version of the code that I wrote:

        for (int a = 0; a < blocksList.size(); a++)
        {
            blocksList[a].BuildConnections();
        }

    And in BuildConnections:

        b2ContactEdge* edge = body->GetContactList();
        while (edge != NULL)
        {
            if (long_check_to_see_if_theres_a_block_nearby)
            {
                // add itself to the list to be annihilated
                globalList.push_back(this);
                // if there is one, call BuildConnections again on the adjacent block
                adjacentBody->GetUserData()->BuildConnections();
            }
            edge = edge->next;
        }

    I know that there's another issue related to circular inclusions, but I'm fairly sure that isn't what's causing the problem with the collisions. You can download my entire code from this page if you'd like: http://code.google.com/p/fellz/source/list


  • Appropriate level of granularity for component-based architecture

    - by Jon Purdy
    I'm working on a game with a component-based architecture. An Entity owns a set of Component instances, each of which has a set of Slot instances with which to store, send, and receive values. Factory functions such as Player produce entities with the required components and slot connections. I'm trying to determine the best level of granularity for components. For example, right now Position, Velocity, and Acceleration are all separate components, connected in series. Velocity and Acceleration could easily be rewritten into a uniform Delta component, or Position, Velocity, and Acceleration could be combined alongside such components as Friction and Gravity into a monolithic Physics component. Should a component have the smallest responsibility possible (at the cost of lots of interconnectivity) or should related components be combined into monolithic ones (at the cost of flexibility)? I'm leaning toward the former, but I could use a second opinion.
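    A minimal sketch of the two granularities being weighed (Position, Velocity, and Acceleration are from the post; the field layouts are illustrative):

        // Fine-grained: each concern is its own component, connected via slots.
        struct Position     { public float X, Y; }
        struct Velocity     { public float Dx, Dy; }
        struct Acceleration { public float Ax, Ay; }

        // Coarse-grained: one monolithic Physics component owns the whole chain.
        struct Physics
        {
            public float X, Y;          // position
            public float Dx, Dy;        // velocity
            public float Ax, Ay;        // acceleration
            public float Friction;
            public float Gravity;
        }

    The fine-grained version lets an entity opt out of, say, Acceleration entirely, at the price of wiring three slot connections where the monolithic version needs none.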


  • Handling Players, enemies and attacks in HTML5

    - by Chris Morris
    I'm building a simple (currently) game with a free-roaming player and monsters on a map built on a 2D grid. I've been looking at methods for putting characters and enemies onto the screen, and I've seen two separate methods for doing this online:
    1. Drawing the player onto the screen canvas directly and refreshing the entire screen every FPS tick.
    2. Having a separate canvas to handle the player, and moving the player canvas on top of the screen canvas via absolute positioning.
    I can see some pros and cons of both methods, but what is generally the best method for doing this? I assume the second, since it avoids draining resources by refreshing the map when the player is not moving, but this type of game will generally have constant movement.


  • Should I set the cells in a TiledMap to null when my player hits them?

    - by Vishal Kumar
    I am making a tile-based game using libGDX. I took the idea from the SuperKoalio platformer demo by Mario Zechner. When I wanted to implement collectables in my game, I simply drew the coins using the Tiled map editor. When my player hits one, I set that cell to null. Someone on this site once suggested I not do that... never use null. I agreed. What other way is there? If I use layer.setCell(x, y) to set the cell to any other cell, even a transparent one, my player seems to be stopped by an invisible object/hurdle. This is my code:

        for (Rectangle tile : tiles)
        {
            if (koalaRect.overlaps(tile))
            {
                TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(1);
                try
                {
                    type = layer.getCell((int) tile.x, (int) tile.y)
                                .getTile().getProperties().get("tileType").toString();
                }
                catch (Exception e)
                {
                    System.out.print("Exception in Tiles Property" + e);
                    type = "nonbreakable";
                }
                // Let us destroy this cell
                if ("award".equals(type))
                {
                    layer.setCell((int) tile.x, (int) tile.y, null);
                    listener.coin();
                    score += 100;
                    test = "" + layer.getCell(0, 0).getTile().getProperties().get("tileType");
                }
                // DOING THIS GIVES A BAD EFFECT
                if ("killer".equals(type))
                {
                    // player.health--;
                    // layer.setCell((int) tile.x, (int) tile.y, layer.getCell(20, 0));
                }
                // we actually reset the player y-position here
                // so it is just below/above the tile we collided with
                // this removes bouncing :)
                if (player.velocity.y > 0)
                {
                    player.position.y = (tile.y - Player.height);
                }

    Is this the right approach, or should I create a separate Sprite class called Coin?


  • SceneManagers as systems in entity system or as a core class used by a system?

    - by Hatoru Hansou
    It seems entity systems are really popular here. Links posted by other users convinced me of the power of such systems and I decided to try one. (Well, that and my original code getting messy.) In my project I originally had a SceneManager class that maintained the logic and structures needed to organize the scene (a quadtree; this is a 2D game). Before rendering, I call selectRect(), pass the camera's x, y and the width and height of the screen, and obtain a minimized list containing only visible entities, ordered from back to front.
    Now with systems, my first attempt had the Render system receive every entity it should handle. This may sound like the correct approach, but I realized it was not efficient. Trying to optimize it, I reused the SceneManager class internally in the Renderer system, but then I realized I needed methods such as selectRect() in other systems too (AI principally) and had to make the SceneManager accessible globally again. Currently I have converted SceneManager into a system, and ended up with the following interface (only relevant methods):

        /// Base system interface
        class System
        {
        public:
            virtual void tick(double delta_time) = 0;
            // (methods to add and remove entities)
        };

        typedef std::vector<Entity*> EntitiesVector;

        /// Specialized system interface to allow querying the scene
        class SceneManager : public System
        {
        public:
            virtual EntitiesVector& cull() = 0;
            /// Sets the entity to be used as the camera and replaces previous ones.
            virtual void setCamera(Entity* entity) = 0;
        };

        class SceneRenderer // Not a system
        {
            virtual void render(EntitiesVector& entities) = 0;
        };

    I also could not work out how to convert renderers into systems. My game separates logic updates from screen updates; my main class has a tick() method and a render() method that may not be called the same number of times. In my first attempt renderers were systems, but they were stored in a separate manager and updated only in render(), not in tick() like all the other systems. I realized that was silly and simply created a SceneRenderer interface, giving up on converting them to systems, but that may be for another question.
    Then... something does not feel right, does it? If I understood correctly, a system should not depend on another system, or even count on another system exposing a specific interface. Each system should care only about its entities, or nodes (as an optimization, so they have direct references to relevant components without having to constantly call the entity's component() or getComponent() method).


  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at "XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision", I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope). I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position at which to place the sprite's foot so it is no longer inside the slope? The way the data is stored as a 1D array in the example is a bit confusing; should I try to store it as a 2D array instead? For test purposes, I'm thinking of just using the slope texture's alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above (in the texture's Ys at that X) the right-foot collision point and set that as the new height of the sprite. I'm also a little unclear on when I should make these adjustments. Collisions are checked on every game.Update(), so should I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically; why is that, exactly? I'm open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES Mario style pure box platforming!
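    A sketch of that upward scan, assuming the XNA pattern of fetching the texture once into a 1D Color array. The 1D layout in the tutorial is just row-major indexing, color[x + y * width], so there is no real need to convert it to 2D:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class SlopeScan
        {
            // Scans upward from (x, startY) and returns the first transparent
            // pixel's y, i.e. where the sprite's foot can rest. -1 if none found.
            public static int FindSurfaceY(Color[] mask, int width, int x, int startY)
            {
                for (int y = startY; y >= 0; y--)
                {
                    if (mask[x + y * width].A == 0)   // transparent: outside the slope
                        return y;
                }
                return -1;
            }
        }

        // One-off, at load time:
        // var mask = new Color[slopeTexture.Width * slopeTexture.Height];
        // slopeTexture.GetData(mask);

    Applying the correction immediately after the check, inside the same Update, is the usual pattern. Separating horizontal from vertical resolution is popular because each axis then has an unambiguous answer (push back vs. push up) instead of one diagonal case with two competing corrections.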


  • How to create a 3D world with 2D sprites similar to Ragnarok Online?

    - by Romoku
    As far as I know, Ragnarok Online is a 3D game world with 2D sprites overlaid. I would like to use this style in a game I am making in Unity, and I would like the player to be able to select little square tiles on the terrain. There are a couple of routes I could take, such as using a bunch of cubic polygons and linking them together, or using one big map. The former approach doesn't seem to make any sense if the world is not flat, as polygons wouldn't be reused often. The goal is to break the 3D terrain down into tiles, which is hard to wrap my head around. I believe using something like an interval tree or array would be appropriate to store the rectangular grid, but how would I display a rectangle around the tile the player has his mouse over, on the polygon terrain itself? Here is a screenshot. Here is a gameplay video. Here is the camera usage.


  • Importing a Windows project into Android using cocos2d-x

    - by Ef Es
    What I am trying to do today is to import a full project to Android, but I have not seen any tutorials for that. My approach was to create a new Android project, copy all the classes and resources into the folders and call ./build_native.sh, but I get an error because most of the files are not being included in the project. I tried opening Android.mk and I can see why: "LOCAL_SRC_FILES := AppDelegate.cpp \ HelloWorldScene.cpp" are the only files linked. Should I manually modify the makefile, or can it be automated in some way I don't know about? Thank you.
    UPDATE: I manually added all the files and headers to the makefile, and now I get errors linking the Box2D and CocosDenshion libraries.
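    One common way to automate the file list (a sketch; the Classes path and main.cpp location are assumptions based on the usual cocos2d-x project layout) is to let GNU Make glob the sources in Android.mk instead of naming them one by one:

        # Collect every .cpp under Classes/ instead of listing files by hand.
        CPP_FILES := $(wildcard $(LOCAL_PATH)/../../Classes/*.cpp)

        LOCAL_SRC_FILES := hellocpp/main.cpp
        LOCAL_SRC_FILES += $(CPP_FILES:$(LOCAL_PATH)/%=%)

    Newly added files still require rerunning ./build_native.sh, and link errors against Box2D or CocosDenshion usually mean those modules also need to be imported and listed among the static libraries in the same makefile.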


  • Trying to detect collision between two polygons using Separating Axis Theorem

    - by Holly
    The only collision experience I've had was with simple rectangles; I wanted to find something that would allow me to define polygonal areas for collision, and have been trying to make sense of SAT using these two links. Though I'm a bit iffy with the math, for the most part I feel like I understand the theory! Except my implementation somewhere down the line must be off, as: (excuse the hideous font) [image]
    As mentioned above, I have defined a CollisionPolygon class where most of my theory is implemented, and then have a helper class called Vect, which was meant to be for vectors but has also been used to contain a vertex, given that both just have two float values. I've tried stepping through the function and inspecting the values to solve things, but given so many axes and vectors and new math to work out as I go, I'm struggling to find the erroneous calculation(s) and would really appreciate any help. Apologies if this is not suitable as a question!
    CollisionPolygon.java:

        package biz.hireholly.gameplay;

        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import biz.hireholly.gameplay.Types.Vect;

        public class CollisionPolygon {
            Paint paint;
            private Vect[] vertices;
            private Vect[] separationAxes;

            CollisionPolygon(Vect[] vertices) {
                this.vertices = vertices;
                // compute edges and separation axes
                separationAxes = new Vect[vertices.length];
                for (int i = 0; i < vertices.length; i++) {
                    // get the current vertex
                    Vect p1 = vertices[i];
                    // get the next vertex
                    Vect p2 = vertices[i + 1 == vertices.length ? 0 : i + 1];
                    // subtract the two to get the edge vector
                    Vect edge = p1.subtract(p2);
                    // get either perpendicular vector
                    Vect normal = edge.perp();
                    // the perp method is just (x, y) => (-y, x) or (y, -x)
                    separationAxes[i] = normal;
                }
                paint = new Paint();
                paint.setColor(Color.RED);
            }

            public void draw(Canvas c, int xPos, int yPos) {
                for (int i = 0; i < vertices.length; i++) {
                    Vect v1 = vertices[i];
                    Vect v2 = vertices[i + 1 == vertices.length ? 0 : i + 1];
                    c.drawLine(xPos + v1.x, yPos + v1.y, xPos + v2.x, yPos + v2.y, paint);
                }
            }

            /* consider changing to a static function */
            public boolean intersects(CollisionPolygon p) {
                // loop over this polygon's separation axes
                for (Vect axis : separationAxes) {
                    // project both shapes onto the axis
                    Vect p1 = this.minMaxProjection(axis);
                    Vect p2 = p.minMaxProjection(axis);
                    // do the projections overlap?
                    if (!p1.overlap(p2)) {
                        // then we can guarantee that the shapes do not overlap
                        return false;
                    }
                }
                // loop over the other polygon's separation axes
                Vect[] sepAxesOther = p.getSeparationAxes();
                for (Vect axis : sepAxesOther) {
                    // project both shapes onto the axis
                    Vect p1 = this.minMaxProjection(axis);
                    Vect p2 = p.minMaxProjection(axis);
                    // do the projections overlap?
                    if (!p1.overlap(p2)) {
                        // then we can guarantee that the shapes do not overlap
                        return false;
                    }
                }
                // if we get here then we know that every axis had overlap on it
                // so we can guarantee an intersection
                return true;
            }

            /* Note projections won't actually be accurate if the axes aren't normalised,
             * but that's not necessary since we just need a boolean return from our
             * intersects, not a Minimum Translation Vector. */
            private Vect minMaxProjection(Vect axis) {
                float min = axis.dot(vertices[0]);
                float max = min;
                for (int i = 1; i < vertices.length; i++) {
                    float p = axis.dot(vertices[i]);
                    if (p < min) {
                        min = p;
                    } else if (p > max) {
                        max = p;
                    }
                }
                Vect minMaxProj = new Vect(min, max);
                return minMaxProj;
            }

            public Vect[] getSeparationAxes() { return separationAxes; }
            public Vect[] getVertices() { return vertices; }
        }

    Vect.java:

        package biz.hireholly.gameplay.Types;

        /* NOTE: Can also be used to hold vertices! Projections, coordinates etc. */
        public class Vect {
            public float x;
            public float y;

            public Vect(float x, float y) {
                this.x = x;
                this.y = y;
            }

            public Vect perp() {
                return new Vect(-y, x);
            }

            public Vect subtract(Vect other) {
                return new Vect(x - other.x, y - other.y);
            }

            public boolean overlap(Vect other) {
                if (other.x <= y || other.y >= x) {
                    return true;
                }
                return false;
            }

            /* used specifically for my SAT implementation which I'm figuring out as I go,
             * references for later..
             * http://www.gamedev.net/page/resources/_/technical/game-programming/2d-rotated-rectangle-collision-r2604
             * http://www.codezealot.org/archives/55 */
            public float scalarDotProjection(Vect other) {
                // multiplier = dot product / length^2
                float multiplier = dot(other) / (x * x + y * y);
                // to get the x/y of the projection vector multiply by x/y of axis
                float projX = multiplier * x;
                float projY = multiplier * y;
                // we want to return the dot product of the projection; it's
                // meaningless but useful in our SAT case
                return dot(new Vect(projX, projY));
            }

            public float dot(Vect other) {
                return (other.x * x + other.y * y);
            }
        }


  • XNA: calculating normals for line segments

    - by Gerhman
    I am quite new to 3D graphics programming and thus far only understand that normals somehow define the direction in which a vertex faces, and therefore the direction in which light is reflected. I have no idea how they are calculated, though, only that they are defined by a Vector3. For a visualizer that I am creating, I am importing a bunch of coordinates which represent layer upon layer of line segments. At the moment I am only using a vertex buffer, adding the start and end point of each line and then rendering a line list. The thing is that I now need to calculate the normals for the vertices of these line segments so that I can get some realistic lighting. I have no idea how to calculate these normals, but I know they all face sideways and not up or down. To calculate them, all I have are the start and end positions of each line segment. The image below is a representation of what I think I need to do in the case of an example layer: the red arrows represent the normals that should be calculated, the blue text represents the coordinates of the vertices, and the green numbers represent their indices. I would greatly appreciate it if someone could explain to me how I should calculate these normals.
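    A sideways normal for a segment is just the segment's direction rotated 90 degrees within the layer plane. A minimal XNA-style sketch, assuming the layers lie in the XY plane (swap components if your up-axis differs):

        using Microsoft.Xna.Framework;

        static class SegmentNormals
        {
            // Perpendicular to the segment, lying in the same plane as the
            // segment (no up/down component): rotate (dx, dy) by 90 degrees.
            public static Vector3 SideNormal(Vector3 start, Vector3 end)
            {
                Vector3 d = end - start;
                Vector3 n = new Vector3(-d.Y, d.X, 0f);  // or (d.Y, -d.X, 0) for the other side
                n.Normalize();
                return n;
            }
        }

    Both vertices of a segment share that segment's normal; where two segments meet at a shared vertex, averaging (then re-normalizing) the two segment normals gives smooth shading around the corner.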


  • Jumping with Mecanim synchronization

    - by Abhishek Deb
    I am using Unity3D 4.1 for one of my projects. I have a robot character which is always running, and I am using the Mecanim animation system.
    What I really want: when I press the space bar, the character should jump up in the air, triggering an animation clip, and by the time it reaches the ground, the animation clip should also end.
    What is actually happening: when I press the space bar, the character jumps in the air. The animation clip plays as it should, but ends way before the character reaches the ground, so it looks like he is running in mid air.
    What I have done: I have this humanoid robot set up with a jump animation bound to the space bar key. Also, instead of using root motion, I am moving the robot directly from code:

        // Jumping
        if (Input.GetKeyDown(KeyCode.Space)) {
            rigidbody.AddForce(Vector3.up * jumpVelocity);
            anim.SetBool("Jump", true);
        } else {
            anim.SetBool("Jump", false);
        }

    Character's details:
    Rigidbody = Mass: 30, Freeze Rotation: x, y, z
    Capsule Collider = Material: metal, center: (0, 4.5, 0), radius: 1, height: 11
    Script = jumpVelocity: 20000
    Jump animation clip: ~2 seconds
    I am really out of ideas on how to synchronize everything. Should I make the character jump in some other way, so that it quickly comes down and touches the ground to match the animation clip? If yes, please provide a direction.
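    One way to line the two up (a sketch; the parameter names follow the post where possible, and the speed-scaling idea is an assumption, not the only fix): launch with a known velocity rather than a raw force, compute the ballistic flight time t = 2 * v / g, and scale the Animator's playback speed so the ~2 s clip finishes exactly at landing:

        using UnityEngine;

        public class JumpSync : MonoBehaviour
        {
            public float jumpSpeed = 8f;        // launch speed in m/s, not a raw force
            public float jumpClipLength = 2f;   // authored length of the jump clip
            Animator anim;

            void Start() { anim = GetComponent<Animator>(); }

            void Update()
            {
                if (Input.GetKeyDown(KeyCode.Space))
                {
                    // Impulse scaled by mass, so the takeoff velocity is exactly jumpSpeed.
                    rigidbody.AddForce(Vector3.up * jumpSpeed * rigidbody.mass,
                                       ForceMode.Impulse);
                    // Time of flight of a ballistic jump: t = 2 * v / g.
                    float flightTime = 2f * jumpSpeed / -Physics.gravity.y;
                    anim.speed = jumpClipLength / flightTime;  // stretch/squash the clip
                    anim.SetBool("Jump", true);
                }
            }
            // Remember to reset anim.speed to 1 (and clear "Jump") on landing,
            // since Animator.speed scales every clip, not just the jump.
        }

    With this approach the clip and the arc always end together, whatever the jump strength, instead of relying on the authored 2 s happening to match the physics.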

