Search Results

Search found 32375 results on 1295 pages for 'dnn module development'.

Page 487/1295

  • Handling Players, enemies and attacks in HTML5

    - by Chris Morris
    I'm building a simple (currently) game with a free-roaming player and monsters on a map built on a 2D grid. I've been looking at methods for implementing characters and enemies on the screen and I've seen two separate approaches for doing this online: 1) drawing the player onto the screen canvas directly and refreshing the entire screen every FPS tick, or 2) having a separate canvas to handle the player and moving the player canvas on top of the screen canvas via absolute positioning. I can see some pros and cons of both methods, but what is generally the best method for doing this? I assume the second, due to not having to drain resources by refreshing the map when the user is not moving, but this type of game will generally have constant movement.

    Read the article

  • How to correctly export UV coordinates from Blender

    - by KlashnikovKid
    Alright, so I'm just now getting around to texturing some assets. After much trial and error I feel I'm pretty good at UV unwrapping now and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't seem to export the correct UV coordinates into the obj file, because the texture is mapped differently in my game engine. And in Blender I've played with the texture panel and its mapping options and have noticed it doesn't appear to affect the exported obj file's UV coordinates. So I guess my question is, is there something I need to do prior to exporting in order to bake the correct UV coordinates into the obj file? Or something else that needs to be done to massage the texture coordinates for sampling? Or any thoughts at all of what could be going wrong? (Also, here is a screenshot of my diffuse texture in Blender and the game engine. As you can see in the image, I have the same problem with a simple test cube not getting correct UVs either) http://www.digitalinception.net/blenderSS.png http://www.digitalinception.net/gameSS.png

    Read the article

  • Unity - Invert Movement Direction

    - by m41n
    I am currently developing a 2.5D sidescroller in Unity (just starting to get to know it). Right now I added a turn-script to have my character face the appropriate direction of movement, though something with the movement itself is behaving oddly now. When I press the right arrow key, the character moves and faces towards the right. If I press the left arrow key, the character faces towards the left, but "moon-walks" to the right. I already had enough trouble getting the turning to work, so what I am trying to find is a simple solution, if possible without too much reworking of the rest of my project. I was thinking of just inverting the movement direction for a specific input-key/facing-direction. So if anyone knows how to do something like that, I'd be thankful for the help. If it helps, the following is the current part of my "AnimationChooser" script to handle the turning: Quaternion targetf = Quaternion.Euler(0, 270, 0); // Vector3 Direction when facing frontway Quaternion targetb = Quaternion.Euler(0, 90, 0); // Vector3 Direction when facing opposite way if (Input.GetAxisRaw ("Vertical") < 0.0f) // if input is lower than 0 turn to targetf { transform.rotation = Quaternion.Lerp(transform.rotation, targetf, Time.deltaTime * smooth); } if (Input.GetAxisRaw ("Vertical") > 0.0f) // if input is higher than 0 turn to targetb { transform.rotation = Quaternion.Lerp(transform.rotation, targetb, Time.deltaTime * smooth); } The values (270 and 90) and the axis are because I had to turn my model itself in the very first place to face towards any of the movement directions.
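
    A minimal sketch of one way around the "moon-walking" (assuming hypothetical field names and the same Lerp-based turning as above): rotate toward whichever side the input points, then always translate along the character's own local forward, so the walk direction follows the facing instead of staying fixed in world space.

        using UnityEngine;

        public class SideScrollMover : MonoBehaviour
        {
            public float moveSpeed = 50f;   // assumed values, tune to taste
            public float smooth = 5f;

            void Update()
            {
                float input = Input.GetAxisRaw("Vertical"); // same axis the turning code above reads
                if (Mathf.Abs(input) > 0.01f)
                {
                    // Face 270 or 90 degrees depending on the sign of the input,
                    // matching the two target rotations in the original script.
                    Quaternion target = Quaternion.Euler(0f, input < 0f ? 270f : 90f, 0f);
                    transform.rotation = Quaternion.Lerp(transform.rotation, target, Time.deltaTime * smooth);

                    // Translate along local forward (Space.Self is the default), so the
                    // movement direction always matches the current facing.
                    transform.Translate(Vector3.forward * Mathf.Abs(input) * moveSpeed * Time.deltaTime);
                }
            }
        }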

    Read the article

  • What program should I use for Ludum Dare?

    - by mFontoura
    I want to participate for the first time in Ludum Dare, but I'm not yet comfortable enough with any one language to pick it for making a game in a weekend. So I was looking for a 'GameMaker'-style program, just to make something for LD. I was going for Construct 2, but I use Linux and they don't have a Linux version. So the alternative I use is Stencyl, which is great and probably what I'm going to use. However, I wanted to know if there is something similar and better for Linux. Also, if I get a computer with Win8, is it worth the trouble for Construct 2?

    Read the article

  • How to display a hierarchical skill tree in PHP

    - by user3587554
    If I have skill data set up in a tree format (where earlier skills are prerequisites for later ones), how would I display it as a tree, using php? The parent would be on top and have 3 children. Each of these children can then have one more child so its parent would be directly above it. I'm having trouble figuring out how to add the root element in the middle of the top div, and the child of the children below each child of the root. I'm not looking for code, but an explanation of how to do it. My data in array form is this: Data: Array ( [1] => Array ( [id] => 1 [title] => Jutsu [description] => Skill that makes you awesomer at using ninjutsu [tiers] => 1 [prereq] => [image] => images/skills/jutsu.png [children] => Array ( [2] => Array ( [id] => 2 [title] => fireball [description] => Increase your damage with fire jutsu and weapons [tiers] => 5 [prereq] => 1 [image] => images/skills/fireball.png [children] => Array ( [5] => Array ( [id] => 5 [title] => pin point [description] => Increases jutsu accuracy [tiers] => 5 [prereq] => 2 [image] => images/skills/pinpoint.png ) ) ) [3] => Array ( [id] => 3 [title] => synergy [description] => Reduce the amount of chakra needed to use ninjutsu [tiers] => 1 [prereq] => 1 [image] => images/skills/synergy.png ) [4] => Array ( [id] => 4 [title] => ebb & flow [description] => Increase the damage of water jutsu, water weapons, and reduce the damage of jutsu and weapons that use water element [tiers] => 5 [prereq] => 1 [image] => images/skills/ebbandflow.png [children] => Array ( [6] => Array ( [id] => 6 [title] => IQ [description] => Decrease the time it takes to learn a jutsu [tiers] => 5 [prereq] => 4 [image] => images/skills/iq.png ) ) ) ) ) ) An example would be this demo image minus the hover stuff.

    Read the article

  • How can I factor momentum into my space sim?

    - by Josh Petite
    I am trying my hand at creating a simple 2d physics engine right now, and I'm running into some problems figuring out how to incorporate momentum into movement of a spaceship. If I am moving in a given direction at a certain velocity, I am able to currently update the position of my ship easily (Position += Direction * Velocity). However, if the ship rotates at all, and I recalculate the direction (based on the new angle the ship is facing), and accelerate in that direction, how can I take momentum into account to alter the "line" that the ship travels? Currently the ship changes direction instantaneously and continues at its current velocity in that new direction when I press the thrust button. I want it to be a more gradual turning motion so as to give the impression that the ship itself has some mass. If there is already a nice post on this topic I apologize, but nothing came up in my searches. Let me know if any more information is needed, but I'm hoping someone can easily tell me how I can throw mass * velocity into my game loop update.
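
    A common way to get the gradual, weighty feel is to integrate an explicit velocity vector: thrust only adds acceleration along the current facing, while the existing velocity keeps carrying the ship along its old line. A minimal per-frame sketch (hypothetical names; Vector2 as in System.Numerics or XNA; semi-implicit Euler):

        // dt = elapsed seconds this frame; angle = current ship heading in radians.
        Vector2 facing = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));

        if (thrustPressed)
        {
            // F = m * a, so a heavier ship changes its velocity more slowly.
            Vector2 acceleration = facing * (thrustForce / mass);
            velocity += acceleration * dt;
        }

        // Momentum lives here: rotating only changes where thrust points,
        // it never rewrites the velocity the ship already has.
        position += velocity * dt;

    Optionally multiply velocity by a damping factor slightly below 1 each frame if the drift should eventually die down.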

    Read the article

  • What's the best game engine to use for my PC game project? [closed]

    - by user19860
    I'm in the planning phase of creating an action RPG for the PC, and I'd like to create a League of Legends-style look for the game (animated/cartoony). Any idea which engine best replicates this look? I ask because when I look at a lot of the UDK/Unreal games, they've all got the more realistic 3D look that I'd like to avoid, so I was wondering if an alternate look was possible on that type of engine. Source SDK and Unity also look very interesting; I just don't know what types of visual capabilities these engines have. Thanks in advance.

    Read the article

  • What is the best way to use loops to detect events while the main loop is running?

    - by yao jiang
    I am making an "game" that has pathfinding using pygame. I am using Astar algo. I have a main loop which draws the whole map. In the loop I check for events. If user press "enter" or "space", random start and end are selected, then animation starts and it will try to get from start to end. My draw function is stupid as hell right now, it works as expected but I feel that I am doing it wrong. It'll draw everything to the end of the animation. I am also detecting events in there as well. What is a better way of implementing the draw function such that it will draw one "step" at a time while checking for events? animating = False; while loop: check events: if not animating: # space or enter press will choose random start/end coords if enter_pressed or space_pressed: start, end = choose_coords route = find_route(start, end) draw(start, end, grid, route) else: # left click == generate an event to block the path # right click == user can choose a new destination if left_mouse_click: gen_event() reroute() elif right_mouse_click: new_end = new_end() new_start = current_pos() route = find_route(new_start, new_end) draw(new_start, new_end, grid, route) # draw out the grid def draw(start, end, grid, route_coord): # draw the end coords color = red; pick_image(screen, color, width*end[1],height*end[0]); pygame.display.flip(); # then draw the rest of the route for i in range(len(route_coord)): # pausing because we want animation time.sleep(speed); # get the x/y coords x,y = route_coord[i]; event_on = False; if grid[x][y] == 2: color = green; elif grid[x][y] == 3: color = blue; for event in pygame.event.get(): if event.type == pygame.MOUSEBUTTONDOWN: if event.button == 3: print "destination change detected, rerouting"; # get mouse position, px coords pos = pygame.mouse.get_pos(); # get grid coord c = pos[0] // width; r = pos[1] // height; grid[r][c] = 4; end = [r, c]; elif event.button == 1: print "user generated event"; pos = pygame.mouse.get_pos(); # get grid coord c = pos[0] // width; r = pos[1] // height; # mark it as a block for now grid[r][c] = 1; event_on = True; if check_events([x,y]) or event_on: # there is an event # mark it as a block for now grid[y][x] = 1; pick_image(screen, event_x, width*y, height*x); pygame.display.flip(); # then find a new route new_start = route_coord[i-1]; marked_grid, route_coord = find_route(new_start, end, grid); draw(new_start, end, grid, route_coord); return; # just end draw here so it wont throw the "index out of range" error elif grid[x][y] == 4: color = red; pick_image(screen, color, width*y, height*x); pygame.display.flip(); # clear route coord list, otherwise itll just add more unwanted coords route_coord_list[:] = [];

    Read the article

  • Moving player in the direction the camera is facing

    - by Samurai Fox
    I have a 3rd person camera which can rotate around the player. My problem is that wherever camera is facing, players forward is always the same direction. For example when camera is facing the right side of the player, when I press button to move forward, I want player to turn to the left and make that the "new forward". My camera script so far: using UnityEngine; using System.Collections; public class PlayerScript : MonoBehaviour { public float RotateSpeed = 150, MoveSpeed = 50; float DeltaTime; void Update() { DeltaTime = Time.deltaTime; transform.Rotate(0, Input.GetAxis("LeftX") * RotateSpeed * DeltaTime, 0); transform.Translate(0, 0, -Input.GetAxis("LeftY") * MoveSpeed * DeltaTime); } } public class CameraScript : MonoBehaviour { public GameObject Target; public float RotateSpeed = 170, FollowDistance = 20, FollowHeight = 10; float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight, CurrentRotationAngle, CurrentHeight, Yaw, Pitch; Quaternion CurrentRotation; void LateUpdate() { RotateSpeedPerTime = RotateSpeed * Time.deltaTime; DesiredRotationAngle = Target.transform.eulerAngles.y; DesiredHeight = Target.transform.position.y + FollowHeight; CurrentRotationAngle = transform.eulerAngles.y; CurrentHeight = transform.position.y; CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0); CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0); CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0); transform.position = Target.transform.position; transform.position -= CurrentRotation * Vector3.forward * FollowDistance; transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z); Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime; Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime; transform.Translate(new Vector3(Yaw, -Pitch, 0)); transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z); transform.LookAt(Target.transform); } }
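
    One common approach (a hedged sketch, assuming this replaces the movement code in PlayerScript and the camera is tagged MainCamera) is to build the movement direction from the camera's flattened forward and right vectors, then rotate the player toward that direction and move along it, so "forward" always means "away from the camera":

        void Update()
        {
            float h = Input.GetAxis("LeftX");
            float v = Input.GetAxis("LeftY");

            // Camera axes projected onto the ground plane.
            Vector3 camForward = Camera.main.transform.forward;
            Vector3 camRight = Camera.main.transform.right;
            camForward.y = 0f; camRight.y = 0f;
            camForward.Normalize(); camRight.Normalize();

            Vector3 moveDir = camForward * v + camRight * h;
            if (moveDir.sqrMagnitude > 0.01f)
            {
                // Turn toward the camera-relative direction and walk along it.
                transform.rotation = Quaternion.Slerp(transform.rotation,
                    Quaternion.LookRotation(moveDir), Time.deltaTime * 10f);
                transform.position += moveDir.normalized * MoveSpeed * Time.deltaTime;
            }
        }

    Depending on how the axes are wired up, v may need its sign flipped to match the original -LeftY convention.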

    Read the article

  • Align tetrahedrons

    - by thedeadlybutter
    I'm currently generating tetrahedron meshes in Unity. When a player clicks the side of a mesh, a new one spawns aligned with it, like this. I'm not sure how to do this, nor can I find any information on implementing a tetrahedron grid. I tried playing around with the vertices until I realized I need to adjust position & rotation. Any ideas? EDIT: To be clear, the second image shows manually placed objects in the Unity Editor. I'm looking to make an algorithm that places the meshes correctly.
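
    For two tetrahedra glued face to face, the spawned mesh is the mirror image of the clicked one across the shared face's plane, so one way to place it is to reflect the remaining (apex) vertex through that plane and build the new tetrahedron from the three shared vertices plus the reflected apex. A rough sketch (hypothetical helper, world-space vertex positions):

        // a, b, c = the three vertices of the clicked face; apex = the clicked mesh's remaining vertex.
        Vector3 MirroredApex(Vector3 a, Vector3 b, Vector3 c, Vector3 apex)
        {
            Vector3 n = Vector3.Cross(b - a, c - a).normalized; // unit normal of the shared face
            float d = Vector3.Dot(apex - a, n);                 // signed distance of the apex from that plane
            return apex - 2f * d * n;                           // mirror the apex through the plane
        }

    The new tetrahedron then uses the vertices (a, b, c, MirroredApex(a, b, c, apex)); from those four points the spawn position and rotation (or the mesh itself) can be derived directly.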

    Read the article

  • What's a good entity hierarchy for a 2D game?

    - by futlib
    I'm in the process of building a new 2D game out of some code I wrote a while ago. The object hierarchy for entities is like this: Scene (e.g. MainMenu): Contains multiple entities and delegates update()/draw() to each Entity: Base class for all things in a scene (e.g. MenuItem or Alien) Sprite: Base class for all entities that just draw a texture, i.e. don't have their own drawing logic Does it make sense to split up entities and sprites up like that? I think in a 2D game, the terms entity and sprite are somewhat synonymous, right? But I do believe that I need some base class for entities that just draw a texture, as opposed to drawing themselves, to avoid duplication. Most entities are like that. One weird case is my Text class: It derives from Sprite, which accepts either the path of an image or an already loaded texture in its constructor. Text loads a texture in its constructor and passes that to Sprite. Can you outline a design that makes more sense? Or point me to a good object-oriented reference code base for a 2D game? I could only find 3D engine code bases of decent code quality, e.g. Doom 3 and HPL1Engine.
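
    For what it's worth, the split described above can stay quite small. A hedged sketch of one possible shape for it (type names here are illustrative stand-ins, not from the original code base): Entity owns the update/draw contract, Sprite adds the "just draw a texture" behavior, and Text builds its texture before handing it to Sprite.

        public abstract class Entity
        {
            public virtual void Update(float dt) { }
            public abstract void Draw(Renderer renderer);   // Renderer is a stand-in for the engine's draw API
        }

        public class Sprite : Entity
        {
            protected Texture texture;                      // Texture is likewise a stand-in type
            public float X, Y;

            public Sprite(Texture texture) { this.texture = texture; }

            public override void Draw(Renderer renderer)
            {
                renderer.Draw(texture, X, Y);
            }
        }

        public class Text : Sprite
        {
            // Build the texture first, then let Sprite handle all of the drawing.
            public Text(string content, Font font) : base(font.Render(content)) { }
        }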

    Read the article

  • Is there any heuristic to polygonize a closed 2D raster shape with n triangles?

    - by Arthur Wulf White
    Let's say we have a 2D image, black on white, that shows a closed geometric shape. Is there any (non-naive, non-brute-force) algorithm that approximates that shape as closely as possible with n triangles? If you want a formal definition of "as closely as possible": approximate the shape with a polygon that, when rendered into a new 2D image, will match the largest number of pixels possible with the original image.

    Read the article

  • Trying to detect collision between two polygons using Separating Axis Theorem

    - by Holly
    The only collision experience i've had was with simple rectangles, i wanted to find something that would allow me to define polygonal areas for collision and have been trying to make sense of SAT using these two links Though i'm a bit iffy with the math for the most part i feel like i understand the theory! Except my implementation somewhere down the line must be off as: (excuse the hideous font) As mentioned above i have defined a CollisionPolygon class where most of my theory is implemented and then have a helper class called Vect which was meant to be for Vectors but has also been used to contain a vertex given that both just have two float values. I've tried stepping through the function and inspecting the values to solve things but given so many axes and vectors and new math to work out as i go i'm struggling to find the erroneous calculation(s) and would really appreciate any help. Apologies if this is not suitable as a question! CollisionPolygon.java: package biz.hireholly.gameplay; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import biz.hireholly.gameplay.Types.Vect; public class CollisionPolygon { Paint paint; private Vect[] vertices; private Vect[] separationAxes; CollisionPolygon(Vect[] vertices){ this.vertices = vertices; //compute edges and separations axes separationAxes = new Vect[vertices.length]; for (int i = 0; i < vertices.length; i++) { // get the current vertex Vect p1 = vertices[i]; // get the next vertex Vect p2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; // subtract the two to get the edge vector Vect edge = p1.subtract(p2); // get either perpendicular vector Vect normal = edge.perp(); // the perp method is just (x, y) => (-y, x) or (y, -x) separationAxes[i] = normal; } paint = new Paint(); paint.setColor(Color.RED); } public void draw(Canvas c, int xPos, int yPos){ for (int i = 0; i < vertices.length; i++) { Vect v1 = vertices[i]; Vect v2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; c.drawLine( xPos + v1.x, yPos + v1.y, xPos + v2.x, yPos + v2.y, paint); } } /* consider changing to a static function */ public boolean intersects(CollisionPolygon p){ // loop over this polygons separation exes for (Vect axis : separationAxes) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // loop over the other polygons separation axes Vect[] sepAxesOther = p.getSeparationAxes(); for (Vect axis : sepAxesOther) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // if we get here then we know that every axis had overlap on it // so we can guarantee an intersection return true; } /* Note projections wont actually be acurate if the axes aren't normalised * but that's not necessary since we just need a boolean return from our * intersects not a Minimum Translation Vector. 
*/ private Vect minMaxProjection(Vect axis) { float min = axis.dot(vertices[0]); float max = min; for (int i = 1; i < vertices.length; i++) { float p = axis.dot(vertices[i]); if (p < min) { min = p; } else if (p > max) { max = p; } } Vect minMaxProj = new Vect(min, max); return minMaxProj; } public Vect[] getSeparationAxes() { return separationAxes; } public Vect[] getVertices() { return vertices; } } Vect.java: package biz.hireholly.gameplay.Types; /* NOTE: Can also be used to hold vertices! Projections, coordinates ect */ public class Vect{ public float x; public float y; public Vect(float x, float y){ this.x = x; this.y = y; } public Vect perp() { return new Vect(-y, x); } public Vect subtract(Vect other) { return new Vect(x - other.x, y - other.y); } public boolean overlap(Vect other) { if( other.x <= y || other.y >= x){ return true; } return false; } /* used specifically for my SAT implementation which i'm figuring out as i go, * references for later.. * http://www.gamedev.net/page/resources/_/technical/game-programming/2d-rotated-rectangle-collision-r2604 * http://www.codezealot.org/archives/55 */ public float scalarDotProjection(Vect other) { //multiplier = dot product / length^2 float multiplier = dot(other) / (x*x + y*y); //to get the x/y of the projection vector multiply by x/y of axis float projX = multiplier * x; float projY = multiplier * y; //we want to return the dot product of the projection, it's meaningless but useful in our SAT case return dot(new Vect(projX,projY)); } public float dot(Vect other){ return (other.x*x + other.y*y); } }
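
    One detail worth double-checking in implementations like this is the 1D overlap test on the projected intervals: with each projection stored as (min, max), two intervals overlap only when each minimum lies at or below the other interval's maximum, i.e. both conditions must hold at once. A small hedged sketch of that check (not the original code's API):

        // minA/maxA and minB/maxB are the projection extents of the two polygons on one axis.
        static bool Overlaps(float minA, float maxA, float minB, float maxB)
        {
            return minA <= maxB && minB <= maxA;
        }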

    Read the article

  • Distance between two 3D objects' faces

    - by Arthur Gibraltar
    I'm really a newbie at programming and I'm making some tests. I couldn't find anywhere on the Internet how I could calculate the distance between two 3D objects' faces. Is there any way? As an example, I have two 3D cubes. Each one has a Vector3 position designating its center in 3D space and an orientation matrix. And each cube has a size (float width, float height and float length). I could get a simple distance between them by calling Vector3.Distance(), but it doesn't consider their sizes, just the positions, so the distance would be between their centers. Is there any way to calculate the distance between the faces? Thanks for any reply.

    Read the article

  • Calculating 3D camera positions from a video

    - by Geotarget
    I need to calculate the 3D camera position and rotation for each frame in a given video. This is typically used for motion-tracking, and to insert 3D objects into a video. I'm currently using VideoTrace to calculate this for me, and I'm getting the data exported as a 3DS Maxscript file. However when I try to use the 3D camera rotations, I'm getting strange errors in my 3D calculations, as if there is an error with the 3x3 rotation matrices. Can you spot any error with the data itself? Or is it my other calculations that are erroneous? frame 1 rotation=(matrix3[-0.011938, 0.756018, -0.654442][-0.382040, -0.608284, -0.695727][-0.924068, 0.241718, 0.296091][0, 0, 0]).rotationpart position=[-0.767177, 0.308723, -0.232722] fov=57.352135 frame 2 rotation=(matrix3[-0.460922, -0.726580, -0.509541][-0.200163, 0.644491, -0.737947][ 0.864572, -0.238145, -0.442495][0, 0, 0]).rotationpart position=[-0.856630, 0.198654, -0.243853] fov=57.352135
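
    When tracked camera data misbehaves like this, one cheap sanity check is whether each exported 3x3 block really is a pure rotation: its rows should be unit length and mutually orthogonal, and its determinant should be +1 (a determinant of -1 means a reflection, which typically shows up as mirrored or otherwise strange orientations). A small hedged sketch of such a check:

        // m is a 3x3 matrix; Math is System.Math.
        static bool IsRotationMatrix(double[,] m, double eps = 1e-3)
        {
            // R * R^T should be the identity (rows orthonormal).
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                {
                    double dot = 0.0;
                    for (int k = 0; k < 3; k++) dot += m[i, k] * m[j, k];
                    if (Math.Abs(dot - (i == j ? 1.0 : 0.0)) > eps) return false;
                }

            // The determinant should be +1, not -1 (which would be a reflection).
            double det = m[0, 0] * (m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1])
                       - m[0, 1] * (m[1, 0] * m[2, 2] - m[1, 2] * m[2, 0])
                       + m[0, 2] * (m[1, 0] * m[2, 1] - m[1, 1] * m[2, 0]);
            return Math.Abs(det - 1.0) < eps;
        }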

    Read the article

  • How can I apply different actions to different parts of a 2D character?

    - by Praveen Sharath
    I am developing a 2D platform game in Java. The player has a gun in his hand at all times. He needs to walk and shoot with the gun (arrow keys to walk and the X key to shoot). The walk cycle takes 6 frames and I am able to import the sprite sheet and animate the sequence when I press an arrow key. But I need to add the gun motion. The player holds the gun upwards, and when the X key is pressed he brings it straight and shoots. How do I implement the walk + shoot action?

    Read the article

  • Embed IF text parser in another game?

    - by DragonFax
    Are there any existing interactive fiction text-parsing engines that I can embed in another game or application? I'm looking to use something as a library. I can pass it the available objects and verbs from my own side. It will parse the sentences from the user and give me back some sort of structure/AST describing what the user asked for. Then my own code can act upon that request. I don't need something Siri-level; the simple sentences and actions that current IF games support are fine. But I'm not looking to write a whole text/sentence parser myself. This isn't an IF game and I can't write it entirely in an interactive-fiction language like Inform 7. Unfortunately, I can't seem to find any examples of anyone using the text-parsing capabilities of these engines without writing the entire game in that engine's language.

    Read the article

  • How to retain the animated position in OpenGL ES 2.0

    - by Arun AC
    I am doing frame based animation for 300 frames in opengl es 2.0 I want a rectangle to translate by +200 pixels in X axis and also scaled up by double (2 units) in the first 100 frames Then, the animated rectangle has to stay there for the next 100 frames. Then, I want the same animated rectangle to translate by +200 pixels in X axis and also scaled down by half (0.5 units) in the last 100 frames. I am using simple linear interpolation to calculate the delta-animation value for each frame. Pseudo code: The below drawFrame() is executed for 300 times (300 frames) in a loop. float RectMVMatrix[4][4] = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 }; // identity matrix int totalframes = 300; float translate-delta; // interpolated translation value for each frame float scale-delta; // interpolated scale value for each frame // The usual code for draw is: void drawFrame(int iCurrentFrame) { // mySetIdentity(RectMVMatrix); // comment this line to retain the animated position. mytranslate(RectMVMatrix, translate-delta, X_AXIS); // to translate the mv matrix in x axis by translate-delta value myscale(RectMVMatrix, scale-delta); // to scale the mv matrix by scale-delta value ... // opengl calls glDrawArrays(...); eglswapbuffers(...); } The above code will work fine for first 100 frames. in order to retain the animated rectangle during the frames 101 to 200, i removed the "mySetIdentity(RectMVMatrix);" in the above drawFrame(). Now on entering the drawFrame() for the 2nd frame, the RectMVMatrix will have the animated value of first frame e.g. RectMVMatrix[4][4] = { 1.01, 0, 0, 2, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };// 2 pixels translation and 1.01 units scaling after first frame This RectMVMatrix is used for mytranslate() in 2nd frame. The translate function will affect the value of "RectMVMatrix[0][0]". Thus translation affects the scaling values also. Eventually output is getting wrong. How to retain the animated position without affecting the current ModelView matrix? =========================================== I got the solution... Thanks to Sergio. I created separate matrices for translation and scaling. e.g.CurrentTranslateMatrix[4][4], CurrentScaleMatrix[4][4]. Then for every frame, I reset 'CurrentTranslateMatrix' to identity and call mytranslate( CurrentTranslateMatrix, translate-delta, X_AXIS) function. I reset 'CurrentScaleMatrix' to identity and call myscale(CurrentScaleMatrix, scale-delta) function. Then, I multiplied these 'CurrentTranslateMatrix' and 'CurrentScaleMatrix' to get the final 'RectMVMatrix' Matrix for the frame. 
Pseudo Code: float RectMVMatrix[4][4] = {0}; float CurrentTranslateMatrix[4][4] = {0}; float CurrentScaleMatrix[4][4] = {0}; int iTotalFrames = 300; int iAnimationFrames = 100; int iTranslate_X = 200.0f; // in pixels float fScale_X = 2.0f; float scaleDelta; float translateDelta_X; void DrawRect(int iTotalFrames) { mySetIdentity(RectMVMatrix); for (int i = 0; i< iTotalFrames; i++) { DrawFrame(int iCurrentFrame); } } void getInterpolatedValue(int iStartFrame, int iEndFrame, int iTotalFrame, int iCurrentFrame, float *scaleDelta, float *translateDelta_X) { float fDelta = float ( (iCurrentFrame - iStartFrame) / (iEndFrame - iStartFrame)) float fStartX = 0.0f; float fEndX = ConvertPixelsToOpenGLUnit(iTranslate_X); *translateDelta_X = fStartX + fDelta * (fEndX - fStartX); float fStartScaleX = 1.0f; float fEndScaleX = fScale_X; *scaleDelta = fStartScaleX + fDelta * (fEndScaleX - fStartScaleX); } void DrawFrame(int iCurrentFrame) { getInterpolatedValue(0, iAnimationFrames, iTotalFrames, iCurrentFrame, &scaleDelta, &translateDelta_X) mySetIdentity(CurrentTranslateMatrix); myTranslate(RectMVMatrix, translateDelta_X, X_AXIS); // to translate the mv matrix in x axis by translate-delta value mySetIdentity(CurrentScaleMatrix); myScale(RectMVMatrix, scaleDelta); // to scale the mv matrix by scale-delta value myMultiplyMatrix(RectMVMatrix, CurrentTranslateMatrix, CurrentScaleMatrix);// RectMVMatrix = CurrentTranslateMatrix*CurrentScaleMatrix; ... // opengl calls glDrawArrays(...); eglswapbuffers(...); } I maintained this 'RectMVMatrix' value, if there is no animation for the current frame (e.g. 101th frame onwards). Thanks, Arun AC
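
    The fix described above boils down to never mutating an accumulated matrix: rebuild a fresh translation matrix and a fresh scale matrix from the interpolated values every frame and multiply them. As a hedged illustration of the same idea with a stock matrix type (System.Numerics here, rather than the custom my* helpers):

        using System.Numerics;

        // translateDeltaX and scaleDeltaX are the interpolated per-frame values.
        Matrix4x4 translation = Matrix4x4.CreateTranslation(translateDeltaX, 0f, 0f);
        Matrix4x4 scaling = Matrix4x4.CreateScale(scaleDeltaX, 1f, 1f);

        // Row-vector convention: the scale is applied first, then the translation.
        Matrix4x4 rectModelView = scaling * translation;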

    Read the article

  • How can I prevent seams from showing up on objects using lower mipmap levels?

    - by Shivan Dragon
    Disclaimer: kindly right-click on the images and open them separately so that they're at full size, as there are fine details which don't show up otherwise. Thank you. I made a simple Blender model, it's a cylinder with the top cap removed: I've exported the UVs: Then imported them into Photoshop, and painted the inner area in yellow and the outer area in red. I made sure I covered the UV lines well: I then save the image and load it as a texture on the model in Blender. Actually, I just reload it as the image where the UVs are exported, and change the viewport view mode to textured. When I look at the mesh up close, there's yellow everywhere, everything seems fine: However, if I start zooming out, I start seeing red (literally and metaphorically) where the texture edges are: And the more I zoom, the more I see it: The same thing happens in Unity, though the effect seems less pronounced. Up close it is fine and yellow: Zoom out and you see red at the seams: Now, obviously, for this simple example a workaround is to spread the yellow well outside the UV margins, and it's fine from all distances. However, this is an issue when you try to make a complex texture that should tile seamlessly at the edges. In this situation I either make a few lines of pixels overlap (in which case it looks bad from up close and OK from far away), or I leave them seamless and then I have those seams when seen from far away. So my question is, is there something I'm missing, or some extra thing I must do to have my texture look seamless from all distances?
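
    The usual fix is the workaround hinted at above, done systematically: add a padding "gutter" around every UV island by dilating the island colors outward a few pixels (texture bakers often call this margin or edge padding), so lower mip levels never average in the background color. A rough sketch of one dilation pass (assuming a Unity Texture2D whose background pixels are fully transparent rather than red):

        // One pass; run it repeatedly (e.g. 4-8 times) for a wider gutter, feeding the result back in.
        Color[] Dilate(Color[] src, int width, int height)
        {
            Color[] dst = (Color[])src.Clone();
            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    int i = y * width + x;
                    if (src[i].a > 0f) continue;                        // already inside a UV island
                    if (x > 0          && src[i - 1].a > 0f)     { dst[i] = src[i - 1];     continue; }
                    if (x < width - 1  && src[i + 1].a > 0f)     { dst[i] = src[i + 1];     continue; }
                    if (y > 0          && src[i - width].a > 0f) { dst[i] = src[i - width]; continue; }
                    if (y < height - 1 && src[i + width].a > 0f) { dst[i] = src[i + width]; }
                }
            }
            return dst;
        }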

    Read the article

  • What's the most efficient way to find barycentric coordinates?

    - by bobobobo
    In my profiler, finding barycentric coordinates is apparently somewhat of a bottleneck. I am looking to make it more efficient. It follows the method in shirley, where you compute the area of the triangles formed by embedding the point P inside the triangle. Code: Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const { Vector bary ; // The area of a triangle is real areaABC = DOT( normal, CROSS( (b - a), (c - a) ) ) ; real areaPBC = DOT( normal, CROSS( (b - P), (c - P) ) ) ; real areaPCA = DOT( normal, CROSS( (c - P), (a - P) ) ) ; bary.x = areaPBC / areaABC ; // alpha bary.y = areaPCA / areaABC ; // beta bary.z = 1.0f - bary.x - bary.y ; // gamma return bary ; } This method works, but I'm looking for a more efficient one!
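
    One commonly used cheaper formulation (the dot-product version described in Ericson's Real-Time Collision Detection) drops the cross products at query time, and d00, d01, d11 and the inverse denominator depend only on the triangle, so they can be cached per triangle. A hedged sketch using Vector3/Dot as stand-ins for the Vector/DOT helpers above:

        Vector3 Barycentric(Vector3 a, Vector3 b, Vector3 c, Vector3 p)
        {
            Vector3 v0 = b - a, v1 = c - a, v2 = p - a;
            float d00 = Vector3.Dot(v0, v0);
            float d01 = Vector3.Dot(v0, v1);
            float d11 = Vector3.Dot(v1, v1);
            float d20 = Vector3.Dot(v2, v0);
            float d21 = Vector3.Dot(v2, v1);
            float invDenom = 1.0f / (d00 * d11 - d01 * d01);    // d00, d01, d11, invDenom are all cacheable per triangle

            float beta  = (d11 * d20 - d01 * d21) * invDenom;
            float gamma = (d00 * d21 - d01 * d20) * invDenom;
            float alpha = 1.0f - beta - gamma;
            return new Vector3(alpha, beta, gamma);             // same (alpha, beta, gamma) layout as bary.x/y/z above
        }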

    Read the article

  • Protection against CheatEngine and other injectors [duplicate]

    - by Lucas
    This question already has an answer here: Strategies to Defeat Memory Editors for Cheating - Desktop Games (10 answers). Is protection against CheatEngine and other injection tools possible? I've been thinking about it for a day, and the only idea I've got is to write some small application which will scan the running processes every second, and if any injector is found, the game client will exit immediately. I'm writing here to get your opinions on this, as some of you may have experience protecting game clients against DLL or PYC injection or something similar.

    Read the article

  • How is this lighting effect done?

    - by Mike
    This is the most beautiful 2D lighting I have ever seen. Does anyone know how he went about doing it? http://www.youtube.com/watch?v=BIQRhOFkvQY http://www.youtube.com/watch?v=tnTYXPuecMs http://www.youtube.com/watch?v=rhC_jVM8IYU http://www.youtube.com/watch?v=_Aw5BdjWqqU Or download it here: http://grantkot.com/PollutedPlanet/publish.htm Edit: I am not asking how the particles are simulated; I don't care about the physics.

    Read the article

  • Can't spot the error when trying to increment

    - by Kevin Jensen Petersen
    I really can't spot the error, or the misspelling. This script should increase the variable currentTime with 1 every second, as long as i am holding the Space button down. This is Unity C#. using UnityEngine; using System.Collections; public class GameTimer : MonoBehaviour { //Timer private bool isTimeDone; public GUIText counter; public int currentTime; private bool starting; //Each message will be shown random each 20 seconds. public string[] messages; public GUIText msg; //To check if this is the end private bool end; void Update () { counter.guiText.text = currentTime.ToString(); if(Input.GetKey(KeyCode.Space)) { if(starting == false) { starting = true; } if(end == false) { if(isTimeDone) { StartCoroutine(timer()); } } else { msg.guiText.text = "You think you can do better? Press 'R' to Try again!"; if(Input.GetKeyDown(KeyCode.R)) { Application.LoadLevel(Application.loadedLevel); } } } if(!Input.GetKey(KeyCode.Space) & starting) { end = true; } } IEnumerator timer() { isTimeDone = false; yield return new WaitForSeconds(1); currentTime++; isTimeDone = true; } }
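
    One thing that stands out: isTimeDone starts out false and is only ever set inside timer(), so StartCoroutine(timer()) never actually runs. A coroutine-free sketch of the same idea sidesteps that bookkeeping entirely: accumulate Time.deltaTime while Space is held and bump the counter every whole second (reusing the existing currentTime and counter fields; call this from Update()):

        float elapsed;   // seconds accumulated toward the next tick

        void UpdateTimer()
        {
            if (Input.GetKey(KeyCode.Space))
            {
                elapsed += Time.deltaTime;
                if (elapsed >= 1f)
                {
                    elapsed -= 1f;
                    currentTime++;
                }
            }
            counter.text = currentTime.ToString();
        }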

    Read the article

  • How does the AlphaBlend BlendState work in XNA when accumulating light into a RenderTarget?

    - by cubrman
    I am using a Deferred Rendering engine from Catalin Zima's tutorial: His lighting shader returns the color of the light in the rgb channels and the specular component in the alpha channel. Here is how light gets accumulated: Game.GraphicsDevice.SetRenderTarget(LightRT); Game.GraphicsDevice.Clear(Color.Transparent); Game.GraphicsDevice.BlendState = BlendState.AlphaBlend; // Continuously draw 3d spheres with lighting pixel shader. ... Game.GraphicsDevice.BlendState = BlendState.Opaque; MSDN states that AlphaBlend field of the BlendState class uses the next formula for alphablending: (source × Blend.SourceAlpha) + (destination × Blend.InvSourceAlpha), where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel in the rendertarget. My question is why do my colors are accumulated correctly in the Light rendertarget even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader: float specularLight = 0; float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight); if (light4.a == 0) light4 = 0; return light4; This prevents lighting from getting accumulated and, subsequently, drawn on the screen. But when I do the following: float specularLight = 0; float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight); return light4; The light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above: (source x 0) + (destination x 1) should equal destination, so the "LightRT" rendertarget must not change when I draw light spheres into it! It feels like the GPU is using the Additive blend instead: (source × Blend.One) + (destination × Blend.One)
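
    If I recall the XNA 4.0 defaults correctly, this is explained by BlendState.AlphaBlend being a premultiplied-alpha state: its color source blend is Blend.One rather than Blend.SourceAlpha, so with a source alpha of 0 the blend degenerates to source + destination, i.e. additive accumulation, which is exactly what a light buffer wants. A state matching the quoted SourceAlpha/InvSourceAlpha formula would have to be built explicitly (this is essentially BlendState.NonPremultiplied); a hedged sketch:

        // Property names as in XNA 4.0's BlendState (verify against your installed version).
        BlendState nonPremultiplied = new BlendState
        {
            ColorSourceBlend = Blend.SourceAlpha,
            ColorDestinationBlend = Blend.InverseSourceAlpha,
            AlphaSourceBlend = Blend.SourceAlpha,
            AlphaDestinationBlend = Blend.InverseSourceAlpha
        };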

    Read the article

  • How to expose game data in the game without a singleton?

    - by zardon
    I'm quite new to cocos2d and games programming, and I am currently writing a game that is in the prototype stage. Everything is going okay, but I've realized a potentially big problem and I am not sure how to solve it. I am using a singleton to store a bunch of arrays for everything: a global list of planets, a global list of troops, a global list of products, etc. And only now I'm realizing that all of this will be in memory, and this is the wrong way to do it. I am not storing files or anything on disk just yet, with the exception of a save/load state, which is a capture of everything. My game makes use of a map which allows you to select a planet, then gives you a breakdown of that planet's troops and resources. Let's use this scenario: my game has 20 planets, on each of which you can have 20 troops. Straight away that's an array of 400! This does not include the NPCs, which add another 10 per planet, so 20x10 = 200. So now we have 600, all in arrays inside a singleton. This is obviously very bad, and very wrong, especially as the game scales in the amount of data. But I need to expose pretty much everything, especially on the map page, and I am not sure how else to do it. I've been told that I can use a controller for the map page which has the information I need for each planet, and other controllers for other items I require global display for. I've also thought about storing each planet's data in a save file, using initWithCoder; however, there could be a boatload of files on the user's device. I really don't want to use a database, mainly because I would need to translate NSObjects and non-NSObjects like CGRects and CGPoints and Colors into/from SQL. I am open to other ideas on how to store and read game data so I don't have to use a singleton to store everything, everywhere. Thanks for your time.

    Read the article
