Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.

Page 481/1016 | < Previous Page | 477 478 479 480 481 482 483 484 485 486 487 488  | Next Page >

  • Shader inputs in a general purpose engine

    - by dreta
    I'm not very familiar with SDKs like Unity or UDK, so I can't check this offhand. Do general purpose engines allow users to create custom uniform variables? The way I see it, and the way I have implemented it in an engine I'm writing to learn 3D, is that the engine provides a fixed "set" of uniforms, and if you want to write a custom shader you work with the uniforms you need to create the desired effect. Now, the thing is, first of all I'm not an artist, and second, I haven't had a chance to create complex scenes yet. So my question is: is it common practice to define the variables that the engine provides and only allow the user to work with what they're given? Allowing users to add custom programs and use them where they want is not hard, but I have trouble imagining how you'd go about doing the same for uniforms.
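
    One common approach (a hedged sketch only, not how any particular engine actually does it) is to let a material carry a table of named parameters that the renderer pushes to the shader just before drawing; every class, interface and method name below is hypothetical.

    using System.Collections.Generic;

    // Hypothetical material type: engine-provided uniforms (view, projection, time)
    // are set by the renderer, while user-defined uniforms live in these tables.
    public class Material
    {
        public string ShaderName;
        public readonly Dictionary<string, float> Floats = new Dictionary<string, float>();
        public readonly Dictionary<string, float[]> Vectors = new Dictionary<string, float[]>();

        public void SetFloat(string uniformName, float value) { Floats[uniformName] = value; }
        public void SetVector(string uniformName, float[] value) { Vectors[uniformName] = value; }
    }

    // Hypothetical shader abstraction the engine would already have.
    public interface IShaderProgram
    {
        void SetUniform(string name, float value);
        void SetUniform(string name, float[] value);
    }

    // Hypothetical renderer hook: called once per draw call.
    public static class UniformBinder
    {
        public static void Apply(Material material, IShaderProgram program)
        {
            foreach (var pair in material.Floats) program.SetUniform(pair.Key, pair.Value);
            foreach (var pair in material.Vectors) program.SetUniform(pair.Key, pair.Value);
        }
    }

    With this kind of split, the engine still owns its built-in uniforms, but artists can attach arbitrary named values to a material without the engine having to know about them in advance.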

    Read the article

  • How to modify shadow mapping in "3D Graphics with XNA Game Studio 4.0"?

    - by naprox
    I've been following the tutorials from Sean James's book "3D Graphics with XNA Game Studio 4.0", and was doing fine until I reached the shadow mapping part. The book creates point lights with a sphere model. My first question is: how do I draw a directional light with this framework? Secondly, it does shadow mapping for only one light; how can I do shadow mapping for all, or some, of the lights in the game? I just want to know how to modify the code to do the above. I've followed tutorials on MSDN and some other sites and didn't get an answer. Please help; it's fairly urgent. If anyone wants it, the complete code is here: http://www.mediafire.com/?6ct11mc1g8f891h
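
    For the directional-light part, a common approach is to replace the point light's perspective shadow projection with an orthographic one looking along the light direction. The sketch below is a hedged, generic XNA-style helper, not code from the book; the variable names and the scene-bounds parameters are illustrative assumptions.

    using Microsoft.Xna.Framework;

    public static class DirectionalShadow
    {
        // Builds a light view/projection for a directional light by placing a
        // virtual camera far away along the light direction, looking at the scene
        // center, with an orthographic projection covering the visible area.
        public static Matrix CreateLightViewProjection(Vector3 lightDirection,
                                                       Vector3 sceneCenter,
                                                       float sceneRadius)
        {
            Vector3 dir = Vector3.Normalize(lightDirection);
            Vector3 lightPosition = sceneCenter - dir * sceneRadius * 2f;

            // Note: pick a different up vector if the light points straight down.
            Matrix view = Matrix.CreateLookAt(lightPosition, sceneCenter, Vector3.Up);
            Matrix projection = Matrix.CreateOrthographic(sceneRadius * 2f,
                                                          sceneRadius * 2f,
                                                          0.1f,
                                                          sceneRadius * 4f);
            return view * projection;
        }
    }

    For several shadow-casting lights, the usual approach is to render one shadow map per light each frame (or only for the few most important lights) and sample the matching map when shading each light's contribution.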

    Read the article

  • GestureListener's fling method doesn't get called

    - by nosferat
    I'm using the SimpleDirectionGestureDetector from the libgdx-users wiki as my InputProcessor. I set it in the create() method: Gdx.input.setInputProcessor(new SimpleDirectionGestureDetector(charController)); charController is my class which implements the DirectionListener interface defined in the SimpleDirectionGestureDetector class, and it is responsible for moving the player character. However, the character doesn't change direction when I perform a fling in any direction. I've checked, and the fling() method in SimpleDirectionGestureDetector doesn't get called, and I have no idea why, since everything looks right. What am I doing wrong?

    Read the article

  • Bouncing ball issue

    - by user
    I am currently working on 2D bouncing-ball physics that bounces the ball up and down. The behaviour works fine, but at the end the velocity keeps alternating between +3 and 0 non-stop even though the ball has stopped bouncing. How should I modify the code to fix this?

    ballPos = D3DXVECTOR2( 50, 100 );
    velocity = 0;
    accelaration = 3.0f;
    isBallUp = false;

    void GameClass::Update()
    {
        velocity += accelaration;
        ballPos.y += velocity;

        if ( ballPos.y >= 590 )
            isBallUp = true;
        else
            isBallUp = false;

        if ( isBallUp )
        {
            ballPos.y = 590;
            velocity *= -1;
        }

        // Graphics Rendering
        m_Graphics.BeginFrame();
        ComposeFrame();
        m_Graphics.EndFrame();
    }
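
    The alternating values come from applying gravity every frame even after the bounce has died out: the ball sits on the floor, gains velocity 3 from the acceleration, gets clamped back, and so on. One common fix, shown here as a hedged C#-style sketch rather than the asker's D3DX code (the constants, names and the rest-threshold idea are all illustrative), is to apply a restitution factor on impact and put the ball to sleep once its impact speed drops below a small threshold:

    public class BouncingBall
    {
        const float Gravity = 3.0f;
        const float Restitution = 0.8f;   // fraction of speed kept after each bounce
        const float RestThreshold = 3.5f; // impact speeds below this stop the ball
        const float FloorY = 590f;

        float velocity = 0f;    // positive = moving down, matching the original code
        float ballPosY = 100f;
        bool atRest = false;

        public void Update()
        {
            if (atRest) return;  // stop integrating once the ball has settled

            velocity += Gravity;
            ballPosY += velocity;

            if (ballPosY >= FloorY)
            {
                ballPosY = FloorY;

                if (velocity < RestThreshold)
                {
                    velocity = 0f;   // too slow to bounce again: put the ball to sleep
                    atRest = true;
                }
                else
                {
                    velocity = -velocity * Restitution; // bounce with some energy loss
                }
            }
        }
    }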

    Read the article

  • How to set density for each shape in PhysX 3.1

    - by hywei
    I'm using PhysX 3.1 as my game's physics engine. One requirement is that I need to set a different density for each shape (there are several shapes on my single rigid actor). I know that a shape's density could be set via NxShapeDesc::density in PhysX 2.8, but I can't find such an interface in PhysX 3.1. I know that the mass properties can be set in PhysX 3.1 as in the snowman example in the SDK, but I don't know whether there is a direct interface to set the density for each shape.

    Read the article

  • What are effective marketing strategies for iPhone games?

    - by Artemix
    Long story short: a few days ago I published an iPhone game. I think the game wasn't that bad, to be honest, and still I got only 10 sales at $0.99. Are there any publishers, sponsors, or distributors that can make your game "visible" on the App Store, or is the only thing you need an amazing game and that's all? I suspect that even if you have an awesome game, if you don't get the "marketing magic" right you simply won't exist in the store. Now I'm making a second, completely different game, and I want to do things right this time. If anyone knows something about this topic, let me know.

    Read the article

  • Collision detection with non-rectangular images

    - by Adam Smith
    I'm creating a game and I need to detect collisions between a character and parts of the environment. Since my character's frames are taken from a sprite sheet with a transparent background, I'm wondering how I should go about detecting collisions between a wall and my character only where the overlapping parts are non-transparent in both images. I thought about first checking whether the character's rectangle touches a tile's rectangle and then comparing the alpha channels, but then I have another choice to make. Either I test every single pixel against every single pixel in the other image and report a collision if any pair overlaps; that would be terribly inefficient. The other option would be to keep the x,y positions of the leftmost, rightmost, etc. non-transparent pixels of each image and compare those instead. The problem with this one is that, for instance, the character's hand could be above a tile (so it would be in a transparent zone of the tile) while a pixel that is not the rightmost touches part of the tile without being detected. Another problem is that in different frames the rightmost, leftmost, etc. pixels might not be at the same position. Should I not bother with any of that and just check collisions on the rectangles? It would be simpler, but I'm afraid people will feel that collisions sometimes happen when they shouldn't.
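
    A common middle ground, roughly what the XNA per-pixel collision sample does (this is a hedged sketch from memory, not the sample verbatim), is to test alpha values only inside the intersection of the two bounding rectangles, using colour data fetched once at load time. It assumes each sprite is drawn unscaled and that the colour arrays contain just the current frame's pixels.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public static class PixelCollision
    {
        // Fetch once at load time; GetData is far too slow to call every frame.
        public static Color[] GetColorData(Texture2D texture)
        {
            Color[] data = new Color[texture.Width * texture.Height];
            texture.GetData(data);
            return data;
        }

        // rectA/rectB are the sprites' on-screen rectangles; dataA/dataB are the
        // cached colour arrays; widthA/widthB are the corresponding image widths.
        public static bool Collides(Rectangle rectA, Color[] dataA, int widthA,
                                    Rectangle rectB, Color[] dataB, int widthB)
        {
            Rectangle overlap = Rectangle.Intersect(rectA, rectB);
            if (overlap.IsEmpty)
                return false;

            for (int y = overlap.Top; y < overlap.Bottom; y++)
            {
                for (int x = overlap.Left; x < overlap.Right; x++)
                {
                    Color a = dataA[(x - rectA.Left) + (y - rectA.Top) * widthA];
                    Color b = dataB[(x - rectB.Left) + (y - rectB.Top) * widthB];

                    // Only a collision if both pixels are (mostly) opaque here.
                    if (a.A > 10 && b.A > 10)
                        return true;
                }
            }
            return false;
        }
    }

    Because only the overlap region is scanned, the per-pixel test is cheap in practice: most frames the rectangles don't even intersect, and when they do the loop covers a few hundred pixels rather than the whole sprites.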

    Read the article

  • What game systems exist which use camera input?

    - by Marc Pilgaard
    My group and I are in the middle of a semester project where we are researching which game systems use a camera as input or as an interactive medium. We would like some help listing game systems that use camera input, as it seems hard to find other examples. Currently we know that webcam browser games use camera input (the Newgrounds webcam games), as does the Xbox Kinect. I know this question seems rather vague, but I still hope some people are able to help.

    Read the article

  • Choosing a mobile advertising mediator over going it alone

    - by Notbad
    We have finished our first game for iOS/Android. We would like to give it away and add ads to it. I have been reading a lot about the subject, but it is a bit overwhelming for a beginner. From what I read, there seem to be some important points to take into consideration: 1) do as much localization as you can (target your audience with ads they could be interested in for the region they live in); 2) do not over-advertise in your application. At the moment we have decided to go with AdMob. It seems an easy option for beginners to set up and has a good set of ad networks. My question is: will we earn less for, say, iAds served through AdMob than by implementing iAds without a mediator? Does AdMob pay less than others (this is what I remember from some articles I read)? It would be nice to hear from people with experience on this to light our way a bit.

    Read the article

  • Java - Draw Cards and Eliminate Cards Problem

    - by Jen
    I am having a problem with this. I want a system inside a game wherein the player draws 2 cards randomly and the enemy draws 2 cards randomly. The program then prints to the console the cards the player drew and the enemy's. The draws must not conflict: no card may be drawn twice. Lastly, the program prints the one card that was drawn by neither the player nor the enemy. Here's how I did it, but it is lengthy and full of errors:

    import java.util.Random;

    public class Draw {
        public static Random random = new Random();
        public static String cards[] = {"Hall", "Kitchen", "Billiard", "Study", "Pool"};
        public static int playercounter;
        public static int enemycounter;
        public static String playercardA = null;
        public static String playercardB = null;
        public static String enemycardA = null;
        public static String enemycardB = null;
        public String lastcard = null;

        public static void playercardAdraw() {
            playercounter = random.nextInt(5);
            playercardA = cards[playercounter];
        }

        public static void playercardBdraw() {
            playercounter = random.nextInt(5);
            playercardB = cards[playercounter];
            if (playercardB == playercardA || playercardB == enemycardA || playercardB == enemycardB) {
                return;
            }
        }

        public static void enemycardAdraw() {
            enemycounter = random.nextInt(5);
            enemycardA = cards[enemycounter];
            if (enemycardA == playercardA || enemycardA == playercardB) {
                return;
            }
        }

        public static void enemycardBdraw() {
            enemycounter = random.nextInt(5);
            enemycardB = cards[enemycounter];
            if (enemycardB == playercardA || enemycardB == playercardB || enemycardB == enemycardA) {
                return;
            }
        }

        public static void main(String args[]) {
            System.out.println("Starting to draw...");
            System.out.println("Player's Turn: ");
            playercardAdraw();
            System.out.println("Player's first card: " + playercardA);
            playercardBdraw();
            System.out.println("Player's second card: " + playercardB);
            System.out.println("Enemy's Turn: ");
            enemycardAdraw();
            System.out.println("Enemy's first card: " + enemycardA);
            enemycardBdraw();
            System.out.println("Enemy's Second card: " + enemycardB);
        }
    }

    Read the article

  • How do I efficiently generate chunks to fill the entire screen when my player moves?

    - by Trixmix
    In my game I generate chunks as the player moves. The chunks are all generated on the fly; currently each one is just a simple flat 8x8 floor. When the player moves into a new chunk, the chunk in the direction of travel gets generated along with its neighboring chunks. This isn't good enough because the generator does not fill the entire screen, and the recursive approach I tried is not as fast as I would like. My question is: what would be an efficient way of doing this? How does Minecraft do it? By that I mean only the way it picks which chunks to generate and in what order, not how they are generated or how they are saved in regions. I just want to know a good way to load the chunks around the player.
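
    One straightforward scheme (a hedged sketch, not necessarily what Minecraft does; the view radius and names are assumptions) is to drop the recursion and simply walk every chunk coordinate inside a square view radius around the player whenever the player crosses a chunk boundary, generating the missing ones nearest-first:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public struct ChunkCoord
    {
        public int X, Z;
        public ChunkCoord(int x, int z) { X = x; Z = z; }
    }

    public class ChunkLoader
    {
        const int ViewRadius = 4; // chunks to keep loaded in each direction (assumption)
        readonly HashSet<(int, int)> loaded = new HashSet<(int, int)>();

        // Call whenever the player enters a new chunk.
        public IEnumerable<ChunkCoord> ChunksToGenerate(int playerChunkX, int playerChunkZ)
        {
            var missing = new List<ChunkCoord>();

            for (int dx = -ViewRadius; dx <= ViewRadius; dx++)
                for (int dz = -ViewRadius; dz <= ViewRadius; dz++)
                {
                    int cx = playerChunkX + dx;
                    int cz = playerChunkZ + dz;
                    if (!loaded.Contains((cx, cz)))
                        missing.Add(new ChunkCoord(cx, cz));
                }

            // Generate the closest chunks first so the area around the player
            // fills in before the edges of the view distance.
            return missing.OrderBy(c =>
                (c.X - playerChunkX) * (c.X - playerChunkX) +
                (c.Z - playerChunkZ) * (c.Z - playerChunkZ));
        }

        public void MarkLoaded(ChunkCoord c) => loaded.Add((c.X, c.Z));
    }

    Because the set of already-loaded chunks is checked up front, the loop is cheap even though it visits the whole square every time, and spreading the returned list over several frames keeps generation from causing hitches.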

    Read the article

  • Solution for lightweight LAN peer discovery?

    - by DevilWithin
    I built a library for purely cross-platform programming. Games made with it run fine on Android, PC, Linux, Mac, etc. The networking capabilities are provided by the ENet library, so all communication between my apps uses ENet's own protocol rather than plain TCP or UDP, even though it is ultimately based on UDP. I don't think it's possible to do what I want with ENet, which is why I'm asking here. Let's say I have the same game running on my Android phone, my laptop and my PC. They are all on the same Wi-Fi network, and therefore on a LAN, whether it's a Wi-Fi hotspot or the household router. I need each of those three peers to discover the other two on the network. This is meant only to find the IPs of live instances of the app on the LAN, so that they can host multiplayer games between them. The only effective way I can think of is a UDP broadcast followed by waiting for responses, but if that is the solution I need something small, since discovery is its only purpose. Another way could be to try to connect to every IP in the LAN subnet range, but I don't think the OS would be with me on that one :p Sorry for the long question!
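
    For reference, the UDP-broadcast approach really is small: every instance listens on a known discovery port and answers a short magic string, and whoever wants to find peers broadcasts that string and collects the replies. A hedged C# sketch of the idea (ENet itself doesn't provide this, so it would sit next to it on a plain socket; the port number and magic strings are arbitrary, and you'd likely want to ignore your own address in the replies):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    public static class LanDiscovery
    {
        const int DiscoveryPort = 47777;            // arbitrary: pick any free port
        static readonly byte[] Probe = Encoding.ASCII.GetBytes("MYGAME_DISCOVER");
        static readonly byte[] Reply = Encoding.ASCII.GetBytes("MYGAME_HERE");

        // Run this loop on a background thread in every game instance.
        public static void Listen()
        {
            using (var socket = new UdpClient(DiscoveryPort))
            {
                while (true)
                {
                    var sender = new IPEndPoint(IPAddress.Any, 0);
                    byte[] data = socket.Receive(ref sender);
                    if (Encoding.ASCII.GetString(data) == "MYGAME_DISCOVER")
                        socket.Send(Reply, Reply.Length, sender);   // answer the probe
                }
            }
        }

        // Broadcast a probe and report whoever answers within the timeout.
        public static void FindPeers(Action<IPAddress> onPeerFound, int timeoutMs = 1000)
        {
            using (var socket = new UdpClient())
            {
                socket.EnableBroadcast = true;
                socket.Client.ReceiveTimeout = timeoutMs;
                socket.Send(Probe, Probe.Length, new IPEndPoint(IPAddress.Broadcast, DiscoveryPort));

                try
                {
                    while (true)
                    {
                        var peer = new IPEndPoint(IPAddress.Any, 0);
                        byte[] data = socket.Receive(ref peer);     // throws on timeout
                        if (Encoding.ASCII.GetString(data) == "MYGAME_HERE")
                            onPeerFound(peer.Address);
                    }
                }
                catch (SocketException) { /* timeout: finished collecting replies */ }
            }
        }
    }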

    Read the article

  • Rotate camera around player and set new forward directions

    - by Samurai Fox
    I have a third-person camera which can rotate around the player. When I look at the back of the player and press forward, the player goes forward. But if I then rotate 360 degrees around the player, the "forward direction" ends up tilted by 90 degrees, so every full 360-degree turn shifts the direction by another 90 degrees. What I want is, for example, when the camera is facing the right side of the player and I press forward, the player should turn to the left and make that the "new forward". I have a Player object with a Camera as a child object. The Camera object has the camera script below. The Player object itself has the input controller. I'm making this script primarily for a joystick/controller. My camera script so far:

    using UnityEngine;
    using System.Collections;

    public class CameraScript : MonoBehaviour
    {
        public GameObject Target;
        public float RotateSpeed = 10, FollowDistance = 20, FollowHeight = 10;

        float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight,
              CurrentRotationAngle, CurrentHeight, Yaw, Pitch;
        Quaternion CurrentRotation;

        void LateUpdate()
        {
            RotateSpeedPerTime = RotateSpeed * Time.deltaTime;
            DesiredRotationAngle = Target.transform.eulerAngles.y;
            DesiredHeight = Target.transform.position.y + FollowHeight;
            CurrentRotationAngle = transform.eulerAngles.y;
            CurrentHeight = transform.position.y;
            CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0);
            CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0);
            CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0);
            transform.position = Target.transform.position;
            transform.position -= CurrentRotation * Vector3.forward * FollowDistance;
            transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z);
            Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime;
            Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime;
            transform.Translate(new Vector3(Yaw, -Pitch, 0));
            transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z);
            transform.LookAt(Target.transform);
        }
    }
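
    For the "new forward" part, a common pattern (a hedged sketch, independent of the camera script above; the class and field names are illustrative) is to build the movement direction in the player's movement code from the camera's forward and right vectors, flattened onto the ground plane:

    using UnityEngine;

    public class CameraRelativeMovement : MonoBehaviour
    {
        public Transform CameraTransform;   // assign the camera here in the Inspector
        public float MoveSpeed = 5f;
        public float TurnSpeed = 10f;

        void Update()
        {
            // Flatten the camera's axes onto the ground plane so looking up or
            // down doesn't change the movement speed.
            Vector3 forward = CameraTransform.forward; forward.y = 0f; forward.Normalize();
            Vector3 right   = CameraTransform.right;   right.y = 0f;   right.Normalize();

            Vector3 input = forward * Input.GetAxis("Vertical") +
                            right   * Input.GetAxis("Horizontal");

            if (input.sqrMagnitude > 0.01f)
            {
                // Face the direction of movement, then move along it:
                // "forward" is always whatever the camera currently calls forward.
                Quaternion targetRotation = Quaternion.LookRotation(input);
                transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation,
                                                      TurnSpeed * Time.deltaTime);
                transform.position += input.normalized * MoveSpeed * Time.deltaTime;
            }
        }
    }

    Because the direction is recomputed from the camera every frame, rotating any number of times around the player can't accumulate an offset the way angle-based bookkeeping can.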

    Read the article

  • Per fragment lighting with OpenGL 4.x tessellated model

    - by Finlaybob
    I'm experienced with OpenGL 3+. I'm dabbling with tessellation shaders and have now got to the point where I have a nicely tessellated teapot/plane demo (quick look here). As can be seen from the screenshots, the lighting is broken (though admittedly it doesn't look too bad in the image). I've tried adding a normal map to the equation, but it still doesn't come out right; I can calculate the normals, tangents and binormals per triangle in the geometry shader, but it still looks wrong. I think the question is: how do I add per-fragment lighting to a tessellated model? The teapot is 32 16-point patches; the plane is a single 16-point patch. The shaders are here, but they are a complete mess, so I don't blame anyone who can't make sense of them. But peruse them at your leisure if you like. Also, if this question is better suited elsewhere, i.e. Stack Overflow or the Programming stack, please let me know.

    Read the article

  • Alternative to NV Occlusion Query - getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provide a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? ... ?

    Read the article

  • How exactly to implement multiple threads in a game

    - by xerwin
    I recently started learning Java, and having an interest in playing games as well as developing them, naturally I want to create a game in Java. I have experience with games in C# and C++, but all of them were simple single-threaded games. Now that I've learned how easy it is to create threads in Java, I want to take things to the next level. I started thinking about how I would actually implement threading in a game. I read a couple of articles that all say the same thing, "usually you have a thread for rendering, one for updating game logic, one for AI, ...", but I haven't found (or didn't look hard enough for) an example implementation. My idea for an implementation is something like this (example for AI):

    public class AIThread implements Runnable {
        private List<AI> ai;
        private Player player;
        /*...*/
        public void run() {
            for (int i = 0; i < ai.size(); i++) {
                ai.get(i).update(player);
            }
            Thread.sleep(/* sleep until the next game "tick" */);
        }
    }

    I think this could work if I also kept a list of the AI in the rendering and updating threads, since I need to draw the AI and calculate the logic between the player and the AI (though that could be moved into AIThread; this is just an example). Coming from C++ I'm used to doing things elegantly and efficiently, and this seems like neither. So what would be the correct way to handle this? Should I keep multiple copies of the resources in each thread, or should I keep the resources in one place, declared with the synchronized keyword? I'm afraid the latter could cause deadlocks, but I'm not yet qualified enough to know when code will produce a deadlock.

    Read the article

  • Determining relative velocities on impact?

    - by meds
    I'm trying to figure out a way to determine the relative velocity of a body colliding with another in a 2D environment. For example, if one body is moving at (1,0) and another travelling behind it collides with it from behind at (2,0), the velocity of the impact relative to the first body was (1,0). I need a method which takes two velocities, one belonging to the body the velocity is being measured against and the other belonging to the impacting body, and returns the relative velocity.
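
    Relative velocity is just the vector difference, so a sketch of such a method is essentially a one-liner (using XNA's Vector2 here only to have a concrete 2D vector type; any type with operator subtraction works the same way):

    using Microsoft.Xna.Framework;

    public static class Impacts
    {
        // Velocity of 'impactor' as seen from 'reference':
        // e.g. reference = (1,0), impactor = (2,0)  =>  relative = (1,0).
        public static Vector2 RelativeVelocity(Vector2 referenceVelocity, Vector2 impactorVelocity)
        {
            return impactorVelocity - referenceVelocity;
        }
    }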

    Read the article

  • Algorithm to simplify building/structural meshes

    - by morpheus
    I am looking for an algorithm to simplify the meshes of buildings or similar structures. EDIT: I had made a comment that Hoppe's algorithm tends to make meshes more and more spherical with simplification. But, I am not sure about it, so am deleting the comment. Buildings in contrast should tend to become more and more rectangular with increasing simplification. The D3DX extensions for D3D in version 9.0 (d3dx9.lib) used to have classes to do progressive mesh simplification. See: http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3dx/functions/mesh/d3dxgeneratepmesh.htm http://msdn.microsoft.com/en-us/library/windows/desktop/bb281243(v=vs.85).aspx

    Read the article

  • How to deal with animated doors in isometric tiles

    - by George Profenza
    I've got a tricky issue I'm not sure how best to tackle: I have an animated tile of a door. When it's closed it should be sorted one way, but when it's opened it needs to be sorted a different way, as if it belonged to a different (neighbouring) tile. Here's the door closed, and here it is opened (screenshots omitted). I imagine it would be possible to override the sorting system for such tiles and adjust the sorting based on the frame, but it feels a bit hacky. Has anyone encountered a similar scenario? Any elegant solutions?

    Read the article

  • My GLSL shader isn't compiling even though it should. What should I investigate?

    - by reapz
    I'm porting an iOS game to Android. One of the shaders I'm using wouldn't compile until I reduced the number of uniform variables. Here are the uniform definitions:

    uniform highp mat4 ViewProjMatrix;
    uniform mediump vec3 LightDirWorld;
    uniform mediump int BoneCount;
    uniform highp mat4 BoneMatrixArray[8];
    uniform highp mat3 BoneMatrixArrayIT[8];
    uniform mediump int LightCount;
    uniform mediump vec3 LightPos[4];                // This used to be 12, but now 4; next lines also
    uniform lowp vec3 LightColour[4];
    uniform mediump vec3 LightInnerOuterFalloff[4];

    My issue is that the GLSL shader wouldn't compile until I reduced the count of the above arrays from 12 to 4. My understanding is that if those three lines were arrays of 12 then I would be using 56 vertex uniform vectors. I query the system at startup (GL_MAX_VERTEX_UNIFORM_VECTORS) and it says that 128 are available. Why wouldn't it compile with 56? I'm having these issues on the Kindle Fire.
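
    For what it's worth, a rough count under the reference GLSL ES packing rules (each mat4 column, mat3 column, vec3 or int occupies one vec4 slot; actual packing is implementation-defined, and drivers may reserve some slots for internal use) comes in well under the reported limit of 128 either way:

    ViewProjMatrix (mat4)                 : 1 * 4  =  4 vectors
    BoneMatrixArray[8] (mat4)             : 8 * 4  = 32 vectors
    BoneMatrixArrayIT[8] (mat3)           : 8 * 3  = 24 vectors
    LightDirWorld + BoneCount + LightCount:           3 vectors
    three light arrays of 12 vec3 each    : 3 * 12 = 36 vectors   (total  99)
    three light arrays of  4 vec3 each    : 3 * 4  = 12 vectors   (total  75)

    So the nominal vector budget is unlikely to be the real constraint; the practical limit on a given driver can be lower than GL_MAX_VERTEX_UNIFORM_VECTORS suggests, particularly for large arrays of matrices.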

    Read the article

  • How can I generate signed distance fields in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime and then used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there on methods which are supposed to be viable in real-time environments, such as Chamfer distance transforms and Voronoi-diagram-approximation based transforms (as suggested in this presentation by the PixelJunk Shooter dev), but I (and presumably a lot of other people) have a very hard time actually putting them to use, since the papers are usually long, heavy on math and not very algorithmic in their explanations. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide deck, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time:

    Read the article

  • How to blend a sprite into the background?

    - by optimisez
    I'm trying to blend the character into the game, but I still cannot remove the blue color in the sprite sheet, and I've discovered that the white area of the sprite is semi-transparent. Initially the color key D3DCOLOR_XRGB(255, 255, 255) is passed to D3DXCreateTextureFromFileEx, and you can see the fireball through the sprite. After I change the color key to D3DCOLOR_XRGB(0, 255, 255), the result changes (screenshots omitted). Now I am trying to remove the blue color of the sprite sheet, and my expected result is the character with the blue background fully removed. I still cannot figure out how to do that. Any ideas?

    void initPlayer()
    {
        // Create texture.
        hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT,
                                         NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                                         D3DX_DEFAULT, D3DX_DEFAULT,
                                         D3DCOLOR_XRGB(0, 255, 255), NULL, NULL, &player);
    }

    void renderPlayer()
    {
        sprite->Draw(player, &playerRect, NULL,
                     &D3DXVECTOR3(playerDest.X, playerDest.Y, 0),
                     D3DCOLOR_XRGB(255, 255, 255));
    }

    void initFireball()
    {
        hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512, D3DX_DEFAULT,
                                         NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                                         D3DX_DEFAULT, D3DX_DEFAULT,
                                         D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &fireball);
    }

    void renderFireball()
    {
        sprite->Draw(fireball, &fireballRect, NULL,
                     &D3DXVECTOR3(fireballDest.X, fireballDest.Y, 0),
                     D3DCOLOR_XRGB(255, 255, 255));
    }

    Read the article

  • Why do we use the Pythagorean theorem in game physics?

    - by Starkers
    I've recently learned that we use the Pythagorean theorem a lot in our physics calculations, and I'm afraid I don't really get the point. Here's an example from a book, used to make sure an object doesn't travel faster than a MAXIMUM_VELOCITY constant in the horizontal plane:

    MAXIMUM_VELOCITY = <any number>;
    SQUARED_MAXIMUM_VELOCITY = MAXIMUM_VELOCITY * MAXIMUM_VELOCITY;

    function animate(){
        var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity);

        if( squared_horizontal_velocity <= SQUARED_MAXIMUM_VELOCITY ){
            scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY;
            x_velocity = x_velocity / scalar;
            z_velocity = x_velocity / scalar;
        }
    }

    Let's try this with some numbers: an object is attempting to move 5 units in x and 5 units in z. It should only be able to move 5 units horizontally in total!

    MAXIMUM_VELOCITY = 5;
    SQUARED_MAXIMUM_VELOCITY = 5 * 5;
    SQUARED_MAXIMUM_VELOCITY = 25;

    function animate(){
        var x_velocity = 5;
        var z_velocity = 5;
        var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity);
        var squared_horizontal_velocity = 5 * 5 + 5 * 5;
        var squared_horizontal_velocity = 25 + 25;
        var squared_horizontal_velocity = 50;

        // if( squared_horizontal_velocity <= SQUARED_MAXIMUM_VELOCITY ){
        if( 50 <= 25 ){
            scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY;
            scalar = 50 / 25;
            scalar = 2.0;

            x_velocity = x_velocity / scalar;
            x_velocity = 5 / 2.0;
            x_velocity = 2.5;

            z_velocity = z_velocity / scalar;
            z_velocity = 5 / 2.0;
            z_velocity = 2.5;

            // new_horizontal_velocity = x_velocity + z_velocity
            // new_horizontal_velocity = 2.5 + 2.5
            // new_horizontal_velocity = 5
        }
    }

    Now this works well, but we can do the same thing without Pythagoras:

    MAXIMUM_VELOCITY = 5;

    function animate(){
        var x_velocity = 5;
        var z_velocity = 5;
        var horizontal_velocity = x_velocity + z_velocity;
        var horizontal_velocity = 5 + 5;
        var horizontal_velocity = 10;

        // if( horizontal_velocity >= MAXIMUM_VELOCITY ){
        if( 10 >= 5 ){
            scalar = horizontal_velocity / MAXIMUM_VELOCITY;
            scalar = 10 / 5;
            scalar = 2.0;

            x_velocity = x_velocity / scalar;
            x_velocity = 5 / 2.0;
            x_velocity = 2.5;

            z_velocity = z_velocity / scalar;
            z_velocity = 5 / 2.0;
            z_velocity = 2.5;

            // new_horizontal_velocity = x_velocity + z_velocity
            // new_horizontal_velocity = 2.5 + 2.5
            // new_horizontal_velocity = 5
        }
    }

    Benefits of doing it without Pythagoras: fewer lines; within those lines it's easier to read what's going on; and it takes less time to compute, as there are fewer multiplications. It seems to me like computers and humans get a better deal without the Pythagorean theorem! However, I'm sure I'm wrong, as I've seen Pythagoras' theorem in a number of reputable places, so I'd like someone to explain the benefit of using it to a maths newbie. Does this have anything to do with unit vectors? To me, a unit vector is when we normalize a vector and turn it into a fraction by dividing the vector by a larger constant. I'm not sure what constant it is; the total size of the graph? Anyway, because it's a fraction, I take it a unit vector is basically a graph that can fit inside a 3D grid with the x-axis running from -1 to 1, the z-axis running from -1 to 1, and the y-axis running from -1 to 1. That's literally everything I know about unit vectors... not much :P And I fail to see their usefulness. Also, we're not really creating a unit vector in the above examples. Should I be determining the scalar like this:

    // a mathematical work-around of my own invention. There may be a cleverer way to do this!
    // I've also made up my own terms such as 'divisive_scalar', so don't bother googling them.
    var divisive_scalar = (squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY);
    var divisive_scalar = ( 50 / 25 );
    var divisive_scalar = 2;

    var multiplicative_scalar = (divisive_scalar / (2*divisive_scalar));
    var multiplicative_scalar = (2 / (2*2));
    var multiplicative_scalar = (2 / 4);
    var multiplicative_scalar = 0.5;

    x_velocity = x_velocity * multiplicative_scalar
    x_velocity = 5 * 0.5
    x_velocity = 2.5

    Again, I can't see why this is better, but it's more "unit-vector-y" because the multiplicative_scalar is a unit vector? As you can see, I use words such as "unit-vector-y", so I'm really not a maths whiz! I'm also aware that unit vectors might have nothing to do with the Pythagorean theorem, so ignore all of this if I'm barking up the wrong tree. I'm a very visual person (3D modeller and concept artist by trade!) and I find diagrams and graphs really, really helpful, so as many as humanly possible please!
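
    As a worked check on the example above (not from the book, just arithmetic): with x_velocity = 5 and z_velocity = 5 the object moves along the diagonal, and its actual horizontal speed is the length of the vector (5, 5), not the sum of its components:

    actual speed    = sqrt(x_velocity^2 + z_velocity^2)
                    = sqrt(5*5 + 5*5)
                    = sqrt(50)
                    ≈ 7.07 units per frame

    component sum   = x_velocity + z_velocity = 10   (overestimates the diagonal speed)

    After the sum-based clamp, the velocity (2.5, 2.5) has an actual length of sqrt(12.5) ≈ 3.54, so diagonal movement gets slowed far more than movement along a single axis. The Pythagorean length is what makes the speed cap behave the same in every direction.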

    Read the article

  • Profiling and containing memory per system

    - by chadb
    I have been interested in profiling and keeping a managed memory pool for each subsystem, so I can get statistics on how much memory is being used by something such as sound or graphics. However, what is the best design for doing this? I was thinking of using multiple allocators, one per subsystem; however, that would result in global variables for my allocators (or so it seems to me). Another approach I have seen or had suggested is to overload new and pass in an allocator as a parameter. I asked a similar question over on Stack Overflow with a bounty; however, it seems I was either too vague or there just aren't enough people with knowledge of the subject.

    Read the article

  • Keypress Left is called twice in Update when key is pressed only once

    - by Simran kaur
    I have a piece of code that changes the position of the player when the left arrow key is pressed. It is inside the Update() function. I know Update is called multiple times, but since I have an if statement that checks whether the left arrow is pressed, it should update only once per press. I have verified with a print statement that a single press triggers it twice. Problem: the position is updated twice when the key is pressed only once. Below is the structure of my code:

    void Update()
    {
        if (Input.GetKeyDown (KeyCode.LeftArrow))
        {
            print ("PRESSEEEEEEEEEEEEEEEEEEDDDDDDDDDDDDDD");
        }
    }

    I looked it up on the web and what was suggested is this:

    if (Event.current.type == EventType.KeyDown && Event.current.keyCode == KeyCode.LeftArrow)
    {
        print("pressed");
    }

    But it gives me an error that says: "Object reference not set to an instance of an object". How can I fix this?
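
    For reference, the NullReferenceException in the second snippet is most likely because Event.current is only populated while Unity is dispatching IMGUI events, so it is null inside Update(); the event-based check has to live in OnGUI(). A hedged sketch showing both placements (the class name is illustrative):

    using UnityEngine;

    public class LeftKeyListener : MonoBehaviour
    {
        // Polling approach: GetKeyDown is true for exactly one Update() per press.
        void Update()
        {
            if (Input.GetKeyDown(KeyCode.LeftArrow))
            {
                Debug.Log("left arrow pressed (Update)");
            }
        }

        // Event approach: Event.current is only valid here, not in Update().
        void OnGUI()
        {
            Event e = Event.current;
            if (e != null && e.type == EventType.KeyDown && e.keyCode == KeyCode.LeftArrow)
            {
                Debug.Log("left arrow pressed (OnGUI)");
            }
        }
    }

    If the position still changes twice per press with the polling version, it may also be worth checking that the script isn't attached to two GameObjects or that the movement isn't additionally applied somewhere else.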

    Read the article
