Search Results

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) to perform directional lighting in my vertex shader.

    The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking and not to actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader.

    For reference, here is pseudocode for my rendering loop:

        pMatrix = new Matrix();
        pMatrix = makePerspective(...);

        mvMatrix = new Matrix();
        camera.apply(mvMatrix); // Calls gluLookAt

        // Move the object into position.
        mvMatrix.translatev(position);
        mvMatrix.rotatef(rotation.x, 1, 0, 0);
        mvMatrix.rotatef(rotation.y, 0, 1, 0);
        mvMatrix.rotatef(rotation.z, 0, 0, 1);

        var nMatrix = new Matrix();
        nMatrix.set(mvMatrix.get().getInverse().getTranspose());

        // Set vertex shader uniforms.
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));

        // ...
        gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

        // Attributes
        attribute vec3 aVertexPosition;
        attribute vec4 aVertexColor;
        attribute vec3 aVertexNormal;

        // Uniforms
        uniform mat4 uMVMatrix;
        uniform mat4 uNMatrix;
        uniform mat4 uPMatrix;

        // Varyings
        varying vec4 vColor;

        // Constants
        const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
        const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

        float ComputeLighting() {
            vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
            transformedNormal = uNMatrix * transformedNormal;
            float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
            return max(base, 0.0);
        }

        void main(void) {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            float lightWeight = ComputeLighting();
            vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
        }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL if it is missing would be appreciated.

  • How to read BC4 texture in GLSL?

    - by Question
    I'm supposed to receive a texture in BC4 format. In OpenGL, I guess this format is called GL_COMPRESSED_RED_RGTC1. The texture is not really a "texture"; it's more like data to handle in the fragment shader. Usually, to get colors from a texture within a fragment shader, I do:

        uniform sampler2D TextureUnit;
        void main()
        {
            vec4 TexColor = texture2D(TextureUnit, vec2(gl_TexCoord[0]));
            (...)

    the result of which is obviously a vec4, for RGBA. But now, I'm supposed to receive a single float from the read. I'm struggling to understand how this is achieved. Should I still use a texture sampler and expect the value to be in a specific position (for example, within TexColor.r?), or should I use something else?

  • Level Creation Help

    - by Brandon oubiub
    I am making a little 2D overhead RPG-type game just for fun. I have almost all the basic stuff set up, but I just need a little help with level creation. I can already make a level and place each tile how I want it, but placing each tile by hand gets annoying after a while. I noticed that a lot of games, even extremely simple ones, have LOTS of levels with LOTS of tiles in each. Creating all of that in this fashion would take forever. So I guess my question is: as a game developer, am I supposed to do all that, or maybe make a little level editor so I can see things as I create them? What do game developers do? I'm using Java.

    EDIT: Okay, say I had an image for a map, made in MS Paint or Photoshop, where each pixel represents a tile value. Could I somehow detect in Java what color an individual pixel is? That would be perfect. If so, how?
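
    A rough sketch of the pixel-reading approach with java.awt (the file name and the placeTile call are hypothetical): BufferedImage exposes each pixel as a packed ARGB int, so a color-coded map image can drive tile placement directly.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class MapLoader {
            public static void main(String[] args) throws Exception {
                BufferedImage map = ImageIO.read(new File("level1.png")); // hypothetical file
                for (int y = 0; y < map.getHeight(); y++) {
                    for (int x = 0; x < map.getWidth(); x++) {
                        int argb = map.getRGB(x, y); // packed ARGB value
                        int r = (argb >> 16) & 0xFF;
                        int g = (argb >> 8) & 0xFF;
                        int b = argb & 0xFF;
                        // e.g. pure red = wall tile, pure green = grass tile...
                        if (r == 255 && g == 0 && b == 0) {
                            // placeTile(x, y, WALL); // hypothetical tile placement
                        }
                    }
                }
            }
        }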

  • How to integrate game logic in game engines

    - by MahanGM
    Recently I've been working on a 2D game engine example in .NET with C#. My main problem is that I can't figure out how I should include the game logic within the game. Currently I have a base engine, which is a set of classes running sub-systems like Render, Sound, Input and Core functionality. There is an editor which helps the user add resources, build levels, write scripts and other stuff. I came up with the idea of using Reflection and CSharpCodeProvider from my editor to compile the written code. This way I can get an executable of my product too. This approach works quite well, but I would like to know what the real solution and architecture for this is. My engine targets 2D platformers. The scripting language is C# right now, because I can't consider any other embeddable language for now. The game needs compilation, and CSharpCodeProvider is the only way for me to do it in the meantime.

  • XAudio2 - Multiple instances of the same sound

    - by Boreal
    Right now, I'm adding a rudimentary sound engine to my game. So far, I am able to load in a WAV file and play it once, then free up the memory when I close the game. However, the game crashes with a nice ArgumentOutOfRangeException when I try to play another sound instance:

        Specified argument was out of the range of valid values.
        Parameter name: readLength

    I'm following this tutorial pretty much exactly, but I still keep getting the aforementioned error. Here's my sound-related code: http://pastebin.com/FgaqfXTs

    The exception occurs on line 156, when I am playing the sound:

        source.SubmitSourceBuffer(buffer);

  • Auto Save and Auto Load Game onto the Device's Storage Concept Question

    - by David Dimalanta
    I'm trying to make a simple app that tests saving and loading state. Is it a good idea to make an app that auto-saves and auto-loads the game without any prompt, so that a new player can open the app and pick it up again another day? I tried making a sprite that moves, starting at the center. When I close and re-open the app, the sprite goes back to the center instead of the last coordinate where it landed (i.e. at the top). The sequence of saving and loading I want goes like this:

    1. I open the app.
    2. The sprite starts at the center.
    3. The screen displays the sprite's coordinates plus the number of times the sprite has moved.
    4. I exit the app, which automatically saves the game without notice.
    5. Finally, when I re-open the app, it automatically loads the game, retaining the number of times the sprite moved, its coordinates, and where it landed.

    These steps are similar (minus the sprite-movement test) to the sequence of saving and loading the game's level and record in Jewel Stackers for Android. Also, by default, if there is no SD card in an Android tablet or phone, does the game automatically save/load to internal storage, or to the APK file itself? Is an auto-save/auto-load feature also useful for protecting and fetching information (i.e. fastest time, the last coordinates of the sprite, etc.)?
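
    A sketch of one way to get this behavior on Android (the preference keys and helper methods here are made up): write the state to SharedPreferences in onPause(), which fires whenever the player leaves the app, and read it back in onCreate(). SharedPreferences live in the app's internal storage, so this works with or without an SD card; nothing is ever saved into the APK itself, which is read-only.

        import android.app.Activity;
        import android.content.SharedPreferences;
        import android.os.Bundle;

        public class GameActivity extends Activity {
            private static final String PREFS = "game_state"; // hypothetical file name
            private float spriteX, spriteY;
            private int moveCount;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // Auto-load: restore the last saved state, defaulting to the center.
                SharedPreferences p = getSharedPreferences(PREFS, MODE_PRIVATE);
                spriteX = p.getFloat("sprite_x", centerX());
                spriteY = p.getFloat("sprite_y", centerY());
                moveCount = p.getInt("move_count", 0);
            }

            @Override
            protected void onPause() {
                super.onPause();
                // Auto-save without notice whenever the app goes to the background.
                getSharedPreferences(PREFS, MODE_PRIVATE).edit()
                        .putFloat("sprite_x", spriteX)
                        .putFloat("sprite_y", spriteY)
                        .putInt("move_count", moveCount)
                        .apply();
            }

            private float centerX() { return 0; /* screen width / 2 in a real app */ }
            private float centerY() { return 0; /* screen height / 2 in a real app */ }
        }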

  • How should I share variables between instances/classes?

    - by tesselode
    I'm making a game using LOVE, so everything is programmed in Lua. I've been experimenting with classes and object orientation recently. I've found that a nice system to use is having most of the game's code in different classes, and having a table of instances with all of the instances of any class in it. This way, I can go through every instance of every class and update and draw it by calling the same function. There is a problem, though. Let's say I have an instance of a player with variables for health and the recharge time of a weapon. I also have a master instance which is responsible for drawing the HUD. How can I tell the master instance what the player's health is?

    Bad solutions:

    - Assuming that the player instance will always have the same position in the table; that can be easily changed.
    - Using global variables. Global variables are evil.
    - Having the master instance outside of the instances table, and having the player set variables inside the master instance, which it then uses for HUD drawing. This is really bad because now I have to make a duplicate of every variable the master instance needs.

    What is the proper, standard way of sharing variables between instances? Do I need to change the way I keep track of instances?

  • Why am I getting the same result when deleting target?

    - by XNA
    In the following code we use target in the function:

        moon.mouseEnabled = false;
        sky0.addChild(moon);
        addEventListener(MouseEvent.MOUSE_DOWN, onDrag, false, 0, true);
        addEventListener(MouseEvent.MOUSE_UP, onDrop, false, 0, true);

        function onDrag(evt:MouseEvent):void {
            evt.target.addChild(moon);
            evt.target.startDrag();
        }

        function onDrop(evt:MouseEvent):void {
            stopDrag();
        }

    But if I rewrite this code without evt.target, it still works. So what is the difference? Am I going to get errors at runtime later because I didn't use target? If not, then why do some people use target a lot when it works without it?

        function onDrag(evt:MouseEvent):void {
            addChild(moon);
            startDrag();
        }

  • Is it possible to extract textures or sprites from compiled game files?

    - by Brian Reindel
    For instance, every map in Portal has what appear to be sprites over a texture indicating the obstacles you'll face (see screenshot). Are these resources compiled into the executable as byte code, or is it possible to extract them from the installation files? Obviously I understand the copyright implications, and I am only interested in this for a recreational project. Instead of recreating the sprites, I wonder if they can be extracted.

  • Sending a android.content.Context parameter to a function with JNI

    - by Ef Es
    I am trying to create a method that checks for an internet connection, and it needs a Context parameter. The JNIHelper allows me to call static functions with parameters, but I don't know how to retrieve the Cocos2d-x Activity class to use it as a parameter.

        public static boolean isNetworkAvailable(Context context) {
            boolean haveConnectedWifi = false;
            boolean haveConnectedMobile = false;
            ConnectivityManager cm =
                    (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo[] netInfo = cm.getAllNetworkInfo();
            for (NetworkInfo ni : netInfo) {
                if (ni.getTypeName().equalsIgnoreCase("WIFI"))
                    if (ni.isConnected())
                        haveConnectedWifi = true;
                if (ni.getTypeName().equalsIgnoreCase("MOBILE"))
                    if (ni.isConnected())
                        haveConnectedMobile = true;
            }
            return haveConnectedWifi || haveConnectedMobile;
        }

    And the C++ code is:

        JniMethodInfo methodInfo;
        if (!JniHelper::getStaticMethodInfo(methodInfo,
                "my/app/TestApp",
                "isNetworkAvailable",
                "(Landroid/content/Context;)Z")) {
            // error
            return;
        }
        CCLog("Method found and loaded!");
        methodInfo.env->CallStaticBooleanMethod(methodInfo.classID, methodInfo.methodID);
        methodInfo.env->DeleteLocalRef(methodInfo.classID);
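
    A common workaround, sketched below under the assumption that you can edit the app's own Java class (the init method is an addition, not Cocos2d-x API): have the Activity store its Context in a static field at startup, so the connectivity check can be exposed as a parameterless static method and called from JNI with the simple signature "()Z".

        import android.content.Context;

        public class TestApp {
            private static Context appContext; // set once by the Activity at startup

            // Call from your Activity's onCreate(): TestApp.init(getApplicationContext());
            public static void init(Context context) {
                appContext = context;
            }

            // Parameterless overload, callable via JNI as "isNetworkAvailable" with "()Z".
            public static boolean isNetworkAvailable() {
                return isNetworkAvailable(appContext);
            }

            public static boolean isNetworkAvailable(Context context) {
                // ... the existing implementation shown above ...
                return false;
            }
        }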

  • Unity3d: calculate the result of a transform without modifying transform object itself

    - by Heisenbug
    I'm in the following situation: I need to move an object, basically rotating it around its parent's local position or translating it in its parent's local space (I know how to do this). The amount of rotation and translation is known only at runtime (it depends on several factors: the speed of the object, environment factors, etc.).

    The problem is the following: I can perform the transformation only if the resulting position of the transformed object meets some criteria. An example could be this: the distance between the positions before and after the transformation must be less than a given threshold. (In practice the conditions could be several and more complex.) The problem is that if I use the Transform.Rotate and Transform.Translate methods of my GameObject, I will lose the original Transform values. I think I can't copy the original Transform using Instantiate, for performance reasons. How can I perform such a task? I see more or less two possibilities:

    - Don't modify the GameObject's position through its Transform. Calculate what the position would be after the transform; if that position is legal, apply it with the Translate and Rotate methods.
    - Store the original transform somewhere. Transform the object using Translate and Rotate; if the transformed position is illegal, restore the original one.

  • I can't figure out how to animate my loaded model with Assimp

    - by Brendan Webster
    I have loaded a model into my C++ OpenGL game. It is a COLLADA file, and I set up an animation for it in Blender. The problem is I don't know how to animate the model. The Assimp documentation didn't really help me out: its sample code doesn't use animations, and I can't seem to find anywhere online where someone explains how to animate a loaded model. I'm sort of wondering if someone could link me to a helpful website, or maybe just help me out, so that I can understand how to do animations with Assimp.

  • Collision detection of convex shapes on voxel terrain

    - by Dave
    I have some standard convex shapes (cubes, capsules) on voxel terrain. It is very easy to detect single-vertex collisions. However, it becomes computationally expensive when many vertices are involved. To clarify: currently my algorithm represents a cube as multiple vertices covering every face of the cube, not just the corners. This is because the cubes can be much bigger than the voxels, so multiple sample points (vertices) are required; the distance between sample points must be at most the width of a voxel. This very rapidly becomes intractable. It would be great if there were some standard algorithm(s) for collision detection between convex shapes and arbitrary voxel-based terrain (as there are for OBBs with the separating axis theorem, etc.). Any help much appreciated.

  • Limiting game loop to exactly 60 ticks per second (Android / Java)

    - by user22241
    So I'm having terrible problems with stuttering sprites. My rendering and logic take less than a game tick (16.6667 ms). However, although my game loop runs at 60 ticks per second most of the time, it sometimes goes up to 61; when this happens, the sprites stutter. Currently, the variables used are:

        // Game updates per second
        final int ticksPerSecond = 60;

        // Amount of time each update should take
        final int skipTicks = (1000 / ticksPerSecond);

    This is my current game loop:

        @Override
        public void onDrawFrame(GL10 gl) {
            // This method runs continuously;
            // both the 'render' and 'update' methods are called from here.

            // Reset the loop counter to start counting again.
            loops = 0;

            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();

                // Time correction to compensate for the missing .6667 ms
                // when using int values.
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;

                // Increase loops.
                loops++;
            }
            render();
        }

    I realise that my skipTicks is an int and therefore will come out as 16 rather than 16.6667. I tried changing it (and ticksPerSecond) to longs, but got the same problem. I also tried changing the timer to System.nanoTime() and skipTicks to 1000000000 / ticksPerSecond, but then everything just ran at about 300 ticks per second.

    All I'm attempting to do is limit my game loop to 60: what is the best way to guarantee that my game updates never happen more than 60 times a second? Please note, I do realise that very old devices might not be able to reach 60, although I really don't expect this to happen; I've tested it on the lowest-end device I have and it easily achieves 60 ticks. So I'm not worried about a device not being able to handle 60 ticks per second, but rather need to limit it. Any help would be appreciated.
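
    One widely used alternative (a sketch, not drop-in code for the loop above; running, update() and render() are assumed to exist): keep all timing in nanoseconds and carry the fractional remainder in a double accumulator, so the 16.6667 ms tick length is represented exactly on average and no per-tick rounding error builds up.

        // Fixed-timestep loop using System.nanoTime().
        final double nsPerTick = 1_000_000_000.0 / 60.0; // exactly 60 updates per second
        long lastTime = System.nanoTime();
        double accumulator = 0;

        while (running) {
            long now = System.nanoTime();
            accumulator += (now - lastTime) / nsPerTick;
            lastTime = now;

            // Run at most a few updates per frame to avoid a spiral of death.
            int updates = 0;
            while (accumulator >= 1 && updates < 5) {
                update();      // game logic, 60 times per second on average
                accumulator--;
                updates++;
            }
            render();
        }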

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I've been trying to make sense of implementing a chunked level-of-detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.

    1. I can't understand at which point in the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does it?

    2. I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?

    3a. How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this (a raycast, maybe)?

    3b. Do the chunks calculate the camera distance themselves?

    4. Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32?
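
    A minimal sketch of how these pieces often fit together (illustrative Java, not from any specific engine): each node covers a square region at a fixed grid resolution and stores its own bounds, and the renderer walks the tree each frame, descending while the camera is close relative to the node's size. Deeper nodes keep the same grid resolution but cover a smaller area, which is what increases detail.

        // Sketch of chunked-LOD node selection by camera distance (names hypothetical).
        class ChunkNode {
            float centerX, centerZ, halfSize; // axis-aligned bounds of this chunk
            ChunkNode[] children;             // four children, or null at the finest level

            // Recursively pick which chunks to draw for the current camera position.
            void select(float camX, float camZ, java.util.List<ChunkNode> visible) {
                float dx = camX - centerX, dz = camZ - centerZ;
                float distance = (float) Math.sqrt(dx * dx + dz * dz);

                // Simple distance-based error metric: subdivide while the camera
                // is closer than some multiple of the chunk's size.
                if (children != null && distance < halfSize * 4.0f) {
                    for (ChunkNode child : children) {
                        child.select(camX, camZ, visible);
                    }
                } else {
                    visible.add(this); // draw this chunk at its own resolution
                }
            }
        }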

  • Implementing 2D CSG (for collision shapes)?

    - by bluescrn
    Are there any simple (or well documented) algorithms for basic CSG operations on 2D polygons? I'm looking for a way to 'add' a number of overlapping 2D collision shapes. These may be convex or concave, but will be closed shapes, defined as a set of line segments, with no self-intersections. The use of this would be to construct a clean set of collision edges, for use with a 2D physics engine, from a scene consisting of many arbitrarily placed (and frequently overlapping) objects, each with their own collision shape. To begin with, I only need to 'add' shapes, but the ability to 'subtract', to create holes, may also be useful.

  • 2D Skeletal Animation Transformations

    - by Brad Zeis
    I have been trying to build a 2D skeletal animation system for a while, and I believe that I'm fairly close to finishing. Currently, I have the following data structures:

        struct Bone {
            Bone *parent;
            int child_count;
            Bone **children;
            double x, y;
        };

        struct Vertex {
            double x, y;
            int bone_count;
            Bone **bones;
            double *weights;
        };

        struct Mesh {
            int vertex_count;
            Vertex **vertices;
            Vertex **tex_coords;
        };

    Bone->x and Bone->y are the coordinates of the end point of the Bone. The starting point is given by (bone->parent->x, bone->parent->y) or (0, 0). Each entity in the game has a Mesh, and Mesh->vertices is used as the bounding area for the entity. Mesh->tex_coords are texture coordinates. In the entity's update function, the position of the Bone is used to change the coordinates of the Vertices that are bound to it. Currently what I have is:

        void Mesh_update(Mesh *mesh)
        {
            int i, j;
            double sx, sy;

            for (i = 0; i < mesh->vertex_count; i++) {
                if (mesh->vertices[i]->bone_count == 0) {
                    continue;
                }

                sx = sy = 0;
                for (j = 0; j < mesh->vertices[i]->bone_count; j++) {
                    sx += (/* ??? */) * mesh->vertices[i]->weights[j];
                    sy += (/* ??? */) * mesh->vertices[i]->weights[j];
                }

                mesh->vertices[i]->x = sx;
                mesh->vertices[i]->y = sy;
            }
        }

    I think I have everything I need, I just don't know how to apply the transformations to the final mesh coordinates. What transformations do I need here? Or is my approach just completely wrong?

  • Keeping Aspect Screen Ratio While Stays in Center

    - by David Dimalanta
    I saw and tried the suggestion on Pistachio Brainstorming* on how to keep a good, adaptive screen ratio. To test different screen sizes, I put a perfect circle on screen as a Texture in LibGDX. Here's the blueberry image example, perfectly rounded:

    When I ran it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit. You can see in the snapshot that the blueberry is almost, but not quite, perfectly rounded:

    Now, when I tried the suggested code for the aspect ratio, the perfect circle was retained, but another problem occurred: I was expecting the view to be centered, but instead it is offset to the right, leaving half the screen black. It looks like this:

    Here is my code using the suggested screen aspect ratio approach.

    Class fields:

        // Ingredients needed for the screen aspect ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO =
                ((float) VIRTUAL_WIDTH) / ((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;

    render():

        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);

        // Resetting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y,
                (int) Viewport.width, (int) Viewport.height);

        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

    show():

        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

    Is this code the right way to keep the aspect ratio in proportion, or is it statically dependent on the actual device's width and height?

    * See http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
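
    If the Viewport rectangle is never recomputed for the actual screen size, the view can end up off-center like this. A sketch of the usual letterbox calculation (assuming the same fields as above; treat it as illustrative, not the article's exact code): the crop offsets are what keep the scaled virtual area centered rather than pushed to one side.

        // Sketch: compute a centered, aspect-correct viewport for the window size.
        public void resize(int width, int height) {
            float aspect = (float) width / (float) height;
            float scale;
            float cropX = 0, cropY = 0;

            if (aspect > ASPECT_RATIO) {        // window too wide: pillarbox
                scale = (float) height / (float) VIRTUAL_HEIGHT;
                cropX = (width - VIRTUAL_WIDTH * scale) / 2f;
            } else if (aspect < ASPECT_RATIO) { // window too tall: letterbox
                scale = (float) width / (float) VIRTUAL_WIDTH;
                cropY = (height - VIRTUAL_HEIGHT * scale) / 2f;
            } else {
                scale = (float) width / (float) VIRTUAL_WIDTH;
            }

            Viewport = new Rectangle(cropX, cropY,
                    VIRTUAL_WIDTH * scale, VIRTUAL_HEIGHT * scale);
        }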

  • How to determine character's foot contact point on a uniform triangle mesh terrain?

    - by xenon
    For terrain modelled by a heightmap over a uniform triangle mesh, what are some techniques I could use to determine the contact point of the foot of a character standing on the terrain? Since the terrain's Y values are altered by the heightmap, the surface is no longer flat. As the character moves over the terrain, it has to know what Y value its foot should be at. Conceptually, what are some methods and techniques to determine the contact point of the character's foot on the terrain?
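
    One standard technique, sketched below under the assumption of a square cell grid with a heights[x][z] array and each cell split into two triangles along the same diagonal (all names are illustrative): locate the cell under the foot, decide which triangle the point falls in, and interpolate the three corner heights barycentrically. The foot's Y is then terrainHeight(foot.x, foot.z).

        // Sketch: height of a uniform-grid heightmap terrain at world position (x, z).
        // Assumes (x, z) lies inside the grid.
        public static float terrainHeight(float x, float z, float[][] heights, float cellSize) {
            int gx = (int) Math.floor(x / cellSize);
            int gz = (int) Math.floor(z / cellSize);

            // Position within the cell, in [0, 1).
            float fx = x / cellSize - gx;
            float fz = z / cellSize - gz;

            float h00 = heights[gx][gz];
            float h10 = heights[gx + 1][gz];
            float h01 = heights[gx][gz + 1];
            float h11 = heights[gx + 1][gz + 1];

            // Each cell is two triangles; the diagonal runs where fx + fz == 1.
            if (fx + fz <= 1f) {
                // Triangle (0,0)-(1,0)-(0,1).
                return h00 + (h10 - h00) * fx + (h01 - h00) * fz;
            } else {
                // Triangle (1,1)-(1,0)-(0,1).
                return h11 + (h01 - h11) * (1f - fx) + (h10 - h11) * (1f - fz);
            }
        }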

  • How do I get a rotated sprite to move left or right?

    - by rphello101
    Using Java/Slick 2D, I'm using the mouse to rotate a sprite on the screen and the directional keys (in this case, WASD) to move the sprite. Forwards and backwards is easy: just position += cos(ang) * speed or position -= cos(ang) * speed. But how do I get the sprite to move left or right? I'm thinking it has something to do with adding 90 degrees to the angle or something. Any ideas?

    Rotation code:

        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x + sprite.image.getWidth() / 2;
        int pY = sprite.y + sprite.image.getHeight() / 2;
        double mAng;

        if (mX != pX) {
            mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX));
            if (mAng == 0 && mX <= pX)
                mAng = 180;
        } else {
            if (mY > pY) mAng = 90;
            else mAng = 270;
        }

        sprite.angle = mAng;
        sprite.image.setRotation((float) mAng);

    And the movement code (delta is the change in time):

        Input input = gc.getInput();
        Vector2f direction = new Vector2f();
        Vector2f velocity = new Vector2f();

        direction.x = (float) Math.cos(Math.toRadians(sprite.angle));
        direction.y = (float) Math.sin(Math.toRadians(sprite.angle));
        if (direction.length() > 0)
            direction = direction.normalise(); // On a separate note, what does this line of code do?

        velocity.x = (float) (direction.x * sprite.moveSpeed);
        velocity.y = (float) (direction.y * sprite.moveSpeed);

        if (input.isKeyDown(sprite.up)) {
            sprite.x += velocity.x * delta;
            sprite.y += velocity.y * delta;
        }
        if (input.isKeyDown(sprite.down)) {
            sprite.x -= velocity.x * delta;
            sprite.y -= velocity.y * delta;
        }
        if (input.isKeyDown(sprite.left)) {
            // ???
        }
        if (input.isKeyDown(sprite.right)) {
            // ???
        }
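
    The 90-degree hunch is the usual approach. A sketch in the same style (untested, reusing the variable names above): rotating the forward vector (cos a, sin a) a quarter turn gives the side vector (-sin a, cos a), which is scaled and applied exactly like the forward case.

        // Sketch: replace the left/right branches above. Depending on whether
        // your y axis points down, 'left' and 'right' may need to be swapped.
        double a = Math.toRadians(sprite.angle);
        Vector2f side = new Vector2f((float) -Math.sin(a), (float) Math.cos(a));

        velocity.x = (float) (side.x * sprite.moveSpeed);
        velocity.y = (float) (side.y * sprite.moveSpeed);

        if (input.isKeyDown(sprite.left)) {
            sprite.x -= velocity.x * delta;
            sprite.y -= velocity.y * delta;
        }
        if (input.isKeyDown(sprite.right)) {
            sprite.x += velocity.x * delta;
            sprite.y += velocity.y * delta;
        }

    (As for the commented question: normalise() rescales a vector to unit length while keeping its direction, so speed doesn't vary with the vector's magnitude; a pure cos/sin direction is already length 1.)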

  • Grid-Based 2D Lighting Problems

    - by Lemoncreme
    I am aware this question has been asked before, but unfortunately I am new to the language, so the complicated explanations I've found do not help me in the least. I need a lighting engine for my game, and I've tried some procedural lighting systems. This method works the best:

        if (light[xx - 1, yy] > light[xx, yy])
            light[xx, yy] = light[xx - 1, yy] - lightPass;
        if (light[xx, yy - 1] > light[xx, yy])
            light[xx, yy] = light[xx, yy - 1] - lightPass;
        if (light[xx + 1, yy] > light[xx, yy])
            light[xx, yy] = light[xx + 1, yy] - lightPass;
        if (light[xx, yy + 1] > light[xx, yy])
            light[xx, yy] = light[xx, yy + 1] - lightPass;

    (It subtracts 'lightPass' from adjacent values if they are brighter, and it runs inside a for() loop.)

    This is all fine and dandy except for an obvious reason: the system favors whatever comes first in the for() loop. This is what the above code looks like applied to my game:

    If I could get some help creating a new procedural (or otherwise) lighting system, I would really appreciate it!
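
    One way around the ordering bias (a sketch in Java for illustration; the C# version is analogous): treat lighting as a flood fill from the light sources. A breadth-first pass only ever brightens cells, so the result no longer depends on which neighbor the loop happens to visit first.

        import java.util.ArrayDeque;

        // Sketch: order-independent light propagation over a grid.
        static void propagate(float[][] light, float lightPass) {
            int w = light.length, h = light[0].length;
            ArrayDeque<int[]> queue = new ArrayDeque<>();

            // Seed the queue with every lit cell (the light sources).
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                    if (light[x][y] > 0) queue.add(new int[] { x, y });

            int[][] dirs = { { -1, 0 }, { 1, 0 }, { 0, -1 }, { 0, 1 } };
            while (!queue.isEmpty()) {
                int[] c = queue.poll();
                float spread = light[c[0]][c[1]] - lightPass;
                for (int[] d : dirs) {
                    int nx = c[0] + d[0], ny = c[1] + d[1];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (light[nx][ny] < spread) { // only ever brighten a cell
                        light[nx][ny] = spread;
                        queue.add(new int[] { nx, ny });
                    }
                }
            }
        }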

  • Beat detection and FFT

    - by Quincy
    So I am working on a platformer game which includes music with beat detection. I am currently using a simple rule: if the energy stored in the history buffer is smaller than the current instant energy, there is a beat. The problem with this is that for songs like rock songs, where you have a pretty steady amplitude, this isn't going to work. So I looked further and found algorithms that split the sound into multiple bands using an FFT. I then found this: http://en.literateprograms.org/Cooley-Tukey_FFT_algorithm_(C)

    The only problem I'm having is that I am quite new to audio, and I have no idea how to use that to split the signal up into multiple bands. So my question is: how do you use an FFT to split a signal into multiple bands?

    Also, for those interested, this is my algorithm in C#:

        // C = threshold, N = size of history buffer / 1024
        public void PlaceBeatMarkers(float C, int N)
        {
            List<float> instantEnergyList = new List<float>();
            short[] samples = soundData.Samples;
            float timePerSample = 1 / (float)soundData.SampleRate;
            int sampleIndex = 0;
            int nextSamples = 1024;

            // Calculate instant energy for every 1024 samples.
            while (sampleIndex + nextSamples < samples.Length)
            {
                float instantEnergy = 0;
                for (int i = 0; i < nextSamples; i++)
                {
                    instantEnergy += Math.Abs((float)samples[sampleIndex + i]);
                }
                instantEnergy /= nextSamples;
                instantEnergyList.Add(instantEnergy);

                if (sampleIndex + nextSamples >= samples.Length)
                    nextSamples = samples.Length - sampleIndex - 1;
                sampleIndex += nextSamples;
            }

            int index = N;
            int numInBuffer = index;
            float historyBuffer = 0;

            // Fill the history buffer with N instant-energy values.
            for (int i = 0; i < index; i++)
            {
                historyBuffer += instantEnergyList[i];
            }

            // If the next instant energy exceeds the buffer average times C,
            // add a beat marker.
            while (index + 1 < instantEnergyList.Count)
            {
                if (instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C)
                    beatMarkers.Add((index + 1) * 1024 * timePerSample);

                historyBuffer -= instantEnergyList[index - numInBuffer];
                historyBuffer += instantEnergyList[index + 1];
                index++;
            }
        }
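
    To sketch the band-splitting step (in Java for illustration; how you obtain the FFT output depends on your FFT routine): run the FFT on each 1024-sample window, then sum the magnitudes of groups of adjacent bins. Each group's total becomes the "instant energy" of one frequency band, and the same history-buffer comparison is then run per band.

        // Sketch: split one FFT'd window into equal-width frequency bands.
        // spectrumRe/spectrumIm hold the FFT output for bins 0..511 (the usable
        // half of a 1024-point FFT of real input).
        static float[] bandEnergies(float[] spectrumRe, float[] spectrumIm, int bandCount) {
            int bins = spectrumRe.length;         // e.g. 512
            int binsPerBand = bins / bandCount;   // simple equal-width bands
            float[] bands = new float[bandCount];

            for (int b = 0; b < bandCount; b++) {
                float energy = 0;
                for (int k = b * binsPerBand; k < (b + 1) * binsPerBand; k++) {
                    // Magnitude of bin k = sqrt(re^2 + im^2)
                    energy += (float) Math.sqrt(spectrumRe[k] * spectrumRe[k]
                                              + spectrumIm[k] * spectrumIm[k]);
                }
                bands[b] = energy / binsPerBand;  // average magnitude in this band
            }
            return bands;
        }

    Equal-width bands are the simplest split; beat-detection write-ups commonly make the bands logarithmically wider toward high frequencies, which matches how energy is distributed in music.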

  • convert image to spritesheet of tiles for isometric map?

    - by Paul
    Is there a way to convert an isometric image (like the first image) to a spritesheet (like the second image), in order to place each image on the isometric map in code? The map looks like the first image, but some buildings are bigger than just one tile, so I need several squares (let's say the first image is a building made of multiple tiles with different colors), and each square is placed with an offset of 64x32. The building is created in Blender, and I save the image with the isometric perspective. But I have to split each square from this image in order to have the spritesheet. Maybe there is a smarter way, or Java software that would do the conversion for me?
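
    If doing the split in code is acceptable, java.awt can slice a rendered image into fixed-size cells. A rough sketch (the tile size and file names are assumptions; note this cuts a rectangular grid, so diamond-shaped isometric cells would additionally need a transparency mask):

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class SheetSlicer {
            public static void main(String[] args) throws Exception {
                BufferedImage source = ImageIO.read(new File("building.png")); // hypothetical
                int tileW = 64, tileH = 32; // one isometric cell per tile

                int cols = source.getWidth() / tileW;
                int rows = source.getHeight() / tileH;
                for (int r = 0; r < rows; r++) {
                    for (int c = 0; c < cols; c++) {
                        BufferedImage tile =
                                source.getSubimage(c * tileW, r * tileH, tileW, tileH);
                        ImageIO.write(tile, "png", new File("tile_" + r + "_" + c + ".png"));
                    }
                }
            }
        }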

  • Rotate/Translate object in local space

    - by Mathias Hölzl
    I am just trying to create a movement-controller class for game entities. This class should transform the entity based on mouse and keyboard input. I am able to calculate the changed rotation and the new global position. Then I multiply:

        newGlobalMatrix = changedRotationMatrix * oldGlobalMatrix;
        newGlobalMatrix = MatrixSetPosition(newPosition);

    The problem is that the object rotates around the global axes and not around its local axes. I use XNAMath for the matrix calculations.

  • When should a bullet texture be loaded in XNA?

    - by Bill
    I'm making a SpaceWar!-esque game using XNA. I want to limit my ships to 5 active bullets at any time. I have a Bullet DrawableGameComponent and a Ship DrawableGameComponent. My Ship has an array of 5 Bullets. What is the best way to manage the bullet textures? Specifically, when should I be calling LoadTexture? Right now, my solution is to populate the Bullet array in the Ship's constructor, with LoadTexture being called in the Bullet constructor. The Bullet objects will be disabled/not visible except when they are active. Does the texture really need to be loaded once for each individual instance of the bullet object? This seems like a very processor-intensive operation.

    Note: this is a small-scale project, so I'm OK with not implementing a huge texture-management framework, since there won't be more than half a dozen or so textures in the entire game. I'd still like to hear about scalable solutions for future applications, though.
