Search Results

  • Car Modelling for race game

    - by Mert Toka
    I am taking a Computer Graphics course this semester and we have a video game competition. I am making a racing game with simulated dynamics. Our professor told us we don't have to do much modelling, but since we haven't started the gameplay part yet and I have free time, I want to model the car. My first question is which software you would recommend for designing game components; I currently know Maya. Secondly, if I design the car or any other part, what should its polygon count be for the game to run smoothly? I can design pretty much everything, but I assume it is hard to design low-poly models.

  • XNA GUI: Creating a 'scroll pane' widget

    - by Keith Myers
    I'm trying to create a simple GUI system and am currently stuck on how to implement a textarea with a scrollbar. In other words, the text is too large to fit into the view area. I want to learn how to do this, so I'd rather not use an already rolled API. I believe this could be done if the text were part of a texture, but if the game had a lot of unique dialog, this seems expensive. I researched creating a texture on the fly and writing to it, but came up with nothing. Any suggested strategies would be appreciated. I believe it boils down to: text in a texture and how? Or something I have not thought of...
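
    For reference, one way to get a scrollable text area without rebuilding anything each frame is to render the full text once into an off-screen render target and then draw only a window of it, offset by the scroll position. Below is a minimal sketch assuming XNA 4.0; the names (textTarget, scrollOffset, viewArea) are placeholders and the target only needs rebuilding when the text changes.

        // Minimal sketch (XNA 4.0 assumed): draw the full text once into a
        // RenderTarget2D, then blit only the visible window of it each frame.
        RenderTarget2D textTarget;

        void BuildTextTexture(GraphicsDevice device, SpriteBatch batch, SpriteFont font, string text)
        {
            Vector2 size = font.MeasureString(text);
            textTarget = new RenderTarget2D(device, (int)size.X, (int)size.Y);

            device.SetRenderTarget(textTarget);
            device.Clear(Color.Transparent);
            batch.Begin();
            batch.DrawString(font, text, Vector2.Zero, Color.White);
            batch.End();
            device.SetRenderTarget(null);   // back to the back buffer
        }

        void DrawTextArea(SpriteBatch batch, Rectangle viewArea, int scrollOffset)
        {
            // Copy a viewArea-sized window of the text texture, starting
            // scrollOffset pixels down; changing scrollOffset scrolls the text.
            Rectangle source = new Rectangle(0, scrollOffset, viewArea.Width, viewArea.Height);
            batch.Begin();
            batch.Draw(textTarget, viewArea, source, Color.White);
            batch.End();
        }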

  • Migration from XNA to SharpDX

    - by Wouter
    My fear is that XNA has reached the end of the road. To keep up with the latest technology a shift to another game framework might be needed. We have many games in a large codebase, all based on XNA. My question is, how much work would it be to migrate to SharpDX and are there other possibilities? Our code base mainly uses basic 3D rendering and the SpriteBatch, no fancy shader stuff. Update: I should have mentioned we only use 2.5D, we have a simple engine that builds textured quads to render text and animated sprites. Also for sound we use XACT (what else..) with some effects.

  • Why aren't tangent space normal maps completely blue?

    - by seahorse
    Why aren't normal maps just blue? I would think that normal maps should be predominantly blue in color because the Z component of the normal is represented by blue. Normals point out of the surface in the Z direction so we should see blue as the predominant colour since the Z component is dominant. By definition tangent space is perpendicular to the surface. At any point we should have the normal always pointing in the Z (blue direction) with no X (red direction) or Y (green direction). Thus the normal map (since it is a "normal map") should have the colour of the normals which is just blue (R = x = 0, G = y = 0, B = z = 1) with no shades in between. But normal maps are not so, and they have gradients of shades in them. Why is this so?

  • What is a technique for 2D ray-box intersection that is suitable for old console hardware?

    - by DJCouchyCouch
    I'm working on a Sega Genesis homebrew game (it has a 7 MHz 68000 CPU). I'm looking for a way to find the intersection between a particle sprite and a background tile. Particles are represented as a point with a movement vector. Background tiles are 8 x 8 pixels, with an (X,Y) position that is always a multiple of 8. So, really, I need to find the intersection point for a ray-box collision; I need to find out where along the edge of the tile the ray/particle hits. I have these hard constraints: I'm working with pixel locations (integers), because floating point is too expensive; multiplications, divisions, dot products, et cetera, are incredibly expensive and are to be avoided; and it doesn't have to be super exact, just close enough. So I'm looking for an efficient algorithm that would fit those constraints. Any ideas? I'm writing it in C, so that would work, but assembly should be good as well.

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to check depth against when creating shadows. I want to be able to copy a float array to the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap);

    This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?

  • Implement 2x speed in tower of defense type game

    - by Siddharth
    I am currently developing a tower defense game and I want to implement a 2x feature. The game normally runs at 1x speed; at 2x the game objects move at double speed, so the user experiences faster gameplay. The functionality I want can be seen in the Medieval Castle game that is available on the market: https://play.google.com/store/apps/details?id=com.nova.root&feature=search_result#?t=W251bGwsMSwxLDEsImNvbS5ub3ZhLnJvb3QiXQ.. Its screenshot also shows the 1x and 2x buttons. I think that for 2x speed I have to increase the speed of every object in the game, as sketched below. So please help me with what to do for this implementation; an outline of the idea is enough for me.
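
    A common approach, sketched below as plain C# with placeholder names (GameClock and IUpdatable are illustrative, not from any particular engine), is to keep a single time-scale factor and multiply each frame's delta time by it before updating the objects, rather than changing every object's speed field individually.

        using System.Collections.Generic;

        interface IUpdatable { void Update(float deltaSeconds); }

        class GameClock
        {
            public float TimeScale = 1.0f;                     // 1x; set to 2.0f for 2x
            readonly List<IUpdatable> objects = new List<IUpdatable>();

            public void Add(IUpdatable o) { objects.Add(o); }

            public void Update(float deltaSeconds)
            {
                // Scale the frame delta once; every object then moves speed * scaled.
                float scaled = deltaSeconds * TimeScale;
                foreach (var o in objects)
                    o.Update(scaled);
            }
        }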

  • Electronic circuit simulator four-way flood-filling issues

    - by AJ Weeks
    I've made an electronic circuit board simulator which has just three types of tiles: wires, power sources, and inverters. Wires connect to anything they touch, other than the sides of inverters; inverters have one input side and one output side; and finally power tiles connect in the same manner as wires. In the case of an infinite loop, caused by the output of an inverter feeding into its input, I want inverters to oscillate (quickly turn on/off). I've attempted to implement a flood-fill algorithm to spread the power throughout the grid, but I seem to have gotten something wrong, as only the tiles above the power source get powered (as seen below). I've attempted to debug the program, but have had no luck thus far. My code concerning the updating of power can be seen here.
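
    For comparison, a four-way flood fill over a tile grid is usually written with an explicit queue (breadth-first) rather than recursion, visiting all four neighbours of each powered tile. Below is a minimal, hedged sketch with placeholder tile types; it is not the original code and it deliberately ignores the inverter/oscillation rules.

        using System.Collections.Generic;

        enum Tile { Empty, Wire, Power, Inverter }

        // Spread power from every power source through connected wires,
        // checking all four neighbours (right, left, down, up) of each tile.
        static bool[,] SpreadPower(Tile[,] grid)
        {
            int w = grid.GetLength(0), h = grid.GetLength(1);
            var powered = new bool[w, h];
            var queue = new Queue<(int x, int y)>();

            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                    if (grid[x, y] == Tile.Power) { powered[x, y] = true; queue.Enqueue((x, y)); }

            int[] dx = { 1, -1, 0, 0 };
            int[] dy = { 0, 0, 1, -1 };

            while (queue.Count > 0)
            {
                var (x, y) = queue.Dequeue();
                for (int i = 0; i < 4; i++)              // all four directions, not just "up"
                {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (powered[nx, ny] || grid[nx, ny] != Tile.Wire) continue;
                    powered[nx, ny] = true;
                    queue.Enqueue((nx, ny));
                }
            }
            return powered;
        }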

  • Unity3d vector and matrix operations

    - by brandon
    I have the following three vectors: posA = (1,2,3), normal = (0,1,0), offset = (2,3,1). I want to get the vector representing the position which is offset in the direction of the normal from posA. I know how to do this by cheating (not using matrix operations):

        Vector3 result = new Vector3(posA.x + normal.x*offset.x, posA.y + normal.y*offset.y, posA.z + normal.z*offset.z);

    I also know how to do this mathematically (note: [] indicates a column vector, {} indicates a row vector): result = [1,2,3] + {2,3,1}*{[0,0,0],[0,1,0],[0,0,0]}. What I don't know is which is better to use and, if it's the latter, how do I do it in Unity? I only know of 4x4 matrices in Unity. I don't like the first option because you are instantiating a new vector instead of just modifying the original. Suggestions? Note: by asking which is better, I am asking for a quantifiable reason, not just a preference.
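
    For reference, Unity's Vector3 API already has a component-wise multiply, Vector3.Scale, so the same result can be written without building a matrix. A minimal sketch follows; note that Vector3 is a struct in Unity, so neither form allocates on the heap, and this is not a claim about which form performs better.

        using UnityEngine;

        public class OffsetExample : MonoBehaviour
        {
            void Start()
            {
                Vector3 posA   = new Vector3(1, 2, 3);
                Vector3 normal = new Vector3(0, 1, 0);
                Vector3 offset = new Vector3(2, 3, 1);

                // Component-wise product of normal and offset, added to posA.
                Vector3 result = posA + Vector3.Scale(normal, offset);   // (1, 5, 3)

                Debug.Log(result);
            }
        }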

  • Beat detection and FFT

    - by Quincy
    So I am working on a platformer game which includes music with beat detection. I am currently using a simple check: if the energy stored in the history buffer is smaller than the current energy, there is a beat. The problem with this is that, of course, if you use songs like rock songs where you have a pretty steady amplitude, this isn't going to work. So I looked further and found algorithms splitting the sound into multiple bands using an FFT. I then found this: http://en.literateprograms.org/Cooley-Tukey_FFT_algorithm_(C) The only problem I'm having is that I am quite new to audio and I have no idea how to use that to split the signal up into multiple signals. So my question is: how do you use an FFT to split a signal into multiple bands? Also, for those interested, this is my algorithm in C#:

        // C = threshold, N = size of history buffer / 1024
        public void PlaceBeatMarkers(float C, int N)
        {
            List<float> instantEnergyList = new List<float>();
            short[] samples = soundData.Samples;
            float timePerSample = 1 / (float)soundData.SampleRate;
            int sampleIndex = 0;
            int nextSamples = 1024;

            // Calculate instant energy for every 1024 samples.
            while (sampleIndex + nextSamples < samples.Length)
            {
                float instantEnergy = 0;
                for (int i = 0; i < nextSamples; i++)
                {
                    instantEnergy += Math.Abs((float)samples[sampleIndex + i]);
                }
                instantEnergy /= nextSamples;
                instantEnergyList.Add(instantEnergy);

                if (sampleIndex + nextSamples >= samples.Length)
                    nextSamples = samples.Length - sampleIndex - 1;

                sampleIndex += nextSamples;
            }

            int index = N;
            int numInBuffer = index;
            float historyBuffer = 0;

            // Fill the history buffer with n * instant energy
            for (int i = 0; i < index; i++)
            {
                historyBuffer += instantEnergyList[i];
            }

            // If instantEnergy / samples in buffer < instantEnergy for the next sample then add beatmarker.
            while (index + 1 < instantEnergyList.Count)
            {
                if (instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C)
                    beatMarkers.Add((index + 1) * 1024 * timePerSample);

                historyBuffer -= instantEnergyList[index - numInBuffer];
                historyBuffer += instantEnergyList[index + 1];
                index++;
            }
        }
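
    To sketch the band-splitting part in code: a common approach is to transform each 1024-sample window to the frequency domain, take the squared magnitude of each complex bin, and sum groups of bins into a handful of bands, then run the same history-buffer comparison per band. The sketch below uses a plain O(n^2) DFT so it is self-contained; in practice you would substitute the linked Cooley-Tukey FFT. It is an illustration of the idea, not the original code.

        using System;
        using System.Numerics;

        static class BandSplitter
        {
            // Plain DFT for clarity; swap in the linked Cooley-Tukey FFT for speed.
            static Complex[] Dft(float[] x)
            {
                int n = x.Length;
                var spectrum = new Complex[n];
                for (int k = 0; k < n; k++)
                {
                    Complex sum = Complex.Zero;
                    for (int t = 0; t < n; t++)
                        sum += (double)x[t] * Complex.FromPolarCoordinates(1.0, -2.0 * Math.PI * k * t / n);
                    spectrum[k] = sum;
                }
                return spectrum;
            }

            // Split one window of samples into bandCount energy values.
            public static float[] BandEnergies(short[] samples, int start, int windowSize, int bandCount)
            {
                var window = new float[windowSize];
                for (int i = 0; i < windowSize; i++)
                    window[i] = samples[start + i];

                Complex[] spectrum = Dft(window);

                int usefulBins = windowSize / 2;          // bins above n/2 mirror the lower half
                int binsPerBand = usefulBins / bandCount;
                var bands = new float[bandCount];

                for (int b = 0; b < bandCount; b++)
                {
                    float energy = 0;
                    for (int k = 0; k < binsPerBand; k++)
                    {
                        Complex c = spectrum[b * binsPerBand + k];
                        energy += (float)(c.Real * c.Real + c.Imaginary * c.Imaginary);
                    }
                    bands[b] = energy / binsPerBand;      // average energy in this band
                }
                return bands;  // keep one history buffer per band and compare as before
            }
        }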

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this:

    I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what I think it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values.

    I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that given a very low isovalue the accuracy level of the surface being rendered should increase, so in short "fewer 45-degree slopes, more rolling hills" type mesh output. However, this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of fixing?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. First, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet in the image there are sharp falling edges even in the most central areas, where the gradient in the first image looks the most smooth. The data is "regenerated" each time I run this, so no two islands come out the same, and it's purely random rather than noise-based, but still, how can it look so smooth in one image and so not smooth in the other?
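
    For reference, the "move vertex along the edge" step in marching cubes is normally a linear interpolation between the two corner densities of each crossed edge; if that interpolation is replaced by a fixed midpoint, boolean and float densities give the same blocky result, which matches the symptom described. A minimal sketch follows (using XNA-style Vector3 purely for illustration; this is not code taken from the linked implementation).

        using Microsoft.Xna.Framework;

        // Place the surface vertex on the edge between corners p1 and p2,
        // whose sampled densities are v1 and v2, for a given iso threshold.
        static Vector3 VertexOnEdge(Vector3 p1, Vector3 p2, float v1, float v2, float iso)
        {
            // Guard against division by zero when both corners have the same density.
            if (System.Math.Abs(v2 - v1) < 1e-6f)
                return (p1 + p2) * 0.5f;

            float t = (iso - v1) / (v2 - v1);   // 0 => vertex at p1, 1 => vertex at p2
            return p1 + (p2 - p1) * t;
        }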

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain with a lot of polygons. To render this terrain I thought of the following techniques:

    Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and similar features that simply won't work with height-maps.

    Using voxels: yes, but I think this would be too much since I don't even want to support changing terrain.

    Splitting into multiple chunks and doing some sort of LOD with the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less.

    Precomputing the same mesh in lower-poly versions (like Mudbox does) and, depending on the distance, rendering one of those meshes: graphics memory is limited, and uploading only the visible chunks won't solve that problem since the traffic would be too high.

    In my opinion the last one sounds really good, but imagine the following process: upload and render the chunks depending on the current player position [no problem]; the player walks straight forward; now we may have to swap one of the low-poly chunks for the high-poly one; so, remove the low-poly chunk and load the high-poly chunk [already too much traffic here, I think]. I am not very experienced in graphics programming and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require? I think three levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that is important.)

  • Arrive steering behavior

    - by dbostream
    I bought a book called Programming Game AI by Example and I am trying to implement the arrive steering behavior. The problem I am having is that my objects oscillate around the target position; after oscillating less and less for a while, they finally come to a stop at the target position. Does anyone have any idea why this oscillating behavior occurs? Since the examples accompanying the book are written in C++, I had to rewrite the code in C#. Below are the relevant parts of the steering behavior:

        private enum Deceleration { Fast = 1, Normal = 2, Slow = 3 }

        public MovingEntity Entity { get; private set; }
        public Vector2 SteeringForce { get; private set; }
        public Vector2 Target { get; set; }

        public Vector2 Calculate()
        {
            SteeringForce.Zero();
            SteeringForce = SumForces();
            SteeringForce.Truncate(Entity.MaxForce);
            return SteeringForce;
        }

        private Vector2 SumForces()
        {
            Vector2 force = new Vector2();
            if (Activated(BehaviorTypes.Arrive))
            {
                force += Arrive(Target, Deceleration.Slow);
                if (!AccumulateForce(force))
                    return SteeringForce;
            }
            return SteeringForce;
        }

        private Vector2 Arrive(Vector2 target, Deceleration deceleration)
        {
            Vector2 toTarget = target - Entity.Position;
            double distance = toTarget.Length();

            if (distance > 0)
            {
                //because Deceleration is enumerated as an int, this value is required
                //to provide fine tweaking of the deceleration..
                double decelerationTweaker = 0.3;

                double speed = distance / ((double)deceleration * decelerationTweaker);
                speed = Math.Min(speed, Entity.MaxSpeed);

                Vector2 desiredVelocity = toTarget * speed / distance;
                return desiredVelocity - Entity.Velocity;
            }
            return new Vector2();
        }

        private bool AccumulateForce(Vector2 forceToAdd)
        {
            double magnitudeRemaining = Entity.MaxForce - SteeringForce.Length();
            if (magnitudeRemaining <= 0)
                return false;

            double magnitudeToAdd = forceToAdd.Length();
            if (magnitudeToAdd > magnitudeRemaining)
                magnitudeToAdd = magnitudeRemaining;

            SteeringForce += Vector2.NormalizeRet(forceToAdd) * magnitudeToAdd;
            return true;
        }

    This is the update method of my objects:

        public void Update(double deltaTime)
        {
            Vector2 steeringForce = Steering.Calculate();
            Vector2 acceleration = steeringForce / Mass;

            Velocity = Velocity + acceleration * deltaTime;
            Velocity.Truncate(MaxSpeed);

            Position = Position + Velocity * deltaTime;
        }

    If you want to see the problem with your own eyes you can download a minimal example here. Thanks in advance.

  • Simple iOS glDrawElements - BAD_ACCESS

    - by user699215
    You can copy paste this into the default OpenGl template created in Xcode. Why am I not seeing anything :-) It is strange as the glDrawArrays(GL_TRIANGLES, 0, 3); is working fine, but with glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); Is giving BAD_ACCESS? Copy paste this into Xcode default OpenGl template: ViewController #import "ViewController.h" #define BUFFER_OFFSET(i) ((char *)NULL + (i)) // Uniform index. enum { UNIFORM_MODELVIEWPROJECTION_MATRIX, UNIFORM_NORMAL_MATRIX, NUM_UNIFORMS }; GLint uniforms[NUM_UNIFORMS]; // Attribute index. enum { ATTRIB_VERTEX, ATTRIB_NORMAL, NUM_ATTRIBUTES }; @interface ViewController () { GLKMatrix4 _modelViewProjectionMatrix; GLKMatrix3 _normalMatrix; float _rotation; GLuint _vertexArray; GLuint _vertexBuffer; NSArray* arrayOfVertex; } @property (strong, nonatomic) EAGLContext *context; @property (strong, nonatomic) GLKBaseEffect *effect; - (void)setupGL; - (void)tearDownGL; @end @implementation ViewController - (void)viewDidLoad { [super viewDidLoad]; self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; GLKView *view = (GLKView *)self.view; view.context = self.context; view.drawableDepthFormat = GLKViewDrawableDepthFormat24; [self setupGL]; } - (void)dealloc { [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; if ([self isViewLoaded] && ([[self view] window] == nil)) { self.view = nil; [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } self.context = nil; } // Dispose of any resources that can be recreated. } GLuint vertexBufferID; GLuint indexBufferID; static const GLfloat vertices[9] = { -0.5, -0.5, 0.5, 0.5, -0.5, 0.5, -0.5, 0.5, 0.5 }; static const GLubyte indices[3] = { 0, 1, 2 }; - (void)setupGL { [EAGLContext setCurrentContext:self.context]; // [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); // glGenVertexArraysOES(1, &_vertexArray); // glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &vertexBufferID); glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID); glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); glGenBuffers(1, &indexBufferID); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, // Specifies the index of the generic vertex attribute to be modified. 3, // Specifies the number of components per generic vertex attribute. Must be 1, 2, 3, 4. 
GL_FLOAT, // GL_FALSE, // 0, // BUFFER_OFFSET(0)); // // glBindVertexArrayOES(0); } - (void)tearDownGL { [EAGLContext setCurrentContext:self.context]; glDeleteBuffers(1, &_vertexBuffer); glDeleteVertexArraysOES(1, &_vertexArray); self.effect = nil; } #pragma mark - GLKView and GLKViewController delegate methods - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } int i; - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect { glClearColor(0.65f, 0.65f, 0.65f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // glBindVertexArrayOES(_vertexArray); // Render the object with GLKit [self.effect prepareToDraw]; //glDrawArrays(GL_TRIANGLES, 0, 3); // Render the object again with ES2 // glDrawArrays(GL_TRIANGLES, 0, 3); glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); } @end

  • Moving around/avoiding obstacles

    - by János Harsányi
    I would like to write a "game" where you can place an obstacle (red), and then a black dot tries to avoid it and reach the green target. I'm using a very simple avoidance scheme: if the black dot is close to the red obstacle, it changes direction and moves that way for a while, then it heads toward the green point again. How could I create a "smooth" path for the computer-controlled "player"? Edit: smoothness is not really the main point; the point is to avoid the red blocking "wall" instead of crashing into it and only then steering away. How could I implement some pathfinding algorithm if I basically just have three points? (And how much more complicated would things become if you could place multiple obstacles?)
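
    One simple option for the single-obstacle case, sketched below with System.Numerics.Vector2 and placeholder names (an illustration under those assumptions, not the only approach): check whether the straight segment from the dot to the target passes too close to the obstacle, and if so insert a detour waypoint pushed out sideways from the obstacle; the dot walks to the waypoint first and then to the target. With many obstacles, the usual next step is a grid or graph plus BFS/A*.

        using System.Numerics;

        // Returns the point to walk toward next: either the target directly,
        // or a detour waypoint pushed out beside a circular obstacle.
        static Vector2 NextWaypoint(Vector2 start, Vector2 target, Vector2 obstacle, float radius)
        {
            Vector2 dir = target - start;
            float len = dir.Length();
            if (len < 1e-4f) return target;
            dir /= len;

            // Closest point to the obstacle centre on the segment start -> target.
            float t = Vector2.Dot(obstacle - start, dir);
            if (t < 0f) t = 0f; else if (t > len) t = len;
            Vector2 closest = start + dir * t;

            float clearance = (obstacle - closest).Length();
            if (clearance >= radius || t <= 0f || t >= len)
                return target;                           // the straight line is already safe

            // Push a waypoint out of the obstacle, roughly perpendicular to the path.
            Vector2 away = clearance > 1e-4f
                ? Vector2.Normalize(closest - obstacle)
                : new Vector2(-dir.Y, dir.X);            // path hits the centre: pick a side
            return obstacle + away * (radius * 1.2f);    // small margin so we don't graze the edge
        }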

  • How does a segment based rendering engine work?

    - by Calmarius
    As far as I know, Descent was one of the first games that featured a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (these cubes may be deformed as long as they remain convex and the sides remain roughly flat). These cubes are connected by their sides. The connected sides are traversable (doors or grids may be placed on these sides), while the unconnected sides are non-traversable walls. So the game is played inside this complex. Descent was software rendered and it had to be very fast to be playable on the 10-100 MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, but these levels are still rendered reasonably fast. So I think they tried to minimize the number of cubes rendered somehow. How do you choose which cubes to render for a given location? As far as I know they used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
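
    For reference, a generic portal-style traversal over such a segment graph looks roughly like the sketch below (a hedged illustration of the idea, not Descent's actual code; the Cube/Portal types and the screen-space clipping test are placeholders). Starting from the cube containing the camera, you only recurse into neighbours through shared sides whose projected opening still overlaps the current screen-space clip region, so cubes hidden behind walls are never visited.

        using System.Collections.Generic;

        class Cube   { public List<Portal> Portals = new List<Portal>(); }
        class Portal { public Cube Neighbour; }          // a shared, traversable side
        struct ScreenRect { }                            // placeholder 2D clip rectangle

        static class PortalCulling
        {
            // Placeholder: project the portal's opening, intersect it with 'clip',
            // return false if nothing of it remains visible.
            static bool TryClipPortal(Portal portal, ScreenRect clip, out ScreenRect narrowed)
            {
                narrowed = clip;   // stub for the sketch
                return true;
            }

            // Simplified: each cube is visited at most once, regardless of which
            // portal it was reached through.
            public static void CollectVisibleCubes(Cube cube, ScreenRect clip, HashSet<Cube> visible)
            {
                if (!visible.Add(cube))
                    return;

                foreach (Portal portal in cube.Portals)
                {
                    if (TryClipPortal(portal, clip, out ScreenRect narrowed))
                        CollectVisibleCubes(portal.Neighbour, narrowed, visible);
                }
            }
        }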

  • XNA Seeing through heightmap problem

    - by Jesse Emond
    I've recently started learning how to program in 3D with XNA and I've been trying to implement a Terrain3D class (a very simple height map). I've managed to draw a simple terrain, but I'm getting a weird bug where I can see through the terrain. This bug happens when I'm looking through a hill on the map. Here is a picture of what happens: I was wondering if this is a common mistake for starters and if any of you have ever experienced the same problem and could tell me what I'm doing wrong. If it's not such an obvious problem, here is my Draw method:

        public override void Draw()
        {
            Parent.Engine.SpriteBatch.Begin(SpriteBlendMode.None,
                SpriteSortMode.Immediate, SaveStateMode.SaveState);

            Camera3D cam = (Camera3D)Parent.Engine.Services.GetService(typeof(Camera3D));
            if (cam == null)
                throw new Exception("Camera3D couldn't be found. Drawing a 3D terrain requires a 3D camera.");

            float triangleCount = indices.Length / 3f;

            basicEffect.Begin();
            basicEffect.World = worldMatrix;
            basicEffect.View = cam.ViewMatrix;
            basicEffect.Projection = cam.ProjectionMatrix;
            basicEffect.VertexColorEnabled = true;

            Parent.Engine.GraphicsDevice.VertexDeclaration = new VertexDeclaration(
                Parent.Engine.GraphicsDevice, VertexPositionColor.VertexElements);

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Begin();

                Parent.Engine.GraphicsDevice.Vertices[0].SetSource(vertexBuffer, 0,
                    VertexPositionColor.SizeInBytes);
                Parent.Engine.GraphicsDevice.Indices = indexBuffer;
                Parent.Engine.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                    0, 0, vertices.Length, 0, (int)triangleCount);

                pass.End();
            }

            basicEffect.End();
            Parent.Engine.SpriteBatch.End();
        }

    Parent is just a property holding the screen that the component belongs to. Engine is a property of that parent screen holding the engine that it belongs to. If I should post more code (like the initialization code), then just leave a comment and I will.
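
    One frequent cause of this symptom in XNA 3.x is worth ruling out (offered as a hedged guess, since the depth/stencil setup isn't shown): SpriteBatch.Begin changes render states and can leave the depth buffer disabled, so the terrain's triangles are drawn without depth testing and far hills show through near ones. Re-enabling depth testing after Begin, before drawing the terrain, is a cheap thing to try:

        // Sketch: restore depth testing after SpriteBatch.Begin (XNA 3.x RenderState API).
        GraphicsDevice device = Parent.Engine.GraphicsDevice;
        device.RenderState.DepthBufferEnable = true;
        device.RenderState.DepthBufferWriteEnable = true;
        // ...then set up basicEffect and draw the terrain as above.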

  • Obtaining a world point from a screen point with an orthographic projection

    - by vargonian
    I assumed this was a straightforward problem but it has been plaguing me for days. I am creating a 2D game with an orthographic camera. I am using a 3D camera rather than just hacking it because I want to support rotating, panning, and zooming. Unfortunately the math overwhelms me when I'm trying to figure out how to determine if a clicked point intersects a bounds (let's say rectangular) in the game. I was under the impression that I could simply transform the screen point (the clicked point) by the inverse of the camera's View * Projection matrix to obtain the world coordinates of the clicked point. Unfortunately this is not the case at all; I get some point that seems to be in some completely different coordinate system. So then as a sanity check I tried taking an arbitrary world point and transforming it by the camera's View*Projection matrices. Surely this should get me the corresponding screen point, but even that didn't work, and it is quickly shattering any illusion I had that I understood 3D coordinate systems and the math involved. So, if I could form this into a question: How would I use my camera's state information (view and projection matrices, for instance) to transform a world point to a screen point, and vice versa? I hope the problem will be simpler since I'm using an orthographic camera and can make several assumptions from that. I very much appreciate any help. If it makes a difference, I'm using XNA Game Studio.
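
    For what it's worth, manually multiplying by the inverse of View * Projection only takes you to normalized device coordinates in the -1..1 range, not pixels; the missing piece is the viewport transform. XNA's Viewport class bundles all of it in Project and Unproject, so a minimal sketch (assuming the same view and projection matrices the camera renders with, and an identity world matrix) looks like this:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // World -> screen: X/Y are pixels, Z is the depth in [0, 1].
        static Vector3 WorldToScreen(GraphicsDevice device, Vector3 world, Matrix view, Matrix projection)
        {
            return device.Viewport.Project(world, projection, view, Matrix.Identity);
        }

        // Screen -> world: unproject the clicked pixel at the near and far planes.
        // With an orthographic camera the resulting ray is parallel to the view
        // direction, so for a 2D game intersecting it with the plane the sprites
        // live on (or just taking the near point) gives the world-space click.
        static Vector3 ScreenToWorld(GraphicsDevice device, Vector2 screen, Matrix view, Matrix projection)
        {
            Vector3 near = device.Viewport.Unproject(new Vector3(screen, 0f), projection, view, Matrix.Identity);
            Vector3 far  = device.Viewport.Unproject(new Vector3(screen, 1f), projection, view, Matrix.Identity);
            Vector3 direction = Vector3.Normalize(far - near);
            return near;   // or intersect the ray (near, direction) with your sprite plane
        }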

  • How to load with "simplegame" mode after cooking?

    - by Emran Bayati
    I have a problem after cooking my game in Frontend (UDK). Everything is OK during the cook operation: I compiled my scripts with no problem, and I cooked my packages with no problem either. Before packaging the whole game I launch it to see if everything is OK, but when I load the game it loads with the Unreal Tournament default game type. I don't want it to load with that type; I want it to load with the "simplegame" game type. I should mention that I set the game type to "simplegame" in the Editor under View > World Properties > Game Type, but the game still loads with the UT game mode after I cook it. I just want to load my game in "simplegame" mode after cooking. If anyone can help, please do. Thanks a lot, Emran Bayati

  • What makes game sound effects "good"?

    - by you786
    I'm making a small game, and I've found some free sound effects that I'd like to use. The issue is that I can't get the sound effects to sound like they "belong" in my game. I don't know what to look for that can make sound effects match the rest of my game's style. I have some ideas about what affects how well audio meshes with graphics. For example, I have a feeling that the current SFX I have may be too "realistic" for my graphical style, which is pretty cartoon-like. Also, is there a gold standard for what volume various SFX should be at? (For example, I am thinking that footsteps or other common sounds should be at barely audible volumes, while enemy deaths or anything that is a "big deal" should be louder.) I found a similar question about graphics; I'm looking for a similar response for sound effects.

  • First Person Camera strafing at angle

    - by Linkandzelda
    I have a simple camera class working in DirectX 11 that allows moving forward and rotating left and right. I'm trying to add strafing to it but am having some problems. The strafing works when there's no camera rotation, i.e. when the camera starts at 0, 0, 0; but after rotating the camera in either direction it seems to strafe at an angle, or inverted, or just some odd stuff. Here is a video uploaded to Dropbox showing this behavior: https://dl.dropboxusercontent.com/u/2873587/IncorrectStrafing.mp4 And here is my camera class. I have a hunch that it's related to the calculation of the camera position. I tried various different calculations in strafe and they all seem to follow the same pattern and the same behavior. Also, m_camera_rotation represents the Y rotation, as pitching isn't implemented yet.

        #include "camera.h"

        camera::camera(float x, float y, float z, float initial_rotation)
        {
            m_x = x;
            m_y = y;
            m_z = z;
            m_camera_rotation = initial_rotation;
            updateDXZ();
        }

        camera::~camera(void)
        {
        }

        void camera::updateDXZ()
        {
            m_dx = sin(m_camera_rotation * (XM_PI/180.0));
            m_dz = cos(m_camera_rotation * (XM_PI/180.0));
        }

        void camera::Rotate(float amount)
        {
            m_camera_rotation += amount;
            updateDXZ();
        }

        void camera::Forward(float step)
        {
            m_x += step * m_dx;
            m_z += step * m_dz;
        }

        void camera::strafe(float amount)
        {
            float yaw = (XM_PI/180.0) * m_camera_rotation;
            m_x += cosf( yaw ) * amount;
            m_z += sinf( yaw ) * amount;
        }

        XMMATRIX camera::getViewMatrix()
        {
            updatePosition();
            return XMMatrixLookAtLH(m_position, m_lookat, m_up);
        }

        void camera::updatePosition()
        {
            m_position = XMVectorSet(m_x, m_y, m_z, 0.0);
            m_lookat = XMVectorSet(m_x + m_dx, m_y, m_z + m_dz, 0.0);
            m_up = XMVectorSet(0.0, 1.0, 0.0, 0.0);
        }

  • Java - 2d Array Tile Map Collision

    - by Corey
    How would I go about making certain tiles in my array collide with my player? Say I want every tile whose value is 2 in the array to collide. I am reading my array from a txt file, if that matters, and I am using the Slick2D library. Here is my code if needed.

        public class Tiles {
            Image[] tiles = new Image[3];
            int[][] map = new int[500][500];
            Image grass, dirt, mound;
            SpriteSheet tileSheet;
            int tileWidth = 32;
            int tileHeight = 32;

            public void init() throws IOException, SlickException {
                tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight);
                grass = tileSheet.getSprite(0, 0);
                dirt = tileSheet.getSprite(7, 7);
                mound = tileSheet.getSprite(2, 6);

                tiles[0] = grass;
                tiles[1] = dirt;
                tiles[2] = mound;

                int x = 0, y = 0;
                BufferedReader in = new BufferedReader(new FileReader("assets/map.txt"));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] values = line.split(",");
                    for (String str : values) {
                        int str_int = Integer.parseInt(str);
                        map[x][y] = str_int;
                        //System.out.print(map[x][y] + " ");
                        y = y + 1;
                    }
                    //System.out.println("");
                    x = x + 1;
                    y = 0;
                }
                in.close();
            }

            public void update() {
            }

            public void render(GameContainer gc) {
                for (int x = 0; x < 50; x++) {
                    for (int y = 0; y < 50; y++) {
                        int textureIndex = map[y][x];
                        Image texture = tiles[textureIndex];
                        texture.draw(x * tileWidth, y * tileHeight);
                    }
                }
            }
        }

    I tried something like this, but it doesn't ever "collide". x and y are my player position.

        if (tiles.map[(int)x/32][(int)y/32] == 2) {
            System.out.println("Collided");
        }

  • adapting a Unity gravitational script to allow moons

    - by PartyMix
    I'm using this script: http://wiki.unity3d.com/index.php/Simple_planetary_orbits to get a solar system going in Unity, but it doesn't seem to support creating bodies that orbit other moving bodies (or I am using it incorrectly). Any idea about how to modify it so that it does (or just use it correctly)? I've been beating my head against this problem for a couple hours, and I really don't feel like I have any idea what I'm doing. Thanks in advance.

  • Physics in carrom like game using cocos2d + Box2D

    - by Raj
    I am working on a carrom-like game using cocos2d + Box2D. I set the world gravity to (0,0), since I want gravity along the z-axis. I set the following values for the coin and striker bodies:

    Coin body (circle with radius 15/PTM_RATIO): density = 20.0f; friction = 0.4f; restitution = 0.6f;

    Striker body (circle with radius 15/PTM_RATIO): density = 25.0f; friction = 0.6f; restitution = 0.3f;

    The output is not smooth. When I apply ApplyLinearImpulse(force, position) the coin movement looks like it is floating in the air - it takes too much time to stop. What values for the coin and striker would make it behave like real carrom?

  • How do you calculate UVW coordinates?

    - by Jenko
    I'm working on a 3D engine and I'm calculating UVT coordinates, where U and V represent pixels on the texture measured in 0-1, and T is: T = perspective / Z. But I'm trying to use this perspective-correct triangle rasteriser, which requires a W per vertex. How do I calculate the W for each vertex for the drawPerspectiveTexturedPolygon() function? Hint: the code comments refer to W as the "homogenous coordinate"... does that mean anything?
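
    As a hedged note on the usual convention (the linked rasteriser may differ, so check how drawPerspectiveTexturedPolygon() actually uses its W parameter): the homogeneous coordinate w is the fourth value a perspective projection produces alongside x, y and z, and for a standard perspective matrix it is simply the vertex's view-space depth, up to a sign convention. In other words w = Z here, so the existing T = perspective / Z term is already proportional to 1/w. Perspective-correct rasterisers rely on the fact that u/w, v/w and 1/w vary linearly across the screen, so per vertex one typically supplies

        w  = Z (view-space depth)
        u' = u / w
        v' = v / w

    and per pixel the rasteriser interpolates u', v' and 1/w linearly, then recovers u = u' / (1/w) and v = v' / (1/w).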
