Search Results


  • What purpose do armor points serve?

    - by Bane
    I have seen a mechanic which I call "armor points" in many games: Quake, Counter-Strike, etc. Generally, while the player has these armor points, he takes less damage. However, they act in a similar fashion to health points: you lose them by taking said damage. Why would you design such a feature? Is this just health 2.0, or am I missing something? To me, armor only makes sense in, for example, RPGs, where it is a constant that determines your resistance. But I don't see why it would need to be reducible during combat.

    Read the article

  • Examples of good JavaScript/HTML5-based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.), are there any good examples of web-based games built on completely open standards (meaning JavaScript, HTML and CSS)? I see a lot of examples of pure HTML5 implementations of what was once only possible in Flash (like the stuff here: http://www.html5rocks.com/) but not many games, a domain which still seems dominated by Flash. I'm curious what's possible and what the limitations are.

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine.

    I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal, the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix that I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is: is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale which would give the aimer the appearance of being a 3d object, warped by perspective? Orthographic perspective is what I'm going for, I think. So the aimer arrow would get longer when facing sideways, and shorter when facing north and south, because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
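    One way to fake the scale (an editor's sketch, not from the original post; aim, screenPos, origin and the .707 factor are assumed from the description above) is to project the arrow's local length and width axes through the same mapping the game already applies to positions, and hand the two resulting magnitudes to spritebatch as its Vector2 scale:

        // Sketch, assuming an XNA-style SpriteBatch and the projection described above:
        // screenX = worldX, screenY = 0.707f * (worldY + worldZ)
        Vector2 ProjectGround(Vector3 v)
        {
            return new Vector2(v.X, 0.707f * (v.Y + v.Z));
        }

        // aim is the unit aim direction on the XZ ground plane; the arrow texture points along +X
        Vector2 screenDir  = ProjectGround(new Vector3(aim.X, 0f, aim.Z));   // projected length axis
        Vector2 screenPerp = ProjectGround(new Vector3(-aim.Z, 0f, aim.X));  // projected width axis

        float rotation = (float)Math.Atan2(screenDir.Y, screenDir.X);
        Vector2 scale  = new Vector2(screenDir.Length(), screenPerp.Length());

        spriteBatch.Draw(aimerTexture, screenPos, null, Color.White,
                         rotation, origin, scale, SpriteEffects.None, layerDepth);

    Facing east or west this yields a scale of (1, .707); facing north or south, (.707, 1), which matches the longer/wider behavior described above. The caveat from the note still holds: on diagonals the two projected axes are no longer perpendicular, and a plain Vector2 scale cannot represent that skew, so this is only an approximation.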

    Read the article

  • Drawing 2D Grid in 3D View - Need help with method

    - by Deukalion
    I'm trying to draw a simple 2D grid for an editor, to be able to navigate more clearly around the 3D space, but I can't render it. The Grid2D class creates a grid of a certain size at a location and should just draw lines:

        public class Grid2D : IShape
        {
            private VertexPositionColor[] _vertices;
            private Vector2 _size;
            private Vector3 _location;
            private int _faces;

            public Grid2D(Vector2 size, Vector3 location, Color color)
            {
                float x = 0, y = 0;
                if (size.X < 1f) { size.X = 1f; }
                if (size.Y < 1f) { size.Y = 1f; }
                _size = size;
                _location = location;
                List<VertexPositionColor> vertices = new List<VertexPositionColor>();
                _faces = 0;
                for (y = -size.Y; y <= size.Y; y++)
                {
                    vertices.Add(new VertexPositionColor(location + new Vector3(-size.X, y, 0), color));
                    vertices.Add(new VertexPositionColor(location + new Vector3(size.X, y, 0), color));
                    _faces++;
                }
                for (x = -size.X; x <= size.X; x++)
                {
                    vertices.Add(new VertexPositionColor(location + new Vector3(x, -size.Y, 0), color));
                    vertices.Add(new VertexPositionColor(location + new Vector3(x, size.Y, 0), color));
                    _faces++;
                }
                _vertices = vertices.ToArray();
            }

            public void Render(GraphicsDevice device)
            {
                device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.LineList, _vertices, 0, _faces);
            }
        }

    Like this:

        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+

    Anyone know what I'm doing wrong? If I add a Shape without a texture, its effect is set automatically to VertexColorEnabled = true and TextureEnabled = false. This is how I render it:

        foreach (RenderObject render in _renderObjects)
        {
            render.Effect.Projection = projection;
            render.Effect.View = view;
            render.Effect.World = world;
            foreach (EffectPass pass in render.Effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                try
                {
                    // Could be a Grid2D
                    render.Shape.Render(_device);
                }
                catch
                {
                    throw;
                }
            }
        }

    This exception is thrown: "The current vertex declaration does not include all the elements required by the current vertex shader. Normal0 is missing." Simply put, I can't figure out how to draw a few lines. I wanted to draw them one at a time, and I guess that's the problem I haven't figured out; even when I tried rendering vertices[i], vertices[i+1] with primitiveCount = 1 and 2 vertices, and so on, it didn't work either. Any suggestions?
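    For reference, a minimal way to get colored lines on screen in XNA is a BasicEffect with VertexColorEnabled set, so the vertex shader expects only position and color (an editor's sketch; variable names are illustrative):

        BasicEffect effect = new BasicEffect(device)
        {
            VertexColorEnabled = true,   // shader wants position + color only, no Normal0
            TextureEnabled = false,
            Projection = projection,
            View = view,
            World = world
        };

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            // one LineList draw for the whole grid; primitiveCount = number of lines
            device.DrawUserPrimitives(PrimitiveType.LineList, vertices, 0, vertices.Length / 2);
        }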

    Read the article

  • How to make Pokémon White 3D effect?

    - by Pipo
    I just wondered how to create a 3D effect similar to Pokemon White/Black. It seems to be not polygon based, but created just with sprites. If the perspective changes, the sprites stay sharp and don't get blurred. How can I achieve this? Source: https://www.youtube.com/watch?v=fZEPUPYOnRc&feature=youtube_gdata_player Edit: Wow, two downvotes because I used a video instead of screenshots? Don't get me wrong, I thank you, because you want to help me, but the 3D effect can be better understood in motion. Anyway, here is a screenshot: http://wearearcade.com/wp-content/uploads/2011/03/pokemon-black-white-starter-town.jpg So, if this is a hardware limitation, how can I achieve this on different hardware, e.g. in an HTML5 game? Thank you.

    Read the article

  • Movement of body after applying weld joint

    - by ved
    I have two rectangular bodies, and I've applied a weld joint to them successfully. I want to move the joined body by applying a linear impulse. After the weld joint, these two bodies become a single body, right? How do I apply force/impulse to the joined body? I am using Box2D with LibGDX. I've tried this:

        polygon1.applyLinearImpulse(new Vector2(-5, 0), polygon1.getWorldCenter(), true);

    I thought that if I moved polygon1 then polygon2 would also move due to my weld joint, but it is not working properly. Why don't they move together after being welded?

    Read the article

  • How do I design a game framework for fast reaction to user input?

    - by Miro
    I've played some games at around 30 fps and some of them had a low reaction time, around 0.1 s. I didn't know why. Now that I'm designing my framework for a crossplatform game, I know why: they were probably preparing the new frame while rendering the previous one.

        RENDER  1 | RENDER  2 | RENDER  3 | RENDER  4
        PREPARE 2 | PREPARE 3 | PREPARE 4 | PREPARE 5

    I see the first frame while the second frame is being rendered and the third frame is being prepared. If I react to the 1st frame at that time, the result shows up in the fourth frame, so it takes 3/FPS seconds for results to appear. At 30 fps that would be 100 ms, which is quite bad. So I'm wondering: how should I design my framework to respond to user interaction quickly?
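    A common way to cut that latency (an editor's sketch of the idea; names are illustrative) is to sample input as late as possible, immediately before simulating the frame that is about to be rendered, instead of consuming input buffered a frame or two earlier:

        while (running)
        {
            InputState input = PollInput();  // read the freshest device state right now
            Update(input, dt);               // simulate this frame using this frame's input
            Render();                        // draw the state just produced
        }

    With this ordering, input-to-photon latency is roughly one frame (~33 ms at 30 fps) plus display lag, rather than the three-frame pipeline sketched above.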

    Read the article

  • HLSL: An array of textures and sampler states

    - by nate142
    The shader must switch between multiple textures depending on the alpha value of the original texture for each pixel. Now this would work fine if I didn't have to worry about sampler states. I have created my array of textures and can select a texture based on the alpha value of the pixel. But how do I create an array of SamplerStates and link it to my array of textures? I attempted to treat the SamplerState as a function by adding the (int i), but that didn't work. Also, I can't use Texture.Sample since this is shader model 2.0.

        //shader model 2.0 (DX9)
        texture subTextures[255];

        SamplerState MeshTextureSampler(int i)
        {
            Texture = (subTextures[i]);
        };

        float4 SampleCompoundTexture(float2 texCoord, float4 diffuse)
        {
            float4 SelectedColor = SAMPLE_TEXTURE(Texture, texCoord);
            int i = SelectedColor.a;
            texture SelectedTx = subTextures[i];
            return tex2D(MeshTextureSampler(i), texCoord) * diffuse;
        }

    Read the article

  • Does anybody know of any resources to achieve this particular "2.5D" isometric engine effect?

    - by Craig Whitley
    I understand this is a little vague, but I was hoping somebody might be able to describe a high-level workflow, or link to a resource, for achieving a specific isometric "2.5D" tile engine effect. I fell in love with this engine: http://www.youtube.com/watch?v=-Q6ISVaM5Ww. Especially with the lighting and the shaders! He has a brief description of how he achieved what he did, but I could really use a brief flow of where you would start, what you would read up on and learn, and the logical order to implement these things. A few specific questions:

    1) Is there a heightmap on the ground texture that lets the light reflect brighter on certain parts of it?
    2) "..using a special material which calculates the world-space normal vectors of every pixel.." - is this some "magic" special material he has created himself, or can you hazard a guess at what he means?
    3) In relation to the above quote, what does he mean by 'world-space normal vectors of every pixel'?
    4) I'm guessing I'm being a little bit optimistic when I ask if there's any 'all-in-one' tutorial out there? :)

    Read the article

  • Circle-Line Collision Detection Problem

    - by jazzdawg
    I am currently developing a breakout clone and I have hit a roadblock in getting collision detection between a ball (circle) and a brick (convex polygon) working correctly. I am using a circle-line collision detection test where each line represents an edge on the convex polygon brick. For the majority of the time the circle-line test works properly and the points of collision are resolved correctly. (Screenshot: collision detection working correctly.) However, occasionally my collision detection code returns false due to a negative discriminant when the ball is actually intersecting the brick. (Screenshot: collision detection failing.)

    I am aware of the inefficiency of this method, and I am using axis-aligned bounding boxes to cut down on the number of bricks tested. My main concern is whether there are any mathematical bugs in my code below.

        /*
         * from and to are points at the start and end of the convex polygon's edge.
         * This function is called for every edge in the convex polygon until a
         * collision is detected.
         */
        bool circleLineCollision(Vec2f from, Vec2f to)
        {
            Vec2f lFrom, lTo, lLine;
            Vec2f line, normal;
            Vec2f intersectPt1, intersectPt2;
            float a, b, c, disc, sqrt_disc, u, v, nn, vn;
            bool one = false, two = false;

            // set line vectors
            lFrom = from - ball.circle.centre;  // localised
            lTo = to - ball.circle.centre;      // localised
            lLine = lFrom - lTo;                // localised
            line = from - to;

            // calculate a, b & c values
            a = lLine.dot(lLine);
            b = 2 * (lLine.dot(lFrom));
            c = (lFrom.dot(lFrom)) - (ball.circle.radius * ball.circle.radius);

            // discriminant
            disc = (b * b) - (4 * a * c);

            if (disc < 0.0f)
            {
                // no intersections
                return false;
            }
            else if (disc == 0.0f)
            {
                // one intersection
                u = -b / (2 * a);
                intersectPt1 = from + (lLine.scale(u));
                one = pointOnLine(intersectPt1, from, to);
                if (!one)
                    return false;
                return true;
            }
            else
            {
                // two intersections
                sqrt_disc = sqrt(disc);
                u = (-b + sqrt_disc) / (2 * a);
                v = (-b - sqrt_disc) / (2 * a);
                intersectPt1 = from + (lLine.scale(u));
                intersectPt2 = from + (lLine.scale(v));
                one = pointOnLine(intersectPt1, from, to);
                two = pointOnLine(intersectPt2, from, to);
                if (!one && !two)
                    return false;
                return true;
            }
        }

        bool pointOnLine(Vec2f p, Vec2f from, Vec2f to)
        {
            if (p.x >= min(from.x, to.x) && p.x <= max(from.x, to.x) &&
                p.y >= min(from.y, to.y) && p.y <= max(from.y, to.y))
                return true;
            return false;
        }
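    For reference, the quadratic the code is solving comes from substituting the parametric edge into the circle equation. With F = from - centre and D = lLine, points on the edge's line are F + uD, and intersections with the circle satisfy:

        |F + uD|^2 = r^2
        (D . D) u^2  +  2 (D . F) u  +  (F . F - r^2)  =  0

    which matches the a, b and c above. A genuinely negative discriminant means the infinite line misses the circle entirely, so intermittent failures while visibly overlapping usually point at which edges are tested or at floating-point grazing cases, rather than at these coefficients.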

    Read the article

  • Moving 2d camera in the y direction

    - by Alex
    I'm developing a simple game for the iPhone and am struggling to work out the best way for the camera to follow the main character. The following picture highlights the three main components:

    Circle - the main character
    Green line - terrain
    Black background

    The terrain is simply made from an array of points (approx 20 points per screen width). The terrain is moved in the x direction relative to the black background in order to keep the circle in the position shown. The distance to move the terrain is simply:

        movex = circle.position.x - terrain.position.x

    with a constant to fix the circle at some distance from the left of the screen. I am struggling to determine the best way to position the terrain in the y plane to keep the focus on the character. I want to move the terrain in the y direction smoothly, and not fix it to the position of the circle, so the circle can move in the y plane. If I take the same approach as the x positioning, the character is fixed at a point on the screen and the terrain moves. I could sample some terrain points either side of the character and produce an average, but in my implementation this was not smooth. I thought another approach might be to create a camera 'line' that is a smooth version of the terrain line and make the camera follow this, but I'm not sure if this is the optimum solution. Any advice is much appreciated!
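    One common approach (an editor's sketch; names are illustrative) is to let the camera's y chase a target value with exponential smoothing, so sudden terrain or character jumps get filtered into a soft follow:

        // Run once per frame. targetY could be the circle's y, or an average of
        // nearby terrain heights; smoothing around 5..10 gives a gentle lag.
        float targetY = circle.position.y;
        cameraY += (targetY - cameraY) * smoothing * dt;

        // Then position the terrain relative to the smoothed camera,
        // mirroring the x logic: movey = cameraY - terrain.position.y

    Because the correction each frame is proportional to the remaining distance, the camera eases in and out by itself, with no special cases at the start or end of a movement.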

    Read the article

  • Tetris Movement - Implementation

    - by James Brauman
    Hi gamedev, I'm developing a Tetris clone and working on the input at the moment. When I was prototyping, movement was triggered by releasing a directional key. However, in most Tetris games I've played, the movement is a bit more complex. When a directional key is pressed, the shape moves one space in that direction. After a short interval, if the key is still held down, the shape starts moving in that direction continuously until the key is released. In the case of the down key being pressed, there is no pause between the initial movement and the subsequent continuous movement. I've come up with a solution, and it works well, but it's totally over-engineered. Hey, at least I can recognize when things are getting silly, right? :)

        public class TetrisMover
        {
            List<Keys> registeredKeys;
            Dictionary<Keys, TimeSpan> continuousPressedTime;
            Dictionary<Keys, TimeSpan> totalPressedTime;
            Dictionary<Keys, TimeSpan> initialIntervals;
            Dictionary<Keys, TimeSpan> continousIntervals;
            Dictionary<Keys, Action> keyActions;
            Dictionary<Keys, bool> initialActionDone;
            KeyboardState currentKeyboardState;

            public TetrisMover()
            {
                *snip*
            }

            public void Update(GameTime gameTime)
            {
                currentKeyboardState = Keyboard.GetState();
                foreach (Keys currentKey in registeredKeys)
                {
                    if (currentKeyboardState.IsKeyUp(currentKey))
                    {
                        continuousPressedTime[currentKey] = TimeSpan.Zero;
                        totalPressedTime[currentKey] = TimeSpan.Zero;
                        initialActionDone[currentKey] = false;
                    }
                    else
                    {
                        if (initialActionDone[currentKey] == false)
                        {
                            keyActions[currentKey]();
                            initialActionDone[currentKey] = true;
                        }
                        totalPressedTime[currentKey] += gameTime.ElapsedGameTime;
                        if (totalPressedTime[currentKey] >= initialIntervals[currentKey])
                        {
                            continuousPressedTime[currentKey] += gameTime.ElapsedGameTime;
                            if (continuousPressedTime[currentKey] >= continousIntervals[currentKey])
                            {
                                keyActions[currentKey]();
                                continuousPressedTime[currentKey] = TimeSpan.Zero;
                            }
                        }
                    }
                }
            }

            public void RegisterKey(Keys key, TimeSpan initialInterval, TimeSpan continuousInterval, Action keyAction)
            {
                if (registeredKeys.Contains(key))
                    throw new InvalidOperationException(
                        string.Format("The key {0} is already registered.", key));
                registeredKeys.Add(key);
                continuousPressedTime.Add(key, TimeSpan.Zero);
                totalPressedTime.Add(key, TimeSpan.Zero);
                initialIntervals.Add(key, initialInterval);
                continousIntervals.Add(key, continuousInterval);
                keyActions.Add(key, keyAction);
                initialActionDone.Add(key, false);
            }

            public void UnregisterKey(Keys key)
            {
                *snip*
            }
        }

    I'm updating it every frame, and this is how I'm registering keys for movement:

        tetrisMover.RegisterKey(
            Keys.Left,
            keyHoldStartSpecialInterval,
            keyHoldMovementInterval,
            () => { Move(Direction.Left); });
        tetrisMover.RegisterKey(
            Keys.Right,
            keyHoldStartSpecialInterval,
            keyHoldMovementInterval,
            () => { Move(Direction.Right); });
        tetrisMover.RegisterKey(
            Keys.Down,
            TimeSpan.Zero,
            keyHoldMovementInterval,
            () => { PerformGravity(); });

    Issues that this doesn't address: If both left and right are held down, the shape moves back and forth really quickly. If a directional key is held down when the turn finishes and the shape is replaced by a new one, the new one will move quickly in that direction instead of having the little pause it is supposed to have. I could fix these issues, but I think it would make the solution even worse. How would you implement this?
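    A leaner shape for this (an editor's sketch of the standard delayed-auto-shift pattern; names are illustrative) keeps one timer and one flag per key, and resets that state whenever a new piece spawns, which also addresses the second issue listed above:

        // Per registered key, run once per frame:
        if (keyboard.IsKeyDown(key))
        {
            if (!initialDone)
            {
                Move(dir);                   // immediate step on first press
                initialDone = true;
                repeatTimer = initialDelay;  // zero for the down key
            }
            else
            {
                repeatTimer -= dt;
                if (repeatTimer <= 0f)
                {
                    Move(dir);
                    repeatTimer = repeatInterval;
                }
            }
        }
        else
        {
            initialDone = false;             // also clear every key's state when a new piece spawns
        }

    For the left/right conflict, a common rule is to honor only the most recently pressed of the two directions instead of running both timers at once.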

    Read the article

  • How to avoid game objects accidentally deleting themselves in C++

    - by Tom Dalling
    Let's say my game has a monster that can kamikaze explode on the player. Let's pick a name for this monster at random: a Creeper. So, the Creeper class has a method that looks something like this:

        void Creeper::kamikaze()
        {
            EventSystem::postEvent(ENTITY_DEATH, this);
            Explosion* e = new Explosion;
            e->setLocation(this->location());
            this->world->addEntity(e);
        }

    The events are not queued, they get dispatched immediately. This causes the Creeper object to get deleted somewhere inside the call to postEvent. Something like this:

        void World::handleEvent(int type, void* context)
        {
            if (type == ENTITY_DEATH)
            {
                Entity* ent = dynamic_cast<Entity*>(context);
                removeEntity(ent);
                delete ent;
            }
        }

    Because the Creeper object gets deleted while the kamikaze method is still running, it will crash when it tries to access this->location(). One solution is to queue the events into a buffer and dispatch them later. Is that the common solution in C++ games? It feels like a bit of a hack, but that might just be because of my experience with other languages with different memory management practices. In C++, is there a better general solution to this problem where an object accidentally deletes itself from inside one of its methods?

    Read the article

  • Blending transition in cocos2d

    - by fiddler
    In my cocos2d-iphone game, I have 2 backgrounds (CCNodes), each containing a quite complex hierarchy of sprites. I would like to make a smooth transition between them:

    initially, only the first background is visible
    at the end, only the second one is visible

    Is there a good way to set the opacity of a full hierarchy of sprites? I tried to recursively set the opacity of all the contained sprites. It kinda works, except that:

    I guess it's not very efficient
    I would like the opacity of overlapping sprites to be 'merged' (as if the background was one single big sprite)

    Read the article

  • Isometric smooth fog

    - by marcg11
    I'm working on a simple 2d game with Direct3D 9. It's an isometric game with diamond tiles and a staggered map. This is what I have: As you see, I have some kind of fog, which is accomplished with a fog matrix that is true (clear terrain) or false (obscured terrain). But the result is very chunky: the fog moves as the player moves by tiles, not by pixels. Basically I check every tile for fog, and if there is some, I just change the color of that tile:

        if (scene->fog[i+mapx][j+mapy] == FOG_NONE)
        {
            tile_color = 0x666666FF;
        }

    I also would like the fog to be smoother. For that I followed this "tutorial" but I haven't managed to work it out: http://www.appsizematters.com/2010/07/how-to-implement-a-fog-of-war-part-2-smooth/
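    One way to smooth it (an editor's sketch in C#-style pseudocode; the fog grid and names are assumptions based on the description above) is to store fog as values at tile corners and interpolate bilinearly per pixel or per vertex, so edges fade instead of snapping tile by tile:

        // fog01[x, y]: 0 = obscured, 1 = clear, sampled at tile corners
        float FogAt(float wx, float wy)
        {
            int x = (int)wx, y = (int)wy;
            float fx = wx - x, fy = wy - y;
            float top    = Lerp(fog01[x, y],     fog01[x + 1, y],     fx);
            float bottom = Lerp(fog01[x, y + 1], fog01[x + 1, y + 1], fx);
            return Lerp(top, bottom, fy);   // blend the tile darkening with this, not an on/off test
        }

        static float Lerp(float a, float b, float t) { return a + (b - a) * t; }

    Feeding this value into the tile's vertex colors lets the hardware do the per-pixel gradient for free.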

    Read the article

  • Scaling along an arbitrary axis (Dealing with non-uniform scale)

    - by Jon
    I'm trying to build my own little engine to get more familiar with the concepts of 3D programming. I have a transform class that on each frame creates a Scaling Matrix (S) and a Rotation Matrix from a Quaternion (R) and concatenates them together (S*R). Once I have SR, I insert the translation values into the bottom row. So I end up with a transformation matrix that looks like:

        [SR SR SR 0]
        [SR SR SR 0]
        [SR SR SR 0]
        [tx ty tz 1]

    This works perfectly in all cases except when rotating an object that has a non-uniform scale. For example, a unit cube with ScaleX = 4, ScaleY = 2, ScaleZ = 1 will give me a rectangular box that is 4 times as wide as the depth and twice as high as the depth. If I then translate this around, the box stays the same and looks normal. The problem happens whenever I try to rotate this scaled box. The shape itself becomes distorted, and it appears as though the scale factors are affecting the object on the world X,Y,Z axes rather than the local X,Y,Z axes of the object.

    I've done some pretty extensive research through a variety of textbooks (Eberly, Moller/Hoffman, Pharr etc.) and there isn't a ton there to go off of. Online, most of the answers say to avoid non-uniform scaling; I understand the desire to avoid it, but I'd still like to figure out how to support it. The only thing I can think of is that when constructing a Scale Matrix:

        [sx  0  0 0]
        [ 0 sy  0 0]
        [ 0  0 sz 0]
        [ 0  0  0 1]

    this is scaling along the world axes instead of the object's local Direction, Up and Right vectors, or its local Z, Y, X axes. Does anyone have any tips or ideas on how to construct a transformation matrix that allows for non-uniform scaling and rotation? Thanks!
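    For comparison, the usual row-vector (XNA-style) composition applies scale first, in local space, then rotation, then translation; a minimal sketch:

        // world = S * R * T; with row vectors, v' = v * world scales along the
        // object's local axes before any rotation is applied, so a non-uniform
        // scale stays attached to the object.
        Matrix world = Matrix.CreateScale(scaleX, scaleY, scaleZ)
                     * Matrix.CreateFromQuaternion(rotation)
                     * Matrix.CreateTranslation(position);

    If the distortion persists with this order, the usual suspects are a column-vector math library (which needs the reverse order, T * R * S) or a decomposition step that re-extracts scale and rotation from the combined matrix.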

    Read the article

  • Music for Kids Game!

    - by Dane
    I'm developing multimedia software for kindergarten kids. It introduces them to animals, alphabets, simple math, and colors, and it contains some simple games. Music is very crucial for my project, and it is very important to choose the right sort of music for the different sections. But unfortunately I know nothing about music. Is there a music consultancy which can help me choose melodies and rhythms for my project from free music available on the internet? My budget is limited, but as this is mandatory and I have no knowledge or taste in music, I think I can afford to pay for this.

    Read the article

  • Component-based Rendering

    - by Kikaimaru
    I have a Renderer component that draws a Texture2D (or sprite). According to component-based architecture, I should only have an OnUpdate method, and my rendering code should live there, something like:

        spriteBatch.Draw(Texture, Vector2.Zero, Color.White);

    But first I need to call spriteBatch.Begin(). Where should I call it? And how can I make sure it's called before any Renderer component's OnUpdate method? (I need to do more than just Begin(); I also need to set the right render target for the camera, etc.)
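    One common arrangement (an editor's sketch; names are illustrative) is to give a central render system ownership of Begin/End and the render target, and have components only issue Draw calls in between:

        // Called once per frame by the engine, after all component updates
        void RenderAll(SpriteBatch spriteBatch, IEnumerable<Renderer> renderers, Camera camera)
        {
            graphicsDevice.SetRenderTarget(camera.Target);   // camera setup happens here, once
            spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                              null, null, null, null, camera.ViewMatrix);

            foreach (Renderer r in renderers)
                r.OnRender(spriteBatch);   // components only call spriteBatch.Draw(...)

            spriteBatch.End();
            graphicsDevice.SetRenderTarget(null);
        }

    This keeps the draw/update split explicit: the engine guarantees Begin has run before any component renders, and components never need to know about render targets at all.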

    Read the article

  • In concept, how is animation done?

    - by sharethis
    My first approaches to animation for my game relied mostly on sine and cosine functions with time as the parameter. For a jump, a perfect sine function is acceptable, but for motions of arms, weapons or a face it would look quite unnatural. Moreover, patching every animation together out of sines and cosines soon reaches its limits. I've heard of skeletons and rigging already. Although I could not implement skeletal animation, I can't imagine that the quite natural animations in major games are made of static predefined motion states. So how, in general, is animation done today?
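    The usual answer is keyframes: artists pose a skeleton at sparse points in time in a modeling tool, and at runtime the engine interpolates between the surrounding poses every frame. A minimal sketch of sampling one joint (an editor's illustration; the Keyframe type is assumed):

        struct Keyframe
        {
            public float Time;
            public Quaternion Rotation;   // this joint's local rotation at Time
        }

        // a and b are the keyframes surrounding time t
        Quaternion SampleJoint(Keyframe a, Keyframe b, float t)
        {
            float u = (t - a.Time) / (b.Time - a.Time);   // 0..1 between the two keys
            return Quaternion.Slerp(a.Rotation, b.Rotation, u);
        }

    Doing this for every joint, then walking the skeleton hierarchy to combine parent and child transforms, produces the smooth, hand-authored motion seen in major games; blending between two sampled animations gives transitions like walk-to-run.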

    Read the article

  • What functionality should I use in OpenGL 2.0?

    - by Jeffrey
    Considering OpenGL 2.1: we all know that glBegin and glEnd are the devil. Should I use only VBOs to render 3d primitives (I can't find VAOs in that version, weren't they there already?)? Should I still use the matrix stack (why not?)? Should I still use glFrustum? Can I take advantage of shaders in GLSL 1.20? Where can I find a tutorial for VBOs in OpenGL 2.1 and the "correct" way of programming in it? Also, how am I supposed to animate something, like a cube moving around an object or a player moving in the scene (static vbo data + shader?)? Note: Take your time to answer this question, I'll accept an answer tomorrow.
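    For reference, the minimal VBO path in GL 2.1 looks like the sketch below (written with OpenTK-style C# bindings as an assumption; the GL calls map one-to-one onto the C API):

        // setup, once
        int vbo;
        GL.GenBuffers(1, out vbo);
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(vertices.Length * sizeof(float)),
                      vertices, BufferUsageHint.StaticDraw);

        // draw, per frame
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        GL.EnableClientState(ArrayCap.VertexArray);
        GL.VertexPointer(3, VertexPointerType.Float, 0, IntPtr.Zero);
        GL.DrawArrays(BeginMode.Triangles, 0, vertexCount);
        GL.DisableClientState(ArrayCap.VertexArray);

    Animation then typically keeps the vertex data static and moves objects by changing the modelview matrix (or, with GLSL 1.20 shaders, a uniform) each frame, rather than rewriting the buffer.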

    Read the article

  • Procedural object generation and unique identification

    - by 2080
    My question relates to procedural content generation and data management of the emerging objects in a database. I assume a networked game with a server-client model. Unspecified objects in the game world are generated while the game is running, with procedural algorithms (for example, Perlin noise). The players (/clients) can modify the properties of these objects, but have to notify the server of these changes. How could this communication address unique objects, so that both the server and the client know which object they are speaking of? Not only the inner properties of the objects can differ, but also visible ones, such as the position. When the player wants to select one of these objects, the game has to find out the id. Does anyone know which methods or algorithms can accomplish that?
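    One common trick (an editor's sketch; the scheme and names are assumptions, not from the question) exploits the fact that procedural generation is deterministic: if server and client generate from the same seed in the same order, an ID derived from the spawn location and generation ordinal comes out identical on both sides, with no ID exchange needed:

        // Deterministic ID: same seed + same generation order => same ID everywhere
        long ObjectId(long worldSeed, int cellX, int cellY, int ordinal)
        {
            unchecked
            {
                long h = worldSeed;
                h = h * 31 + cellX;     // cell the object was generated in
                h = h * 31 + cellY;
                h = h * 31 + ordinal;   // nth object generated in that cell
                return h;               // a sketch; a real scheme should handle collisions
            }
        }

    Player modifications are then stored server-side keyed by this ID, as deltas on top of whatever the generator would produce for that object.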

    Read the article

  • 3D architecture app for Android or iPhone

    - by Manixate
    I want to make an app for 3D modeling on iPhone/Android, but I cannot get a basic idea of how to get started. I have various options, such as learning OpenGL ES, UDK or Unity3d, but I want to create models (e.g. architecture) in my app and then render them when the user is finished modeling. I do not know whether I would be able to design models and then render them, in the same app, with various effects on the iPhone/Android using UDK or Unity3d. (Note: If you find this question unclear please ask, I may have skipped some vital information.)

    Read the article

  • Any way to set up a grid for a board game in cocos 2d?

    - by Scott
    My first idea was to create a 2d array for my columns and rows, but it seems like there should be a better, or possibly cleaner, way to achieve this. Each square on the grid is going to have a background image, probably a .png, although I might just draw the images with a draw method. Basically, I want to be able to drag and drop images onto the individual grid squares. I've been searching for a solution, and the closest thing I can find is the tiled map solution. That just seems like overkill for what I'm trying to accomplish. Also, I don't know if this helps, but I need my grid to be 12 by 12 and take up the entire width of the iPhone screen.
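    For a 12 by 12 grid spanning the screen width, a plain 2D array plus a little point-to-cell math is usually enough; a sketch (an editor's illustration with assumed names, not cocos2d API):

        float cellSize = screenWidth / 12f;   // square cells across the full width

        // which cell a touch or drop landed in
        int col = (int)(touch.X / cellSize);
        int row = (int)(touch.Y / cellSize);

        // where a dropped image should snap to (the cell's center)
        float snapX = (col + 0.5f) * cellSize;
        float snapY = (row + 0.5f) * cellSize;

    The array then just holds whatever each cell contains (a sprite reference or null), and drag-and-drop reduces to computing (col, row) on touch-end and snapping the sprite to the cell center.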

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If I have a scene managed by an arbitrary structure (an octree, BSP tree, quadtree, k-d tree, etc.), what is the best way to pass this to the render system? The obvious problem is that if simply given the root node of the structure, the render system would require intrinsic knowledge of the structure in order to traverse it.

    My solution is to clip all objects outside the frustum in the scene manager, then create a list of the objects which are left and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (This would be a structure required by the render system as its means of knowing which objects should be rendered.) The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations.

    I have been thinking a lot about this and have searched various terms on here and followed any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: Am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?
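    That decoupling is a common pattern; schematically (an editor's sketch, names illustrative):

        // The scene owns the spatial structure; the renderer only ever sees a flat list.
        List<Renderable> visible = scene.CullAgainst(camera.Frustum);   // octree/BSP-specific traversal stays inside the scene

        // Sort by a state key (shader, texture, blend mode) to minimise state changes
        visible.Sort((a, b) => a.StateKey.CompareTo(b.StateKey));

        renderer.Draw(visible);   // knows nothing about how the scene is organised

    Keeping the culling structure behind the scene's interface also means it can be swapped (octree today, BVH tomorrow) without touching the renderer.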

    Read the article

  • Getting to math applications gradually

    - by den-javamaniac
    I'm currently getting a formal degree related to computation; in particular, my current focus is numerical programming, scientific computing and machine learning. I'd love to apply that knowledge in game dev and expand it with statistics, probability theory, and graph theory (probably even linear algebra). The question is: which areas of gamedev make heavy use of this sort of math, is it possible to advance in them without being part of a team, and how can one get into them gradually? P.S.: I've got experience with commercial Java dev and am getting my hands on C/C++ at the moment; however, I'm open to going ahead and trying Unity3D, etc.

    Read the article
