Search Results

Search found 32114 results on 1285 pages for 'general development'.

Page 470 of 1285

  • Rendering different materials in a voxel terrain

    - by MaelmDev
    Each voxel datapoint in my terrain model is made up of two properties: density and material type. Each is stored as an unsigned integer, but the density is interpreted as a decimal value between 0 and 1. My current idea for rendering these different materials on the terrain mesh is to store eleven extra attributes in each vertex: six material values corresponding to the materials of the voxels that the vertices lie between, three decimal values that give the interpolation each vertex has between each voxel, and two decimal values used to determine where the fragment lies on the triangle. The material and interpolation attributes are identical for every vertex in a triangle. The fragment shader samples the texture corresponding to each material and then uses the aforementioned pair of decimal values to interpolate between these samples and obtain the final textured color of the fragment. It should work, but it seems like a big memory hog: I won't be able to reuse vertices in the mesh with indexing, and each vertex will carry a lot of data. It also seems pretty slow. What are some ways to improve or replace this technique for drawing materials on a voxel terrain mesh?
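
    For scale, here is a sketch of what such a vertex layout implies, written as a plain C# struct; all field names are hypothetical, and only the sizes matter:

        // The vertex the question describes, for scale only. Field names
        // are assumptions, not taken from the question.
        struct MaterialVertex
        {
            public float X, Y, Z;                 // position:              12 bytes
            public float M0, M1, M2, M3, M4, M5;  // six material IDs:      24 bytes
            public float W0, W1, W2;              // voxel interpolation:   12 bytes
            public float U, V;                    // triangle-local coords:  8 bytes
        }
        // 14 floats = 56 bytes per vertex before normals or anything else,
        // and no index reuse, since these attributes vary per triangle.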

    Read the article

  • DX9 Deferred Rendering, GBuffer displays as clear color only

    - by Fire31
    I'm trying to implement Catalin Zima's deferred renderer in a very lightweight C++ DirectX 9 app (it only renders a skydome and a model). At the moment I'm trying to render the G-buffer, but I'm having a problem: the screen shows only the clear color, no matter how much I move the camera around. However, removing all the render-target operations lets the app render the scene normally, even when the models are drawn with the renderGBuffer effect. Any ideas what I'm doing wrong?
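
    For comparison, the usual shape of a G-buffer pass in XNA 4.0 terms (the framework Catalin Zima's original renderer targets) looks roughly like the sketch below; the render-target names are assumptions, and a classic cause of "clear color only" is skipping the final unbind:

        // Sketch only: the targets are created once at load time.
        RenderTarget2D colorTarget, normalTarget, depthTarget;

        void DrawGBuffer(GraphicsDevice device)
        {
            // 1. Bind all MRTs at once.
            device.SetRenderTargets(colorTarget, normalTarget, depthTarget);
            // 2. Clear after binding, or you clear the back buffer instead.
            device.Clear(Color.Transparent);
            // 3. Draw the scene geometry with the renderGBuffer effect here.
            // 4. Unbind, so the back buffer is current again and the targets
            //    can be sampled as textures in the lighting pass.
            device.SetRenderTarget(null);
        }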

    Read the article

  • Pygame: Save a list of objects/classes/surfaces

    - by Sam Tubb
    I am working on a game in which you can create mazes. You place blocks on a 16x16 grid, choosing from a variety of blocks to build the level with. Whenever you create a block, an instance of this class:

        class Block(object):
            def __init__(self, x, y, spr):
                self.x = x
                self.y = y
                self.sprite = spr
                self.rect = self.sprite.get_rect(x=self.x, y=self.y)

    is added to a list called instances. I tried shelving it to a .bin file, but it returns some error dealing with surfaces. How can I go about saving and loading levels? Any help is appreciated! :) Here is the whole code for reference:

        import pygame
        from pygame.locals import *

        # init stuff
        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        pygame.display.set_caption('PiMaze')
        instances = []

        # load sprites
        menuspr = pygame.image.load('images/menu.png').convert()
        b1spr = pygame.image.load('images/b1.png').convert()
        b2spr = pygame.image.load('images/b2.png').convert()
        currentbspr = b1spr
        curspr = pygame.image.load('images/curs.png').convert()
        curspr.set_colorkey((0, 255, 0))

        # menu
        menuspr.set_alpha(185)
        menurect = menuspr.get_rect(x=-260, y=4)

        class MenuItem(object):
            def __init__(self, pos, spr):
                self.x = pos[0]
                self.y = pos[1]
                self.sprite = spr
                self.pos = (self.x, self.y)
                self.rect = self.sprite.get_rect(x=self.x, y=self.y)

        class Block(object):
            def __init__(self, x, y, spr):
                self.x = x
                self.y = y
                self.sprite = spr
                self.rect = self.sprite.get_rect(x=self.x, y=self.y)

        while True:
            # menu items
            b1menu = b1spr.get_rect(x=menurect.left + 32, y=48)
            b2menu = b2spr.get_rect(x=menurect.left + 64, y=48)
            menuitems = [MenuItem(b1menu, b1spr), MenuItem(b2menu, b2spr)]
            screen.fill((20, 30, 85))
            mse = pygame.mouse.get_pos()
            key = pygame.key.get_pressed()
            placepos = ((mse[0] / 16) * 16, (mse[1] / 16) * 16)
            if key[K_q]:
                if mse[0] < 260:
                    if menurect.right < 255:
                        menurect.right += 1
                else:
                    if menurect.left > -260:
                        menurect.left -= 1
            else:
                if menurect.left > -260:
                    menurect.left -= 1
            for e in pygame.event.get():
                if e.type == QUIT:
                    exit()
                if menurect.right < 100:
                    if e.type == MOUSEBUTTONUP:
                        if e.button == 1:
                            to_remove = [i for i in instances if i.rect.collidepoint(placepos)]
                            for i in to_remove:
                                instances.remove(i)
                            if not to_remove:
                                instances.append(Block(placepos[0], placepos[1], currentbspr))
            for i in instances:
                screen.blit(i.sprite, i.rect)
            if not key[K_q]:
                screen.blit(curspr, placepos)
            screen.blit(menuspr, menurect)
            for item in menuitems:
                screen.blit(item.sprite, item.pos)
                if item.rect.collidepoint(mse):
                    if pygame.mouse.get_pressed() == (1, 0, 0):
                        currentbspr = item.sprite
                    pygame.draw.rect(screen, (255, 0, 0), item, 1)
            pygame.display.flip()

    Read the article

  • Spherical harmonics lighting - what does it accomplish?

    - by TravisG
    From my understanding, spherical harmonics are sometimes used to approximate certain aspects of lighting (depending on the application). For example, it seems you can approximate the diffuse lighting caused by a directional light source on a surface point, or parts of it, by calculating the SH coefficients for all the bands you're using (for whatever accuracy you desire) in the direction of the surface normal and scaling them by whatever you need (e.g. light color and intensity, dot(n,l), etc.). What I don't understand yet is what this is supposed to accomplish. What are the actual advantages of doing it this way as opposed to evaluating the diffuse BRDF the normal way? Do you save calculations somewhere? Is there some additional information contained in the SH representation that you can't get out of the scalar results of the normal evaluation?
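
    For reference, the standard result this question circles around is Ramamoorthi and Hanrahan's irradiance formulation (notation here is assumed, not taken from the question): with the whole incident lighting projected into SH coefficients L_lm, the diffuse irradiance for a normal n is approximately

        E(\mathbf{n}) \approx \sum_{l=0}^{2} \sum_{m=-l}^{l} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{n})

    so nine coefficients stand in for the entire light environment, and evaluating E costs the same whether that environment is one directional light or hundreds. That constant cost per shading point, rather than extra information, is the usual payoff.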

    Read the article

  • Is using multiple canvas objects a good practice?

    - by user1818924
    We're developing a jump-and-run game with HTML5 and JavaScript and have to build our own game framework for it. We've hit some difficulties and would like to ask for advice. We have a "Stage" object, which represents the root of our game and is a global div wrapper. The stage can contain multiple "Scenes", which are also div elements; we would implement a scene for the playing state, one for pause, etc., and switch between them. Each scene can contain multiple "Layers", each representing a canvas. These layers contain "ObjectEntities", which represent images or other shapes such as rectangles. Each entity has its own temporary canvas, so that one entity can draw an image while another draws a rectangle. We set an activeScene in our Stage, so when the game is played only the active scene is drawn. Calling activeScene.draw() calls all sublayers to draw, which draw their entities (calling drawImage(entity.canvas)). But is having multiple canvases to draw some kind of good practice? Every game loop, each layer context is cleared and redrawn. For example, we have a static background layer: wouldn't it be more sensible to draw it once rather than clear and redraw it every frame? Or should we use one global canvas, for example in the Stage, and just draw to that? We thought this would be too expensive...

    Read the article

  • C# and Unity - Learning to Develop a game by developing the game I want to develop

    - by 97s
    So I am pretty new to C#; I have some Python and JavaScript experience, but nothing substantial. I have read a lot about C# and Unity and I know they are the tools I want to use. My question is: should I be reading books about C#, or should I just start hacking in Unity and piecing the game together part by part? Right now I am going through the book Head First C#, and it is very good, but I taught myself web design and JavaScript by creating and hacking until I got the results I wanted, then looking at other people's code to see how they did it and improving my own. The issue is that with the browser I got immediate results and it was all under one roof, whereas developing games is a completely different monster. I am just wondering if my time would be better spent on a book that uses C# to teach you Unity, or if the time spent on the Head First book is going to be valuable. Thanks a ton; I have limited free time and just want to maximize it. Edit: Hopefully this isn't too broad? If it is, I will delete and go elsewhere, just let me know. Thanks.

    Read the article

  • Embed Unity3D and load multiple games from a single app

    - by Rafael Steil
    Is it possible to export an entire Unity3D project/game as an AssetBundle and load it on iOS/Android/Windows in an app that doesn't know anything about the game beforehand? What I have in mind is something like what the web plugin does: it loads a series of .unity3d files over HTTP and renders them inline in the browser window. Is it even possible to do something similar for iOS/Android? I have read a lot of docs so far, but still can't be sure: http://floored.com/blog/2013/integrating-unity3d-within-ios-native-application.html http://docs.unity3d.com/Manual/LoadingResourcesatRuntime.html http://docs.unity3d.com/Manual/AssetBundlesIntro.html The code from the post at http://forum.unity3d.com/threads/112703-Override-Unity-Data-folder-path?p=749108&viewfull=1#post749108 works for Android, but how about iOS and other platforms?
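
    For what it's worth, a minimal sketch of runtime bundle loading with the Unity 4-era WWW API that the linked docs describe (the URL, version number, and asset name below are hypothetical). One big caveat on iOS: bundles can carry assets but not new scripts, since iOS builds are AOT-compiled, so the host app must already contain every script the loaded content needs:

        using System.Collections;
        using UnityEngine;

        public class BundleLoader : MonoBehaviour
        {
            public IEnumerator LoadGame(string url)
            {
                WWW www = WWW.LoadFromCacheOrDownload(url, 1);
                yield return www;
                AssetBundle bundle = www.assetBundle;
                // Works only if this prefab's scripts are already compiled
                // into the host app.
                Object entry = bundle.Load("GameEntryPrefab");
                Instantiate(entry);
            }
        }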

    Read the article

  • Entity Component System for HUD and GUI

    - by Jason L.
    This is a very rough sketch of how I currently have things designed; it should at least give an idea of how my ECS is laid out. If you notice in that diagram, I have basically split the HUD out of the ECS: the HUD has its own set of things (HudLayer, HudComponent, etc.) and is handled differently. This is where I'm struggling, though. There are many instances in which the HUD will need to know about entities: not just data changes (I have an event dispatcher for that), but the actual entity and all it encompasses. There are also situations where entities will need to query the HUD for data. Let's take a couple of examples. First, my equipment screen: here I can change the equipment on a character (an Entity), and for this to happen I need to know about the entity. At least I think I do? How can I handle this? The second scenario involves my Systems needing to query a HudComponent for data. A specific example is my battle system, where each "team" is given a 3x3 grid they can move around in. Skills target these cells, not the player, so I need a way for my systems to determine which cells are occupied and which are not. Basically, I need two-way communication between my Systems and my HUD. I know it's recommended (by some people, anyway) to take your HUD out of the ECS. Is that appropriate in my case?
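
    One common way out (a sketch under my own naming, not from the question) is to let the two sides talk through a small interface registered with the event dispatcher you already have, so systems query the HUD without knowing its concrete types:

        // Hypothetical mediator between ECS systems and HUD components.
        interface IGridQuery
        {
            bool IsCellOccupied(int x, int y);
        }

        class BattleHudComponent : IGridQuery
        {
            private readonly bool[,] occupied = new bool[3, 3];
            public bool IsCellOccupied(int x, int y) { return occupied[x, y]; }
            public void SetCell(int x, int y, bool value) { occupied[x, y] = value; }
        }

        class BattleSystem
        {
            private readonly IGridQuery grid;
            public BattleSystem(IGridQuery grid) { this.grid = grid; }

            public bool CanTarget(int x, int y)
            {
                // The system asks through the interface, never the concrete
                // HUD type, so the ECS stays decoupled from UI code.
                return grid.IsCellOccupied(x, y);
            }
        }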

    Read the article

  • blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to model from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only lets you see the background pictures in orthographic mode, and only from right/front/top, which doesn't help me. How do I proceed to model from non-perfect guide images?

    Read the article

  • How to play many sounds at once in OpenAL

    - by Krom
    Hello, I'm developing an RTS game and I would like to add sounds to it. My choice has landed on OpenAL. I have plenty of units which make sounds from time to time: fSound.Play(sfx_shoot, location). Sounds often repeat, e.g. when a squad of archers shoots arrows, but they are not synced with each other. My questions are: what is the common design pattern for playing multiple sounds in OpenAL when some of them are duplicates? And what are the hardware limits on simultaneous sound count, and the tricks to overcome them?
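
    The usual pattern is a fixed pool of sources: pre-allocate a budget of OpenAL sources up front, reuse whichever is idle, and steal the oldest one when all are busy. A minimal sketch, assuming OpenTK's C# OpenAL bindings (the question doesn't name a binding):

        using OpenTK.Audio.OpenAL;

        class SourcePool
        {
            private readonly int[] sources;
            private int next;

            public SourcePool(int count)   // 16-32 is a conservative budget
            {
                sources = new int[count];
                for (int i = 0; i < count; i++)
                    sources[i] = AL.GenSource();
            }

            public void Play(int buffer, float x, float y, float z)
            {
                int sid = Acquire();
                AL.Source(sid, ALSourcei.Buffer, buffer);
                AL.Source(sid, ALSource3f.Position, x, y, z);
                AL.SourcePlay(sid);
            }

            private int Acquire()
            {
                // Prefer an idle source.
                for (int i = 0; i < sources.Length; i++)
                {
                    int state;
                    AL.GetSource(sources[i], ALGetSourcei.SourceState, out state);
                    if ((ALSourceState)state != ALSourceState.Playing)
                        return sources[i];
                }
                // All busy: round-robin steal, dropping the oldest sound.
                next = (next + 1) % sources.Length;
                AL.SourceStop(sources[next]);
                return sources[next];
            }
        }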

    Read the article

  • What is a good way to measure game virality?

    - by Chris Garrett
    I have added some social features to an iPhone game (Lexitect, if you're curious), such as email, Twitter, and Facebook integration for sharing high scores. Along with these features, I am measuring how many times users make it to each step. The goal of these features is to make the game more viral, and I am trying to arrive at a measure of game virality. I would expect a game virality metric to produce a number centered on 1.0, where 1.0 = zero viral growth and 1.01 represents 1% viral growth over some unit of time. How is virality normally measured, and in what units? How is time capped on the metric? I.e., if I gave each player a year to determine how many recommendations they make, I wouldn't get any real numbers until a year after I start tracking. Are there any standards for tracking virality in a meaningful way?
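
    For reference (a standard formulation, not from the question), the number usually quoted is the viral coefficient, or K-factor:

        K = i \cdot c, \qquad u(t) = u_0 \cdot K^{t / t_c}

    where i is the number of invitations each user sends, c is the conversion rate per invitation, u_0 is the seed audience, and t_c is the cycle time from receiving an invitation to sending one. K = 1 means each user exactly replaces themselves and K > 1 means self-sustaining growth, which matches the intuition of a number pivoting around 1.0. The cycle time t_c also answers the time-capping question: K is measured per cycle, not per calendar year.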

    Read the article

  • 3D Paint tool in Maya

    - by Joris
    Hey everyone, could someone help me out with this question? When I try to texture objects in Maya (for example a barrel), I want to paint them with a brush, but when I use the 3D Paint Tool everything turns black, even though I have an image file selected. Also, the models and textures display at a very, very low resolution in Maya, even though the texture image is high quality. Can someone please help me? Thanks so much for helping.

    Read the article

  • What is UVIndex and how do I use it on OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture-coordinate buffer, but this model now has a "UVIndex" and I'm not sure where I'm supposed to put it. My code looks like this:

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);
        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);
        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);
        GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);
        GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
        GL.activeTexture(GL.TEXTURE0);
        GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file; since I've never drawn a model that uses a UVIndex I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz

    Read the article

  • How To Smoothly Animate From One Camera Position To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory. I have a scene with many cameras and I'd like to switch smoothly from one to another. I am not looking for a cross-fade effect, but rather for the camera to move and rotate in order to reach the next camera's point of view, and so on. To this end I have tried the following code:

        firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x, Time.deltaTime * smooth);
        firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y, Time.deltaTime * smooth);
        firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z, Time.deltaTime * smooth);
        firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x, Time.deltaTime * smooth);
        firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z, Time.deltaTime * smooth);
        firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y, Time.deltaTime * smooth);

    But the result is actually not that good.
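
    The component-wise Lerp on transform.rotation is the likely culprit: those x/y/z fields are quaternion components, and interpolating them individually distorts the rotation. A minimal C# sketch of the usual Unity approach, lerping the position as a whole vector and slerping the rotation as a quaternion (field names assumed):

        using UnityEngine;

        public class CameraBlend : MonoBehaviour
        {
            public Transform nextCamera;   // the target camera's transform
            public float smooth = 2f;

            void Update()
            {
                transform.position = Vector3.Lerp(transform.position,
                    nextCamera.position, Time.deltaTime * smooth);
                transform.rotation = Quaternion.Slerp(transform.rotation,
                    nextCamera.rotation, Time.deltaTime * smooth);
            }
        }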

    Read the article

  • Where can I find free or buy "next-gen" 3D Assets?

    - by Valmond
    Usually I buy 3D assets from sites like turbosquid.com or similar. My problem is that I have lately implemented glow, normal maps, specular (and specular-power) maps and reflection maps, and I can't find any models that use those techniques. So where can I find or buy "next-gen" assets (at least models/items with a normal map)? I have checked for similar posts, but the ones I found are about free-only assets, 2D, or 'ordinary' 3D, so I hope this is not a duplicate.

    Read the article

  • Skyrim Creation Kit with Xbox 360

    - by funseiki
    I posted this on stackoverflow but was advised to post here (here is a link to the stackoverflow question). I'm hoping for constructive feedback on its plausibility. Update on progress: it looks like there are ways to stuff files back onto the console (Horizon, Modio, Xplorer360, etc.), and they do require some form of signing. As of now, though, I've had no luck. I was hoping I could get away with just placing the ".esp" into the directory containing marketplace downloads for Skyrim, along with the signed ".bsa" file (basically a zipped-up file containing any extra content the .esp needs to refer to that doesn't exist in the basic game). This doesn't work, at least not in the ways I've tried, so next I'm going to try installing the entire game to my flash drive (if possible) and attempt to traverse the game's directory (this will probably fail). If anyone else has suggestions or luck, or wants more detail on my failures, comment/answer away. Here is the question: I'm thinking about buying the PC version of Skyrim to get the Creation Kit (I already own a copy for the Xbox). I have read the FAQ and scoured plenty of forums to see if there is some way to mod Skyrim for a console (the Xbox 360 in particular), but they generally come up negative. I realize the Creation Kit is on the PC, but I was wondering if there is a way to set up the '.esp' files (hopefully I'm interpreting this correctly) to be placed on the Xbox 360 file system in a manner similar to how game add-ons are downloaded from the Xbox Live Marketplace. I believe it is possible to transfer saves between the console and the PC (e.g. google 'skyrim mod xbox360'), but those mods reference items that already exist in the game (e.g. a console command for maximum carry weight does not require new animations or models). It would probably be easier if one could navigate the Xbox's file system to see where the games' files are placed, but with the current setup the file system is abstracted away. Any help or insight on the matter would be much appreciated. I would love to work on a project that makes it possible for console gamers to experience and enjoy all the great mods available to the PC community.

    Read the article

  • What exactly can shaders be used for?

    - by Bane
    I'm not really a 3D person, and I've only used shaders a little in some Three.js examples. So far I've got the impression that they are only used for the graphical part of the equation, although the (quite cryptic) Wikipedia article and some other sources lead me to believe they can be used for more than just graphical effects, i.e. to program the GPU (Wikipedia). So the GPU is still a processor, right? With a larger and different instruction set for easier and faster vector manipulation, but still a processor. Can I use shaders to make regular programs (provided I've got access to the video memory, which is probable)? Edit: by regular programs I mean "applications", i.e. windowed/console programs, or at least something with a way of drawing things on the screen, maybe even taking user input.

    Read the article

  • DrawIndexedPrimitives overdraws data in previous buffer if called in loop

    - by Daniel Excinsky
    I've posted this question both here and on Stack Overflow, and will delete whichever copy doesn't end up getting the answer. I have a Draw method in one of my renderers that loops through a dictionary and gets precollected, preinitialized buffers. When the dictionary has only one element, everything is just fine. But with more elements, what I get on the screen is only the data from the last buffer (I suppose; I'm not sure). My Draw method:

        public void Draw(GameTime gameTime)
        {
            if (!_areStaticEffectsSet)
            {
                // blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas);
                blockEffect.Parameters["HorizonColor"].SetValue(World.HORIZONCOLOR);
                blockEffect.Parameters["NightColor"].SetValue(World.NIGHTCOLOR);
                blockEffect.Parameters["MorningTint"].SetValue(World.MORNINGTINT);
                blockEffect.Parameters["EveningTint"].SetValue(World.EVENINGTINT);
                blockEffect.Parameters["SunColor"].SetValue(World.SUNCOLOR);
                _areStaticEffectsSet = true;
            }
            blockEffect.Parameters["World"].SetValue(Matrix.Identity);
            blockEffect.Parameters["View"].SetValue(_player.CameraView);
            blockEffect.Parameters["Projection"].SetValue(_player.CameraProjection);
            blockEffect.Parameters["CameraPosition"].SetValue(_player.CameraPosition);
            blockEffect.Parameters["timeOfDay"].SetValue(_world.TimeOfDay);

            var viewFrustum = new BoundingFrustum(_player.CameraView * _player.CameraProjection);
            _graphicsDevice.BlendState = BlendState.Opaque;
            _graphicsDevice.DepthStencilState = DepthStencilState.Default;

            foreach (KeyValuePair<int, Texture2D> textureAtlas in textureAtlases)
            {
                blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas.Value);
                foreach (EffectPass pass in blockEffect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    foreach (Chunk chunk in _world.Chunks.Values)
                    {
                        if (chunk == null || chunk.IsDisposed)
                        {
                            continue;
                        }
                        if (chunk.BoundingBox.Intersects(viewFrustum) &&
                            chunk.GetBlockIndexBuffer(textureAtlas.Key) != null)
                        {
                            lock (chunk)
                            {
                                if (chunk.GetBlockIndexBuffer(textureAtlas.Key).IndexCount > 0)
                                {
                                    VertexBuffer vertexBuffer = chunk.GetBlockVertexBuffer(textureAtlas.Key);
                                    IndexBuffer indexBuffer = chunk.GetBlockIndexBuffer(textureAtlas.Key);
                                    //if (chunk.DrawIndex == new Vector3i(0, 0, 0))
                                    //{
                                    //    if (textureAtlas.Key == -1)
                                    //    {
                                    //        var varray = new[]
                                    //        {
                                    //            new VertexPositionTextureLight(new Vector3(0, 68, 0), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1)),
                                    //            new VertexPositionTextureLight(new Vector3(0, 68, 1), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1)),
                                    //            new VertexPositionTextureLight(new Vector3(1, 68, 0), new Vector2(0, 1), 1, new Vector3(0, 0, 0), new Vector3(1, 1, 1))
                                    //        };
                                    //        var iarray = new short[] { 0, 1, 2 };
                                    //        vertexBuffer = new VertexBuffer(_graphicsDevice, typeof(VertexPositionTextureLight), varray.Length, BufferUsage.WriteOnly);
                                    //        indexBuffer = new IndexBuffer(_graphicsDevice, IndexElementSize.SixteenBits, iarray.Length, BufferUsage.WriteOnly);
                                    //        vertexBuffer.SetData(varray);
                                    //        indexBuffer.SetData(iarray);
                                    //    }
                                    //}
                                    _graphicsDevice.SetVertexBuffer(vertexBuffer);
                                    _graphicsDevice.Indices = indexBuffer;
                                    _graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                        vertexBuffer.VertexCount, 0, indexBuffer.IndexCount / 3);
                                }
                            }
                        }
                    }
                }
            }
        }

    Noteworthy things about the code: the XNA version is 4.0; I've commented out the debugging code in the loop but left it in, since it may bring some insight; and I try to change not only the vertices/indices in the loop, but the textureAtlas as well.
    The code in the shader that deals with textureAtlas:

        Texture TextureAtlas;
        sampler TextureAtlasSampler = sampler_state
        {
            texture = <TextureAtlas>;
            magfilter = POINT;
            minfilter = POINT;
            mipfilter = POINT;
            AddressU = WRAP;
            AddressV = WRAP;
        };

        struct VSInput
        {
            float4 Position : POSITION0;
            float2 TexCoords1 : TEXCOORD0;
            float SunLight : COLOR0;
            float3 LocalLight : COLOR1;
            float3 Normal : NORMAL0;
        };

    VertexPositionTextureLight is my own implementation of IVertexType. So, does anybody know about this problem, or can you see what's wrong in my code (far more likely)?

    Read the article

  • MMORPG design for time-limited players

    - by Philipp
    I believe that there is a significant market of players who would enjoy the exploration and interaction aspects of MMORPGs but simply don't have time for the endless grinding marathons that are part of the average MMORPG. MMORPGs are all about interaction between players. But when different players have different amounts of time to invest in a game, those with less time to spend will soon lag behind their power-leveling friends and won't be able to interact with them anymore. One way to solve this would be to limit the progress a player can achieve per day, so that it simply doesn't make sense to play more than one or two hours a day. But even the busiest casual players sometimes like to spend a whole Sunday afternoon playing a video game, and just stopping them after two hours would be really frustrating. It also creates pressure to use up the daily progress limit every day, because otherwise the player feels like they're wasting something; this pressure would be detrimental for casual gamers. What else could be done to level the playing field between those players who play 40+ hours a week and those who can't play more than 10?

    Read the article

  • Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when dragging them. I am comparing my application to the standard TouchGestureSample from MSDN. For some reason, in my application gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and both run in the standard Windows Phone emulator. I set my breakpoint immediately after this line of code in a simple Update method:

        GestureSample gesture = TouchPanel.ReadGesture();

    Typical values in my app: Delta = {X:-13.56522 Y:4.166667}, Position = {X:184.6956 Y:417.7083}. Typical values in the sample app: Delta = {X:7 Y:16}, Position = {X:497 Y:244}. Has anyone seen this issue? Does anyone have any suggestions? Thank you.

    Read the article

  • Group Matchmaking

    - by Simon Kérouack
    Consider different groups (of one or more players) queuing together. We want to make two opposing teams, each containing the same number of players, while keeping the groups together. At the same time we want the two teams' average rankings to be as close as possible. Also consider that our working set is the subset of groups currently queuing within a given ranking range. As an example, say we have the following groups, ordered by queuing time:

        Id, playerCount, totalRank, avgRank
        0, 3, 126, 42
        1, 2, 60, 30
        2, 1, 25, 25
        3, 2, 80, 40
        4, 1, 40, 40
        5, 1, 20, 20
        6, 3, 150, 50

    For this specific subset, the expected output should ideally be:

        team1: 0, 1 (total: 186)
        team2: 2, 5, 6 (total: 195)

    Up to now the solution I have been using is to balance the teams by having each team pick the highest-ranking group remaining in the subset, turn by turn. The team that picks is the one with the currently lower average rank, unless one team is already full; in that case the other team tries to complete itself with groups that make the rank gap as small as possible. This solution turns out to hit frequent edge cases, and I'm looking for a better solution, or some fine-tuning. In most cases, players seem to want teams of 5 and queue in groups of 2. Our average subset when two teams of 5 are chosen is about 14 players, if that may be of any help.
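
    Given the small working set (about 14 players, so roughly seven groups), one robust alternative to greedy picking is brute force: assign every group to team 1, team 2, or unused, and keep the assignment with the smallest rank gap. A sketch (names hypothetical, C# for illustration):

        using System;
        using System.Collections.Generic;

        class Group
        {
            public int Id, PlayerCount, TotalRank;
        }

        static class Matchmaker
        {
            // Tries all 3^n assignments of n groups to {unused, team1, team2}
            // (2187 for n = 7) and returns the pair of full teams with the
            // smallest total-rank difference, or (null, null) if none exists.
            public static (List<Group>, List<Group>) BestSplit(List<Group> groups, int teamSize)
            {
                int n = groups.Count;
                List<Group> best1 = null, best2 = null;
                int bestGap = int.MaxValue;
                int total = (int)Math.Pow(3, n);
                for (int code = 0; code < total; code++)
                {
                    var t1 = new List<Group>();
                    var t2 = new List<Group>();
                    int p1 = 0, p2 = 0, r1 = 0, r2 = 0, c = code;
                    for (int i = 0; i < n; i++, c /= 3)
                    {
                        if (c % 3 == 1) { t1.Add(groups[i]); p1 += groups[i].PlayerCount; r1 += groups[i].TotalRank; }
                        else if (c % 3 == 2) { t2.Add(groups[i]); p2 += groups[i].PlayerCount; r2 += groups[i].TotalRank; }
                    }
                    if (p1 != teamSize || p2 != teamSize) continue;
                    int gap = Math.Abs(r1 - r2);
                    if (gap < bestGap) { bestGap = gap; best1 = t1; best2 = t2; }
                }
                return (best1, best2);
            }
        }

    Ties can then be broken in favor of the longest-waiting groups, which the queue ordering above already provides.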

    Read the article

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes; an example of such a scene can be seen below. I'm wondering which acceleration data structure would be most efficient in this case, since all objects are touching, but on the other hand the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact photon data structure they introduce, photon power is stored as 4 chars, which to my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux on the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). My other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per triangle of an object) instead of using a balanced kd-tree. And the same bounding-box question applies to photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
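
    On the "3 chars for color and 1 for flux" point: in Jensen's compact photon, the four power bytes are Ward's RGBE shared-exponent format, i.e. three color mantissas plus one shared exponent, and that exponent is what lets byte-sized fields cover fluxes on the order of 1/n. A transcription of his layout for illustration (the original is a C struct; the C# here is my choice):

        // Jensen's compact photon, roughly 20 bytes.
        struct Photon
        {
            public float X, Y, Z;      // position
            public byte R, G, B, E;    // power in RGBE shared-exponent format
            public byte Phi, Theta;    // quantized incident direction
            public short Plane;        // kd-tree splitting plane
        }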

    Read the article

  • Prevent oversteering catastrophe in racing games

    - by jdm
    When playing GTA III on Android I noticed something that has annoyed me in almost every racing game I've played (except maybe Mario Kart): driving straight ahead is easy, but curves are really hard. When I switch lanes or pass somebody, the car starts swiveling back and forth, and any attempt to correct it only makes it worse. The only thing I can do is hit the brakes. I think this is some kind of oversteering. What makes it so irritating is that it never happens to me in real life (thank god :-)), so 90% of games with vehicles in them feel unreal to me (despite probably having really good physics engines). I've talked to a couple of people about this, and it seems you either 'get' racing games or you don't. With a lot of practice, I did manage to get semi-good at some games (e.g. from the Need for Speed series) by driving very cautiously and braking a lot (usually getting a cramp in my fingers). What can you do as a game developer to prevent the oversteering resonance catastrophe and make driving feel right, for a casual racing game that doesn't strive for 100% realistic physics? I also wonder what games like Super Mario Kart do differently so that they don't have so much oversteering. I guess one problem is that if you play with a keyboard or a touchscreen (rather than a wheel and pedals), you only have digital input: gas pressed or not, steering left/right or not, and it's much harder to steer appropriately for a given speed. The other thing is that you probably don't have a good sense of speed and drive much faster than you (safely) would in reality. Off the top of my head, one solution might be to vary the steering response with speed, as in the sketch below.
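
    A minimal sketch of that last idea, with every constant made up for illustration: scale steering authority down as speed rises, so a fully pressed digital key maps to a small wheel angle at high speed.

        // Speed-sensitive steering: input is -1..1 from the key/touch state.
        float SteeringAngle(float input, float speed)
        {
            const float maxLockDegrees = 35f;   // wheel angle at standstill
            const float topSpeed = 60f;         // m/s, assumed
            float t = Math.Min(1f, Math.Max(0f, speed / topSpeed));
            float authority = 1f - 0.85f * t * t;   // quadratic falloff to 15%
            return input * maxLockDegrees * authority;
        }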

    Read the article

  • Best way to render card images

    - by user1065145
    I have high-quality SVG card images, but they drastically lose quality when I downsize them. I have tried two ways of rendering the cards (using Inkscape and ImageMagick): 1) render the SVG to a high-resolution PNG and then resize it; 2) render the SVG directly to an image of the proper size. Both approaches produce blurry card images that look even worse than the old Windows cards. What is the best way to generate smaller card images from SVG sources without losing much of their quality?

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision-detection broad phase:

    Unsorted arrays: check every object against every other object - O(n^2) time; O(log n) space. It's so slow it's useless unless n is really small.

        for (i = 1; i < objects; i++) {
            for (j = 0; j < i; j++)
                narrowPhase(i, j);
        }

    Sorted arrays: sort the objects so that you get O(n^(2-1/k)) time for k dimensions: O(n^1.5) for 2D and O(n^1.67) for 3D, with O(n) space. This assumes the space is 2D and sortedArray is sorted such that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check a bucket, its children and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of getting O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps, and vice versa?
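
    For the record, the sorted-array option is essentially 1-D sweep and prune; a sketch of how I read it (my interpretation, not the question's code):

        using System;
        using System.Collections.Generic;

        struct Box { public float MinX, MaxX, MinY, MaxY; }

        static class BroadPhase
        {
            // Sort by MinX, then only test pairs whose x-intervals overlap;
            // the early break is what buys the sub-quadratic average case.
            public static IEnumerable<(int, int)> CandidatePairs(Box[] boxes)
            {
                int[] order = new int[boxes.Length];
                for (int i = 0; i < order.Length; i++) order[i] = i;
                Array.Sort(order, (a, b) => boxes[a].MinX.CompareTo(boxes[b].MinX));

                for (int i = 0; i < order.Length; i++)
                {
                    for (int j = i + 1; j < order.Length; j++)
                    {
                        Box a = boxes[order[i]], b = boxes[order[j]];
                        if (b.MinX > a.MaxX) break;  // no later box can overlap on x
                        if (b.MinY <= a.MaxY && b.MaxY >= a.MinY)
                            yield return (order[i], order[j]);
                    }
                }
            }
        }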

    Read the article
