Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • In what kind of variable type is the player position stored in an MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack talk about it briefly... How can software track a player's position so accurately in such a huge world, without loading screens between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer state, I can't imagine what works best.
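
    One common layout (a hedged sketch of a plausible scheme, not WoW's actual format) is a map/continent ID plus plain 32-bit floats relative to that map's origin, quantized for the wire:

        #include <cstdint>

        // Hypothetical layout: coordinates are map-relative, which is what
        // keeps 32-bit floats precise enough even on a very large world.
        struct WorldPosition {
            uint32_t mapId;   // which continent/instance the player is on
            float    x, y, z; // metres relative to that map's origin
        };

        // A quantized wire format: metres scaled to centimetres in 32-bit
        // ints keeps packets small and identical across machines.
        struct NetPosition {
            uint32_t mapId;
            int32_t  cx, cy, cz; // centimetres

            static NetPosition pack(const WorldPosition& p) {
                return { p.mapId,
                         static_cast<int32_t>(p.x * 100.0f),
                         static_cast<int32_t>(p.y * 100.0f),
                         static_cast<int32_t>(p.z * 100.0f) };
            }
        };

    Seamless streaming is then a matter of loading terrain cells around the float position in the background rather than anything special in the coordinate type itself.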


  • Storing Tiled Level Data in J2ME game

    - by Alex
    I'm developing a J2ME game which uses tiled backgrounds for the levels. My question is: how do I store this tile information in my game? At the moment it is stored as an array, with each number representing a different tile from the tile sheet. This works well enough; however, I don't like the fact that it is hard-coded into the game, because (at least in my opinion) that makes it harder to edit the levels or design new ones. I was also thinking that it would make it difficult to add a 'level pack', though I'm not sure how that would be achieved anyway; it's not something I was planning on doing, I'm just curious. I was wondering if there was a way I could store level data in some external file and then load it into the game. The problem is I don't know what the limitations are for J2ME regarding file I/O - can it read in any file like desktop Java can? I am aware of the RMS, but from my experience I don't think this would work (unless I am mistaken). Also, would loading the data in this way be too big a performance hit? Or is there another way I can achieve what I am trying to do? As I said, the way I have it at the moment works fine, and if this is the only viable option then it will suffice.


  • Lag compensation with networked 2D games

    - by Milo
    I want to make a 2D game that is basically a physics-driven sandbox/activity game. There is something I really do not understand, though. From my research, it seems like updates from the server should only arrive about every 100ms. I can see how this works for the local player, since the client can simulate the physics concurrently and do lag compensation through interpolation. What I do not understand is how this works for updates from other players. If clients only get notified of player positions every 100ms, I do not see how that works, because a lot can happen in 100ms - the player could have changed direction twice or so in that time. I was wondering if anyone has some insight on this issue. Basically, how does this work for shooting and things like that? Thanks
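
    The usual trick is entity interpolation: remote players are rendered a little in the past, blending between the two most recent snapshots that bracket the render time. A minimal sketch, assuming ~100ms snapshots (the names are illustrative):

        #include <deque>

        struct Snapshot { double time; float x, y; };

        // Hypothetical buffer for one remote player: render ~100ms behind
        // the newest server data so there are always two snapshots to blend.
        struct RemotePlayer {
            std::deque<Snapshot> buffer; // old entries can be popped off the front

            void onServerUpdate(const Snapshot& s) { buffer.push_back(s); }

            Snapshot sample(double now, double delay = 0.1) {
                if (buffer.empty()) return { now, 0.0f, 0.0f };
                double renderTime = now - delay;
                for (size_t i = 0; i + 1 < buffer.size(); ++i) {
                    const Snapshot& a = buffer[i];
                    const Snapshot& b = buffer[i + 1];
                    if (a.time <= renderTime && renderTime <= b.time) {
                        float t = float((renderTime - a.time) / (b.time - a.time));
                        return { renderTime, a.x + (b.x - a.x) * t,
                                             a.y + (b.y - a.y) * t };
                    }
                }
                return buffer.back(); // newer than all data: hold the last state
            }
        };

    Direction changes between snapshots are simply never shown; the server's simulation stays authoritative. For shooting, servers typically add lag compensation proper: when validating a hit they rewind the world to where the target was rendered on the shooter's screen.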


  • Circular class dependency

    - by shad0w
    Is it bad design to have two classes which need each other? I'm writing a small game in which I have a GameEngine class that holds a few GameState objects. To access several rendering methods, these GameState objects also need to know about the GameEngine class - so it's a circular dependency. Would you call this bad design? I am just asking because I am not quite sure, and at this point I am still able to refactor these things.
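
    One common way out (a sketch with made-up method names, not a prescription) is to hand each state only the interface it actually needs, so the circle runs through an abstraction instead of two concrete classes:

        // The states depend on this narrow interface only.
        struct IRenderer {
            virtual void drawSprite(int id, float x, float y) = 0;
            virtual ~IRenderer() = default;
        };

        struct GameState {
            explicit GameState(IRenderer& r) : renderer(r) {}
            void render() { renderer.drawSprite(0, 0.0f, 0.0f); }
            IRenderer& renderer;
        };

        // GameEngine implements the interface and owns the states; only the
        // engine's side needs to know both types, so the cycle is broken.
        class GameEngine : public IRenderer {
            void drawSprite(int id, float x, float y) override { /* ... */ }
            // std::vector<GameState> states;
        };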


  • Best way to calculate unit deaths in browser game combat?

    - by MikeCruz13
    My browser game's combat system is written and mechanically functioning well. It's written in PHP and uses a SQL database. I'm happy with the unit balance in relation to one another. I am, however, a little worried about how I'm calculating unit deaths when one player attacks another, because the deaths seem to pile up a little fast for my taste. In this system, a battle doesn't just trigger, calculate a winner, and end. Instead, it is allowed to go on for several rounds (say, one round every 15 minutes) until one side passes a threshold of being too strong for the other player, and players can send reinforcements between rounds. Each round, units pair up and attack each other. Essentially what I do is calculate the damage (AP = attack points, HP = hit points):

        damage = unit AP * quantity * random factors * other factors (such as attrition)

    I take that and divide by the defending unit's HP to find the number of casualties among the defending units. So, for example (simplified to take out some factors): 500 attackers with 50 AP vs 1000 defenders with 100 HP = 250 deaths. I wonder if that last step could be handled better, to reduce the deaths piling up. Some ideas: (1) I just change all the units to have more HP? (2) I cap the attacking units' AP at the defender's HP, to make sure each attacker only kills one unit (is that fair if I have a few huge units against many small ones?). (3) I spread the damage around more by bringing the defending unit's quantity into it, i.e. in that scenario some are dead and some are at 50% damage (how would I track this every round?). (4) Other, better mathematical approaches?
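
    For idea (3), one bookkeeping-light option (a sketch; the field names are invented) is to keep a single "leftover damage" value per defending stack instead of per-unit HP - whole deaths fall out of the division and the remainder carries to the next round, costing only one extra column in SQL:

        #include <cstdio>

        struct Stack {
            long quantity;
            long hpPerUnit;
            long carriedDamage; // damage left over from previous rounds
        };

        // Apply one round of damage; returns the number of deaths.
        long applyDamage(Stack& s, long damage) {
            long total = damage + s.carriedDamage;
            long deaths = total / s.hpPerUnit;      // whole units killed
            s.carriedDamage = total % s.hpPerUnit;  // wounded remainder
            if (deaths > s.quantity) deaths = s.quantity;
            s.quantity -= deaths;
            return deaths;
        }

        int main() {
            Stack defenders{1000, 100, 0};
            long damage = 500L * 50;                // 500 attackers * 50 AP
            std::printf("deaths: %ld\n", applyDamage(defenders, damage)); // 250
        }

    If 250 deaths per round still feels too fast, one possible extra lever is scaling the damage by a ratio such as defenders / (defenders + attackers) before the division.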


  • How to design a game engine in an object-oriented language?

    - by chuzzum
    Whenever I try to write a game in any object-oriented language, the first problem I always face (after thinking about what kind of game to write) is how to design the engine. Even if I'm using existing libraries or frameworks like SDL, I still find myself having to make certain decisions for every game, like whether to use a state machine to manage menus, what kind of class to use for resource loading, etc. What is a good design, and how would it be implemented? What are some tradeoffs that have to be made, and what are their pros and cons?
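
    As one concrete example of the menu question, a minimal state stack (a sketch, not a prescription) keeps menus, gameplay and pause screens uniform:

        #include <memory>
        #include <vector>

        struct State {
            virtual void handleInput() = 0;
            virtual void update(float dt) = 0;
            virtual void render() = 0;
            virtual ~State() = default;
        };

        // Pushing a PauseState on top of a PlayState keeps the game visible
        // underneath; popping it resumes play with no special-case code.
        class StateStack {
            std::vector<std::unique_ptr<State>> states;
        public:
            void push(std::unique_ptr<State> s) { states.push_back(std::move(s)); }
            void pop() { states.pop_back(); }
            void frame(float dt) {
                if (states.empty()) return;
                states.back()->handleInput();
                states.back()->update(dt);
                for (auto& s : states) s->render(); // draw lower states too
            }
        };

    The main tradeoff is that only the top state receives input and updates; if overlapped states need to keep animating, each stack entry needs its own "updates below me" flag.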


  • GLES2.0 3D Android game performance and multithreading the update?

    - by Ofer
    I have profiled my mixed Java/C++ Android game and I got the following result: https://dl.dropbox.com/u/8025882/PompiDev/AndroidProfile.png As you can see, the pink block is a C++ function that updates the game. It does things like updating the logic, but mostly it generates a "request list" for rendering. That is, I generate draw lists in C++ and then send them to Java to process and draw using GLES2.0. Since then I was able to improve the update from 9ms down to about 7ms, but I would like to ask whether I would benefit from multithreading the update. As I understand the diagram, the function that takes the most time is the one whose color you see on the timeline, so the pink area is taken up mostly by the update. The other area has MainOpenGL.Handle as its main contributor (which is my Java function), but since it's not drawn to the top of the diagram, can I conclude that other things are happening at the same time that use the CPU? Or even GPU work that isn't shown in this diagram? I am not sure how the GPU works here - does it calculate things in parallel to the CPU, or is it counted as part of the CPU usage, as in an SoC? Anyway, in case GPU work does happen in parallel to the CPU, I would guess that if I run this C++ update in parallel with the thread that makes the OpenGL calls, I might make use of "dead" CPU time due to GPU stalling, or have the GPU calls processed earlier because they won't have to wait for the update to finish. How do you suggest I improve performance based on that? Thanks.
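
    A hedged sketch of that idea: double-buffer the draw list so the C++ update builds frame N+1 while the GL thread consumes frame N, and only the swap needs synchronizing:

        #include <vector>

        struct DrawCmd { int mesh; float transform[16]; };

        // Two command lists: the update thread fills 'back' for frame N+1
        // while the GL thread consumes 'front' for frame N. The swap happens
        // at a frame barrier (condition variable or similar, not shown), so
        // neither thread needs a lock in the middle of a frame.
        struct DrawListBuffers {
            std::vector<DrawCmd> front, back;
            void swapAtFrameBoundary() { front.swap(back); back.clear(); }
        };

    Whether this wins anything depends on the device: if the 7ms update and the GL submission currently run back to back on one core, overlapping them can hide one behind the other; if the frame is GPU-bound, it changes little.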


  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position in which the shapes are drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1,0,0,1) it makes my quad spin around. If I just do glRotatef(1,0,0,0) it makes the quad smaller (further away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.
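
    For reference: glRotatef(angle, x, y, z) multiplies the current matrix by a rotation of angle degrees about the axis (x, y, z) - it does not take one parameter per axis, and a zero-length axis like (0,0,0) is degenerate, which is why those calls misbehave. glTranslatef likewise moves the coordinate system that subsequent shapes are drawn in, not a camera. The quad "spins" with an angle of 1 because the 1-degree rotation accumulates every frame unless the matrix is reset. A minimal sketch of typical usage:

        #include <GL/gl.h>

        void drawQuad(); // assumed helper that emits the quad's vertices

        // Draw a quad spinning about its own centre, 5 units in front of
        // the viewer. Transforms affect what is drawn after them, and they
        // compose in the order issued: translate first, then rotate.
        void drawScene(float angleDegrees)
        {
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();                          // start from a clean transform
            glTranslatef(0.0f, 0.0f, -5.0f);           // move the drawing origin away
            glRotatef(angleDegrees, 0.0f, 0.0f, 1.0f); // degrees, about the +Z axis
            drawQuad();
        }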


  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down according to the state of the wave's height. Naturally, I examined the brief piece of code:

        // calculating correction that shifts reflection up/down according to water wave Y position
        float4 projected_waveheight = mul(float4(input.positionWS.x,input.positionWS.y,input.positionWS.z,1),g_ModelViewProjectionMatrix);
        float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w;
        projected_waveheight = mul(float4(input.positionWS.x,-0.8,input.positionWS.z,1),g_ModelViewProjectionMatrix);
        waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w;
        reflection_disturbance.y=max(-0.15,waveheight_correction+reflection_disturbance.y);

    My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, so the water is rendered as if only the sky were there: the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem is that I cannot really pinpoint the logic behind it. It projects the actual world-space position of a point of the wave/water geometry, multiplies by -0.5, then takes another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code seem to indicate it was derived with trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide):

        float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w;

    and then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them:

        waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w;

    By removing the divide by 2, I see no difference in quality improvement (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y - why is that? This redundancy and the seemingly arbitrary choice of -0.8 and -0.15 lead me to conclude that this might be a combination of heuristics and guesswork. Is there a logical underpinning to this, or is it just a desperate hack? Consider an exaggeration of the initial problem which the code fragment fixes, observed at the lowest tessellation level; hopefully it might spark an idea I'm missing. The -0.8 might be a reference height from which to deduce how much to disturb the texture coordinates sampling the planarly reflected geometry, and -0.15 might be the lower bound, a safety measure.


  • How to point a sprite's direction towards the mouse or an object [duplicate]

    - by Irfan Dahir
    This question already has an answer here: Rotating To Face a Point (1 answer)

    I need some help with rotating sprites towards the mouse. I'm currently using the Allegro 5.x library. The rotation of the sprite works, but it's constantly inaccurate: it's always a few degrees off from the mouse, to the left. Can anyone please help me with this? Thank you. P.S. I got help with the rotation function from here: http://www.gamefromscratch.com/post/2012/11/18/GameDev-math-recipes-Rotating-to-face-a-point.aspx - although it's in JavaScript, the math is the same. Also, adding

        if(angle < 0) { angle = 360 - (-angle); }

    doesn't fix it. The code:

        #include <allegro5\allegro.h>
        #include <allegro5\allegro_image.h>
        #include "math.h"

        int main(void)
        {
            int width = 640;
            int height = 480;
            bool exit = false;
            int shipW = 0;
            int shipH = 0;

            ALLEGRO_DISPLAY *display = NULL;
            ALLEGRO_EVENT_QUEUE *event_queue = NULL;
            ALLEGRO_BITMAP *ship = NULL;

            if(!al_init())
                return -1;
            display = al_create_display(width, height);
            if(!display)
                return -1;

            al_install_keyboard();
            al_install_mouse();
            al_init_image_addon();
            al_set_new_bitmap_flags(ALLEGRO_MIN_LINEAR | ALLEGRO_MAG_LINEAR); // smoother rotate

            ship = al_load_bitmap("ship.bmp");
            shipH = al_get_bitmap_height(ship);
            shipW = al_get_bitmap_width(ship);
            int shipx = width/2 - shipW/2;
            int shipy = height/2 - shipH/2;
            int mx = width/2;
            int my = height/2;
            al_set_mouse_xy(display, mx, my);

            event_queue = al_create_event_queue();
            al_register_event_source(event_queue, al_get_mouse_event_source());
            al_register_event_source(event_queue, al_get_keyboard_event_source());
            //al_hide_mouse_cursor(display);

            float angle;

            while(!exit)
            {
                ALLEGRO_EVENT ev;
                al_wait_for_event(event_queue, &ev);
                if(ev.type == ALLEGRO_EVENT_KEY_UP)
                {
                    switch(ev.keyboard.keycode)
                    {
                        case ALLEGRO_KEY_ESCAPE: exit = true; break;
                        /*case ALLEGRO_KEY_LEFT: degree -= 10; break;
                        case ALLEGRO_KEY_RIGHT: degree += 10; break;*/
                        case ALLEGRO_KEY_W: shipy -= 10; break;
                        case ALLEGRO_KEY_S: shipy += 10; break;
                        case ALLEGRO_KEY_A: shipx -= 10; break;
                        case ALLEGRO_KEY_D: shipx += 10; break;
                    }
                }
                else if(ev.type == ALLEGRO_EVENT_MOUSE_AXES)
                {
                    mx = ev.mouse.x;
                    my = ev.mouse.y;
                    angle = atan2(my - shipy, mx - shipx);
                }

                // al_draw_bitmap(ship, shipx, shipy, 0);
                //al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy, degree * 3.142/180, 0);
                al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy, angle, 0);
                // I pass the angle directly because Allegro works in radians; when I
                // multiplied by 180/3.142 the rotation went haywire (it actually did).
                al_flip_display();
                al_clear_to_color(al_map_rgb(0,0,0));
            }

            al_destroy_bitmap(ship);
            al_destroy_event_queue(event_queue);
            al_destroy_display(display);
            return 0;
        }

    EDIT: This was marked duplicate by a moderator. I'd like to say that this isn't the same as that: I'm a total beginner at game programming, I had a look at that other topic and I had difficulty understanding it. Please understand this, thank you. Also, while I was printing out the angle I got this - screenshot: http://img34.imageshack.us/img34/7396/fzuq.jpg - which is weird, because aren't angles supposed to be 360 degrees only?
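
    Two notes, as a hedged guess at the constant offset: atan2 returns radians in [-pi, pi], not 0-360 degrees (hence the "weird" negative printout), and atan2(dy, dx) measures the angle from the positive x axis, so it is only exact if the ship artwork points right. If the sprite art points up instead, add a quarter turn:

        // Assumption: the ship bitmap faces up. atan2's zero angle faces
        // right, so rotate the result by +90 degrees (pi/2 radians).
        angle = atan2(my - shipy, mx - shipx) + ALLEGRO_PI / 2;

    A constant error for every mouse position almost always means this kind of fixed orientation offset in the source image rather than a flaw in the math.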


  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! During this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to pick one option out of the whole. This has been my decision: (1) I don't have an entity class; an entity is just an id. (2) I have systems that contain a list of components (each list is homogeneous; I mean, RenderSystem will just have RenderComponents). (3) Components are just data. (4) There will be some kind of "entity prototypes" in a manager or something, from which we will create entity instances; ideally they will define the type of components an entity has and its initialization data. (5) Prototype code to create an entity (this is from the top of my head): int id=World::getInstance()->createEntity("entity template"); (6) This will notify all systems that a new entity has been created, and if the entity needs a component that the system handles, it will add it to the entity. OK, those are the ideas. Let's see if someone can help with the problems: (1) The main problem is the template that is sent to the systems in the creation process to populate the entity with the needed components. What would you use - an OR'd int? A list of strings? (2) How do I do initialization for components when the entity has been created? How do I store this in the template? I have thought about having a virtual function in the template that, after the entity is created and populated, gets the components and sets their initialization values. (3) Don't you think this is a lot of work for just entity creation? Sorry for the long post; I have tried to expose my ideas and findings so that others could have a starting point, besides exposing my problems. Thanks in advance, Notbad.
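
    For problem (1), one common answer (a sketch; the component ids are invented) is exactly the OR'd int - a bitmask where each component type owns one bit, so each system tests membership with a single AND:

        #include <cstdint>

        enum ComponentBit : uint32_t {
            POSITION = 1u << 0,
            RENDER   = 1u << 1,
            PHYSICS  = 1u << 2,
        };

        // A template names the components an entity starts with.
        struct EntityTemplate {
            uint32_t components; // e.g. POSITION | RENDER
        };

        // Each system keeps a mask of what it handles and reacts to creation.
        struct RenderSystem {
            static constexpr uint32_t required = POSITION | RENDER;
            void onEntityCreated(int id, const EntityTemplate& t) {
                if ((t.components & required) == required) {
                    // add a RenderComponent for this id...
                }
            }
        };

    Strings scale better for data-driven editors; a frequent middle ground is strings in the data files, converted to a bit index or hash once at load time.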


  • GLSL per pixel lighting with custom light type

    - by Justin
    Ok, I am having a big problem here. I just got into GLSL yesterday, so the code will be terrible, I'm sure. Basically, I am attempting to make a light that can be passed into the fragment shader (for learning purposes). I have four input values: one for the position of the light, one for the color, one for the distance it can travel, and one for the intensity. I want to find the distance between the light and the fragment, then calculate the color from there. The code I have gives me a simply gorgeous ring of light that gets twisted and widened as the matrix is modified. I love the results, but it is not even close to what I am after. I want the light to move with all of the vertices, so it is always in the same place in relation to the objects. I can easily take it from there, but getting that to work seems to be impossible with my current structure. Can somebody give me a few pointers (pun not intended)? Vertex shader:

        attribute vec4 position;
        attribute vec4 color;
        attribute vec2 textureCoordinates;

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        void main()
        {
            vec4 ECposition = gl_ModelViewMatrix * gl_Vertex;
            vec3 tnorm = normalize(vec3 (gl_NormalMatrix * gl_Normal));
            fposition = ftransform();
            gl_Position = fposition;
            gl_TexCoord[0] = gl_MultiTexCoord0;
            fposition = ECposition;
            lightPosition = vec4(0.0, 0.0, 5.0, 0.0) * gl_ModelViewMatrix * gl_Vertex;
            lightDistance = 5.0;
            lightIntensity = 1.0;
            lightColor = vec4(0.2, 0.2, 0.2, 1.0);
        }

    Fragment shader:

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        uniform sampler2D texture;

        void main()
        {
            float l_distance = sqrt((gl_FragCoord.x * lightPosition.x) +
                                    (gl_FragCoord.y * lightPosition.y) +
                                    (gl_FragCoord.z * lightPosition.z));
            float l_value = lightIntensity / (l_distance / lightDistance);
            vec4 l_color = vec4(l_value * lightColor.r,
                                l_value * lightColor.g,
                                l_value * lightColor.b,
                                l_value * lightColor.a);
            vec4 color;
            color = texture2D(texture, gl_TexCoord[0].st);
            gl_FragColor = l_color * color;
            //gl_FragColor = fposition;
        }


  • How to make an iOS plugin for Unity3D

    - by DannoEterno
    I've spent the last 2 days reading articles and books to understand how I can make a plugin for iOS in Unity. Basically I just need a demo to understand how it works. For now I've tried the following process (with really poor luck). I started a new project in Unity and wrote a simple script:

        using UnityEngine;
        using System.Collections;
        using System;
        using System.Runtime.InteropServices;

        public class CallPlugin : MonoBehaviour
        {
            [DllImport ("__Internal")]
            private static extern int test();

            void Start()
            {
                Debug.Log(test());
            }
        }

    Then I created a project in Xcode with this simple script:

        extern "C"
        {
            int test()
            {
                int che = 5;
                return che;
            }
        }

    Then I tried: putting the .mm and .h in Assets/Plugins/iOS = nothing; building the Unity project and then adding the .h and .mm to the Xcode project = nothing. In Unity I always get an EntryPointNotFoundException, so Unity sees the file but is unable to reach the method. The problem is... how?! :) Maybe I missed something or did something wrong? Thanks a lot for any help that you can give me :)


  • How many VBOs should I use and should I keep a copy of their data?

    - by CSharpie
    First of all, I am sorry if my question is too broad. I am developing a tile-based game and switched from those gl.Begin calls to using VBOs. This is kind of working already; I managed to render a hexagonal polygon with a simple shader applied. What I am not sure about is how to implement the "whole tile" concept. Concretely, the questions are: (1) Is it better to create one VBO for a single tile and render it n times in different positions, or to render one huge VBO that represents the whole "world"? (2) Depending on the answer above, what is the best way to draw a line grid - overlaying with the same VBO using the respective polygon mode, or is there a way to let the shader do this? (3) How would frustum culling or mouse picking work then - do I need to keep the VBO data in memory?
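
    For question (1), the usual advice is to batch: one draw call over a big VBO is almost always cheaper than n draw calls over a tiny one. A sketch of building such a batch on the CPU (plain C++, GL upload omitted; hexVerts is an assumed per-tile template):

        #include <vector>

        struct Vec2 { float x, y; };

        // Append one hex tile's vertices, translated to its world position,
        // into a single big vertex array that is uploaded to one VBO.
        void appendTile(std::vector<Vec2>& batch,
                        const std::vector<Vec2>& hexVerts, // template tile
                        Vec2 origin)
        {
            for (const Vec2& v : hexVerts)
                batch.push_back({ v.x + origin.x, v.y + origin.y });
        }

        // Upload 'batch' once with glBufferData and draw the whole map with
        // a single glDrawArrays. Keeping this CPU-side copy also answers (3):
        // frustum culling and mouse picking run against it, not against GL.

    Reading data back out of a VBO is possible but slow, so for picking and culling the CPU-side array (or just the tile grid's analytic layout) is the practical source of truth.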


  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors between texels when I'm accessing a texture at coordinates that are not directly at the center of a texel, the catch being that the texels store 2 bands of spherical harmonics coefficients (= 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mipmapping) without any special considerations? In other words: if I were to first convert the coefficients back to intensity representations and then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?
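
    For what it's worth, this can be checked on paper: SH reconstruction is linear in the coefficients, so evaluation and interpolation commute (up to quantization of the texture format). Writing the reconstruction for coefficient vectors a and b and a blend factor t:

        L(w) = sum_i c_i * Y_i(w)

        (1 - t) * sum_i a_i * Y_i(w) + t * sum_i b_i * Y_i(w)
            = sum_i ((1 - t) * a_i + t * b_i) * Y_i(w)

    So GL_LINEAR on coefficient textures gives the same intensity as interpolating intensities, and mipmap generation (box averaging) is again linear, as long as the coefficients are stored in a format that isn't itself nonlinear (e.g. no sRGB conversion on the texture).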


  • How to offset particles from a point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out after a certain radius from the point of origin. For example, this is what I have right now: all particles emitted from a point of origin. What I want is this: particles offset from the point of origin by some amount, i.e. starting after the circle. What is the best way to achieve this? At the moment, I have the point of origin, the position of each particle and its rotation angle. Sorry for the poor illustrations. Edit: I was mistaken - when a particle is created, I have only the point of origin. I am able to calculate the rotation of the particle in the update method, after it has moved to a new location, using the atan2() method. This is how I create/manage particles: I create a new particle at the enemy ship's death location and, for every new particle added to the list, call Update and Draw to update its position, calculate its new angle and draw it.
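
    A minimal sketch of one way to do it (C++ here; the math is identical in any language): pick the particle's direction first, then spawn it radius units along that direction instead of at the origin itself:

        #include <cmath>
        #include <cstdlib>

        struct Vec2 { float x, y; };

        // Spawn a particle 'radius' units from 'origin' in a random direction;
        // moving it along the same direction each frame keeps the spread radial.
        Vec2 spawnOffset(Vec2 origin, float radius)
        {
            float angle = (float)std::rand() / RAND_MAX * 6.2831853f; // 0..2*pi
            return { origin.x + std::cos(angle) * radius,
                     origin.y + std::sin(angle) * radius };
        }

    Since the direction is chosen at spawn time, the rotation no longer needs to be recovered later with atan2 - the angle is already known when the particle is created.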


  • Record and replay DirectInput events

    - by cloudraven
    I am trying to build a record-and-replay system for a couple of games. I was wondering if I can make a general replay engine using DirectInput, rather than doing a specific implementation for each game. Recording DirectInput events doesn't seem to be that much of a problem, but I don't know if there is a way to play them back. My question is: is there a way to feed DirectInput events from a log and make DirectInput believe that they came from the mouse/joystick/keyboard? I assume it is unlikely, but if there is a way, I would be interested in learning about it.


  • Drawing a grid in 3D view - mathematically calculate points and draw lines between them (not working)

    - by Deukalion
    I'm trying to draw a simple grid from a starting point, expanding it to a given size, by calculating the points mathematically and drawing lines between them. But DrawPrimitives(LineList) doesn't work the way I expect; this method can't even draw lines between four points to create a simple rectangle, so how does this method work exactly? Some sort of coordinate system:

        [ ][ ][ ][ ][ ][ ][ ]
        [ ][2.2][ ][0.2][ ][2.2][ ]
        [ ][2.1][1.1][ ][1.1][2.1][ ]
        [ ][2.0][ ][0.0][ ][2.0][ ]
        [ ][2.1][1.1][ ][1.1][2.1][ ]
        [ ][2.2][ ][0.2][ ][2.2][ ]
        [ ][ ][ ][ ][ ][ ][ ]

    I've checked my method and it's working as it should: it calculates all the points needed to form a grid. This way I should be able to create points and draw lines between them, right? If I supply the method with Size = 2, it starts at 0,0 and works its way through all the corners (2,2) on each side. So, I have the positions of each point - how do I draw lines between them? VertexCount must be the number of points in this case, right? So tell me: if I can't supply this method with Point A, Point B, Point C, Point D to draw a four-vertex rectangle (Point A - B - C - D), how do I do it? How do I even begin to understand it? As far as I'm concerned, that's a "line list" - a list of points between which to draw lines. Can anyone explain what I'm missing? I wish to do this mathematically, so I can create a custom grid that can be altered.
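
    The likely catch, stated as a guess from the description: a line list does not connect consecutive points. It consumes vertices in independent pairs, so every segment needs both of its endpoints in the array, and the primitive count is vertexCount / 2, not the vertex count. In C-style terms:

        struct Vec3 { float x, y, z; };

        // A rectangle outline A-B-C-D as a LINE LIST: 4 segments = 8 vertices.
        // {A, B, C, D} alone only produces the two segments A-B and C-D.
        Vec3 A{0,0,0}, B{1,0,0}, C{1,1,0}, D{0,1,0};
        Vec3 lineList[8] = { A,B,  B,C,  C,D,  D,A };
        // Draw call (XNA/Direct3D style): primitiveCount = 8 / 2 = 4 lines.

    A grid therefore wants two vertices per grid edge; the alternative is a line strip, which does chain consecutive vertices but can't represent the disjoint rows and columns of a grid in one call.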


  • Where must I put .xnb files in a MonoGame project using VS2010?

    - by user23899
    Hello there, my problem is described below. In the "The Content Pipeline" paragraph of http://blogs.msdn.com/b/bobfamiliar/archive/2012/08/07/windows-8-xna-and-monogame-part-3-code-migration-and-windows-8-feature-support.aspx#comments the author describes how to fix this using VS2012, by putting the .xnb files into the \AppX\Content folder. But I use VS2010 and the MonoGame templates for it, and there are no folders like this - so where must I put these assets to run the game correctly?


  • Realistic Jumping

    - by Seth Taddiken
    I want to make the jumping that my character does more realistic. This is what I've tried so far, but it doesn't seem very realistic when the player jumps. I want the character to jump up at a certain speed, slow down as it gets to the top, eventually stop (for about one frame), then slowly come back down, falling faster and faster. I've been trying to make the upward speed of the player decrease by one each frame, then become negative so he goes down faster and faster... but it doesn't work very well.

        public bool isPlayerDown = true;
        public bool maxJumpLimit = false;
        public bool gravityReality = false;
        public bool leftWall = false;
        public bool rightWall = false;
        public float x = 76f;
        public float y = 405f;

        if (Keyboard.GetState().IsKeyDown(up) && this.isPlayerDown == true && this.y <= 405f)
        {
            this.isPlayerDown = false;
        }
        if (this.isPlayerDown == false && this.maxJumpLimit == false)
        {
            this.y = this.y - 6;
        }
        if (this.y <= 200)
        {
            this.maxJumpLimit = true;
        }
        if (this.isPlayerDown == true)
        {
            this.y = 405f;
            this.isPlayerDown = true;
            this.maxJumpLimit = false;
        }
        if (this.gravityReality == true)
        {
            this.y = this.y + 2f;
            this.gravityReality = false;
        }
        if (this.maxJumpLimit == true)
        {
            this.y = this.y + 2f;
            this.gravityReality = true;
        }
        if (this.y > 405f)
        {
            this.isPlayerDown = true;
        }
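
    What the description asks for - fast at the start, a momentary stop at the apex, accelerating on the way down - is exactly what a velocity variable plus constant gravity produces, with no flags needed. A hedged sketch of the idea (written in C++ here; the constants are made up, and the same few lines drop straight into an XNA update method):

        const float GROUND_Y   = 405.0f; // floor height from the original code
        const float JUMP_SPEED = -8.0f;  // negative = up (screen y grows downward)
        const float GRAVITY    =  0.3f;  // added to the velocity every frame

        float y  = GROUND_Y;
        float vy = 0.0f;
        bool  onGround = true;

        void updateJump(bool jumpPressed)
        {
            if (jumpPressed && onGround) { vy = JUMP_SPEED; onGround = false; }
            vy += GRAVITY;  // velocity shrinks, hits 0 at the apex, then grows
            y  += vy;       // position follows the velocity
            if (y >= GROUND_Y) { y = GROUND_Y; vy = 0.0f; onGround = true; }
        }

    Tuning is then two knobs: JUMP_SPEED sets the jump height and GRAVITY sets how floaty the arc feels.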


  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, aka DrawableGameComponents, which work separately from one another. I'm now wondering what's the best way to make transitions between them, and how to affect them from other scenes. Let's say I wanted to "push" one screen to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other - basically the question stays the same: how would you do that without having to handle it in every single scene? While writing this I'm realizing it will be the same for all kinds of transitions. Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But this wouldn't work if objects that are drawn have their own method and aren't drawn within the scene, or if an effect has to be applied to the whole scene. That means maybe scenes have to be drawn to their own render target? That way, one call to the base class after the normal drawing could be enough to apply the effects while drawing it to the main render target. But I once heard there are problems when switching from target to target, back and forth. So is that even a viable option? As you can see, I have some basic ideas of how it might work... but nothing specific. I'd like to learn what's the common way to achieve such things - a general way to apply all kinds of transitions.
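
    The render-target idea can be sketched language-neutrally (C++ here, with invented names): each scene draws into its own offscreen texture once per frame, and only the manager decides how the textures are composited, so individual scenes never know a transition is happening:

        struct Texture;                    // opaque offscreen render target
        void blit(Texture* tex, float x);  // assumed helper: draw tex at offset x

        struct Scene {
            virtual Texture* draw() = 0;   // scene renders into its own target
            virtual ~Scene() = default;
        };

        // t runs 0..1 over the transition; the scenes stay untouched.
        void drawPushTransition(Scene& from, Scene& to, float t, int screenWidth)
        {
            Texture* a = from.draw();
            Texture* b = to.draw();
            blit(a, -t * screenWidth);             // slide the old scene out
            blit(b, (1.0f - t) * screenWidth);     // and the new one in
        }

    A fade is the same function with alpha instead of an x offset. Switching render targets a couple of times per frame is normal practice; the classic problem is only that on some platforms a target's contents may be discarded when you switch away, so each target should be fully redrawn each frame.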


  • SharedObject not saving the level progress

    - by user3536228
    I am making a Flash game in which I have a variable levelState that describes the current level the user has entered. I am using a SharedObject to save the progress, but it does not do so. First I declared the class-level variables:

        private var levelState:Number = 1;
        private var mySaveData:SharedObject = SharedObject.getLocal("levelSave");

    In the Main function I check whether it is the first run of the game, like below:

        if (mySaveData.data.levelsComplete == null)
        {
            mySaveData.data.levelsComplete = 1;
        }

    And in the function where the winning condition is checked, so that levelState can be increased, I use this SharedObject to hold the value of levelState:

        if (/*winning condition*/)
        {
            levelState++;
            mySaveData.data.levelsComplete = levelState;
            mySaveData.flush();
            setNewLevel(levelState);
        }

    But when I play the game, clear a level and run the game again, it does not start from that level; it starts from the beginning.


  • Drawing flaming letters in 3D with OpenGL ES 2.0

    - by Chiquis
    I am a bit confused about how to achieve this. What I want is to "draw with flames". I have achieved this with textures successfully, but now my concern is about doing it with particles, to achieve the flaming effect. Am I supposed to create a path along which I should add many particle emitters that will be emitting flame particles? I understand the concept for 2D, but for 3D, are the particles always supposed to be facing the user? Something else I'm worried about is the performance hit from having that many particle emitters, because there can be many letters and drawings at the same time, and each of these elements will have many particle emitters. More detailed explanation: I have a path of points, which is my model. Imagine a dotted letter "S", for example. I want to make the "S" be on fire. The "S" is just an example; it can be a circle, a triangle, a line - pretty much any path described by my set of points. For achieving this fire effect I thought about using particles. So I am using a program called "Particle Designer" to create a fire-style particle emitter. This emitter looks perfect in 2D at the iPhone screen dimensions. So then I thought that I could probably draw an "S" or any other figure if I place many particle emitters next to each other, following the path described. To move from the 2D version to the 3D version, I thought about scaling the emitter (with a scale matrix multiplication in its model matrix) and then moving it to a point in my 3D world. I did this and it works. So now I have one particle emitter in the 3D world. My question is: is this how you would achieve a flaming letter? Is this too inefficient if I expect to have many flaming paths in my world? Am I supposed to rotate each particle's quad so that it's always looking at the user? (The last one is because I noticed that if you look at it from the side, the particles start to flatten out.)
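
    The "emitters along the path" idea can be sketched directly (C++, invented names): resample the letter's point list at a fixed spacing and put one emitter per sample, which also bounds the emitter count per letter by stroke length rather than by input point count:

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Walk the polyline and drop an emitter every 'spacing' units; the
        // carry keeps the spacing even across segment boundaries.
        std::vector<Vec3> emitterPositions(const std::vector<Vec3>& path,
                                           float spacing)
        {
            std::vector<Vec3> out;
            float carry = 0.0f;
            for (size_t i = 0; i + 1 < path.size(); ++i) {
                const Vec3& a = path[i];
                const Vec3& b = path[i + 1];
                float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
                float len = std::sqrt(dx * dx + dy * dy + dz * dz);
                float d = carry;
                while (d < len) {
                    float t = d / len;
                    out.push_back({ a.x + dx * t, a.y + dy * t, a.z + dz * t });
                    d += spacing;
                }
                carry = d - len;
            }
            return out;
        }

    On the facing question: yes, flame quads are normally billboarded (rotated each frame to face the camera), which is exactly why side views flatten out when they are not. For performance, the usual mitigation is one shared particle buffer drawn in a single call, with the emitters only deciding where new particles spawn.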


  • Tunnels in a pseudo-3D racing game

    - by Nicholas
    How would one go about doing tunnels in a pseudo-3D racing game? The main problem I have at the moment is perspective - I can't think of a way, beyond Z-sorting the sprites and tunnel coordinates, to display vehicles in front of the tunnel entrance and somehow block them from being drawn when out of sight. I would like my tunnels to be usable on flats, curves, hills and slopes. The tunnel entrance/exit is made up of 3 separate graphics (left, right and top), whilst inside the tunnel it is just one line graphic along the top (the idea being that it is supposed to sit a set distance above the current vertical road position). As you can see from the picture, the vehicles are still being rendered whilst in the tunnel. I've converted the Code Incomplete road system to GLBasic.


  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and try to pass the effect to my models like this:

        foreach (ModelMesh mesh in Model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = effect;
            }
            mesh.Draw();
        }

    My only issue is that every model now has its own light source in it. Why is this happening, and is this a problem with my shader? Edit: These are the parameters passed to the shader:

        private void Get_lambertEffect()
        {
            if (_lambertEffect == null)
                _lambertEffect = Engine.LambertEffect;

            // Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF)
            _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"];
            _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize);

            // ShadowMap parameters
            _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix);
            _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix);
            _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias);
            _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture);

            // Camera view and projection parameters
            _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix);
            _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix);
            _lambertEffect.Parameters["world"].SetValue(Matrix.CreateScale(Size) * world);

            // Light and color
            _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction);
            _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color);
            _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient);
            _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse);
            _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red));
        }

