Search Results

Search found 38203 results on 1529 pages for 'library development'.

Page 585/1529 | < Previous Page | 581 582 583 584 585 586 587 588 589 590 591 592  | Next Page >

  • Object pools for efficient resource management

    - by GameDevEnthusiast
    How can I avoid using the default new() to create each object? My previous demo had very unpleasant framerate hiccups during dynamic memory allocations (usually when arrays are resized), and creating lots of small objects, which often contain just one pointer to some DirectX resource, seems like an awful lot of waste. I'm thinking about:

    - Creating a master look-up table that refers to objects by handles (for safety and ease of serialization), much like the EntityList in the Source engine.
    - Creating a templated object pool which stores items contiguously (more cache-friendly, fast iteration, etc.), with the stored elements accessed by external systems via the global lookup table. The object pool will use the swap-with-last trick for fast removal (invoking the object's destructor first) and will update the corresponding indices in the global table accordingly (when growing/shrinking/moving elements). The elements will be copied via plain memcpy().

    Is it a good idea? Will it be safe to store objects of non-POD types (e.g. pointers, vtable) in such containers? Related post: Dynamic Memory Allocation and Memory Management
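
    A minimal sketch of the pool/handle scheme described above (all names are illustrative; it assumes the stored type can be moved safely and does no stale-handle checking):

        #include <cstdint>
        #include <vector>

        template <typename T>
        class Pool {
        public:
            using Handle = std::uint32_t;

            Handle create(const T& value) {
                Handle h;
                if (!freeHandles.empty()) { h = freeHandles.back(); freeHandles.pop_back(); }
                else { h = static_cast<Handle>(handleToIndex.size()); handleToIndex.push_back(0); }
                handleToIndex[h] = items.size();               // handle -> dense index
                items.push_back(value);
                indexToHandle.push_back(h);
                return h;
            }

            void destroy(Handle h) {
                std::size_t index = handleToIndex[h];
                items[index] = std::move(items.back());        // swap-with-last removal
                indexToHandle[index] = indexToHandle.back();
                handleToIndex[indexToHandle[index]] = index;   // fix the moved element's handle
                items.pop_back();
                indexToHandle.pop_back();
                freeHandles.push_back(h);                      // recycle the handle
            }

            T& get(Handle h) { return items[handleToIndex[h]]; }
            std::vector<T>& data() { return items; }           // contiguous, cache-friendly iteration

        private:
            std::vector<T> items;                    // dense storage
            std::vector<std::size_t> handleToIndex;  // sparse: handle -> dense index
            std::vector<Handle> indexToHandle;       // dense index -> handle
            std::vector<Handle> freeHandles;         // recycled handles
        };

    Note that this version moves elements instead of memcpy()ing them, which is what makes non-POD types (pointers, vtables) safe to store.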

    Read the article

  • How do I create a camera?

    - by Morphex
    I am trying to create a generic camera class for a game engine which works for different types of cameras (orbital, 6DoF, FPS), but I have no idea how to go about it. I have read about quaternions and matrices, but I do not understand how to implement them. In particular, it seems you need "Up", "Forward" and "Right" vectors, a quaternion for rotations, and View and Projection matrices. For example, an FPS camera only rotates around the world Y axis and the camera's own Right axis; the 6DoF camera always rotates around its own axes; and the orbital camera just translates by a set distance and always looks at a fixed target point. The concepts are there; implementing this is not trivial for me. SharpDX already has matrices and quaternions implemented, but I don't know how to use them to create a camera. Can anyone point out what I am missing or what I got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts.

    Read the article

  • PHP Battle System for RPG game

    - by Jay
    I posted this a while ago on Stack Overflow; they thought it would be better placed here, and I agree. Essentially I know what I want to accomplish, and I have something to the effect of what I want, but I am not satisfied with it. Here's the problem. Each user has some stats: STR (how hard they hit), DEF (dodging/blocking attacks), SPD (when they can strike), and STAMINA (basically their endurance in game; if this runs out they can no longer fight and they lose). What I need is something like this:

        UserA stats: STR: 1,000  DEF: 2,500  SPD: 2,000  (HP: 1000/1000)
        UserB stats: STR: 1,500  DEF: 500    SPD: 4,000  (HP: 1000/1000)

    Because the second user has double the speed, he lands twice the number of hits on the first user before he gets hit. Because he has less strength than the first user's defence, he will do no (or little) damage. This is how the battle would theoretically go:

        UserB strikes UserA for 0 damage
        UserB strikes UserA for 0 damage
        UserA strikes UserB for 500 damage
        UserB strikes UserA for 0 damage
        UserB strikes UserA for 0 damage
        UserA strikes UserB for 500 damage, and sends him to the hospital!

    I was using this code, which is buggy and not efficient, and I just need a better way to do this: http://pastebin.com/15LiQQuJ

    Oh, and if anyone has some good ideas on how to improve the concept, that would be cool too! It's not that elaborate, so I'll be thinking of all sorts of things to make it more dynamic. Thanks.
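
    A rough sketch of one battle round driven by these stats (a hedged example: the stat names match the description above, but the tick loop and threshold value are assumptions, not the pastebin code):

        #include <algorithm>
        #include <cstdio>

        struct Fighter {
            const char* name;
            int str, def, spd, hp;
        };

        // Damage is attacker STR minus defender DEF, never negative.
        int damage(const Fighter& atk, const Fighter& def) {
            return std::max(0, atk.str - def.def);
        }

        // Each tick both fighters accumulate SPD; whoever crosses the threshold strikes.
        // Double the speed therefore means double the strikes, as in the example log.
        void battle(Fighter a, Fighter b) {
            int meterA = 0, meterB = 0;
            const int threshold = 4000;   // assumed tuning value
            for (int tick = 0; tick < 1000 && a.hp > 0 && b.hp > 0; ++tick) {
                meterA += a.spd;
                meterB += b.spd;
                if (meterB >= threshold) {
                    meterB -= threshold;
                    a.hp -= damage(b, a);
                    std::printf("%s strikes %s for %d damage\n", b.name, a.name, damage(b, a));
                }
                if (meterA >= threshold && a.hp > 0) {
                    meterA -= threshold;
                    b.hp -= damage(a, b);
                    std::printf("%s strikes %s for %d damage\n", a.name, b.name, damage(a, b));
                }
            }
            std::printf("%s wins\n", a.hp > 0 ? a.name : b.name);
        }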

    Read the article

  • OpenGL textures trigger error 1281 if SFML is not called

    - by user3714670
    I am using SOIL to apply textures to VBOs. Without textures I could change the background and display black (default colour) VBOs easily, but now with textures OpenGL is giving error 1281, the background is black and some textures are not applied; the first texture IS applied, though (nothing else is working). The strange thing is: if I create a dummy texture with SFML in the same program, all the other textures do work. So I guess there is something I forgot in the texture creation/application; if someone could enlighten me. Here is the code I use to load textures (once loaded, a texture is kept in memory); it mostly comes from the SOIL example:

        texture = SOIL_load_OGL_single_cubemap(
            filename,
            SOIL_DDS_CUBEMAP_FACE_ORDER,
            SOIL_LOAD_AUTO,
            SOIL_CREATE_NEW_ID,
            SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
        );
        if( texture > 0 )
        {
            glEnable( GL_TEXTURE_CUBE_MAP );
            glEnable( GL_TEXTURE_GEN_S );
            glEnable( GL_TEXTURE_GEN_T );
            glEnable( GL_TEXTURE_GEN_R );
            glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glBindTexture( GL_TEXTURE_CUBE_MAP, texture );
            std::cout << "the loaded single cube map ID was " << texture << std::endl;
        }
        else
        {
            std::cout << "Attempting to load as a HDR texture" << std::endl;
            texture = SOIL_load_OGL_HDR_texture(
                filename,
                SOIL_HDR_RGBdivA2,
                0,
                SOIL_CREATE_NEW_ID,
                SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS
            );
            if( texture < 1 )
            {
                std::cout << "Attempting to load as a simple 2D texture" << std::endl;
                texture = SOIL_load_OGL_texture(
                    filename,
                    SOIL_LOAD_AUTO,
                    SOIL_CREATE_NEW_ID,
                    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
                );
            }
            if( texture > 0 )
            {
                // enable texturing
                glEnable( GL_TEXTURE_2D );
                // bind an OpenGL texture ID
                glBindTexture( GL_TEXTURE_2D, texture );
                std::cout << "the loaded texture ID was " << texture << std::endl;
            }
            else
            {
                glDisable( GL_TEXTURE_2D );
                std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl;
            }
        }

    And here is how I apply it when drawing:

        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if(!TextureID)
            cout << "TextureID not found ..." << endl;

        // glEnableVertexAttribArray(TextureID);
        glActiveTexture(GL_TEXTURE0);
        if(SFML)
            sf::Texture::bind(sfml_texture);
        else
        {
            glBindTexture (GL_TEXTURE_2D, texture);
            // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture);
        }
        glUniform1i(TextureID, 0);

    I am not sure SOIL is well suited to my program, as I want something as simple as possible (I used SFML's texture object before, which was the best, but I can't anymore), but if I can get it to work it would be great.

    Read the article

  • How to mimic the same fixed-size horizon as in this racing game?

    - by Aybe
    I am trying to replicate the same horizon (buildings and sky) as in the image below. As you can see, the player has advanced in a straight line, yet the horizon still has the same size. This is my attempt using 3D; while it's okay when the player is on the start line, it's not so great once the player has advanced as far as in image no. 2. This is an overview of where the horizon buildings and sky are located. Obviously this won't achieve such an effect when one is close to it, so I've tried scaling up the horizon on all axes, but the problem is that the buildings look too small depending on where you view them from. How can one mimic such rendering?
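
    One common way to get this effect (a hedged sketch, not necessarily what the game above does) is to draw the horizon the way skyboxes are drawn: with the translation stripped from the view matrix, so the backdrop rotates with the camera but never gets nearer or farther. Assuming a GLM-style matrix:

        #include <glm/glm.hpp>

        // Returns a copy of the camera's view matrix with its translation removed.
        glm::mat4 HorizonView(const glm::mat4& cameraView) {
            glm::mat4 v = cameraView;
            v[3] = glm::vec4(0.0f, 0.0f, 0.0f, 1.0f); // zero the translation column
            return v;                                 // rotation only: horizon stays fixed-size
        }

        // Per frame: draw the horizon geometry with HorizonView(view) and depth writes
        // disabled, then draw the track and cars with the full view matrix.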

    Read the article

  • Which will be faster: switching shaders, or ignoring that some cases don't need the full code?

    - by PolGraphic
    I have two types of 2D objects. In the first case (about 70% of objects), I need this code in the shader:

        float2 texCoord = input.TexCoord + textureCoord.xy;

    But in the second case I have to use:

        float2 texCoord = fmod(input.TexCoord, texCoordM.xy - textureCoord.xy) + textureCoord.xy;

    I can use the second version for the first case as well, but it will be a little slower (the fmod is useless there; input.TexCoord will always be lower than texCoordM.xy - textureCoord.xy for sure). My question is, which way will be faster:

    1. Making two independent shaders, one per rectangle type, grouping the rectangles by type and switching shaders during rendering.
    2. Making one shader and using an if statement.
    3. Making one shader and ignoring that sometimes (70% of cases) I don't need to use fmod.

    Read the article

  • Making a Camera look at a target Vector

    - by Peteyslatts
    I have a camera that works as long as it's stationary. Now I'm trying to create a child class of that camera class that will look at its target. The new addition to the class is a method called SetTarget(), which takes in a Vector3 target. The camera won't move, but I need it to rotate to look at the target. If I just set the target and then call CreateLookAt() (which takes in position, target, and up), then when the object gets far enough away and underneath the camera, it suddenly flips right side up. So I need to transform the up vector, which currently always stays at Vector3.Up. I feel like this has something to do with taking the angle between the old direction vector and the new one (which I know can be expressed by target - position). I feel like this is all really vague, so here's the code for my base camera class:

        public class BasicCamera : Microsoft.Xna.Framework.GameComponent
        {
            public Matrix view { get; protected set; }
            public Matrix projection { get; protected set; }

            public Vector3 position { get; protected set; }
            public Vector3 direction { get; protected set; }
            public Vector3 up { get; protected set; }
            public Vector3 side
            {
                get { return Vector3.Cross(up, direction); }
                protected set { }
            }

            public BasicCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game)
            {
                this.position = position;
                this.direction = target - position;
                this.up = up;
                CreateLookAt();

                projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.PiOver4,
                    (float)Game.Window.ClientBounds.Width / (float)Game.Window.ClientBounds.Height,
                    1, 500);
            }

            public override void Update(GameTime gameTime)
            {
                // TODO: Add your update code here
                CreateLookAt();
                base.Update(gameTime);
            }
        }

    And this is the code for the class that extends the above class to look at its target:

        class TargetedCamera : BasicCamera
        {
            public Vector3 target { get; protected set; }

            public TargetedCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game, position, target, up)
            {
                this.target = target;
            }

            public void SetTarget(Vector3 target)
            {
                direction = target - position;
            }

            protected override void CreateLookAt()
            {
                view = Matrix.CreateLookAt(position, target, up);
            }
        }
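
    For reference, a minimal sketch of rebuilding an orthonormal basis from the new view direction, so the up vector follows the target instead of staying at world up (names are illustrative; this is C++ with GLM rather than the XNA code above):

        #include <cmath>
        #include <glm/glm.hpp>

        struct Basis { glm::vec3 forward, right, up; };

        Basis LookBasis(const glm::vec3& position, const glm::vec3& target,
                        const glm::vec3& worldUp = glm::vec3(0.0f, 1.0f, 0.0f)) {
            Basis b;
            b.forward = glm::normalize(target - position);
            // When forward becomes (nearly) parallel to worldUp the cross product
            // degenerates, which is exactly when the sudden flip happens; fall back
            // to another reference axis in that case.
            glm::vec3 refUp = (std::fabs(glm::dot(b.forward, worldUp)) > 0.99f)
                                  ? glm::vec3(0.0f, 0.0f, 1.0f) : worldUp;
            b.right = glm::normalize(glm::cross(b.forward, refUp));
            b.up    = glm::cross(b.right, b.forward);   // orthonormal up to feed into look-at
            return b;
        }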

    Read the article

  • Normal map lighting bug in bottom right quadrant

    - by Ryan Capote
    I am currently working on getting normal maps working in my project, and have run into a problem with lighting. As you can see, the normals in the bottom-right quadrant of the light aren't calculating the correct direction to the light, or something (best seen with the red light). If I use flat normals (z normal = 1.0), it seems to work fine. Normals for the tile sheet: [image]

    Shader:

        #version 330

        uniform sampler2D uDiffuseTexture;
        uniform sampler2D uNormalsTexture;
        uniform sampler2D uSpecularTexture;
        uniform sampler2D uEmissiveTexture;
        uniform sampler2D uWorldNormals;
        uniform sampler2D uShadowMap;

        uniform vec4 uLightColor;
        uniform float uConstAtten;
        uniform float uLinearAtten;
        uniform float uQuadradicAtten;
        uniform float uColorIntensity;

        in vec2 TexCoords;
        in vec2 GeomSize;

        out vec4 FragColor;

        float sample(vec2 coord, float r)
        {
            return step(r, texture2D(uShadowMap, coord).r);
        }

        float occluded()
        {
            float PI = 3.14;
            vec2 normalized = TexCoords.st * 2.0 - 1.0;
            float theta = atan(normalized.y, normalized.x);
            float r = length(normalized);
            float coord = (theta + PI) / (2.0 * PI);
            vec2 tc = vec2(coord, 0.0);
            float center = sample(tc, r);
            float sum = 0.0;
            float blur = (1.0 / GeomSize.x) * smoothstep(0.0, 1.0, r);
            sum += sample(vec2(tc.x - 4.0*blur, tc.y), r) * 0.05;
            sum += sample(vec2(tc.x - 3.0*blur, tc.y), r) * 0.09;
            sum += sample(vec2(tc.x - 2.0*blur, tc.y), r) * 0.12;
            sum += sample(vec2(tc.x - 1.0*blur, tc.y), r) * 0.15;
            sum += center * 0.16;
            sum += sample(vec2(tc.x + 1.0*blur, tc.y), r) * 0.15;
            sum += sample(vec2(tc.x + 2.0*blur, tc.y), r) * 0.12;
            sum += sample(vec2(tc.x + 3.0*blur, tc.y), r) * 0.09;
            sum += sample(vec2(tc.x + 4.0*blur, tc.y), r) * 0.05;
            return sum * smoothstep(1.0, 0.0, r);
        }

        float calcAttenuation(float distance)
        {
            float linearAtten = uLinearAtten * distance;
            float quadAtten = uQuadradicAtten * distance * distance;
            float attenuation = 1.0 / (uConstAtten + linearAtten + quadAtten);
            return attenuation;
        }

        vec3 calcFragPosition(void)
        {
            return vec3(TexCoords*GeomSize, 0.0);
        }

        vec3 calcLightPosition(void)
        {
            return vec3(GeomSize/2.0, 0.0);
        }

        float calcDistance(vec3 fragPos, vec3 lightPos)
        {
            return length(fragPos - lightPos);
        }

        vec3 calcLightDirection(vec3 fragPos, vec3 lightPos)
        {
            return normalize(lightPos - fragPos);
        }

        vec4 calcFinalLight(vec2 worldUV, vec3 lightDir, float attenuation)
        {
            float diffuseFactor = dot(normalize(texture2D(uNormalsTexture, worldUV).rgb), lightDir);
            vec4 diffuse = vec4(0.0);
            vec4 lightColor = uLightColor * uColorIntensity;
            if(diffuseFactor > 0.0)
            {
                diffuse = vec4(texture2D(uDiffuseTexture, worldUV.xy).rgb, 1.0);
                diffuse *= diffuseFactor;
                lightColor *= diffuseFactor;
            }
            else
            {
                discard;
            }
            vec4 final = (diffuse + lightColor);
            if(texture2D(uWorldNormals, worldUV).g > 0.0)
            {
                return final * attenuation;
            }
            else
            {
                return final * occluded();
            }
        }

        void main(void)
        {
            vec3 fragPosition = calcFragPosition();
            vec3 lightPosition = calcLightPosition();
            float distance = calcDistance(fragPosition, lightPosition);
            float attenuation = calcAttenuation(distance);
            vec2 worldPos = gl_FragCoord.xy / vec2(1024, 768);
            vec3 lightDir = calcLightDirection(fragPosition, lightPosition);
            lightDir = (lightDir*0.5)+0.5;
            float atten = calcAttenuation(distance);
            vec4 emissive = texture2D(uEmissiveTexture, worldPos);
            FragColor = calcFinalLight(worldPos, lightDir, atten) + emissive;
        }

    Read the article

  • Best practice for designing a risk-style board game

    - by jyanks
    I'm just trying to figure out how to set up the code for a game like Risk. I would like it to be extensible, so that I can have multiple maps (e.g. World, North America, Eurasia, Africa), so hardcoding the map doesn't seem to make a whole lot of sense. I'm a bit confused about how/where items should be stored and accessed. Here are the objects I see the game theoretically using:

    - Countries/Territories
    - Cities (can be contained within territories)
    - Capitals
    - Connections
    - Continents
    - Map
    - Troops

    At the moment, I feel like:

    - A map should have a list of continents and countries. The continents would be more of a 'logical' thing, where each continent is just a list of countries that is checked for bonuses at the start of turns.
    - Countries should have a list of the countries they're connected to, for the connections.

    What I can't figure out is: where do I store the troops? Do I have an object for every single troop, or do I just store the number of troops on a country object as an integer? What about capitals and cities? Do those just have a reference to the country they reside in? Is there anything I'm not seeing here that's going to screw me over in the long run with the way that I'm thinking about things now? Any advice would be appreciated.
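
    A minimal sketch of that layout (all names are illustrative), with troops stored as a plain count per territory and continents as lists of territory indices:

        #include <string>
        #include <vector>

        struct Territory {
            std::string name;
            std::vector<int> adjacent;    // indices of connected territories
            int owner  = -1;              // player index, -1 = unowned
            int troops = 0;               // a count is usually enough; no per-troop objects
        };

        struct Continent {
            std::string name;
            std::vector<int> territories; // indices into Map::territories
            int bonus = 0;                // reinforcement bonus for holding all of them
        };

        struct Map {
            std::vector<Territory> territories;
            std::vector<Continent> continents;

            // Continent bonus check at the start of a player's turn.
            int bonusFor(int player) const {
                int total = 0;
                for (const Continent& c : continents) {
                    bool ownsAll = true;
                    for (int t : c.territories)
                        if (territories[t].owner != player) { ownsAll = false; break; }
                    if (ownsAll) total += c.bonus;
                }
                return total;
            }
        };

    Loading a different map is then just a matter of filling these vectors from a data file rather than hardcoding a particular board.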

    Read the article

  • 2D SAT Collision Detection not working when using certain polygons (With example)

    - by sFuller
    My SAT algorithm falsely reports that a collision is occurring when using certain polygons. I believe this happens when using a polygon that does not contain a right angle. Here is a simple diagram of what is going wrong, and here is the problematic code:

        std::vector<vec2> axesB = polygonB->GetAxes();
        // loop over axes B
        for(int i = 0; i < axesB.size(); i++)
        {
            float minA, minB, maxA, maxB;
            polygonA->Project(axesB[i], &minA, &maxA);
            polygonB->Project(axesB[i], &minB, &maxB);
            float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB);
            if(intervalDistance >= 0)
                return false; // Collision not occurring
        }

    This function retrieves the axes from the polygon:

        std::vector<vec2> Polygon::GetAxes()
        {
            std::vector<vec2> axes;
            for(int i = 0; i < verts.size(); i++)
            {
                vec2 a = verts[i];
                vec2 b = verts[(i+1) % verts.size()];
                vec2 edge = b - a;
                axes.push_back(vec2(-edge.y, edge.x).GetNormailzed());
            }
            return axes;
        }

    This function returns the normalized vector:

        vec2 vec2::GetNormailzed()
        {
            float mag = sqrt( x*x + y*y );
            return *this/mag;
        }

    This function projects a polygon onto an axis:

        void Polygon::Project(vec2* axis, float* min, float* max)
        {
            float d = axis->DotProduct(&verts[0]);
            float _min = d;
            float _max = d;
            for(int i = 1; i < verts.size(); i++)
            {
                d = axis->DotProduct(&verts[i]);
                _min = std::min(_min, d);
                _max = std::max(_max, d);
            }
            *min = _min;
            *max = _max;
        }

    This function returns the dot product of the vector with another vector:

        float vec2::DotProduct(vec2* other)
        {
            return (x*other->x + y*other->y);
        }

    Could anyone give me a pointer in the right direction as to what could be causing this bug?

    Edit: I forgot this function, which gives me the interval distance:

        float Polygon::GetIntervalDistance(float minA, float maxA, float minB, float maxB)
        {
            float intervalDistance;
            if (minA < minB)
            {
                intervalDistance = minB - maxA;
            }
            else
            {
                intervalDistance = minA - maxB;
            }
            return intervalDistance; // A positive value indicates this axis can be separated.
        }

    Edit 2: I have recreated the problem in HTML5/JavaScript: Demo

    Read the article

  • Check for bodies within a specific circle in Box2D

    - by ltjax
    I'm trying to find positions to insert new bodies into my world. For that, I'd like to find a "free" spot where the new body wouldn't overlap with anything else. So my plan was to sample "random" positions and check whether they overlap with my "potential" new body. Since my bodies are always circular, I need to test within a given circle. So far, the only way to do this with Box2D seems to be to use b2World::QueryAABB around my circle and manually do an overlap test with all the fixtures it gives me (Box2D doesn't even seem to let me tap into its own overlap tests?!). It seems to me like Box2D should already provide such functionality - is there a way that lets me do this without reinventing most of the wheel again?
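
    For what it's worth, a sketch of the QueryAABB-plus-narrow-phase approach described above (assuming Box2D 2.x; b2TestOverlap is the library's shape-vs-shape overlap test, but double-check the signature against your version):

        #include <Box2D/Box2D.h>

        // Reports whether any existing fixture overlaps a candidate circle.
        class CircleOverlapCallback : public b2QueryCallback {
        public:
            CircleOverlapCallback(const b2CircleShape* circle, const b2Transform& xf)
                : m_circle(circle), m_xf(xf), m_overlap(false) {}

            bool ReportFixture(b2Fixture* fixture) override {
                if (b2TestOverlap(m_circle, 0, fixture->GetShape(), 0,
                                  m_xf, fixture->GetBody()->GetTransform())) {
                    m_overlap = true;
                    return false;   // stop the query early
                }
                return true;        // keep looking
            }

            bool Overlaps() const { return m_overlap; }

        private:
            const b2CircleShape* m_circle;
            b2Transform m_xf;
            bool m_overlap;
        };

        bool SpotIsFree(const b2World& world, const b2Vec2& pos, float radius) {
            b2CircleShape circle;
            circle.m_radius = radius;
            b2Transform xf(pos, b2Rot(0.0f));

            b2AABB aabb;                                    // broad-phase box around the circle
            aabb.lowerBound = pos - b2Vec2(radius, radius);
            aabb.upperBound = pos + b2Vec2(radius, radius);

            CircleOverlapCallback cb(&circle, xf);
            world.QueryAABB(&cb, aabb);
            return !cb.Overlaps();
        }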

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally I guess you'd use BlendState.AlphaBlend because normally when you load your textures through the pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup costs. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?

    Read the article

  • Changing the alpha of a frame in libgdx

    - by Rudy_TM
    I have this:

        batch.draw(currentFrame, x, y, this.parent.originX, this.parent.originY,
                   this.parent.width, this.parent.height, this.scaleX, this.scaleY, this.rotation);

    I want to apply the alpha that it gets from the method, but there is no overload of SpriteBatch.draw that takes an alpha value. Is there some way to apply it? (I did it this way because these are animations and I wanted to control them.) For my static ones I apply sprite.draw(spriteBatch, alpha). Thanks.

    Read the article

  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg

    A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true:

    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg

    Is there an equally simple way to determine if those squares are touching?
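
    The four-comparison test above, written out as code (a sketch with made-up field names). It only works while both boxes stay axis-aligned; once one square rotates, the usual generalisation is a Separating Axis Test over both squares' edge normals, which is no longer quite as simple:

        // Screen-style coordinates: y grows downward, so top < bottom.
        struct Box { float left, right, top, bottom; };

        bool touching(const Box& a, const Box& b) {
            return !(b.left   > a.right  ||   // B entirely to the right of A
                     b.right  < a.left   ||   // B entirely to the left of A
                     b.bottom < a.top    ||   // B entirely above A
                     b.top    > a.bottom);    // B entirely below A
        }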

    Read the article

  • Dynamic body implementation

    - by ArturoVM
    I am writing a 2D game where one of the characters has some very particular requirements. This character is a body with no particular shape (similar to a fluid, but not so much), it has to be able to grow and shrink (as in actually growing, not just scaling), and it has to have collision detection (even if it's basic). Because of this requirements, it obviously can't be based on a sprite, so direct rendering of the shape should be the logical thing to do. I assume this is no easy task, but I just couldn't find a good physics engine that covers these requirements (or at least no tutorial on how to do it; I particularly searched for Box2D tutorials). Is there a way of doing this with Box2D, SDL, or any other physics or game engine out there? If not, what's a good place to start? I am really clueless as far as soft-body physics are concerned.

    Read the article

  • Different ways to pass Textures into HLSL shaders

    - by codymanix
    The GraphicsDevice class of XNA 4 has the properties Textures and VertexTextures. What is the exact difference? I don't really understand what MSDN tells me about this. I usually use effect parameters to pass textures to my HLSL shaders. What are the differences between these methods, and which is faster? My scenario: I am working on a Minecraft-like game, which means lots of separate DrawPrimitives calls, and I change the current texture often since I have lots of different block types. Since I use an octree to organize the world, I cannot easily sort by texture.

    Read the article

  • Smooth pixels while rotating sprite

    - by goodm
    I just started with AndEngine, so this may be a silly question. How do I make my sprites smoother while I rotate them? Or maybe it's because this is a screenshot from a tablet? Thanks JohnEye, it works: I just needed to change my BitmapTextureAtlas from

        BitmapTextureAtlas carAtlas = new BitmapTextureAtlas(this.getTextureManager(), 100, 63);

    to

        BitmapTextureAtlas carAtlas = new BitmapTextureAtlas(this.getTextureManager(), 100, 63, TextureOptions.BILINEAR);

    Read the article

  • Circle-Rectangle collision in a tile map game

    - by furiousd
    I am making a 2D tile-map based putt-putt game. I have collision detection working between the ball and the walls of the map, although when the ball collides at the meeting point between 2 tiles I offset it by 0.5 so that it doesn't get stuck in the wall. This isn't a huge issue though.

        if(y % 20 == 0) { y += 0.5; }
        if(x % 20 == 0) { x += 0.5; }

    Collisions work as follows:

    1. Find the closest point between each tile and the center of the ball.
    2. If distance(ball_x, ball_y, close_x, close_y) <= ball_radius and the closest point belongs to a solid object, a collision has occurred.
    3. Invert the X/Y speed according to the side of the object collided with.

    The next thing I tried to do was implement floating blocks in the middle of the map for the ball to bounce off of. When a ball collides with a corner of a block, it gets stuck in it. So I changed my determineRebound() function to treat corners as if they were circles. Here's that function (i and j are indexes of the solid object in the 2D map array; x and y are the centre point of the ball):

        void determineRebound(int _i, int _j)
        {
            if(y > _i*tile_w && y < _i*tile_w + tile_w)
            {
                // Not a corner
                xs *= -1;
            }
            else if(x > _j*tile_w && x < _j*tile_w + tile_w)
            {
                // Not a corner
                ys *= -1;
            }
            else
            {
                // Corner
                float nx = x - close_x;
                float ny = y - close_y;
                float len = sqrt(nx * nx + ny * ny);
                nx /= len;
                ny /= len;
                float projection = xs * nx + ys * ny;
                xs -= 2 * projection * nx;
                ys -= 2 * projection * ny;
            }
        }

    This is where things have gotten messy. Collisions with 'floating' corners work fine, but now when the ball collides near the meeting point of 2 tiles, it detects a corner collision and does not rebound as expected. I'm a bit in over my head at this point. I guess I'm wondering if I'm going about making this sort of game in the right way. Is a 2D tile map the way to go? If so, is there a problem with my collision logic, and where am I going wrong? Any advice/feedback would be great.
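
    For comparison, a sketch of the closest-point test described in step 1, done by clamping the ball's centre to the tile's box (illustrative names; the collision normal falls out of the same calculation, giving one code path for edges and corners):

        #include <algorithm>
        #include <cmath>

        // True if a circle at (cx, cy) with radius r overlaps the square tile at (tileX, tileY).
        bool circleHitsTile(float cx, float cy, float r,
                            float tileX, float tileY, float tileW) {
            // Closest point on the tile's box to the circle centre.
            float px = std::max(tileX, std::min(cx, tileX + tileW));
            float py = std::max(tileY, std::min(cy, tileY + tileW));
            float dx = cx - px;   // (dx, dy) is also the (unnormalized) collision normal
            float dy = cy - py;
            return dx * dx + dy * dy <= r * r;
        }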

    Read the article

  • Elliptical orbit modeling

    - by Nathon
    I'm playing with orbits in a simple 2-D game where a ship flies around in space and is attracted to massive things. The ship's velocity is stored in a vector and acceleration is applied to it every frame as appropriate given Newton's law of universal gravitation. The point masses don't move (there's only 1 right now) so I would expect an elliptical orbit. Instead, I see this: I've tried with nearly circular orbits, and I've tried making the masses vastly different (a factor of a million), but I always get this rotated orbit. Here's some (D) code, for context:

        void accelerate(Vector delta)
        {
            velocity = velocity + delta; // Velocity is a member of the ship class.
        }

        // This function is called every frame with the fixed mass. It's a
        // method of the ship's.
        void fall(Well well)
        {
            // f=(m1 * m2)/(r**2)
            // a=f/m
            // Ship mass is 1, so a = f.
            float mass = 1;
            Vector delta = well.position - loc;
            float rSquared = delta.magSquared;
            float force = well.mass/rSquared;
            accelerate(delta * force * mass);
        }

    Read the article

  • How can I achieve strong typing with a component messaging system?

    - by Vaughan Hilts
    I'm looking at implementing a messaging system in my entity component system. I've deduced that I can use an event / queue for passing messages, but right now, I just use a generic object and cast out the data I want. I also considered using a dictionary. I see a lot of information on this, but they all involve a lot of casting and guessing. Is there any way to do this elegantly and keep strong typing on my messages?
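
    One way to keep the messages strongly typed without casting from a generic object is to put every message type into a tagged union and dispatch on it at compile time; a sketch in C++ with std::variant (the message types are made up):

        #include <queue>
        #include <variant>

        struct DamageTaken  { int entityId; int amount; };
        struct ItemPickedUp { int entityId; int itemId; };

        // The closed set of message types; adding one is a compile-time change.
        using Message = std::variant<DamageTaken, ItemPickedUp>;

        struct MessageQueue {
            std::queue<Message> pending;

            void post(Message m) { pending.push(std::move(m)); }

            template <typename Handler>
            void dispatch(Handler&& handler) {
                while (!pending.empty()) {
                    std::visit(handler, pending.front()); // each alternative keeps its real type
                    pending.pop();
                }
            }
        };

        // Usage sketch: an overload set (or a generic lambda) handles each message type.
        // queue.post(DamageTaken{42, 10});
        // queue.dispatch([](const auto& msg) { /* typed handling here */ });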

    Read the article

  • How do I stop projected textures from appearing on the "back" of the mesh as well?

    - by user975135
    I want to create blood wounds on my characters' bodies by using projected textures. I've watched some commentary on games like Left 4 Dead, and they say they use projected textures for the blood. But the way projected textures work is that if you project a texture onto a rigged character, say his chest, it will also appear on his back. So what's the trick? How do I get projected textures to appear on only one "side" of the mesh? I use the Panda3D game engine, if that helps.

    Read the article

  • ways to program glitch style effects

    - by okkk
    Most tutorials for generating glitch art have to do with some form of manipulation of file compression. Should my goal instead be to replicate the look of these glitches in shaders, or is it somehow possible to authentically generate the compression artifacts in real time? Example: the effect I'm particularly interested in is referred to as datamoshing. It does "things" using the P-frames of a video (frames that, I think, store just the change in pixels). I feel like I need a better understanding of both graphics programming and data compression.

    Read the article

  • Does SFML render graphics outside the window?

    - by ThePlan
    While working on a tile-based map, I figured it would be a good idea to only render what the player sees in the game window, but then it occurred to me that SFML might already be optimized enough to know when it doesn't have to render those things. Let's say I draw a 30x30-tile map (a medium one) but the player only sees a bunch of the tiles, not all of them. Would SFML automatically hide what the player doesn't see, or should I hide it myself?

    Read the article

  • Minecraft style XNA game collision?

    - by Levi
    I've been trying to get this working for ages now. I can detect if there's a solid block at any place on the map and I can check how far something is inside of it, but I don't understand how to fix the collision. I've tried loads of ways, and all of them end up with the player getting stuck, glitching around, or responding incorrectly, and I really have no idea how to go about this.

        int Chnk = Utility.GetChunkFromPosition(origin);
        if (Chnk == -1)
            return;

        Vector3 Pos = Utility.GetCubeVectorFromPosition(origin);

        if (GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)Pos.Y, (byte)Pos.Z] != 0)
        {
            isInIllegalState = true;
            if (velocity.Y < 0f)
                velocity.Y = 0f;
        }

        while (isInIllegalState)
        {
            if (GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)origin.Y, (byte)Pos.Z] != 0)
                origin.Y = (int)(origin.Y + 1);
            else
                isInIllegalState = false;
        }

        if (origin.Y < Chunk.YSize - 2 &&
            GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)(origin.Y + playerHeight.Y), (byte)Pos.Z] != 0)
        {
            velocity.Y = 0f;
            //Acceleration.Y = 0f;
            origin.Y = (int)origin.Y;// -0.5f;
        }

        for (int x = -1; x <= 1; x += 2)
        {
            for (int z = -1; z <= 1; z += 2)
            {
                Vector3 CornerPosition = new Vector3(boundingSize * x, 0, boundingSize * z);
                bool CorrectX = false;
                bool CorrectZ = false;

                Vector3 RoundedOrigin = Utility.RoundVector(origin);
                Vector3 RoundedCorner = Utility.RoundVector(origin + CornerPosition);
                byte BlockAdjacent = Utility.GetCubeFromPosition(origin + CornerPosition);

                if (BlockAdjacent == 0)
                    continue;

                if (RoundedCorner.X != RoundedOrigin.X && RoundedCorner.Z != RoundedOrigin.Z)
                {
                    CorrectX = true;
                    CorrectZ = true;
                }
                if (RoundedCorner.Z != RoundedOrigin.Z && RoundedCorner.X == RoundedOrigin.X)
                    CorrectZ = true;
                if (RoundedCorner.X != RoundedOrigin.X && RoundedCorner.Z == RoundedOrigin.Z)
                    CorrectX = true;

                if (CorrectX && CornerPosition.X > 0)
                {
                    if (origin.X > 0f)
                        origin.X = (int)(origin.X + 1) - boundingSize;
                    else
                        origin.X = (int)origin.X - boundingSize;
                }
                else if (CorrectX && CornerPosition.X < 0)
                {
                    if (origin.X > 0f)
                        origin.X = (int)(origin.X) + boundingSize;
                    else
                        origin.X = (int)(origin.X - 1) + boundingSize;
                }

                if (CorrectZ && CornerPosition.Z > 0)
                {
                    if (origin.Z > 0f)
                        origin.Z = (int)(origin.Z + 1) - boundingSize;
                    else
                        origin.Z = (int)origin.Z - boundingSize;
                }
                else if (CorrectZ && CornerPosition.Z < 0)
                {
                    if (origin.Z > 0f)
                        origin.Z = (int)(origin.Z) + boundingSize;
                    else
                        origin.Z = (int)(origin.Z - 1) + boundingSize;
                }
            }
        }

    Read the article

  • My frustum culling is culling from the wrong point [SOLVED]

    - by Xbetas
    I'm having problems with my frustum having the wrong origin. It follows the rotation of my camera but not the position. In my camera class I'm generating a view matrix:

        void Camera::Update()
        {
            UpdateViewMatrix();
            glMatrixMode(GL_MODELVIEW);
            //glLoadIdentity();
            glLoadMatrixf(GetViewMatrix().m);
        }

    Then extracting the planes using the projection matrix and modelview matrix:

        void UpdateFrustum()
        {
            Matrix4x4 projection, model, clip;
            glGetFloatv(GL_PROJECTION_MATRIX, projection.m);
            glGetFloatv(GL_MODELVIEW_MATRIX, model.m);
            clip = model * projection;

            m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0];
            m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4];
            m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8];
            m_Planes[RIGHT][3] = clip.m[15] - clip.m[12];
            NormalizePlane(RIGHT);

            m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0];
            m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4];
            m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8];
            m_Planes[LEFT][3] = clip.m[15] + clip.m[12];
            NormalizePlane(LEFT);

            m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1];
            m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5];
            m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9];
            m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13];
            NormalizePlane(BOTTOM);

            m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1];
            m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5];
            m_Planes[TOP][2] = clip.m[11] - clip.m[ 9];
            m_Planes[TOP][3] = clip.m[15] - clip.m[13];
            NormalizePlane(TOP);

            m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2];
            m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6];
            m_Planes[NEAR][2] = clip.m[11] + clip.m[10];
            m_Planes[NEAR][3] = clip.m[15] + clip.m[14];
            NormalizePlane(NEAR);

            m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2];
            m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6];
            m_Planes[FAR][2] = clip.m[11] - clip.m[10];
            m_Planes[FAR][3] = clip.m[15] - clip.m[14];
            NormalizePlane(FAR);
        }

        void NormalizePlane(int side)
        {
            float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] +
                                           m_Planes[side][1] * m_Planes[side][1] +
                                           m_Planes[side][2] * m_Planes[side][2]);
            m_Planes[side][0] *= length;
            m_Planes[side][1] *= length;
            m_Planes[side][2] *= length;
            m_Planes[side][3] *= length;
        }

    And checking against it with:

        bool PointInFrustum(float x, float y, float z)
        {
            for(int i = 0; i < 6; i++)
            {
                if( m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0 )
                    return false;
            }
            return true;
        }

    Then I render using:

        camera->Update();
        UpdateFrustum();

        int numCulled = 0;
        for(int i = 0; i < (int)meshes.size(); i++)
        {
            if(!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z))
            {
                meshes[i]->SetDraw(false);
                numCulled++;
            }
            else
                meshes[i]->SetDraw(true);
        }

    The matrices look like this (the camera is at (5, 0, 0)):

        ModelView
        [ 0,     0,  0.99,  0]
        [ 0,     1,  0,     0]
        [-0.99,  0,  0,     0]
        [ 0,     0, -5,     1]

        Projection
        [0.814,  0,      0,      0]
        [0,      1.303,  0,      0]
        [0,      0,     -1,      0]
        [0,      0,     -0.02,   0]

        Clip
        [ 0,      0,    -1,    -0.999]
        [ 0,      1.30,  0,     0]
        [-0.814,  0,     0,     0]
        [ 0,      0,     4.98,  4.99]

    What am I doing wrong?

    Read the article
