Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.

  • XNA - Finding boundaries in isometric tilemap

    - by Yheeky
    I have an issue with my 2D isometric engine. I'm using my own 2D camera class, which works with matrices, and I need to find the tilemap's boundaries so the user always sees the map. Currently my map size is 100x100 (with 128x128 tiles), so the calculation (e.g. for the right boundary) is:

        var maxX = (TileMap.MapWidth + 1) * (TileMap.TileWidth / 2) - ViewSize.X;
        var maxX = (100 + 1) * (128 / 2) - 1360; // = 5104 pixels

    This works fine with a scale factor of 1.0f, but not for any other zoom factor. When I zoom out to 0.9f, the right border should be at approx. 4954. I'm using the following code for the transformation, but I always get a wrong value:

        var maxXVector = new Vector2(maxX, 0);
        var maxXTransformed = Vector2.Transform(maxXVector, tempTransform).X;

    The result is 4593. Does anyone have an idea what I'm doing wrong? Thanks for your help! Yheeky
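
    Working backwards from the expected value, the bound falls out of dividing the view size by the zoom rather than transforming the unzoomed bound: (100 + 1) * 64 = 6464 map pixels, and 6464 - 1360 / 0.9 ≈ 4953, which matches the ~4954 above. A minimal C#/XNA sketch of zoom-aware clamping (names are illustrative, not from the question):

        // Clamp the camera's top-left corner so the map always fills the view.
        // At zoom z the camera sees viewSize / z world units, so the maximum
        // scroll shrinks as you zoom out and grows as you zoom in.
        using Microsoft.Xna.Framework;

        public static class CameraClamp
        {
            public static Vector2 Clamp(Vector2 position, float zoom,
                                        Vector2 viewSize, Vector2 mapSizePixels)
            {
                float maxX = mapSizePixels.X - viewSize.X / zoom;
                float maxY = mapSizePixels.Y - viewSize.Y / zoom;
                position.X = MathHelper.Clamp(position.X, 0f, maxX);
                position.Y = MathHelper.Clamp(position.Y, 0f, maxY);
                return position;
            }
        }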

  • Unity Particle System collision detection problem

    - by Krav
    I'm using Unity 3.5.5f3, which has the Shuriken particle system. I've made a blood particle system based on Unity's demos (Exploding paint [Blood]). The blood flows, and when it collides with a Plane transform which I've created, a small pool of blood spawns as a collision sub-emitter. My main problem is that when I want to add another object to collide with, it just doesn't work. When I create a cube and set it as a collision plane, the collision only occurs at half of the cube. I want this to happen: when the blood reaches the cube's surface, the sub-emitter activates; when the surface is horizontal it should appear horizontally, and when it's vertical, vertically. Right now it just appears horizontally every time, like in the picture. How can I solve this?

  • Execute code at specific intervals, only once?

    - by Mathias Lykkegaard Lorenzen
    I am having an issue with XNA, where I want to execute some code in my Update method, but only at a given interval, and only once per interval. I would like to avoid booleans to check whether I've already called it, if possible. My code is here:

        if ((gameTime.TotalGameTime.TotalMilliseconds % 500) == 0)
        {
            Caret.Visible = !Caret.Visible;
        }

    As you may have guessed, it's for a TextBox control, to animate the caret between invisible and visible states. I have reason to believe that it is called twice or maybe even three times in a single update cycle, which is bad and makes the caret look unstable and jumpy.
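
    The modulo test only fires when TotalMilliseconds happens to land exactly on a 500 ms boundary, so depending on the timestep it can trigger zero, one, or several times around each boundary. An accumulator needs no booleans and fires exactly once per interval; a minimal C# sketch (illustrative names):

        // Accumulate elapsed time and toggle once per fixed interval.
        public class CaretBlinker
        {
            private float _timer;
            private const float Interval = 0.5f; // seconds between toggles

            public bool Visible = true;

            // Call from Update with (float)gameTime.ElapsedGameTime.TotalSeconds.
            public void Update(float elapsedSeconds)
            {
                _timer += elapsedSeconds;
                while (_timer >= Interval) // 'while' catches up after a long hitch
                {
                    _timer -= Interval;
                    Visible = !Visible; // runs exactly once per half second
                }
            }
        }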

  • 3d vertex translated onto 2d viewport

    - by Dan Leidal
    I have a spherical world defined by simple trigonometric functions that create triangles which are relatively similar in size and shape throughout. What I want to be able to do is use mouse input to target a range of vertices in the area around the mouse click, in order to manipulate those vertices in real time. I read a post on this forum regarding translating 3D world coordinates into the 2D viewport. It recommended multiplying the world vector coordinates by the view and then the projection matrices, but it didn't include any code examples, and suffice it to say I couldn't get any good results. Further information: I am using a LookAt method for the view matrix. Does this cause a problem, and if so, is there a solution? If this isn't the problem, does anyone have a simple code example illustrating translating one vertex in a 3D world into 2D view space? I am using XNA.
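
    For the record, the usual order is world, then view, then projection, and XNA wraps the whole pipeline in one call. A sketch of both directions, assuming this runs inside a Game class and that camera.View and camera.Projection hold your LookAt and projection matrices (a LookAt view is not a problem; mouseX/mouseY are illustrative):

        // World -> screen: returns pixel x/y plus a depth value in z.
        Vector3 screenPos = GraphicsDevice.Viewport.Project(
            worldVertex, camera.Projection, camera.View, Matrix.Identity);

        // Screen -> world: unproject the mouse at the near and far planes
        // to get a pick ray for finding vertices around the click.
        Vector3 nearPt = GraphicsDevice.Viewport.Unproject(
            new Vector3(mouseX, mouseY, 0f), camera.Projection, camera.View, Matrix.Identity);
        Vector3 farPt = GraphicsDevice.Viewport.Unproject(
            new Vector3(mouseX, mouseY, 1f), camera.Projection, camera.View, Matrix.Identity);
        Vector3 rayDir = Vector3.Normalize(farPt - nearPt);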

  • Best depth sorting method for a Top Down 2D game using a 3D physics engine

    - by Alic44
    I've spent many days googling this and still have issues with my game engine that I'd like to ask about, which I haven't seen addressed before. I think the problem is that my game is an unusual combination of a completely 2D graphical approach using XNA's SpriteBatch and a completely 3D engine (the amazing BEPU physics engine) with rotation mostly disabled. In essence, my question is similar to this one (the part about "faux 3D"), but the difference is that in my game the player, as well as every other creature, is represented by a 3D object, and they can all jump, pick up other objects, and throw them around. What this means is that sorting by one value, such as a Z position (how far north/south a character is on the screen), won't work: as soon as a smaller creature jumps on top of a larger creature or a box and walks backwards, the moment its Z value is less than that other creature's it will appear to be behind the object it is actually standing on. I originally solved this problem by splitting every object in the game into physics boxes which MUST have a Y height equal to their Z depth. I then based the depth-sorting value on the object's Y position (how high it is off the ground) PLUS its Z position (how far north or south it is on the screen). The problem with this approach is that it requires all moving objects in the game to be split graphically into chunks which match up with a physics box whose Y dimension equals its Z dimension. Which is stupid. So I got inspired last night to rewrite with a fresh approach. My new method is a little more complex, but I think a little more sane: every object which needs to be sorted by depth exposes the interface IDepthDrawable and is added to a list owned by the DepthDrawer object. IDepthDrawable contains:

        public interface IDepthDrawable
        {
            Rectangle Bounds { get; } // possibly change this to a class if struct copying of the XNA Rectangle type becomes an issue
            DepthDrawShape DepthShape { get; }
            void Draw(SpriteBatch spriteBatch);
        }

    The Bounds rectangle of each IDepthDrawable object represents the 2D axis-aligned bounding box it will take up when drawn to the screen. Anything that doesn't intersect the screen will be culled at this stage, and the remaining on-screen IDepthDrawables will be Bounds-tested for intersections with each other. This is where I get a little less sure of what I'm doing. Each group of collisions will be added to a list or other collection, and each list will sort itself based on its DepthShape property, which will have access to the object-to-be-drawn's physics information. For starting out, let's assume everything in the game is an axis-aligned 3D box shape. Boxes are pretty easy to sort. Something like:

        if (depthShape1.Back > depthShape2.Front)
        {
            // depthShape1 is in front of depthShape2: it goes on top.
        }
        else if (depthShape1.Bottom > depthShape2.Top)
        {
            // depthShape1 is above depthShape2: it goes on top.
        }
        // if neither is true, depthShape2 must be in front or above

    So, by sorting draw order by several different factors from the physics engine, I believe I can get a really correct draw order. My question is: is this a good way of going about this, or is there some tried and true, tested way which is completely different and has somehow eluded me on the internets?
    And, if this does seem like a good way to remake my draw-order sorting, what's the right sorting algorithm for reordering the Bounds rectangle collision lists, and how do you deal with a Bounds rectangle colliding with two different objects which don't collide with each other? I know these are solved problems, but I've only been programming for a year, so any specific input here will be greatly appreciated. Thanks for reading this far, ye who made it -- sorry it was so long!
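
    One standard answer to the "A overlaps B, B overlaps C, but A never touches C" problem is a topological sort: build a directed "drawn behind" graph from the pairwise box tests and emit objects in dependency order. A C# sketch against the interface above (IsBehind would wrap the Back/Front/Top/Bottom comparisons; illustrative, not tested against BEPU):

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public static class DepthSorter
        {
            public static List<IDepthDrawable> Sort(
                IList<IDepthDrawable> visible,
                Func<IDepthDrawable, IDepthDrawable, bool> isBehind)
            {
                // Edges: behind[x] lists everything that must be drawn before x.
                var behind = new Dictionary<IDepthDrawable, List<IDepthDrawable>>();
                foreach (var v in visible) behind[v] = new List<IDepthDrawable>();

                for (int i = 0; i < visible.Count; i++)
                    for (int j = i + 1; j < visible.Count; j++)
                    {
                        IDepthDrawable a = visible[i], b = visible[j];
                        if (!a.Bounds.Intersects(b.Bounds)) continue; // no overlap: order irrelevant
                        if (isBehind(a, b)) behind[b].Add(a);
                        else behind[a].Add(b);
                    }

                // Depth-first topological sort: draw what is behind x before x.
                var sorted = new List<IDepthDrawable>();
                var visited = new HashSet<IDepthDrawable>();
                void Visit(IDepthDrawable node)
                {
                    if (!visited.Add(node)) return;
                    foreach (var dep in behind[node]) Visit(dep);
                    sorted.Add(node);
                }
                foreach (var v in visible) Visit(v);
                return sorted;
            }
        }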

  • Interesting/Innovative Open Source tools for indie games [closed]

    - by Gastón
    Just out of curiosity, I want to know about open source tools or projects that can add some interesting features to indie games, preferably features that could otherwise only be found in big-budget games. EDIT: As suggested by The Communist Duck and Joe Wreschnig, I'm posting the examples as answers. EDIT 2: Please do not post tools like PyGame, Inkscape, Gimp, Audacity, Slick2D, Phys2D, Blender (except for interesting plugins) and the like. I know they are great tools/libraries, and some would argue they are essential for developing good games, but I'm looking for rarer projects. It could be something really specific or niche, like generating realistic trees and plants, or realistic AI for animals.

  • How to do geometric projection shadows?

    - by John Murdoch
    I have decided that since my game world is mostly flat, I don't need better shadows than geometric projections, at least for now. The only problem is I don't even know how to do those properly, that is, produce a 4x4 matrix which would render shadows for my objects (project them, I guess, onto a horizontal XZ plane). I would like a light source at infinity (e.g. the sun at some point in the sky) and thus a parallel projection. My current code does something that looks almost right for small flying objects, but it is actually a very crude approximation, as it doesn't project the objects onto the ground but simply moves them there (I think). It also wrongly assumes the sun is always at the zenith (projecting straight down).

        Gdx.gl20.glEnable(GL10.GL_BLEND);
        Gdx.gl20.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
        // shells
        shellTexture.bind();
        shader.begin();
        for (ShellState state : shellStates.values()) {
            transform.set(camera.combined);
            transform.mul(state.transform);
            shader.setUniformMatrix("u_worldView", transform);
            shader.setUniformi("u_texture", 0);
            shellMesh.render(shader, GL10.GL_TRIANGLES);
        }
        shader.end();
        // shadows
        shader.begin();
        for (ShellState state : shellStates.values()) {
            transform.set(camera.combined);
            m4.set(state.transform);
            state.transform.getTranslation(v3);
            // TODO HACK: + 0.5f ensures the shadow appears above the ground; this is
            // overall a hack, as we are just moving the shell to the surface instead
            // of projecting it onto the surface!
            m4.translate(0, -v3.y + 0.5f, 0);
            transform.mul(m4);
            shader.setUniformMatrix("u_worldView", transform);
            shader.setUniformi("u_texture", 0);
            // TODO: make the shadow black somehow
            shellMesh.render(shader, GL10.GL_TRIANGLES);
        }
        shader.end();
        Gdx.gl.glDisable(GL10.GL_BLEND);

    So my questions are: a) What is the proper way to produce a Matrix4 to pass to OpenGL which would render the shadows for my objects? b) Am I supposed to use another fragment shader for the shadows which would paint them in semi-transparent grey? c) The limitation of this simplistic approach is that whenever there is some object on the ground (so the ground is not flat) the shadows will not be drawn correctly, right? d) Do I need to add something very small to the y (up) coordinate to avoid z-fighting with the ground textures, or is the fact that they are semi-transparent enough to resolve that problem?
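
    For (a), with a directional light the flattening matrix can be built by hand: every point slides along the light direction until it reaches y = 0, i.e. p' = p - (p.y / d.y) * d. A hedged C# sketch of the construction (row-vector/XNA convention; libGDX's Matrix4 is column-major, so transpose when porting, and XNA users can simply call the built-in Matrix.CreateShadow):

        using Microsoft.Xna.Framework;

        public static class GroundShadow
        {
            // Flatten onto the plane y = 0 along lightDir; assumes lightDir.Y < 0
            // (sun above the horizon, shining downwards).
            public static Matrix CreateFlattenOntoGround(Vector3 lightDir)
            {
                float kx = lightDir.X / lightDir.Y;
                float kz = lightDir.Z / lightDir.Y;
                return new Matrix(
                    1f,  0f,  0f,  0f,
                    -kx, 0f,  -kz, 0f,
                    0f,  0f,  1f,  0f,
                    0f,  0f,  0f,  1f);
            }
        }

    Multiply this in between the model transform and the camera matrix. For (b), yes: a fragment shader (or a color uniform) that outputs semi-transparent black is the usual approach. For (c), correct, this only works for genuinely flat ground. For (d), keep the small y offset or use polygon offset, since alpha blending by itself does not prevent z-fighting.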

  • Who owns the intellectual property to Fragile Allegiance?

    - by analytik
    Fragile Allegiance was developed by Gremlin Interactive, which was later acquired by Infogrames (Atari). I couldn't find any details of the acquisition, though. The only interesting thing I have found online is that the owner of the registered trademark Fragile Allegiance is Interplay, who published the game. However, the only copyright notice I've found was in one installation .ini file, claiming it for Gremlin. What are the common business practices when it comes to old, unused IPs? What do publishers/developers actually need in order to legally claim an intellectual property? Does anyone have experience with contacting big publishers with copyright/IP inquiries?

  • Does use of simple shaders improve performance/battery life?

    - by Miro
    I'm making an OpenGL game for Android. Until now I've used only the fixed-function pipeline, and I'm rendering simple things. The fixed-function pipeline includes a lot of stuff I don't need, so I'm thinking about implementing shaders in my game to simplify the OpenGL pipeline, if that gives better performance. Better performance should mean better battery life, unless the fps is capped by a software limit rather than by hardware power.

  • Issue with Mapping Textures to Models in Blender

    - by Passage
    I've been trying to texture a model using Blender, but when I draw in the UV Editor it doesn't show up on the model, and I can't draw on the model itself. I've tried saving the image, and the 3D View is set to Texture. Everything seems to be in order, and I've followed several tutorials, but none of them seem to work with the version I'm using (2.64; the update was necessary for an import plugin), and I'm absolutely stumped. How can I draw textures onto the model? If not within Blender itself, how do I export/import the textures? EDIT: Vertex Paint works, though it is insufficient for my purposes. In addition, moving to the rendered view produces a solid-color model with none of the applied textures.

  • What light attenuation function does UDK use?

    - by ananamas
    I'm a big fan of the light attenuation in UDK. Traditionally I've always used the constant-linear-quadratic falloff function to control how "soft" the falloff is, which gives three values to play with. In UDK you can get similar results, but you only need to tweak one value: FalloffExponent. I'm interested in what the actual mathematical function here is. The UDK lighting reference describes it as follows: FalloffExponent: This allows you to modify the falloff of a light. The default falloff is 2. The smaller the number, the sharper the falloff and the more the brightness is maintained until the radius is reached. Does anyone know what it's doing behind the scenes?
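
    The form most often quoted from UE3's shader source (Common.usf) attenuates by the squared normalized distance raised to FalloffExponent. Treat the following C# sketch as a reconstruction from those citations, not as authoritative Epic code:

        using System;

        public static class LightFalloff
        {
            // Commonly cited UE3/UDK radial attenuation:
            //   pow(saturate(1 - (d/r)^2), FalloffExponent)
            // The default exponent of 2 gives the familiar soft falloff; smaller
            // exponents hold brightness longer, then drop off more sharply at
            // the radius, matching the documentation's description.
            public static float RadialAttenuation(float distance, float radius,
                                                  float falloffExponent)
            {
                float t = distance / radius;
                float s = Math.Min(Math.Max(1f - t * t, 0f), 1f); // saturate
                return (float)Math.Pow(s, falloffExponent);
            }
        }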

  • Any reliable polygon normal calculation code?

    - by Jenko
    I'm currently calculating the normal vector of a polygon using the code below, but for some faces here and there it calculates a wrong normal. I don't really know what's going on or where it fails, but it's not reliable. Do you have any polygon normal calculation that's tested and found to be reliable?

        // calculate normal of a polygon using all points
        var n:int = points.length;
        var x:Number = 0;
        var y:Number = 0;
        var z:Number = 0;
        // ensure all points are above 0
        var minx:Number = 0, miny:Number = 0, minz:Number = 0;
        for (var p:int = 0, pl:int = points.length; p < pl; p++) {
            var po:_Point3D = points[p] = points[p].clone();
            if (po.x < minx) { minx = po.x; }
            if (po.y < miny) { miny = po.y; }
            if (po.z < minz) { minz = po.z; }
        }
        for (p = 0; p < pl; p++) {
            po = points[p];
            po.x -= minx;
            po.y -= miny;
            po.z -= minz;
        }
        var cur:int = 1, prev:int = 0, next:int = 2;
        for (var i:int = 1; i <= n; i++) {
            // using the Newell method
            x += points[cur].y * (points[next].z - points[prev].z);
            y += points[cur].z * (points[next].x - points[prev].x);
            z += points[cur].x * (points[next].y - points[prev].y);
            cur = (cur + 1) % n;
            next = (next + 1) % n;
            prev = (prev + 1) % n;
        }
        // length of the normal
        var length:Number = Math.sqrt(x * x + y * y + z * z);
        // turn large values into a unit vector
        if (length != 0) {
            x = x / length;
            y = y / length;
            z = z / length;
        } else {
            throw new Error("Cannot calculate normal since triangle has an area of 0");
        }
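
    Hard to say where the version above goes wrong (the translation pass is unnecessary: Newell's method is translation-invariant, since the offsets cancel over a closed loop), but the textbook formulation below is widely used and needs no preprocessing. A C# sketch (XNA's Vector3; the arithmetic ports directly back to ActionScript):

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public static class PolygonMath
        {
            // Newell's method: robust for concave and slightly non-planar polygons.
            public static Vector3 Normal(IList<Vector3> pts)
            {
                float x = 0f, y = 0f, z = 0f;
                for (int i = 0; i < pts.Count; i++)
                {
                    Vector3 c = pts[i];
                    Vector3 n = pts[(i + 1) % pts.Count]; // next vertex, wrapping
                    x += (c.Y - n.Y) * (c.Z + n.Z);
                    y += (c.Z - n.Z) * (c.X + n.X);
                    z += (c.X - n.X) * (c.Y + n.Y);
                }
                var normal = new Vector3(x, y, z);
                float len = normal.Length();
                if (len == 0f)
                    throw new InvalidOperationException("Polygon has zero area.");
                return normal / len;
            }
        }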

  • Understanding how to create/use textures for games when limited by power of two sizes

    - by Matthias Reisner
    I have some questions about creating graphics for a game. As an example, I want to create a motorbike (1 pixel = 1 centimeter), so my motorbike will be 200 wide and 150 high (200x150). But libGDX only allows loading textures with power-of-two sizes (2, 4, 8, 16, ...). First I thought about this approach: I create my bike at 200x150 and save it as a PNG. Then I open it again (e.g. with GIMP), resize the image to a size whose dimensions are powers of two (128x128), load that as a texture in the program, and set the width as 200 and the height as 150. But wouldn't that be a problem, because I lose some pixel information in the first conversion?
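
    The usual trick is to pad rather than rescale: leave the 200x150 artwork untouched, place it in the top-left of a 256x256 canvas, and draw only that sub-region, so no pixel information is lost. A small C# sketch of the helper involved (illustrative):

        public static class TextureMath
        {
            // Smallest power of two >= v, e.g. 200 -> 256 and 150 -> 256.
            public static int NextPowerOfTwo(int v)
            {
                int p = 1;
                while (p < v) p <<= 1;
                return p;
            }
        }

    In libGDX this is exactly what TextureRegion is for: new TextureRegion(texture, 0, 0, 200, 150) draws just the meaningful pixels out of the padded texture.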

  • game performance

    - by iQue
    I'm making a game for Android, and earlier today I was trying to add some cool stuff to my game. The problem is this thing needs something like 5 timers. I build my timers like this:

        timer += deltaTime;
        if (timer >= 2.0f) {
            doStuff();
            timer -= 2.0f;
        }
        // this timer gets stuff done every 2 seconds

    Will having too many timers like this, checked every frame, hurt my game's performance? The effect I wanted to add was a crosshair every 2 seconds, then remove it after 2 seconds and play a timed animation. So, an array of crosshairs dependent on a bunch of timers, to be exact. This caused my game to shut down when used, so that's why I'm wondering whether using that many timers causes my game to flip out.
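
    A handful of float accumulators compared once per frame is negligible next to rendering, so the shutdown is more likely a bug in the spawning code than timer cost. Folding the timers into one updated list keeps them manageable; a C# sketch of the pattern (the Java version is line-for-line the same):

        using System;
        using System.Collections.Generic;

        public class IntervalTimer
        {
            private float _elapsed;
            private readonly float _interval;
            private readonly Action _onFire;

            public IntervalTimer(float interval, Action onFire)
            {
                _interval = interval;
                _onFire = onFire;
            }

            public void Update(float deltaTime)
            {
                _elapsed += deltaTime;
                while (_elapsed >= _interval)
                {
                    _elapsed -= _interval;
                    _onFire(); // e.g. spawn a crosshair, or remove one
                }
            }
        }

        // per frame:
        //   foreach (IntervalTimer t in timers) t.Update(deltaTime);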

  • Using XNA for a 2D isometric game, but wanna move on

    - by Daniel Ribeiro
    I've been building a 2D isometric game (for learning purposes) in C# using XNA. I found it's really easy to manage sprite-sheet loading, collision, basic physics and such with the XNA API. The thing is, I want to move on. My real goal is to learn C++ and develop a game using that language. What engine/library would you guys recommend for keeping going in that same 2D isometric direction, using pretty much just sprite sheets for the graphical part of the game?

  • Render To Texture Using OpenGL is not working but normal rendering works just fine

    - by Franky Rivera
    things I initialize at the beginning of the program I realize not all of these pertain to my issue I just copy and pasted what I had //overall initialized //things openGL related I initialize earlier on in the project glClearColor( 0.0f, 0.0f, 0.0f, 1.0f ); glClearDepth( 1.0f ); glEnable(GL_ALPHA_TEST); glEnable( GL_STENCIL_TEST ); glEnable(GL_DEPTH_TEST); glDepthFunc( GL_LEQUAL ); glEnable(GL_CULL_FACE); glFrontFace( GL_CCW ); glEnable(GL_COLOR_MATERIAL); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST ); //we also initialize our shader programs //(i added some shader program functions for definitions) //this enum list is else where in code //i figured it would help show you guys more about my //shader compile creation function right under this enum list VVVVVV /*enum eSHADER_ATTRIB_LOCATION { VERTEX_ATTRIB = 0, NORMAL_ATTRIB = 2, COLOR_ATTRIB, COLOR2_ATTRIB, FOG_COORD, TEXTURE_COORD_ATTRIB0 = 8, TEXTURE_COORD_ATTRIB1, TEXTURE_COORD_ATTRIB2, TEXTURE_COORD_ATTRIB3, TEXTURE_COORD_ATTRIB4, TEXTURE_COORD_ATTRIB5, TEXTURE_COORD_ATTRIB6, TEXTURE_COORD_ATTRIB7 }; */ //if we fail making our shader leave if( !testShader.CreateShader( "SimpleShader.vp", "SimpleShader.fp", 3, VERTEX_ATTRIB, "vVertexPos", NORMAL_ATTRIB, "vNormal", TEXTURE_COORD_ATTRIB0, "vTexCoord" ) ) return false; if( !testScreenShader.CreateShader( "ScreenShader.vp", "ScreenShader.fp", 3, VERTEX_ATTRIB, "vVertexPos", NORMAL_ATTRIB, "vNormal", TEXTURE_COORD_ATTRIB0, "vTexCoord" ) ) return false; SHADER PROGRAM FUNCTIONS bool CShaderProgram::CreateShader( const char* szVertexShaderName, const char* szFragmentShaderName, ... ) { //here are our handles for the openGL shaders int iGLVertexShaderHandle = -1, iGLFragmentShaderHandle = -1; //get our shader data char *vData = 0, *fData = 0; int vLength = 0, fLength = 0; LoadShaderFile( szVertexShaderName, &vData, &vLength ); LoadShaderFile( szFragmentShaderName, &fData, &fLength ); //data if( !vData ) return false; //data if( !fData ) { delete[] vData; return false; } //create both our shader objects iGLVertexShaderHandle = glCreateShader( GL_VERTEX_SHADER ); iGLFragmentShaderHandle = glCreateShader( GL_FRAGMENT_SHADER ); //well we got this far so we have dynamic data to clean up //load vertex shader glShaderSource( iGLVertexShaderHandle, 1, (const char**)(&vData), &vLength ); //load fragment shader glShaderSource( iGLFragmentShaderHandle, 1, (const char**)(&fData), &fLength ); //we are done with our data delete it delete[] vData; delete[] fData; //compile them both glCompileShader( iGLVertexShaderHandle ); //get shader status int iShaderOk; glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iShaderOk ); if( iShaderOk == GL_FALSE ) { char* buffer; //get what happend with our shader glGetShaderiv( iGLVertexShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk ); buffer = new char[iShaderOk]; glGetShaderInfoLog( iGLVertexShaderHandle, iShaderOk, NULL, buffer ); //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName ); MessageBoxA( NULL, buffer, szVertexShaderName, MB_OK ); //delete our dynamic data free( buffer ); glDeleteShader(iGLVertexShaderHandle); return false; } glCompileShader( iGLFragmentShaderHandle ); //get shader status glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iShaderOk ); if( iShaderOk == GL_FALSE ) { char* buffer; //get what happend with our shader glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk ); buffer = new char[iShaderOk]; 
glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer ); //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName ); MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK ); //delete our dynamic data free( buffer ); glDeleteShader(iGLFragmentShaderHandle); return false; } //lets check to see if the fragment shader compiled int iCompiled = 0; glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iCompiled ); if( !iCompiled ) { //this shader did not compile leave return false; } //lets check to see if the fragment shader compiled glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iCompiled ); if( !iCompiled ) { char* buffer; //get what happend with our shader glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk ); buffer = new char[iShaderOk]; glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer ); //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName ); MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK ); //delete our dynamic data free( buffer ); glDeleteShader(iGLFragmentShaderHandle); return false; } //make our new shader program m_iShaderProgramHandle = glCreateProgram(); glAttachShader( m_iShaderProgramHandle, iGLVertexShaderHandle ); glAttachShader( m_iShaderProgramHandle, iGLFragmentShaderHandle ); glLinkProgram( m_iShaderProgramHandle ); int iLinked = 0; glGetProgramiv( m_iShaderProgramHandle, GL_LINK_STATUS, &iLinked ); if( !iLinked ) { //we didn't link return false; } //NOW LETS CREATE ALL OUR HANDLES TO OUR PROPER LIKING //start from this parameter va_list parseList; va_start( parseList, szFragmentShaderName ); //read in number of variables if any unsigned uiNum = 0; uiNum = va_arg( parseList, unsigned ); //for loop through our attribute pairs int enumType = 0; for( unsigned x = 0; x < uiNum; ++x ) { //specify our attribute locations enumType = va_arg( parseList, int ); char* name = va_arg( parseList, char* ); glBindAttribLocation( m_iShaderProgramHandle, enumType, name ); } //end our list parsing va_end( parseList ); //relink specify //we have custom specified our attribute locations glLinkProgram( m_iShaderProgramHandle ); //fill our handles InitializeHandles( ); //everything went great return true; } void CShaderProgram::InitializeHandles( void ) { m_uihMVP = glGetUniformLocation( m_iShaderProgramHandle, "mMVP" ); m_uihWorld = glGetUniformLocation( m_iShaderProgramHandle, "mWorld" ); m_uihView = glGetUniformLocation( m_iShaderProgramHandle, "mView" ); m_uihProjection = glGetUniformLocation( m_iShaderProgramHandle, "mProjection" ); ///////////////////////////////////////////////////////////////////////////////// //texture handles m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" ); if( m_uihDiffuseMap != -1 ) { //store what texture index this handle will be in the shader glUniform1i( m_uihDiffuseMap, RM_DIFFUSE+GL_TEXTURE0 ); (0)+ } m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" ); if( m_uihNormalMap != -1 ) { //store what texture index this handle will be in the shader glUniform1i( m_uihNormalMap, RM_NORMAL+GL_TEXTURE0 ); (1)+ } } void CShaderProgram::SetDiffuseMap( const unsigned& uihDiffuseMap ) { (0)+ glActiveTexture( RM_DIFFUSE+GL_TEXTURE0 ); glBindTexture( GL_TEXTURE_2D, uihDiffuseMap ); } void CShaderProgram::SetNormalMap( const unsigned& uihNormalMap ) { (1)+ glActiveTexture( RM_NORMAL+GL_TEXTURE0 ); glBindTexture( GL_TEXTURE_2D, uihNormalMap ); } //MY 2 TEST SHADERS also my math order is correct it pertains 
to my matrix ordering in my math library once again i've tested the basic rendering. rendering to the screen works fine ----------------------------------------SIMPLE SHADER------------------------------------- //vertex shader looks like this #version 330 in vec3 vVertexPos; in vec3 vNormal; in vec2 vTexCoord; uniform mat4 mWorld; // Model Matrix uniform mat4 mView; // Camera View Matrix uniform mat4 mProjection;// Camera Projection Matrix out vec2 vTexCoordVary; // Texture coord to the fragment program out vec3 vNormalColor; void main( void ) { //pass the texture coordinate vTexCoordVary = vTexCoord; vNormalColor = vNormal; //calculate our model view projection matrix mat4 mMVP = (( mWorld * mView ) * mProjection ); //result our position gl_Position = vec4( vVertexPos, 1 ) * mMVP; } //fragment shader looks like this #version 330 in vec2 vTexCoordVary; in vec3 vNormalColor; uniform sampler2D diffuseMap; uniform sampler2D normalMap; out vec4 fragColor[2]; void main( void ) { //CORRECT fragColor[0] = texture( normalMap, vTexCoordVary ); fragColor[1] = vec4( vNormalColor, 1.0 ); }; ----------------------------------------SCREEN SHADER------------------------------------- //vertext shader looks like this #version 330 in vec3 vVertexPos; // This is the position of the vertex coming in in vec2 vTexCoord; // This is the texture coordinate.... out vec2 vTexCoordVary; // Texture coord to the fragment program void main( void ) { vTexCoordVary = vTexCoord; //set our position gl_Position = vec4( vVertexPos.xyz, 1.0f ); } //fragment shader looks like this #version 330 in vec2 vTexCoordVary; // Incoming "varying" texture coordinate uniform sampler2D diffuseMap;//the tile detail texture uniform sampler2D normalMap; //the normal map from earlier out vec4 vTheColorOfThePixel; void main( void ) { //CORRECT vTheColorOfThePixel = texture( normalMap, vTexCoordVary ); }; .Class RenderTarget Main Functions //here is my render targets create function bool CRenderTarget::Create( const unsigned uiNumTextures, unsigned uiWidth, unsigned uiHeight, int iInternalFormat, bool bDepthWanted ) { if( uiNumTextures <= 0 ) return false; //generate our variables glGenFramebuffers(1, &m_uifboHandle); // Initialize FBO glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle); m_uiNumTextures = uiNumTextures; if( bDepthWanted ) m_uiNumTextures += 1; m_uiTextureHandle = new unsigned int[uiNumTextures]; glGenTextures( uiNumTextures, m_uiTextureHandle ); for( unsigned x = 0; x < uiNumTextures-1; ++x ) { glBindTexture( GL_TEXTURE_2D, m_uiTextureHandle[x]); // Reserve space for our 2D render target glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, uiWidth, uiHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + x, GL_TEXTURE_2D, m_uiTextureHandle[x], 0); } //if we need one for depth testing if( bDepthWanted ) { glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0); glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);*/ // Must attach texture to framebuffer. 
Has Stencil and depth glBindRenderbuffer(GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]); glRenderbufferStorage(GL_RENDERBUFFER, /*GL_DEPTH_STENCIL*/GL_DEPTH24_STENCIL8, TEXTURE_WIDTH, TEXTURE_HEIGHT ); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]); } glBindFramebuffer(GL_FRAMEBUFFER, 0); //everything went fine return true; } void CRenderTarget::Bind( const int& iTargetAttachmentLoc, const unsigned& uiWhichTexture, const bool bBindFrameBuffer ) { if( bBindFrameBuffer ) glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle ); if( uiWhichTexture < m_uiNumTextures ) glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iTargetAttachmentLoc, m_uiTextureHandle[uiWhichTexture], 0); } void CRenderTarget::UnBind( void ) { //default our binding glBindFramebuffer( GL_FRAMEBUFFER, 0 ); } //this is all in a test project so here's my straight forward rendering function for testing this render function does basic rendering steps keep in mind i have already tested my textures i have already tested my box thats being rendered all basic rendering works fine its just when i try to render to a texture then display it in a render surface that it does not work. Also I have tested my render surface it is bound exactly to the screen coordinate space void TestRenderSteps( void ) { //Clear the color and the depth glClearColor( 0.0f, 0.0f, 0.0f, 1.0f ); glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ); //bind the shader program glUseProgram( testShader.m_iShaderProgramHandle ); //1) grab the vertex buffer related to our rendering glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().GetBufferHandle() ); //2) how our stream will be split here ( 4 bytes position, ..ext ) CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().MapVertexStride(); //3) set the index buffer if needed glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() ); //send the needed information into the shader testShader.SetWorldMatrix( boxPosition ); testShader.SetViewMatrix( Static_Camera.GetView( ) ); testShader.SetProjectionMatrix( Static_Camera.GetProjection( ) ); testShader.SetDiffuseMap( iTextureID ); testShader.SetNormalMap( iTextureID2 ); GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 }; glDrawBuffers(2, buffers); //bind to our render target //RM_DIFFUSE, RM_NORMAL are enums (0 && 1) renderTarget.Bind( RM_DIFFUSE, 1, true ); renderTarget.Bind( RM_NORMAL, 1, false); //false because buffer is already bound //i clear here just to clear the texture to make it a default value of white //by doing this i can see if what im rendering to my screen is just drawing to the screen //or if its my render target defaulted glClearColor( 1.0f, 1.0f, 1.0f, 1.0f ); glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ); //i have this box object which i draw testBox.Draw(); //the draw call looks like this //my normal rendering works just fine so i know this draw is fine // glDrawElementsBaseVertex( m_sides[x].GetPrimitiveType(), // m_sides[x].GetPrimitiveCount() * 3, // GL_UNSIGNED_INT, // BUFFER_OFFSET(sizeof(unsigned int) * m_sides[x].GetStartIndex()), // m_sides[x].GetStartVertex( ) ); //we unbind the target back to default renderTarget.UnBind(); //i stop mapping my vertex format 
CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().UnMapVertexStride(); //i go back to default in using no shader program glUseProgram( 0 ); //now that everything is drawn to the textures //lets draw our screen surface and pass it our 2 filled out textures //NOW RENDER THE TEXTURES WE COLLECTED TO THE SCREEN QUAD //bind the shader program glUseProgram( testScreenShader.m_iShaderProgramHandle ); //1) grab the vertex buffer related to our rendering glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionTexBuffer().GetBufferHandle() ); //2) how our stream will be split here CVertexBufferManager::GetInstance()->GetPositionTexBuffer().MapVertexStride(); //3) set the index buffer if needed glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() ); //pass our 2 filled out textures (in the shader im just using the diffuse //i wanted to see if i was rendering anything before i started getting into other techniques testScreenShader.SetDiffuseMap( renderTarget.GetTextureHandle(0) ); //SetDiffuseMap definitions in shader program class testScreenShader.SetNormalMap( renderTarget.GetTextureHandle(1) ); //SetNormalMap definitions in shader program class //DO the draw call drawing our screen rectangle glDrawElementsBaseVertex( m_ScreenRect.GetPrimitiveType(), m_ScreenRect.GetPrimitiveCount() * 3, GL_UNSIGNED_INT, BUFFER_OFFSET(sizeof(unsigned int) * m_ScreenRect.GetStartIndex()), m_ScreenRect.GetStartVertex( ) );*/ //unbind our vertex mapping CVertexBufferManager::GetInstance()->GetPositionTexBuffer().UnMapVertexStride(); //default to no shader program glUseProgram( 0 ); } Last words: 1) I can render my box just fine 2) i can render my screen rect just fine 3) I cannot render my box into a texture then display it into my screen rect 4) This entire project is just a test project I made to test different rendering practices. So excuse any "ugly-ish" unclean code. This was made just on a fly run through when I was trying new test cases.
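
    A few things stand out in the code above: glDrawBuffers is called before the FBO is bound, so it configures the default framebuffer; both color attachments are bound to the same texture (index 1); the depth-stencil handle comes from glGenTextures but is then used as a renderbuffer name, which requires glGenRenderbuffers; and glCheckFramebufferStatus is never consulted. For contrast, a minimal two-attachment FBO sketch, written here against OpenTK's C# bindings purely for illustration:

        using System;
        using OpenTK.Graphics.OpenGL;

        public static class FboUtil
        {
            public static int Create(int width, int height, int[] colorTex)
            {
                int fbo = GL.GenFramebuffer();
                GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);

                for (int i = 0; i < 2; i++)
                {
                    colorTex[i] = GL.GenTexture();
                    GL.BindTexture(TextureTarget.Texture2D, colorTex[i]);
                    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                                  width, height, 0, PixelFormat.Rgba,
                                  PixelType.UnsignedByte, IntPtr.Zero);
                    GL.TexParameter(TextureTarget.Texture2D,
                        TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
                    GL.TexParameter(TextureTarget.Texture2D,
                        TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
                    GL.FramebufferTexture2D(FramebufferTarget.Framebuffer,
                        FramebufferAttachment.ColorAttachment0 + i,
                        TextureTarget.Texture2D, colorTex[i], 0);
                }

                // Depth-stencil storage must be a renderbuffer name, not a texture name.
                int rbo = GL.GenRenderbuffer();
                GL.BindRenderbuffer(RenderbufferTarget.Renderbuffer, rbo);
                GL.RenderbufferStorage(RenderbufferTarget.Renderbuffer,
                    RenderbufferStorage.Depth24Stencil8, width, height);
                GL.FramebufferRenderbuffer(FramebufferTarget.Framebuffer,
                    FramebufferAttachment.DepthStencilAttachment,
                    RenderbufferTarget.Renderbuffer, rbo);

                // Always verify completeness before rendering into the FBO.
                FramebufferErrorCode status =
                    GL.CheckFramebufferStatus(FramebufferTarget.Framebuffer);
                if (status != FramebufferErrorCode.FramebufferComplete)
                    throw new InvalidOperationException("FBO incomplete: " + status);

                GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
                return fbo;
            }
        }

    Bind the FBO first, then call glDrawBuffers, then draw.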

  • GUI device for throwing a ball

    - by Fredrik Johansson
    The hero has a ball, which shall be thrown with accuracy in a court on iPhone/iPad. The player is seen from above, in a 2D view. In game play, the player's reach is between 1/15 and 1/6 of the height of the iPhone screen. The player will run, try to outmaneuver his opponent, and then throw the ball at a specific location, which is guarded by the opponent (also shown on the screen). The player is controlled by a joystick, and that works OK, but how shall I control the ball? Maybe someone can propose a third control method? I've tried the following two approaches:

    Joystick: The hero has a reach of 1 meter, and this reach is marked with a semi-opaque circle around the player. The ball can be moved by a joystick. When the joystick is moved south, the ball is moved south within the reach circle. There is a direct coupling between the joystick and the position of the ball, i.e. when the joystick is moved max south, the ball is max south within the player's reach. At each touch update the speed is calculated, and the Box2D ball position and ball speed are updated. NB: the ball will never be moved outside the reach as long as the player pushes the joystick. The ball is thrown by swiping the joystick to make the ball move and then releasing the joystick. At release, the ball gets a smoothed speed from the joystick.

    Joystick problem: The throwing accuracy gets bad, because the joystick cannot be that big, and a small movement results in quite a large movement of the ball. If the user does not release before the joystick's maximum end point, the ball will stop, and when the user then releases the joystick the speed of the ball will be zero. Bad...

    Touch pad: A force is applied to the ball by a swipe on a touch pad. The ball is released when the swipe ends, or when the ball is moved outside the player's reach. As there is no one-to-one mapping between the swipe and the ball position, the precision can be improved: a large swipe can result in a small ball movement.

    Touch pad problem: A touch pad is less intuitive. Users do not seem to know what to do with it. Some tap the touch pad, and then the ball just falls to the ground. As there is no one-to-one mapping, the ball can be moved outside the reach, and then it will just fall to the ground. It's a bit hard to control the ball, especially if the player also moves.

  • How to stop camera from rotating in 2.5d platformer

    - by Artem Suchkov
    I'm stuck with a problem: I cannot make my camera stop rotating with the character. What I have already tried: using an empty game object with a rigid body and locked rotation, making it the parent of the camera while the player is the parent of that object. I've also tried a few scripts from the web, but they did not help. Right now I'm bad at using JS in Unity (I can handle JS on a website, but I don't know how to integrate it yet) and I'm practicing the basics, making an easy 2.5D platformer with basic features, so I cannot write the code myself for now.
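
    A common fix is to stop parenting the camera to anything and instead copy only the player's position each frame. A minimal C# sketch (Unity supports C# as well as JS; "target" and "offset" are illustrative names):

        using UnityEngine;

        public class FollowWithoutRotation : MonoBehaviour
        {
            public Transform target;                        // drag the player here
            public Vector3 offset = new Vector3(0f, 2f, -10f);

            void LateUpdate()                               // runs after the player moves
            {
                transform.position = target.position + offset;
                // The rotation is never touched, so the camera cannot spin.
            }
        }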

  • How do I make my rain effect look more like rain and less like snowfall?

    - by Nikhil Lamba
    I am making a game, and in that game I want a rain effect. I am a little bit far from that right now. I am creating the rain effect like below:

        particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
        particleSystem.addParticleInitializer(new AlphaInitializer(0));
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10));
        particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f));
        particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6));
        particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125));
        particleSystem.addParticleModifier(new ExpireModifier(50, 50));
        scene.attachChild(particleSystem);

    But it looks like snowfall! What changes can I make so it looks more like rain? EDIT: Here is a screenshot:

  • Pygame set_colorkey transparency issues

    - by Nathan Chowning
    I'm having a strange issue that I cannot seem to remedy. I am doing some prototyping with Pygame on a desktop running Windows and a laptop running OS X. Both are running Python v2.7.3 (installed via Homebrew on the MacBook) and Pygame v1.9.1. For transparency, I have been using set_colorkey with a transparency color of (255, 0, 255). Here is the applicable code:

        transColor = pygame.Color(255, 0, 255)
        image = pygame.image.load(playerPath + "idle.png").convert()
        image.set_colorkey(transColor)

    This works flawlessly on my Windows machine. On my laptop, it does not work; it just shows the hideous magenta color. Here's the strange part: if I change transColor to (0, 0, 0), all black pixels in my images become transparent. Has anyone run into this issue before?

  • Doing powerups in a component-based system

    - by deft_code
    I'm just starting to really get my head around component-based design, and I don't know what the "right" way to do this is. Here's the scenario: the player can equip a shield. The shield is drawn as a bubble around the player, it has a separate collision shape, and it reduces the damage the player receives from area effects. How is such a shield architected in a component-based game? Where I get confused is that the shield obviously has three components associated with it: damage reduction/filtering, a sprite, and a collider. To make it worse, different shield variations could have even more behaviors, all of which could be components: boosting the player's maximum health, health regen, projectile deflection, etc. Am I overthinking this? Should the shield just be a super component? I really think this is the wrong answer, so if you think this is the way to go, please explain. Should the shield be its own entity that tracks the location of the player? That might make it hard to implement the damage filtering, and it also kind of blurs the line between attached components and entities. Should the shield be a component that houses other components? I've never seen or heard of anything like this, but maybe it's common and I'm just not deep enough yet. Should the shield just be a set of components that get added to the player, possibly with an extra component to manage the others, e.g. so they can all be removed as a group? (Accidentally leaving behind the damage-reduction component, now that would be fun.) Or is there something else that's obvious to someone with more component experience?
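
    On the last option, one lightweight way to keep "a set of components added to the player" removable as a group is a plain handle object that remembers what it added. A sketch with deliberately minimal, illustrative types (no particular engine assumed):

        using System.Collections.Generic;

        public interface IComponent { }

        public class Entity
        {
            private readonly List<IComponent> _components = new List<IComponent>();
            public void AddComponent(IComponent c) { _components.Add(c); }
            public void RemoveComponent(IComponent c) { _components.Remove(c); }
        }

        public class EquipmentGroup
        {
            private readonly List<IComponent> _parts = new List<IComponent>();
            private readonly Entity _owner;

            public EquipmentGroup(Entity owner) { _owner = owner; }

            public void Add(IComponent part)
            {
                _parts.Add(part);
                _owner.AddComponent(part);
            }

            // Unequip the shield: every part goes, nothing is left behind.
            public void RemoveAll()
            {
                foreach (IComponent part in _parts) _owner.RemoveComponent(part);
                _parts.Clear();
            }
        }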

  • What is the benefit of triple buffering?

    - by user782220
    I read everything written in a previous question. From what I understand, in double buffering the program must wait until the finished drawing is copied or swapped before starting the next drawing. In triple buffering the program has two back buffers and can immediately start drawing into the one that is not involved in such copying. But with triple buffering, if you're in a situation where you can take advantage of the third buffer, doesn't that suggest you are drawing frames faster than the monitor can refresh? Then you don't actually get a higher frame rate. So what is the benefit of triple buffering?

  • DirectWrite Producing Strange Artifacts?

    - by smoth190
    I've written the basis of my UI system around Direct2D. I like it because it's fast and easy to use (even if I had to do some messy work to get it to work with DirectX 11). However, I notice that when using DirectWrite I get strange problems with my text. As you can see, the "e" is a little screwed up, and overall it looks a little bumpy. This only happens with certain fonts in certain sizes, and with certain arrangements of letters; this particular example is Verdana at size 16.0. Can I fix this? It's pretty annoying to change all my words and fonts because of this problem.

  • What are some good examples of exuberant in-game instructions for telling the player to repeatedly smash a button?

    - by Michael
    What are some good examples of exuberant in-game instructions for telling the player to repeatedly and quickly press a button or perform an action? I'm especially interested in examples in retro games (e.g., from the NES, SNES, and 1980-90s arcade eras), and I would love to see examples with text, graphics, or both. To illustrate, here are a few examples of the type of instructions that I'm thinking of: Smash the A button to lift something heavy! Toggle the joystick back and forth to break free! Quickly press the button to build power in a meter! I'm working on a 2D iOS game with retro-style pixel art, and there's a point where I want the player to quickly tap on a sprite to complete an action. I have a serviceable starting point -- the word "TAP" flashing with an arrow repeatedly moving downward beneath it: But it still doesn't feel quite right. I would love to see some actual examples from the golden days of 2D gaming to use as reference material. I know examples abound, but I'm just struggling to think of any concrete ones at the moment. Can you think of any examples of this type of thing in old games?

  • Does Unity's "Transparent Bumped Specular" translate to "semi-shiny must be semi-transparent"?

    - by Shivan Dragon
    Unity's documentation for the "Transparent Bumped Specular" shader/material type is simply a concatenation of the descriptions for its Transparent and Specular shader types (and also Bumped, but that doesn't apply to this question):

    Transparent Properties: This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent while 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque. (...)

    Specular Properties: (...) Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "gloss map"), defining which areas of the object are more reflective than others. Black areas of the alpha will have zero specular reflection, while white areas will have full specular reflection.

    To me this translates to the following scenario: I have a mesh representing a car tire. The texture needs to be very shiny on the rim parts and almost not shiny at all on the rubber parts. Also, since the rim is really complex (with cut-out decorations and such), I will not build that into the mesh, but fake it with transparency in the texture. I can't do all this using Unity's "Transparent Bumped Specular" shader, because the "rubber" part of the texture will become semi-transparent due to my painting its alpha channel dark grey (because I want it to also be less shiny). Is this correct? If not, how can I make this work?
