Search Results

Search found 1308 results on 53 pages for 'texture'.

Page 34/53 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Determine arc-length of a Catmull-Rom spline

    - by Wouter
    I have a path that is defined by a concatenation of Catmull-Rom splines. I use the static method Vector2.CatmullRom in XNA, which allows for interpolation between points with a value going from 0 to 1. Not every spline in this path has the same length. This causes speed differences if I advance the interpolation weight at a constant rate for every spline while proceeding along the path. I can remedy this by making the weight's speed dependent on the length of the spline. How can I determine the length of such a spline? Should I just approximate by cutting the spline into 10 straight lines and summing their lengths? I'm using this for dynamic texture mapping on a generated mesh defined by splines.
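
    A chord-length approximation like the one suggested above is usually good enough for constant-speed traversal. A minimal sketch, assuming XNA's Vector2 and the same four control points already passed to Vector2.CatmullRom (the step count is an arbitrary accuracy/cost trade-off):

        // Approximate arc length of one Catmull-Rom segment by summing chords.
        // p0..p3 are the four control points; the curve runs from p1 to p2.
        float SegmentLength(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3, int steps)
        {
            float length = 0f;
            Vector2 prev = p1;
            for (int i = 1; i <= steps; i++)
            {
                Vector2 cur = Vector2.CatmullRom(p0, p1, p2, p3, i / (float)steps);
                length += Vector2.Distance(prev, cur);
                prev = cur;
            }
            return length; // converges as steps grows; 10-20 is often plenty
        }

    Doubling steps until the result changes by less than a small epsilon gives a cheap error bound.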

    Read the article

  • XNA Notes 011

    - by George Clingerman
    Even with a lot of the XNA community working on Dream Build Play entries (I swear I’m going to finish mine this year!), people are still finding time to do side projects and be amazingly active in the XNA and XBLIG community. With one eye on my code and one eye on the community, here’s what I noticed these overachievers doing this past week!

    Time Critical XNA News:
    - Xbox LIVE Indie Games sales data will be delayed March 17-20th due to scheduled maintenance http://create.msdn.com/en-us/news/indie_games_data_delay_march2011
    - GameMarx is releasing a series of videos to help raise donations for victims of the earthquakes and tsunami in Japan. Help out if you can! http://www.gamemarx.com/video/special/29/help-japan-sushido.aspx

    XNA MVPs:
    - Catalin Zima shares his thoughts on the MVP summit and my book! http://www.catalinzima.com/2011/03/mvp-summit-2011/
    - Glenn Wilson (@mykre) helps the XNA team announce some new educational content that you don’t want to miss if you’re porting your app or game to Windows Phone 7 http://www.virtualrealm.com.au/Blog/tabid/62/EntryId/653/Porting-your-App-or-Game-to-Windows-Phone-7.aspx and Windows Phone 7 from scratch http://www.virtualrealm.com.au/Blog/tabid/62/EntryId/654/Windows-Phone-from-Scratch.aspx and shares a link to some free architectural models and textures http://twitter.com/#!/Mykre/status/46410160784158720
    - George (that’s me!) shares his MVP Summit 2011 summary and XBLIG thoughts http://geekswithblogs.net/clingermangw/archive/2011/03/15/144366.aspx

    XNA Developers:
    - @SmallCaveGames shares a Code of Ethics for Xbox LIVE Indie Game Developers http://smallcavegames.blogspot.com/2011/03/unofficial-xblig-developers-code-of.html
    - Derek S adds more Xbox LIVE Indie Game studios to his master list of XBLIG links http://twitter.com/#!/Mr_Deeke/status/46140996056125440 http://xbl-indieverse.blogspot.com/p/xblig-links.html
    - Making games and want to help kids? Then share your story with GameFace: America! http://gameitupinitiative.com/about-the-initiative/programs/gameface-america/

    Xbox LIVE Indie Games (XBLIG):
    - XonaGames shares some video footage of their booth from GDC 2011: Video 1: http://youtu.be/lxIV9nk3Gq4 Video 2: http://youtu.be/GgfrjqkxR_o Video 3: http://youtu.be/yVcpXrTX7SQ
    - Joystiq on Mommy’s Best Games’ Serious Sam Double D http://www.joystiq.com/2011/03/16/the-most-important-thing-about-serious-sam-double-d/ and The Escapist recommends that gamers start learning to avoid cleavage now http://www.escapistmagazine.com/news/view/108543-Boobie-Bomber-Makes-First-Appearance-in-Serious-Sam-Double-D
    - Magiko Gaming started a blog on the XBLIG dashboard daily Top 10 games in the US. Good way to go back in time and look at the history of which games were in the Top 10. http://dailytop10indiegames.wordpress.com/
    - Where are they going now? XBLIG developers at a crossroads. http://www.gamesetwatch.com/2011/03/where_are_they_going_now_xblig.php http://www.gamasutra.com/view/news/33527/InDepth_Where_Are_They_Going_Now_XBLIG_Developers_At_A_Crossroads_.php
    - BinaryTweed’s Clover: A Curious Tale is Xbox LIVE’s Deal of the Week! http://www.armlessoctopus.com/2011/03/15/what-luck-clover-a-curious-tale-is-half-price-this-week/
    - Looking for an Xbox LIVE Indie Game to buy? Writings of Mass Deduction has over 125 suggestions at this point! http://writingsofmassdeduction.com/
    - SkaStudios shares Vampire Smile Achievements AND their PAX East 2011 booth setup video http://www.ska-studios.com/2011/03/14/vampire-smile-achievement/ http://www.ska-studios.com/2011/03/15/pax-booth-setup-time-lapse/
    - MasterBlud and VVGTV start a new community for XBLIG developers and gamers to join http://vvgtv.forumotion.com/
    - Raymond Matthews (@DrakstarMatryx) covers Mommy’s Best Games getting Serious http://www.darkstarmatryx.com/?p=286

    XNA Development:
    - Dave Henry (@mort8088) posts the 4th tutorial in his series, XNA 4.0 SpriteBatch extended http://mort8088.com/2011/03/11/xna-4-0-tutorial-4-spritebatch-extended/ plus Tutorial 5 - Creating a manual blank texture http://mort8088.com/2011/03/13/xna-4-tutorial-5-manual-blank-texture/ and XNA 4.0 Tutorial 6 - Spritesheet Object http://mort8088.com/2011/03/18/xna-4-0-tutorial-6-spritesheet-object/
    - Jason Mitchell shares a tutorial on setting the alpha value for SpriteBatch in XNA 4.0 http://www.jason-mitchell.com/index.php/2011/03/13/setting-alpha-value-for-spritebatch-draw-in-xna-4/
    - XNA for Silverlight Developers: Part 7 - Collision Detection http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-7-Collision-detection.aspx
    - Markus Ewald (@Cygon4) shares the full Ninject 2.0 binding for XNA and Sunburn http://twitter.com/#!/Cygon4/status/48330203826622464
    - Michael B. McLaughlin shares an AccelerometerInput XNA GameComponent he created (which I’m probably going to snag for a game I’m working on...) http://geekswithblogs.net/mikebmcl/archive/2011/03/17/accelerometerinput-xna-gamecomponent.aspx
    - Extra Credits tackles the building of a good tutorial. A must-watch for all indie game devs (thanks for pointing it out, Evan Johnson!) http://twitter.com/#!/johnsonevan/status/48452115680604160 http://www.escapistmagazine.com/videos/view/extra-credits/2921-Tutorials-101
    - ExEn is fully funded at this point, so it's definitely something for XBLIG developers to keep an eye on as they consider releasing their games on other platforms http://rockethub.com/projects/752-exen-xna-for-iphone-android-and-silverlight
    - Channel 9 and Greg Duncan post Mixing the Game State Management and Platformer XNA Recipes http://channel9.msdn.com/coding4fun/blog/Mixing-the-Game-State-Management-and-Platformer-XNA-Recipes
    - Sgt. Conker has noticed Mike McLaughlin has been crazy productive and has done a recap of his recent posts http://www.sgtconker.com/2011/03/recap-of-mikebmcls-posts/

    Read the article

  • Google I/O 2011: 3D Graphics on Android: Lessons learned from Google Body

    Google I/O 2011: 3D Graphics on Android: Lessons learned from Google Body. Presented by Nico Weber. Google originally built Google Body, a 3D application that renders the human body in incredible detail, for WebGL-capable browsers running on high-end PCs. To bring the app to Android at a high resolution and frame rate, Nico Weber and Won Chun had a close encounter with Android's graphics stack. In this session Nico presents their findings as best practices for high-end 3D graphics using OpenGL ES 2.0 on Android. The covered topics range from getting accelerated pixels on the screen to fast resource loading, performance guidelines, texture compression, mipmapping, recommended vertex attribute formats, and shader handling. The talk also touches on related topics such as SDK vs NDK and picking. From: GoogleDevelopers | Views: 6077 | 29 ratings | Time: 56:09

    Read the article

  • What's a good way to organize samplers for HLSL?

    - by Rei Miyasaka
    According to MSDN, I can have 4096 samplers per context. That's a lot, considering there's only a handful of common sampler states. That tempts me to initialize an array containing a whole bunch of common sampler states, assign them to every device context I use, and then in the pixel shaders refer to them by index using : register(s[n]), where n is the index in the array. If I want more samplers for whatever reason, I can just add them on after the last slot. Does this work? If not, when should I set the samplers? Should it be done by the mesh renderer? The texture renderer? Or alongside PSSetShader? Edit: That trick I wrote above doesn't work (at least not yet), as the compiler gives me this error message when I try to use the same register twice: error X4500: overlapping register semantics not yet implemented 's0' So how do people usually organize samplers, then?
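
    For context, the usual pattern is to create the handful of common state objects once and bind them to fixed, agreed-upon slots, while the shaders declare individually numbered samplers such as SamplerState LinearWrap : register(s0); rather than a register array. A hedged sketch for D3D11 with SharpDX-style C# bindings (the binding layer and names are assumptions for illustration, not part of the question):

        // Build the few common sampler states once at startup.
        var linearWrap = new SamplerState(device, new SamplerStateDescription
        {
            Filter = Filter.MinMagMipLinear,
            AddressU = TextureAddressMode.Wrap,
            AddressV = TextureAddressMode.Wrap,
            AddressW = TextureAddressMode.Wrap,
        });
        var pointClamp = new SamplerState(device, new SamplerStateDescription
        {
            Filter = Filter.MinMagMipPoint,
            AddressU = TextureAddressMode.Clamp,
            AddressV = TextureAddressMode.Clamp,
            AddressW = TextureAddressMode.Clamp,
        });

        // Bind them to fixed slots once per context, not per mesh or per draw.
        context.PixelShader.SetSamplers(0, new[] { linearWrap, pointClamp });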

    Read the article

  • Implementing algorithms via compute shaders vs. pipeline shaders

    - by TravisG
    With the availability of compute shaders for both DirectX and OpenGL, it's now possible to implement many algorithms without going through the rasterization pipeline, instead using general-purpose computing on the GPU to solve the problem. For some algorithms this seems like the intuitively canonical solution, because they're inherently not rasterization-based, and the rasterization-based shaders were a workaround to harness GPU power (a simple example: creating a noise texture. No quad needs to be rasterized there). Given an algorithm that can be implemented both ways, are there general (potential) performance benefits to using compute shaders over going the normal route? Are there drawbacks we should watch out for (for example, is there some kind of unusual overhead to switching from/to compute shaders at runtime)? Are there perhaps other benefits or drawbacks to consider when choosing between the two?

    Read the article

  • OpenGL ES 2.0: Vertex and Fragment Shader for 2D with Transparency

    - by Bunkai.Satori
    Could I kindly ask for correct examples of an OpenGL ES 2.0 vertex and fragment shader for displaying 2D textured sprites with transparency? I have fairly simple shaders that display textured polygon pairs, but transparency is not applied despite the following:
    - the texture map contains transparency information
    - blending is enabled:

        glEnable(GL_BLEND);
        glEnable(GL_DEPTH_TEST);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    My vertex shader:

        uniform mat4 uOrthoProjection;
        uniform vec3 Translation;
        attribute vec4 Position;
        attribute vec2 TextureCoord;
        varying vec2 TextureCoordOut;
        void main()
        {
            gl_Position = uOrthoProjection * (Position + vec4(Translation, 0));
            TextureCoordOut = TextureCoord;
        }

    My fragment shader:

        varying mediump vec2 TextureCoordOut;
        uniform sampler2D Sampler;
        void main()
        {
            gl_FragColor = texture2D(Sampler, TextureCoordOut);
        }
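
    The shaders above already pass alpha through unchanged, so the usual culprit is render state rather than shader code: with depth testing enabled, sprites drawn first write depth and can clip the blended edges of sprites drawn behind them. A common arrangement (a hedged sketch, not a guaranteed fix for this exact scene):

        /* Opaque geometry first, with depth writes on. Then, for sprites: */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);   /* test against opaque depth, but don't write it */
        /* ... draw sprites sorted back-to-front ... */
        glDepthMask(GL_TRUE);

    It is also worth confirming that the texture is actually uploaded as GL_RGBA (not decoded to GL_RGB), since that silently discards the alpha channel.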

    Read the article

  • Modelling photo-realistic grass in realtime

    - by sebf
    Hello, I see a number of tutorials on how to create good-looking grass for 3D renders, but I can't see how to model it for realtime use in a game's scenery. Sure, simple models with alpha cutouts can be used to create plants and trees in really awesome scenery, but what about a lawn? Are there any good tricks to achieve this effect? I tried with a simple 4-sided box and a small texture, and the number of objects needed for a decent appearance brought Max to a crawl. (I am thinking it may be possible with a shader, but that is a whole other area, so I thought I would just ask about anyone's experience with modelling it here.) Thanks!

    Read the article

  • Isometric - precise screen coordinates to isometric

    - by Rawrz
    I'm trying to translate mouse coords to precise isometric coords (I can already find the tile the mouse is over, but I want it to be more precise). I've tried several different methods but I seem to keep falling short. For drawing I use: batch.draw( texture, (y * tileWidth / 2) + (x * tileWidth / 2), (x * tileHeight / 2) - (y * tileHeight / 2)) This is what I currently use for figuring out a tile position: float xt = x + camPosition.x - (ScreenWidth/2) ; float yt = (ScreenHeight) - y + camPosition.y - (ScreenHeight/2); int tileY = Math.round((((xt) / tileWidth) - ((yt) / tileHeight))); int tileX = Math.round((((xt) / tileWidth) + ((yt) / tileHeight))- 1); I'm just wondering how I could update these to allow for more precise coordinates, instead of tile only. EDIT: Following what ccxvii said below, and removing the -1 from tileX, the object follows my mouse just like I had wanted. Just going to re-examine the math and figure out if that change will result in other messes =o
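
    For reference, the transform being inverted comes straight from the draw call: screenX = (x + y) * tileWidth/2 and screenY = (x - y) * tileHeight/2. Keeping the inversion in floats, rather than Math.round, gives sub-tile precision. A sketch in the same Java style, reusing the camera-adjusted xt/yt from the snippet above (names assumed from it):

        // Same inverse transform as above, but kept fractional.
        float isoX = (xt / tileWidth) + (yt / tileHeight);
        float isoY = (xt / tileWidth) - (yt / tileHeight);

        // Integer part = tile index; fractional part = position within the tile.
        int tileX = (int) Math.floor(isoX);
        int tileY = (int) Math.floor(isoY);
        float withinX = isoX - tileX;   // 0..1 across the tile
        float withinY = isoY - tileY;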

    Read the article

  • Incorrect colour blending when using a pixel shader with XNA

    - by MazK
    I'm using XNA 4.0 to create a 2D game, and while implementing a layer-tinting pixel shader I noticed that when the texture's alpha value is anything between 0 and 1, the end result is different than expected. Tinting works by selecting a colour and setting the amount of tint. This is achieved via the shader, which first works out the starting colour (for each of r, g, b and a): float red = texCoord.r * vertexColour.r; and then the final tinted colour: output.r = red + (tintColour.r - red) * tintAmount; The alpha value isn't tinted and is left as: output.a = texCoord.a * vertexColour.a; The picture in the link below shows different backdrops behind an energy-ball object whose outer glow hasn't blended as I would like it to. The middle two are incorrect: the second, non-tinted one should not show a glow against a white BG, and the third should be entirely invisible. The blending function is NonPremultiplied. Why is the alpha value interfering with the final colour?
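
    One plausible explanation (an assumption worth checking, not a confirmed diagnosis): XNA 4.0's content pipeline premultiplies texture alpha by default, so sampling a premultiplied texture and then blending NonPremultiplied makes semi-transparent texels such as the glow come out too bright. Either disable "Premultiply Alpha" in the texture's content processor, or keep everything premultiplied, roughly like this (HLSL sketch, writing texColour for the sampled texel that the snippet above calls texCoord):

        // Hedged sketch, assuming the sampled colour is premultiplied.
        float a = texColour.a * vertexColour.a;
        float3 straight = texColour.rgb / max(texColour.a, 0.001); // undo premultiply
        float3 tinted = straight * vertexColour.rgb;
        tinted = tinted + (tintColour.rgb - tinted) * tintAmount;  // same tint formula
        return float4(tinted * a, a); // premultiplied output; use BlendState.AlphaBlend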

    Read the article

  • Can GMod/SFM models be converted to Unity GameObjects?

    - by Supuhstar
    Someone made a suite of GMod/SFM models available for free to people making games and videos in GMod and SFM. These are of type .dmx, .dx80.vtx, .dx90.vtx, .mdl, .phy, .sw.vtx, .vvd, .vmt, and .vtf. I don't use GMod or SFM, so I don't know what these are, which makes it hard for me to convert them manually. Is there any way to change these into files Unity can recognize and use? I'd prefer a one-step conversion, but I would also accept instructions on how to export them to generic mesh/skeleton/texture files, and then how to import and combine those in Unity.

    Read the article

  • Softbody with complex geometry

    - by philipp
    I have modeled a handball, based on the tutorial here, with a custom texture. Now I am trying to animate this model with the reactor module as a soft body. To that end I have watched and tried a lot of tutorials, and animating a simple sphere works fine. But if I try to use the model I have created, it results in a crash of Max, or an animation that shows a crystal-like structure transforming into another crystal. Is it possible to animate this kind of complex geometry as a soft body, and am I just setting the values wrong? If yes, which are the important ones I should check? Thanks in advance! Greetings, philipp

    Read the article

  • OpenGL migration from SFML to GLUT: vertex arrays and display lists are not displayed

    - by user3714670
    Due to using quad-buffered stereo 3D (which I have not included yet), I need to migrate my OpenGL program from an SFML window to a GLUT window. With SFML my vertices and display lists were properly displayed; now with GLUT my window is blank white (or another colour, depending on the way I clear it). Here is the code to initialise the window:

        int type;
        int stereoMode = 0;
        if ( stereoMode == 0 )
            type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH;
        else
            type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO;
        glutInitDisplayMode(type);
        int argc = 0;
        char *argv = "";
        glewExperimental = GL_TRUE;
        glutInit(&argc, &argv);
        bool fullscreen = false;
        glutInitWindowSize(width, height);
        int win = glutCreateWindow(title.c_str());
        glutSetWindow(win);
        assert(win != 0);
        if ( fullscreen ) {
            glutFullScreen();
            width = glutGet(GLUT_SCREEN_WIDTH);
            height = glutGet(GLUT_SCREEN_HEIGHT);
        }
        GLenum err = glewInit();
        if (GLEW_OK != err) {
            fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
        }
        glutDisplayFunc(loop_function);

    This is the only code I had to change so far. Here is the code I used with SFML to display my objects in the loop; if I change the value of glClearColor, the window's background does change colour:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glClearColor(255.0f, 255.0f, 255.0f, 0.0f);
        glLoadIdentity();
        sf::Time elapsed_time = clock.getElapsedTime();
        clock.restart();
        camera->animate(elapsed_time.asMilliseconds());
        camera->look();
        for (auto i = objects->cbegin(); i != objects->cend(); ++i)
            (*i)->draw(camera);
        glutSwapBuffers();

    Are there any other changes I should have made when switching to GLUT? It would be great if someone could enlighten me on the subject. In addition, I found that when adding too many objects (which were handled fine before with SFML), OpenGL gives error 1285: out of memory. Maybe this is related.

    EDIT: Here is the code I use to draw each object; maybe it is the problem:

        GLuint LightID = glGetUniformLocation(this->shaderProgram, "LightPosition_worldspace");
        if(LightID ==-1) cout << "LightID not found ..." << endl;
        GLuint MaterialAmbientID = glGetUniformLocation(this->shaderProgram, "MaterialAmbient");
        if(LightID ==-1) cout << "LightID not found ..." << endl;
        GLuint MaterialSpecularID = glGetUniformLocation(this->shaderProgram, "MaterialSpecular");
        if(LightID ==-1) cout << "LightID not found ..." << endl;
        glm::vec3 lightPos = glm::vec3(0,150,150);
        glUniform3f(LightID, lightPos.x, lightPos.y, lightPos.z);
        glUniform3f(MaterialAmbientID, MaterialAmbient.x, MaterialAmbient.y, MaterialAmbient.z);
        glUniform3f(MaterialSpecularID, MaterialSpecular.x, MaterialSpecular.y, MaterialSpecular.z);
        // Get a handle for our "myTextureSampler" uniform
        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if(!TextureID) cout << "TextureID not found ..." << endl;
        glActiveTexture(GL_TEXTURE0);
        sf::Texture::bind(texture);
        glUniform1i(TextureID, 0);
        // 2nd attribute buffer : UVs
        GLuint vertexUVID = glGetAttribLocation(shaderProgram, "color");
        if(vertexUVID==-1) cout << "vertexUVID not found ..." << endl;
        glEnableVertexAttribArray(vertexUVID);
        glBindBuffer(GL_ARRAY_BUFFER, color_array_buffer);
        glVertexAttribPointer(vertexUVID, 2, GL_FLOAT, GL_FALSE, 0, 0);
        GLuint vertexNormal_modelspaceID = glGetAttribLocation(shaderProgram, "normal");
        if(!vertexNormal_modelspaceID) cout << "vertexNormal_modelspaceID not found ..." << endl;
        glEnableVertexAttribArray(vertexNormal_modelspaceID);
        glBindBuffer(GL_ARRAY_BUFFER, normal_array_buffer);
        glVertexAttribPointer(vertexNormal_modelspaceID, 3, GL_FLOAT, GL_FALSE, 0, 0 );
        GLint posAttrib;
        posAttrib = glGetAttribLocation(shaderProgram, "position");
        if(!posAttrib) cout << "posAttrib not found ..." << endl;
        glEnableVertexAttribArray(posAttrib);
        glBindBuffer(GL_ARRAY_BUFFER, position_array_buffer);
        glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elements_array_buffer);
        glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0);
        GLuint error;
        while ((error = glGetError()) != GL_NO_ERROR) {
            cerr << "OpenGL error: " << error << endl;
        }
        disableShaders();
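
    One thing the snippets above never show is the GLUT driver loop itself. Unlike SFML's explicit while-loop, GLUT only calls the display callback once the event loop is running and a redisplay is pending. A minimal sketch of that plumbing, assuming loop_function is the callback registered above:

        glutDisplayFunc(loop_function);
        glutIdleFunc(glutPostRedisplay); /* re-queue a frame whenever GLUT is idle */
        glutMainLoop();                  /* never returns; this replaces the SFML loop */

    A smaller observation: glClearColor takes components in the 0..1 range, so 255.0f is simply clamped to 1.0, and it only affects glClear calls issued after it is set.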

    Read the article

  • Blender Object Appearing Gray when all Lights are Off

    - by celestialorb
    I have an issue with Blender where, when I turn my only light off (a sun lamp) and render the image, my object appears gray rather than black (and thus doesn't disappear from the camera's view). I can't figure out why this is happening. Here's what I just did in my scene: added a new UV Sphere mesh (to make a total of two spheres), made it visible to the camera, turned off the sun lamp (by setting its energy to 0), and rendered. The result I obtained is below. I discovered this when attempting to render the first sphere with a material/texture on it; it was too bright. The materials on the spheres (which are different) are very basic: there's no emit, and diffuse and specular are at default values. Could there be an issue with the way my camera is set up? Thanks in advance!

    Read the article

  • Exporting SWF With Transparent Background For Scaleform/UDK

    - by Alex Shepard
    After looking all over the UDN and forums, I have yet to find a solution for this: I am currently using Flash CS3 and ActionScript 2.0 to build my Scaleform menus, and I can use them in the UDK. For various reasons I can't use the handy plugin Autodesk supplies to enable this export, so I publish my Flash documents to SWF the old-fashioned way and manually use the gfxexport.exe tool to get my .gfx file. I can then import into the UDK the normal way. My problem is that the Flash movies I import will not alpha-blend, even if the material is set to blend in the alpha channel of the target render texture. My project images are set up to export properly. My classpath for ActionScript 2.0 is set to the correct location. My HTML publish settings have window mode set to Transparent Windowless. Is it possible to export without the Scaleform Flash extension and still get the desired effects, and if so, how might I do so? Am I merely missing something in my project setup?

    Read the article

  • What is the most efficient way to blur in a shader?

    - by concernedcitizen
    I'm currently working on screen space reflections. I have perfectly reflective mirror-like surfaces working, and I now need to use a blur to make the reflection on surfaces with a low specular gloss value look more diffuse. I'm having difficulty deciding how to apply the blur, though. My first idea was to just sample a lower mip level of the screen rendertarget. However, the rendertarget uses SurfaceFormat.HalfVector4 (for HDR effects), which means XNA won't allow linear filtering. Point filtering looks horrible and really doesn't give the visual cue that I want. I've thought about using some kind of Box/Gaussian blur, but this would not be ideal. I've already thrashed the texture cache in the raymarching phase before the blur even occurs (a worst case reflection could be 32 samples per pixel), and the blur kernel to make the reflections look sufficiently diffuse would be fairly large. Does anyone have any suggestions? I know it's doable, as Photon Workshop achieved the effect in Unity.
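
    Whatever the sampling strategy, making the kernel separable keeps it affordable: an n-by-n Gaussian becomes two n-tap passes instead of n-squared taps. A minimal sketch of the horizontal half in HLSL (SceneSampler, texelSize and the 5-tap binomial weights [1 4 6 4 1]/16 are assumptions for illustration):

        // Horizontal half of a separable 5-tap Gaussian.
        // A second pass using float2(0, texelSize.y) offsets completes the blur.
        float4 BlurH(float2 uv : TEXCOORD0) : COLOR0
        {
            float4 c = tex2D(SceneSampler, uv) * 0.375;
            c += tex2D(SceneSampler, uv + float2(texelSize.x, 0)) * 0.25;
            c += tex2D(SceneSampler, uv - float2(texelSize.x, 0)) * 0.25;
            c += tex2D(SceneSampler, uv + float2(texelSize.x * 2, 0)) * 0.0625;
            c += tex2D(SceneSampler, uv - float2(texelSize.x * 2, 0)) * 0.0625;
            return c;
        }

    Running the passes on a half-resolution target roughly quarters the bandwidth again, and if the downsample goes to a filterable format it also sidesteps the HalfVector4 filtering restriction.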

    Read the article

  • Basic shadow mapping fails on NVIDIA card?

    - by James
    Recently I switched from an AMD Radeon HD 6870 card to an (MSI) NVIDIA GTX 670 for performance reasons. I found however that my implementation of shadow mapping in all my applications failed. In a very simple shadow POC project the problem appears to be that the scene being drawn never results in a draw to the depth map and as a result the entire depth map is just infinity, 1.0 (Reading directly from the depth component after draw (glReadPixels) shows every pixel is infinity (1.0), replacing the depth comparison in the shader with a comparison of the depth from the shadow map with 1.0 shadows the entire scene, and writing random values to the depth map and then not calling glClear(GL_DEPTH_BUFFER_BIT) results in a random noisy pattern on the scene elements - from which we can infer that the uploading of the depth texture and comparison within the shader are functioning perfectly.) Since the problem appears almost certainly to be in the depth render, this is the code for that: const int s_res = 1024; GLuint shadowMap_tex; GLuint shadowMap_prog; GLint sm_attr_coord3d; GLint sm_uniform_mvp; GLuint fbo_handle; GLuint renderBuffer; bool isMappingShad = false; //The scene consists of a plane with box above it GLfloat scene[] = { -10.0, 0.0, -10.0, 0.5, 0.0, 10.0, 0.0, -10.0, 1.0, 0.0, 10.0, 0.0, 10.0, 1.0, 0.5, -10.0, 0.0, -10.0, 0.5, 0.0, -10.0, 0.0, 10.0, 0.5, 0.5, 10.0, 0.0, 10.0, 1.0, 0.5, ... }; //Initialize the stuff used by the shadow map generator int initShadowMap() { //Initialize the shadowMap shader program if (create_program("shadow.v.glsl", "shadow.f.glsl", shadowMap_prog) != 1) return -1; const char* attribute_name = "coord3d"; sm_attr_coord3d = glGetAttribLocation(shadowMap_prog, attribute_name); if (sm_attr_coord3d == -1) { fprintf(stderr, "Could not bind attribute %s\n", attribute_name); return 0; } const char* uniform_name = "mvp"; sm_uniform_mvp = glGetUniformLocation(shadowMap_prog, uniform_name); if (sm_uniform_mvp == -1) { fprintf(stderr, "Could not bind uniform %s\n", uniform_name); return 0; } //Create a framebuffer glGenFramebuffers(1, &fbo_handle); glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle); //Create render buffer glGenRenderbuffers(1, &renderBuffer); glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer); //Setup the shadow texture glGenTextures(1, &shadowMap_tex); glBindTexture(GL_TEXTURE_2D, shadowMap_tex); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, s_res, s_res, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); return 0; } //Delete stuff void dnitShadowMap() { //Delete everything glDeleteFramebuffers(1, &fbo_handle); glDeleteRenderbuffers(1, &renderBuffer); glDeleteTextures(1, &shadowMap_tex); glDeleteProgram(shadowMap_prog); } int loadSMap() { //Bind MVP stuff glm::mat4 view = glm::lookAt(glm::vec3(10.0, 10.0, 5.0), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0)); glm::mat4 projection = glm::ortho<float>(-10,10,-8,8,-10,40); glm::mat4 mvp = projection * view; glm::mat4 biasMatrix( 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); glm::mat4 lsMVP = biasMatrix * mvp; //Upload light source matrix to the main shader programs glUniformMatrix4fv(uniform_ls_mvp, 1, GL_FALSE, glm::value_ptr(lsMVP)); glUseProgram(shadowMap_prog); glUniformMatrix4fv(sm_uniform_mvp, 1, 
GL_FALSE, glm::value_ptr(mvp)); //Draw to the framebuffer (with depth buffer only draw) glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle); glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer); glBindTexture(GL_TEXTURE_2D, shadowMap_tex); glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap_tex, 0); glDrawBuffer(GL_NONE); glReadBuffer(GL_NONE); GLenum result = glCheckFramebufferStatus(GL_FRAMEBUFFER); if (GL_FRAMEBUFFER_COMPLETE != result) { printf("ERROR: Framebuffer is not complete.\n"); return -1; } //Draw shadow scene printf("Creating shadow buffers..\n"); int ticks = SDL_GetTicks(); glClear(GL_DEPTH_BUFFER_BIT); //Wipe the depth buffer glViewport(0, 0, s_res, s_res); isMappingShad = true; //DRAW glEnableVertexAttribArray(sm_attr_coord3d); glVertexAttribPointer(sm_attr_coord3d, 3, GL_FLOAT, GL_FALSE, 5*4, scene); glDrawArrays(GL_TRIANGLES, 0, 14*3); glDisableVertexAttribArray(sm_attr_coord3d); isMappingShad = false; glBindFramebuffer(GL_FRAMEBUFFER, 0); printf("Render Sbuf in %dms (GLerr: %d)\n", SDL_GetTicks() - ticks, glGetError()); return 0; } This is the full code for the POC shadow mapping project (C++) (Requires SDL 1.2, SDL-image 1.2, GLEW (1.5) and GLM development headers.) initShadowMap is called, followed by loadSMap, the scene is drawn from the camera POV and then dnitShadowMap is called. I followed this tutorial originally (Along with another more comprehensive tutorial which has disappeared as this guy re-configured his site but used to be here (404).) I've ensured that the scene is visible (as can be seen within the full project) to the light source (which uses an orthogonal projection matrix.) Shader utilities function fine in non-shadow-mapped projects. I should also note that at no point is the GL error state set. What am I doing wrong here and why did this not cause problems on my AMD card? (System: Ubuntu 12.04, Linux 3.2.0-49-generic, 64 bit, with the nvidia-experimental-310 driver package. All other games are functioning fine so it's most likely not a card/driver issue.)
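
    One detail in loadSMap does differ legitimately between drivers (a hedged guess, not a confirmed diagnosis): shadowMap_tex stays bound to the active texture unit while it is also the framebuffer's depth attachment, which is undefined feedback behaviour in GL; some drivers tolerate it, others do not. Unbinding before the depth pass removes the ambiguity:

        glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, shadowMap_tex, 0);
        glBindTexture(GL_TEXTURE_2D, 0);   /* don't sample what we render into */
        /* ... depth-only draw ... */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindTexture(GL_TEXTURE_2D, shadowMap_tex); /* safe to sample again */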

    Read the article

  • Anti-aliasing works for debug runtime but not retail runtime

    - by DeadMG
    I'm experimenting with setting various graphical settings in my Direct3D9 application, and I'm currently facing a curious problem with anti-aliasing. When running under the debug runtime, AA works as expected, and I don't have any errors or warnings. But when running under the retail runtime, the image isn't anti-aliased at all. I don't get any errors, the device creates and executes just fine. As I honestly have little idea where the problem is, I will simply give a relatively high-level overview of the architecture involved, rather than specific problematic code. Simply put, I render my 3D content to a texture, which I then render to the back buffer. Any suggestions as to where to look?
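
    Independent of the debug/retail discrepancy, one structural point stands out: in Direct3D 9 a texture render target is never multisampled, so any AA requested on the device is lost as soon as the scene goes through the texture. The standard workaround (a sketch under that assumption, with illustrative sizes and formats) is to render into a multisampled colour surface and StretchRect it into the texture:

        // Create a multisampled render surface matching the texture's size/format.
        IDirect3DSurface9 *msaaSurface = NULL, *texSurface = NULL;
        device->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                                   D3DMULTISAMPLE_4_SAMPLES, 0, FALSE,
                                   &msaaSurface, NULL);
        texture->GetSurfaceLevel(0, &texSurface);

        device->SetRenderTarget(0, msaaSurface);
        // ... draw the 3D content ...
        device->StretchRect(msaaSurface, NULL, texSurface, NULL, D3DTEXF_NONE); // resolve
        // (Release both surfaces when done.)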

    Read the article

  • Sprite sheets, Clamp or Wrap?

    - by David
    I'm using a combination of sprite sheets for, well, sprites, and individual textures for infinite tiling. For the tiling textures I'm obviously using Wrap to draw the entire surface in one call, but up until now I've been making a separate batch using Clamp for drawing sprites from the sprite sheets. The sprite sheets include a border (repeating the edge pixels of each sprite) and my code uses the correct source coordinates for sprites. But since I'm never giving coordinates outside of the texture when drawing sprites (and indeed the border exists to prevent bleed-over when filtering), it struck me that I'd be better off just using Wrap so that I can combine everything into one batch. I just want to be sure that I haven't overlooked something obvious. Is there any reason that Wrap would be harmful when used with a sprite sheet?

    Read the article

  • Why would GLCapabilities.setHardwareAccelerated(true/false) have no effect on performance?

    - by Luke
    I've got a JOGL application in which I am rendering 1 million textures (all the same texture) and 1 million lines between those textures. Basically it's a ball-and-stick graph. I am storing the vertices in a vertex array on the card and referencing them via index arrays, which are also stored on the card. Each pass through the draw loop I am basically doing this: gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0); gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_LINES, <size>, GL.GL_UNSIGNED_INT, 0); I noticed that the JOGL library is pegging one of my CPU cores. Every frame, the run method internal to the library is taking quite long. I'm not sure why this is happening since I have called setHardwareAccelerated(true) on the GLCapabilities used to create my canvas. What's more interesting is that I changed it to setHardwareAccelerated(false) and there was no impact on the performance at all. Is it possible that my code is not using hardware rendering even when it is set to true? Is there any way to check? EDIT: As suggested, I have tested breaking my calls up into smaller chunks. I have tried using glDrawRangeElements and respecting the limits that it requests. All of these simply resulted in the same pegged CPU usage and worse framerates. I have also narrowed the problem down to a simpler example where I just render 4 million textures (no lines). The draw loop then just doing this: gl.glEnableClientState(GL.GL_VERTEX_ARRAY); gl.glEnableClientState(GL.GL_INDEX_ARRAY); gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT); gl.glMatrixMode(GL.GL_MODELVIEW); gl.glLoadIdentity(); <... Camera and transform related code ...> gl.glEnableVertexAttribArray(0); gl.glEnable(GL.GL_TEXTURE_2D); gl.glAlphaFunc(GL.GL_GREATER, ALPHA_TEST_LIMIT); gl.glEnable(GL.GL_ALPHA_TEST); <... Bind texture ...> gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0); gl.glDisable(GL.GL_TEXTURE_2D); gl.glDisable(GL.GL_ALPHA_TEST); gl.glDisableVertexAttribArray(0); gl.glFlush(); Where the first buffer contains 12 million floats (the x,y,z coords of the 4 million textures) and the second (element) buffer contains 4 million integers. In this simple example it is simply the integers 0 through 3999999. I really want to know what is being done in software that is pegging my CPU, and how I can make it stop (if I can). 
My buffers are generated by the following code: gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBufferData(GL.GL_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_FLOAT, <buffer>, GL.GL_STATIC_DRAW); gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 0, 0); and: gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_INT, <buffer>, GL.GL_STATIC_DRAW); ADDITIONAL INFO: Here is my initialization code: gl.setSwapInterval(1); //Also tried 0 gl.glShadeModel(GL.GL_SMOOTH); gl.glClearDepth(1.0f); gl.glEnable(GL.GL_DEPTH_TEST); gl.glDepthFunc(GL.GL_LESS); gl.glHint(GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_FASTEST); gl.glPointParameterfv(GL.GL_POINT_DISTANCE_ATTENUATION, POINT_DISTANCE_ATTENUATION, 0); gl.glPointParameterfv(GL.GL_POINT_SIZE_MIN, MIN_POINT_SIZE, 0); gl.glPointParameterfv(GL.GL_POINT_SIZE_MAX, MAX_POINT_SIZE, 0); gl.glPointSize(POINT_SIZE); gl.glTexEnvf(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE); gl.glEnable(GL.GL_POINT_SPRITE); gl.glClearColor(clearColor.getX(), clearColor.getY(), clearColor.getZ(), 0.0f); Also, I'm not sure if this helps or not, but when I drag the entire graph off the screen, the FPS shoots back up and the CPU usage falls to 0%. This seems obvious and intuitive to me, but I thought that might give a hint to someone else.
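
    A quick way to test the software-fallback suspicion is to ask the driver what is actually rendering (a small sanity check, assuming a JOGL GL handle like the one above):

        // "GDI Generic", "Mesa ... llvmpipe" or similar means software rasterization;
        // a vendor renderer string means the hardware path is active.
        System.out.println("GL_VENDOR:   " + gl.glGetString(GL.GL_VENDOR));
        System.out.println("GL_RENDERER: " + gl.glGetString(GL.GL_RENDERER));
        System.out.println("GL_VERSION:  " + gl.glGetString(GL.GL_VERSION));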

    Read the article

  • Efficient visualization of a large voxelized volume

    - by Alejandro Piad
    Let's consider a large voxelized volume stored in an octree or any other convenient structure. This volume represents, for instance, a landscape, where each block is either empty (air) or has a specific material that will later be used to apply a texture. Voxels that are next to each other represent connected sections of the surface. What I need is an algorithm to generate a mesh from these voxels that represents the volume, with the following characteristics:
    - All the "holes" in the voxelized volume are correct.
    - All the connections are correct, i.e. seamless.
    - The surface appears smooth.
    In a broad sense, I want to somehow preserve the surface topology, meaning that connected sections remain connected in the resulting mesh and that the surface has a curvature that responds to the voxel topology. Imagine trying to render the Minecraft world but getting the mountainsides to be smooth instead of blocky.

    Read the article

  • Render 2 images that use different shaders

    - by Code Vader
    Based on the giawa/NeHe tutorials, how can I render 2 images with different shaders? I'm pretty new to OpenGL and shaders, so I'm not completely sure what's happening in my code, but I think the shader that is called last overwrites the first one.

        private static void OnRenderFrame()
        {
            // calculate how much time has elapsed since the last frame
            watch.Stop();
            float deltaTime = (float)watch.ElapsedTicks / System.Diagnostics.Stopwatch.Frequency;
            watch.Restart();

            // use the deltaTime to adjust the angle of the cube
            angle += deltaTime;

            // set up the OpenGL viewport and clear both the color and depth bits
            Gl.Viewport(0, 0, width, height);
            Gl.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

            // use our shader program and bind the crate texture
            Gl.UseProgram(program);

            //<<<<<<<<<<<< TOP PYRAMID
            // set the transformation of the top_pyramid
            program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube));
            program["enable_lighting"].SetValue(lighting);

            // bind the vertex positions, UV coordinates and element array
            Gl.BindBufferToShaderAttribute(top_pyramid, program, "vertexPosition");
            Gl.BindBufferToShaderAttribute(top_pyramidNormals, program, "vertexNormal");
            Gl.BindBufferToShaderAttribute(top_pyramidUV, program, "vertexUV");
            Gl.BindBuffer(top_pyramidTrianlges);

            // draw the textured top_pyramid
            Gl.DrawElements(BeginMode.Triangles, top_pyramidTrianlges.Count, DrawElementsType.UnsignedInt, IntPtr.Zero);

            //<<<<<<<<<< CUBE
            // set the transformation of the cube
            program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube));
            program["enable_lighting"].SetValue(lighting);

            // bind the vertex positions, UV coordinates and element array
            Gl.BindBufferToShaderAttribute(cube, program, "vertexPosition");
            Gl.BindBufferToShaderAttribute(cubeNormals, program, "vertexNormal");
            Gl.BindBufferToShaderAttribute(cubeUV, program, "vertexUV");
            Gl.BindBuffer(cubeQuads);

            // draw the textured cube
            Gl.DrawElements(BeginMode.Quads, cubeQuads.Count, DrawElementsType.UnsignedInt, IntPtr.Zero);

            //<<<<<<<<<<<< BOTTOM PYRAMID
            // set the transformation of the bottom_pyramid
            program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube));
            program["enable_lighting"].SetValue(lighting);

            // bind the vertex positions, UV coordinates and element array
            Gl.BindBufferToShaderAttribute(bottom_pyramid, program, "vertexPosition");
            Gl.BindBufferToShaderAttribute(bottom_pyramidNormals, program, "vertexNormal");
            Gl.BindBufferToShaderAttribute(bottom_pyramidUV, program, "vertexUV");
            Gl.BindBuffer(bottom_pyramidTrianlges);

            // draw the textured bottom_pyramid
            Gl.DrawElements(BeginMode.Triangles, bottom_pyramidTrianlges.Count, DrawElementsType.UnsignedInt, IntPtr.Zero);

            //<<<<<<<<<<<<< STAR
            Gl.Disable(EnableCap.DepthTest);
            Gl.Enable(EnableCap.Blend);
            Gl.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.One);
            Gl.BindTexture(starTexture);

            // calculate the camera position using some fancy polar co-ordinates
            Vector3 position = 20 * new Vector3(Math.Cos(phi) * Math.Sin(theta), Math.Cos(theta), Math.Sin(phi) * Math.Sin(theta));
            Vector3 upVector = ((theta % (Math.PI * 2)) > Math.PI) ? Vector3.Up : Vector3.Down;
            program_2["view_matrix"].SetValue(Matrix4.LookAt(position, Vector3.Zero, upVector));

            // make sure the shader program and texture are being used
            Gl.UseProgram(program_2);

            // loop through the stars, drawing each one
            for (int i = 0; i < stars.Count; i++)
            {
                // set the position and color of this star
                program_2["model_matrix"].SetValue(Matrix4.CreateTranslation(new Vector3(stars[i].dist, 0, 0)) * Matrix4.CreateRotationZ(stars[i].angle));
                program_2["color"].SetValue(stars[i].color);

                Gl.BindBufferToShaderAttribute(star, program_2, "vertexPosition");
                Gl.BindBufferToShaderAttribute(starUV, program_2, "vertexUV");
                Gl.BindBuffer(starQuads);
                Gl.DrawElements(BeginMode.Quads, starQuads.Count, DrawElementsType.UnsignedInt, IntPtr.Zero);

                // update the position of the star
                stars[i].angle += (float)i / stars.Count * deltaTime * 2 * rotate_stars;
                stars[i].dist -= 0.2f * deltaTime * rotate_stars;

                // if we've reached the center then move this star outwards and give it a new color
                if (stars[i].dist < 0f)
                {
                    stars[i].dist += 5f;
                    stars[i].color = new Vector3(generator.NextDouble(), generator.NextDouble(), generator.NextDouble());
                }
            }

            Glut.glutSwapBuffers();
        }

    The same goes for the textures: whichever one I mention last gets applied to both objects?
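
    One ordering issue stands out in the star section (a hedged guess, since it depends on how the giawa wrapper implements SetValue): glUniform* calls apply to the program currently in use, yet program_2["view_matrix"] is set while the first program is still active. Binding first should make the uniform land in the right program:

        // Bind the second program *before* touching its uniforms.
        Gl.UseProgram(program_2);
        program_2["view_matrix"].SetValue(Matrix4.LookAt(position, Vector3.Zero, upVector));

    The same logic answers the texture question: binding is global state, so each object should bind its own texture (and program) immediately before its draw call, rather than relying on whatever was bound last.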

    Read the article

  • iOS game with many animated nodes, performance issues

    - by user31929
    I'm working on a large-map top-down game (not a tiled map); the map I use is a city. I have to insert many nodes to create the "life of the city": people crossing the streets, cars, etc. Some of these characters are involved in physics and game logic, but others are only graphical characters. From what I know, the only way I can achieve this result is to create each character node (with or without a physics body) and animate each character with a texture atlas. This way I think I'll have many performance problems (the characters will number something like 100-150), even if I apply all the performance tips I know... My question is: with large numbers of characters, is there another programming pattern I should follow? What is the approach of games like SimCity, or The Simpsons: Tapped Out for iOS, that have so many animations running at the same time?

    Read the article

  • What exactly is UV and UVW Mapping?

    - by Michael Stum
    Trying to understand some basic 3D concepts; at the moment I'm trying to figure out how textures actually work. I know that UV and UVW mapping are techniques that map 2D textures onto 3D objects - Wikipedia told me as much. I googled for explanations but only found tutorials that assumed I already know what it is. From my understanding, each 3D model is made out of points, and several points create a face? Does each point or face have a secondary coordinate that maps to an x/y position in the 2D texture? Or how does unwrapping manipulate the model? Also, what does the W in UVW really do? What does it offer over UV? As I understand it, W maps to the Z coordinate, but in what situation would I have different textures for the same X/Y and different Z? Wouldn't the Z part be invisible? Or am I completely misunderstanding this?
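
    To make the first question concrete: yes, texture coordinates live alongside each vertex, and unwrapping is the process of assigning them. A typical vertex layout (illustrative only):

        /* Each vertex carries both its 3D position and a 2D location in the texture.
         * A triangle's three UV pairs cut a triangle out of the image, and the
         * rasterizer stretches those texels across the face. */
        struct Vertex {
            float x, y, z;  /* position in model space */
            float u, v;     /* position in the texture, usually in 0..1 */
        };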

    Read the article

  • Doing an SNES Mode 7 (affine transform) effect in pygame

    - by 2D_Guy
    Is there such a thing as a short answer on how to do a Mode 7 / Mario Kart-type effect in pygame? I have googled extensively; all the docs I can come up with are dozens of pages in other languages (asm, C) with lots of strange-looking equations and such. Ideally, I would like to find something explained more in English than in mathematical terms. I can use PIL or pygame to manipulate the image/texture, or whatever else is necessary. I would really like to achieve a Mode 7 effect in pygame, but I seem to be close to my wit's end. Help would be greatly appreciated. Any and all resources or explanations you can provide would be fantastic, even if they're not as simple as I'd like them to be. If I can figure it out, I'll write a definitive "how to do Mode 7 for newbies" page. edit: Mode 7 doc: http://www.coranac.com/tonc/text/mode7.htm
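
    In English first: Mode 7 draws a textured ground plane one scanline at a time. Every screen row below the horizon corresponds to one distance from the camera, so the whole row samples the texture along a single scaled, rotated line. A hedged Python sketch of that idea (per-pixel, so slow in pure pygame; names like cam_x, angle and horizon are assumptions, and sh is used as an arbitrary focal length):

        import math

        def mode7(screen, texture, cam_x, cam_y, angle, horizon, height=32.0):
            sw, sh = screen.get_size()
            tw, th = texture.get_size()
            cos_a, sin_a = math.cos(angle), math.sin(angle)
            for sy in range(horizon + 1, sh):
                z = height / (sy - horizon)      # rows lower on screen are closer
                for sx in range(sw):
                    px = (sx - sw / 2) * z       # sideways offset on the ground plane
                    py = z * sh                  # forward distance
                    wx = cam_x + px * cos_a - py * sin_a   # rotate into world space
                    wy = cam_y + px * sin_a + py * cos_a
                    screen.set_at((sx, sy), texture.get_at((int(wx) % tw, int(wy) % th)))

    For real-time use the inner loop has to move into pygame.surfarray/NumPy, but the math stays the same.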

    Read the article

  • Drawing a textured triangle with CPU instead of GPU

    - by Jenko
    I understand the benefits of GPU rendering and such, but for a certain limited application I need to render textured triangles purely on the CPU. I've built a 3D engine capable of object handling, transforms, projection, culling and the like... now all I need is a little code snippet that draws a single textured triangle onto a bitmap... any language accepted! Inputs: texture bitmap, triangle U/V/W coords, triangle X/Y screen coords. Output: the textured triangle drawn at the given screen coords. I've currently been using a platform function to draw triangles to the screen, but I'm looking to handle it myself to speed up the process.
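
    Since any language is accepted, here is a compact barycentric-coordinate sketch in C#: affine (not perspective-correct) mapping, with 32-bit pixel arrays assumed for both bitmaps. A starting point rather than production code:

        // Affine textured-triangle fill via barycentric weights.
        // screen/tex are ARGB pixel arrays; (u,v) are in 0..1.
        static void DrawTexturedTriangle(
            int[] screen, int sw, int sh, int[] tex, int tw, int th,
            float x0, float y0, float u0, float v0,
            float x1, float y1, float u1, float v1,
            float x2, float y2, float u2, float v2)
        {
            float area = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0);
            if (area == 0f) return; // degenerate triangle

            int minX = Math.Max(0, (int)Math.Floor(Math.Min(x0, Math.Min(x1, x2))));
            int maxX = Math.Min(sw - 1, (int)Math.Ceiling(Math.Max(x0, Math.Max(x1, x2))));
            int minY = Math.Max(0, (int)Math.Floor(Math.Min(y0, Math.Min(y1, y2))));
            int maxY = Math.Min(sh - 1, (int)Math.Ceiling(Math.Max(y0, Math.Max(y1, y2))));

            for (int y = minY; y <= maxY; y++)
                for (int x = minX; x <= maxX; x++)
                {
                    // barycentric weights of this pixel relative to the triangle
                    float w1 = ((x - x0) * (y2 - y0) - (y - y0) * (x2 - x0)) / area;
                    float w2 = ((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) / area;
                    float w0 = 1f - w1 - w2;
                    if (w0 < 0f || w1 < 0f || w2 < 0f) continue; // outside

                    // interpolate U/V and sample with clamping
                    float u = w0 * u0 + w1 * u1 + w2 * u2;
                    float v = w0 * v0 + w1 * v1 + w2 * v2;
                    int tx = Math.Min(tw - 1, Math.Max(0, (int)(u * tw)));
                    int ty = Math.Min(th - 1, Math.Max(0, (int)(v * th)));
                    screen[y * sw + x] = tex[ty * tw + tx];
                }
        }

    Interpolating u/w, v/w and 1/w instead, then dividing per pixel, upgrades this to perspective-correct mapping when the triangles come out of a 3D projection, which is likely where the W in the U/V/W inputs comes in.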

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >