Search Results

Search found 71086 results on 2844 pages for 'rendering problem'.

Page 15/2844 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • blurry image rendered

    - by Jason
    I'm using Direct2D to render a PNG image: I draw it into an ID2D1BitmapRenderTarget, call its GetBitmap() method, and render the result with ID2D1HwndRenderTarget::DrawBitmap(). Some of the images rendered this way are clear, but others appear blurry. I did some research and followed a tutorial to make my application DPI-aware, but it didn't help. What could cause the rendered image to appear blurry? Has anyone experienced this issue before? What can I do about it?
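
    A common cause is the destination rectangle landing on fractional pixel coordinates, which makes DrawBitmap's default linear interpolation resample (and smear) the bitmap. A minimal sketch of the usual mitigation; the m_renderTarget, m_bitmap, desiredX, and desiredY names are placeholders, not from the question:

    ```cpp
    // Sketch: snap the destination rectangle to whole pixels and disable
    // linear filtering so the bitmap is copied 1:1.
    D2D1_SIZE_F size = m_bitmap->GetSize();
    float x = floorf(desiredX); // integer pixel origin avoids resampling
    float y = floorf(desiredY);
    m_renderTarget->DrawBitmap(
        m_bitmap,
        D2D1::RectF(x, y, x + size.width, y + size.height),
        1.0f,                                             // opacity
        D2D1_BITMAP_INTERPOLATION_MODE_NEAREST_NEIGHBOR,  // no smoothing
        nullptr);                                         // use the whole source
    ```

    If the render target carries a scale (e.g. from a DPI transform), a true 1:1 copy also requires that the scale be 1.0 at draw time.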

    Read the article

  • Render an image with separate layers for shadows/reflections in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on a ground plane in the center. The lights and the ground material produce some shadow and reflection on the ground surrounding the object. How can I render an image containing three separate layers: the object, the ground, and the reflection/shadow on the ground? Which format should I use for this (it should include all three layers, and I should be able to enable/disable them in Photoshop)? How do I define or prepare those layers for being rendered as image layers?

    Read the article

  • spinning a 2d Cube

    - by Rahul Verma
    I know that a cube is actually a 3D shape, but I have a different problem here. I have been doing 2D game development using libgdx but have never touched 3D rendering. What I want in my 2D game is that, instead of coins, my player collects magical cubes. But those cubes need to be spinning on one diagonal; the same effect can be seen in the popular game Vector (screenshot omitted). Can someone explain the mathematics of such an animation?
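
    For what it's worth, one standard way to express the spin: treat the cube as eight 3D vertices, rotate them about the cube's space diagonal, and project the result back to 2D each frame. With the unit diagonal axis $\hat{u} = \frac{1}{\sqrt{3}}(1, 1, 1)$ and an angle $\theta$ that increases every frame, Rodrigues' rotation formula maps each vertex $v$ to

    $$ v' = v\cos\theta + (\hat{u} \times v)\sin\theta + \hat{u}\,(\hat{u} \cdot v)\,(1 - \cos\theta). $$

    Dropping the $z$ component of $v'$ (a simple orthographic projection) gives the 2D points between which to draw the cube's edges; with only eight vertices this is cheap enough to do per frame on the CPU in a libgdx render loop.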

    Read the article

  • Are there any preexisting maps for a Minecraft-like level I could use in my engine?

    - by Rishav Sharan
    I am working on a tiny cube-based engine like Minecraft. I was wondering if there is a way to get large blocky terrain in a text format that I can use for rendering in my engine. I don't want to start on procedural generation yet; I just want a resource where I can get the coordinate list for some nice-looking terrain. Alternatively, is it possible to parse Minecraft's world files and use that data to generate terrain/buildings in my code?

    Read the article

  • OpenGL position from depth is wrong

    - by CoffeeandCode
    My engine is currently implemented using a deferred rendering technique, and today I decided to change it up a bit. First I was storing 5 textures:

    - DEPTH24_STENCIL8 - depth and stencil
    - RGBA32F - position
    - RGB10_A2 - normals
    - RGBA8 x 2 - specular and diffuse

    I decided to minimize this and reconstruct positions from the depth buffer. Trying to figure out what is wrong with my method has not been fun :/ Currently I get incorrect output (screenshot omitted) which changes whenever I move the camera... weird.

    The vertex shader is really simple:

    ```glsl
    #version 150

    layout(location = 0) in vec3 position;
    layout(location = 1) in vec2 uv;

    out vec2 uv_f;

    void main(){
        uv_f = uv;
        gl_Position = vec4(position, 1.0);
    }
    ```

    The fragment shader is where the fun (and not so fun) stuff happens:

    ```glsl
    #version 150

    uniform sampler2D depth_tex;
    uniform sampler2D normal_tex;
    uniform sampler2D diffuse_tex;
    uniform sampler2D specular_tex;

    uniform mat4 inv_proj_mat;
    uniform vec2 nearz_farz;

    in vec2 uv_f;

    // ... other uniforms and such ...

    layout(location = 3) out vec4 PostProcess;

    vec3 reconstruct_pos(){
        float z = texture(depth_tex, uv_f).x;
        vec4 sPos = vec4(uv_f * 2.0 - 1.0, z, 1.0);
        sPos = inv_proj_mat * sPos;
        return (sPos.xyz / sPos.w);
    }

    void main(){
        vec3 pos = reconstruct_pos();
        vec3 normal = texture(normal_tex, uv_f).rgb;
        vec3 diffuse = texture(diffuse_tex, uv_f).rgb;
        vec4 specular = texture(specular_tex, uv_f);

        // ... do lighting ...

        PostProcess = vec4(pos, 1.0); // Just for testing
    }
    ```

    The rendering code (probably nothing wrong here, seeing as it always worked before):

    ```cpp
    this->gbuffer->bind();
    gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
    gl::Enable(gl::DEPTH_TEST);
    gl::Enable(gl::CULL_FACE);

    // ... bind geometry shader and draw the models ...

    gl::Disable(gl::DEPTH_TEST);
    gl::Disable(gl::CULL_FACE);
    gl::Enable(gl::BLEND);

    // ... bind textures and the lighting shaders shown above, then draw each light ...

    gl::BindFramebuffer(gl::FRAMEBUFFER, 0);
    gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
    gl::Disable(gl::BLEND);

    // ... bind screen shaders and draw a quad with the PostProcess texture ...

    Rinse_and_repeat(); // not actually a function ;)
    ```

    Why are my positions being output like they are?
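
    For what it's worth, one likely culprit in setups like this is the depth remap: a sampled depth texture returns values in [0, 1], while the inverse projection expects NDC z in [-1, 1] (with the default glDepthRange), so z needs the same *2.0 - 1.0 treatment the xy coordinates get. A hedged sketch of the reconstruction with that remap applied; everything else is unchanged from the shader above:

    ```glsl
    vec3 reconstruct_pos(){
        float z = texture(depth_tex, uv_f).x;
        // remap depth from [0, 1] to NDC [-1, 1] before unprojecting
        vec4 sPos = vec4(uv_f * 2.0 - 1.0, z * 2.0 - 1.0, 1.0);
        sPos = inv_proj_mat * sPos;
        return sPos.xyz / sPos.w;
    }
    ```

    Note that this yields a view-space position when inv_proj_mat is the inverse of the projection matrix alone; comparing against world-space data would require the inverse of projection * view instead.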

    Read the article

  • Need help drawing planets in Java.

    - by d33j
    I am looking for help/links/notes/algorithms/URLs/examples on drawing/rendering spheres in pure Java (so that I can hopefully, one day, generate/render planets with various surfaces and atmospheres). For the moment, I'd be pretty happy to be able to start off with just drawing wireframed spheres. PS: I don't want to use external libraries like Java3D or JOGL, or aftermarket engines like jMonkeyEngine; I'd rather keep it straight Java.
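
    Since it's only a wireframe to start with, plain AWT/Swing is enough: generate latitude/longitude points on a unit sphere, rotate them, project orthographically, and draw line segments. A minimal sketch (all names and constants are illustrative, not from the question):

    ```java
    import javax.swing.*;
    import java.awt.*;

    public class WireSphere extends JPanel {
        private double angle = 0; // spin about the y-axis, in radians

        // Unit-sphere point for (lat, lon), rotated about y, scaled by r.
        private double[] point(double lat, double lon, double r) {
            double x = Math.cos(lat) * Math.cos(lon);
            double y = Math.sin(lat);
            double z = Math.cos(lat) * Math.sin(lon);
            double xr = x * Math.cos(angle) + z * Math.sin(angle);  // rotate y
            double zr = -x * Math.sin(angle) + z * Math.cos(angle);
            return new double[] { xr * r, y * r, zr * r };
        }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            int cx = getWidth() / 2, cy = getHeight() / 2, n = 16, r = 150;
            for (int i = 1; i < n; i++) {             // latitude rings
                double lat = Math.PI * i / n - Math.PI / 2;
                for (int j = 0; j < 2 * n; j++) {
                    double[] a = point(lat, Math.PI * j / n, r);
                    double[] b = point(lat, Math.PI * (j + 1) / n, r);
                    g.drawLine(cx + (int) a[0], cy - (int) a[1],
                               cx + (int) b[0], cy - (int) b[1]);
                }
            }
            for (int j = 0; j < 2 * n; j++) {         // longitude lines
                double lon = Math.PI * j / n;
                for (int i = 0; i < n; i++) {
                    double[] a = point(Math.PI * i / n - Math.PI / 2, lon, r);
                    double[] b = point(Math.PI * (i + 1) / n - Math.PI / 2, lon, r);
                    g.drawLine(cx + (int) a[0], cy - (int) a[1],
                               cx + (int) b[0], cy - (int) b[1]);
                }
            }
        }

        public static void main(String[] args) {
            JFrame f = new JFrame("Wireframe sphere");
            f.add(new WireSphere());
            f.setSize(400, 400);
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.setVisible(true);
        }
    }
    ```

    A javax.swing.Timer that increments angle and calls repaint() turns this into a spinning planet; projection is orthographic (the z component is simply dropped).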

    Read the article

  • How Do I Do Alpha Transparency Properly In XNA 4.0?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds, really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old-school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below blended with the alpha of the one above. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further, I'll paste my example code:

    ```csharp
    public void Draw(SpriteBatch batch, Camera camera, float alpha)
    {
        int tileMapWidth = Width;
        int tileMapHeight = Height;

        batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
                    SamplerState.PointWrap, DepthStencilState.Default,
                    RasterizerState.CullNone, null, camera.TransformMatrix);

        for (int x = 0; x < tileMapWidth; x++)
        {
            for (int y = 0; y < tileMapHeight; y++)
            {
                int tileIndex = _map[y, x];
                if (tileIndex != -1)
                {
                    Texture2D texture = _tileTextures[tileIndex];
                    batch.Draw(
                        texture,
                        new Rectangle(
                            x * Engine.TileWidth,
                            y * Engine.TileHeight,
                            Engine.TileWidth,
                            Engine.TileHeight),
                        new Color(new Vector4(1f, 1f, 1f, alpha)));
                }
            }
        }

        batch.End();
    }
    ```

    As you can see, I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites; I want them to be transparent when alpha is 0. In this example I can set alpha to 0, but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not the desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here, as I'm fairly new to XNA development, so feel free to steer me in the correct direction in case I'm going down the wrong rabbit hole. TIA
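
    For what it's worth, XNA 4.0's content pipeline premultiplies texture alpha by default, and BlendState.AlphaBlend expects premultiplied colors, so a straight alpha in the tint channel no longer behaves the way it did in 3.1. Two common options, sketched (destination is a placeholder Rectangle):

    ```csharp
    // Option 1: keep BlendState.AlphaBlend and premultiply the tint;
    // Color's * operator scales all four channels by alpha.
    batch.Draw(texture, destination, Color.White * alpha);

    // Option 2: switch to straight (non-premultiplied) alpha blending.
    batch.Begin(SpriteSortMode.Texture, BlendState.NonPremultiplied,
                SamplerState.PointWrap, DepthStencilState.Default,
                RasterizerState.CullNone, null, camera.TransformMatrix);
    batch.Draw(texture, destination, new Color(1f, 1f, 1f, alpha));
    batch.End();
    ```

    With either option, alpha = 0 leaves the pixels underneath untouched instead of discoloring them.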

    Read the article

  • really weird DNS problem in Ubuntu {after one month, seems like ISP problem}

    - by OmniWired
    I've been having this random DNS problem in Ubuntu 10.04 and 10.10. It started about two weeks ago, after (I believe) an update. Basically, when I go to a website, I randomly get an error that the site is not available ("Oops! Google Chrome could not connect to ..." and "This webpage is not available."). I tested with Chromium 7.0.515.0 (58587), Firefox Minefield (4.0ish), and Firefox 3.6.9. I have already tried these four things:

    - Disabling IPv6 at boot: in /etc/default/grub, GRUB_CMDLINE_LINUX="ipv6.disable=1"
    - Disabling IPv6 via sysctl: in /etc/sysctl.conf, net.ipv6.conf.all.disable_ipv6 = 1, net.ipv6.conf.default.disable_ipv6 = 1, net.ipv6.conf.lo.disable_ipv6 = 1
    - Disabling Chromium's DNS pre-fetching
    - Using Google and OpenDNS servers as well as the ISP's DNS servers

    Nothing improved. No other computer in my network has the same problem, and all of them are wired to the same router. I'm a software engineer who has run out of ideas; please help me. Thanks in advance.

    UPDATE: some programs (Synaptic / Firefox update / Vuze (Azureus)) report "connection refused" for the error. Most of the time a second try will fix the refusal.

    UPDATE 2: I found out with Wireshark that every time I have this problem I get this: 192.168.0.10 -> 8.8.8.8 ICMP Destination unreachable (Port unreachable). Confirmed an ISP error. ISP: Speedy. Location: Argentina, Buenos Aires (Capital Federal area).

    Read the article

  • XNA - Error while rendering a texture to a 2D render target via SpriteBatch

    - by Jared B
    I've got this simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

    ```csharp
    private void drawScene(GameTime g)
    {
        GraphicsDevice.Clear(skyColor);
        GraphicsDevice.SetRenderTarget(targetScene);
        drawSunAndMoon();
        effect.Fog = true;
        GraphicsDevice.SetVertexBuffer(line);
        effect.MainEffect.CurrentTechnique.Passes[0].Apply();
        GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
        GraphicsDevice.SetRenderTarget(null);
        SceneTexture = targetScene;
    }

    private void drawPostProcessing(GameTime g)
    {
        effect.SceneTexture = SceneTexture;
        GraphicsDevice.SetRenderTarget(targetBloom);
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                          null, null, null);
        {
            if (Bloom)
                effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
            spriteBatch.Draw(
                targetScene,
                new Rectangle(0, 0, Window.ClientBounds.Width,
                              Window.ClientBounds.Height),
                Color.White);
        }
        spriteBatch.End();
        BloomTexture = targetBloom;
        GraphicsDevice.SetRenderTarget(null);
    }
    ```

    Both methods are called from my Draw(GameTime gameTime) function: first drawScene, then drawPostProcessing. The thing is, when I run this code I get an error on the spriteBatch.Draw call:

    The render target must not be set on the device when it is used as a texture.

    I already found the solution, which is to draw the actual render target (targetScene) to a texture so it doesn't create a reference to the loaded render target. However, to my knowledge, the only way of doing this is to write:

    ```csharp
    GraphicsDevice.SetRenderTarget(outputTarget);
    SpriteBatch.Draw(inputTarget, ...);
    GraphicsDevice.SetRenderTarget(null);
    ```

    which runs into the exact same problem I'm having right now. So my question is: how would I render inputTarget to outputTarget without reference issues?
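
    In XNA 4 this exception fires when a target is simultaneously the device's current render target and bound as a texture, including via an effect parameter left over from a previous pass. The usual pattern is a plain ping-pong between two distinct RenderTarget2D instances, making sure the sampled one is unbound before the draw; a sketch:

    ```csharp
    // Sketch: ping-pong so the texture being sampled is never the
    // target currently bound on the device.
    GraphicsDevice.SetRenderTarget(targetScene);   // render the scene
    // ... draw scene ...
    GraphicsDevice.SetRenderTarget(targetBloom);   // targetScene is now unbound
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
    spriteBatch.Draw(targetScene, GraphicsDevice.Viewport.Bounds, Color.White);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);          // back to the backbuffer
    ```

    Note that `SceneTexture = targetScene` only copies a reference, not pixels; drawing targetScene into a second target, as above, is what actually produces an independent texture.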

    Read the article

  • XNA 2D Spritesheet drawing rendering problem

    - by user24092
    I'm making a tile-based game using one spritesheet containing all the tile graphics. Each tile has a size of 32x32 pixels. The main problem: when I draw a tile to the screen, if the tile position x and y are not rounded, or if scaling is used in the spriteBatch.Draw() call (scale != 1.0f), lines from adjacent tiles on the spritesheet bleed into the drawn tile. I already tried setting SamplerState to PointClamp and removing anti-aliasing, but it still doesn't work. Here are images of some tests that I made, with a test spritesheet that I created (a 9x9 spritesheet, with each 32x32 sprite containing a unique solid color). Tests: http://img6.imageshack.us/img6/5946/testsqj.png SpriteSheet used: http://imageshack.us/a/img821/1341/tilesm.png As the tests show, XNA keeps drawing part of the adjacent pixels of the texture on the screen. What I want is to get exactly the correct area of the tilesheet texture (as in the first test, which gets just the yellow pixels). My question: is there any way to fix this WITHOUT adding tile spacing or any other modification involving the tilesheet? Maybe by disabling some texture filtering done by XNA, or something like that.
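
    One approach that leaves the sheet untouched is to avoid sampling at fractional texel positions altogether: draw the map 1:1 (integer positions, no scale) into an off-screen RenderTarget2D, then scale the finished frame once in a single draw. A sketch; nativeTarget is a hypothetical RenderTarget2D at the game's unscaled resolution:

    ```csharp
    // Pass 1: tiles drawn 1:1 at integer coordinates - no bleeding possible
    GraphicsDevice.SetRenderTarget(nativeTarget);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                      SamplerState.PointClamp, null, null);
    // ... draw each tile with its 32x32 source rectangle ...
    spriteBatch.End();

    // Pass 2: scale the whole composed frame to the backbuffer
    GraphicsDevice.SetRenderTarget(null);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                      SamplerState.PointClamp, null, null);
    spriteBatch.Draw(nativeTarget, GraphicsDevice.Viewport.Bounds, Color.White);
    spriteBatch.End();
    ```

    Since scaling now happens on the composed image rather than per tile, the sampler can never reach a neighbouring tile's texels.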

    Read the article

  • GRUB Problem or Hardware Problem?

    - by Nikit Batale
    I have Windows XP Professional and Ubuntu Karmic Koala 9.10 in a dual boot. I often get a "GRUB loading failed" error during boot-up. Sometimes, when I try to boot into either Ubuntu or XP, I instead get a "Could not read from source" error. Now this is the funny part: after getting one of the above errors, if I simply unplug the SATA cable at the hard drive end and plug it in again, the problem goes away. So whose problem is it, GRUB's or my hardware's? P.S.: I was told on Stack Overflow that Server Fault is the right place for this question.

    Read the article

  • Rendering a view to a string in MVC, then redirecting -- workarounds?

    - by James S
    Hi -- I can't render a view to a string and then redirect, despite this answer from Feb (after version 1.0, I think) that claims it's possible. I thought I was doing something wrong, and then I read this answer from Haack in July that claims it's not possible. If somebody has it working and can help me get it working, that's great (and I'll post code and errors). However, I'm now at the point of needing workarounds. There are a few, but nothing ideal. Has anybody solved this, or have any comments on my ideas?

    This is to render email. Here are my ideas:

    1. While I can surely send the email outside of the web request (store the info in a db and get it later), there are many types of emails and I don't want to store the template data (a user object, a few other LINQ objects) in a db just to let it get rendered later. I could create a simpler, serializable POCO and save that in the db, but why? ... I just want rendered text!
    2. I can create a new RedirectToAction object that checks whether the headers have been sent (I can't figure out how to do this -- try/catch?) and, if so, builds out a simple page with a meta redirect, a JavaScript redirect, and also a "click here" link.
    3. Within my controller, I can remember whether I've rendered an email and, if so, manually do #2 by displaying a view.
    4. I can manually send the redirect headers before any potential email rendering. Then, rather than using the MVC infrastructure to RedirectToAction, I just end the response. This seems easiest, but really messy.

    Anything else?

    EDIT: I've tried Dan's code (very similar to the code from Jan/Feb that I've already tried) and I'm still getting the same error. The only substantial difference I can see is that his example uses a view while I use a partial view. I'll try testing this later with a view. Here's what I've got:

    Controller:

    ```csharp
    public ActionResult Certifications(string email_intro)
    {
        // a lot of stuff
        ViewData["users"] = users;
        if (isPost())
        {
            // create the view model
            var view_model = new ViewModels.Emails.Certifications.Open(userContext)
            {
                emailIntro = email_intro
            };
            // I've tried stopping this after just one iteration, in case the
            // problem is due to calling it multiple times
            foreach (var user in users)
            {
                if (user.Email_Address.IsValidEmailAddress())
                {
                    // add more stuff to the view model specific to this user
                    view_model.user = user;
                    view_model.certification302Summary.subProcessesOwner =
                        new SubProcess_Certifications(RecordUpdating.Role.Owner,
                            null, null, user.User_ID, repository);
                    // more here ...
                    // if I comment out the next line, everything works OK
                    SendEmail(view_model, this.ControllerContext);
                }
            }
            return RedirectToAction("Certifications");
        }
        return View();
    }
    ```

    SendEmail():

    ```csharp
    public static void SendEmail(ViewModels.Emails.Certifications.Open model,
                                 ControllerContext context)
    {
        var vd = context.Controller.ViewData;
        vd["model"] = model;
        var renderer = new CustomRenderers();
        // I fixed an error in your code here
        var text = renderer.RenderViewToString3(context,
            "~/Views/Emails/Certifications/Open.ascx", "", vd, null);
        var a = text;
    }
    ```

    CustomRenderers:

    ```csharp
    public class CustomRenderers
    {
        public virtual string RenderViewToString3(ControllerContext controllerContext,
            string viewPath, string masterPath, ViewDataDictionary viewData,
            TempDataDictionary tempData)
        {
            // copy/paste of Dan's code
        }
    }
    ```

    Error:

    [HttpException (0x80004005): Cannot redirect after HTTP headers have been sent.]
    System.Web.HttpResponse.Redirect(String url, Boolean endResponse) +8707691

    Thanks, James
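
    For reference, the approach that usually sidesteps the header problem is to render into a StringWriter rather than letting the view write to the real Response; nothing is flushed, so a later RedirectToAction still works. A sketch, assuming ASP.NET MVC 2's ViewContext overload that takes a TextWriter (RenderPartialToString is a hypothetical helper name):

    ```csharp
    public static string RenderPartialToString(ControllerContext context,
        string partialViewName, ViewDataDictionary viewData,
        TempDataDictionary tempData)
    {
        ViewEngineResult result =
            ViewEngines.Engines.FindPartialView(context, partialViewName);
        using (var sw = new StringWriter())
        {
            var viewContext = new ViewContext(context, result.View,
                                              viewData, tempData, sw);
            result.View.Render(viewContext, sw); // writes to sw, not Response
            return sw.ToString();
        }
    }
    ```

    Because the output never touches HttpResponse, the headers remain unsent and the redirect goes through.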

    Read the article

  • C++ OpenGL wireframe cube rendering blank

    - by caleb.breckon
    I'm just trying to draw a bunch of lines that make up a "cube". I can't for the life of me figure out why this is producing a black screen. The debugger does not break at any point. I'm sure it's a problem with my pointers, as I'm only decent with them in regular C++, and in OpenGL it gets even worse.

    ```cpp
    const char* vertexSource =
        "#version 150\n"
        "in vec3 position;"
        "void main() {"
        "   gl_Position = vec4(position, 1.0);"
        "}";
    const char* fragmentSource =
        "#version 150\n"
        "out vec4 outColor;"
        "void main() {"
        "   outColor = vec4(1.0, 1.0, 1.0, 1.0);"
        "}";

    int main()
    {
        initializeGLFW();

        // Initialize GLEW
        glewExperimental = GL_TRUE;
        glewInit();

        // Create Vertex Array Object
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        // Create a Vertex Buffer Object and copy the vertex data to it
        GLuint vbo;
        glGenBuffers(1, &vbo);

        float vertices[] = {
             1.0f,  1.0f,  1.0f, // Vertex 0 (X, Y, Z)
            -1.0f,  1.0f,  1.0f, // Vertex 1 (X, Y, Z)
            -1.0f, -1.0f,  1.0f, // Vertex 2 (X, Y, Z)
             1.0f, -1.0f,  1.0f, // Vertex 3 (X, Y, Z)
             1.0f,  1.0f, -1.0f, // Vertex 4 (X, Y, Z)
            -1.0f,  1.0f, -1.0f, // Vertex 5 (X, Y, Z)
            -1.0f, -1.0f, -1.0f, // Vertex 6 (X, Y, Z)
             1.0f, -1.0f, -1.0f  // Vertex 7 (X, Y, Z)
        };

        GLuint indices[] = {
            0, 1, 1, 2, 2, 3, 3, 0,
            4, 5, 5, 6, 6, 7, 7, 4,
            0, 4, 1, 5, 2, 6, 3, 7
        };

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
        //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo);
        //glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        // Create and compile the vertex shader
        GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexSource, NULL);
        glCompileShader(vertexShader);

        // Create and compile the fragment shader
        GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
        glCompileShader(fragmentShader);

        // Link the vertex and fragment shader into a shader program
        GLuint shaderProgram = glCreateProgram();
        glAttachShader(shaderProgram, vertexShader);
        glAttachShader(shaderProgram, fragmentShader);
        glBindFragDataLocation(shaderProgram, 0, "outColor");
        glLinkProgram(shaderProgram);
        glUseProgram(shaderProgram);

        // Specify the layout of the vertex data
        GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
        glEnableVertexAttribArray(posAttrib);
        glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);

        // Main loop
        while (glfwGetWindowParam(GLFW_OPENED))
        {
            // Clear the screen to black
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);

            // Draw lines from 2 vertices
            glDrawElements(GL_LINES, sizeof(indices), GL_UNSIGNED_INT, indices);

            // Swap buffers
            glfwSwapBuffers();
        }

        // Clean up
        glDeleteProgram(shaderProgram);
        glDeleteShader(fragmentShader);
        glDeleteShader(vertexShader);
        //glDeleteBuffers(1, &ebo);
        glDeleteBuffers(1, &vbo);
        glDeleteVertexArrays(1, &vao);

        glfwTerminate();
        exit(EXIT_SUCCESS);
    }
    ```
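
    Two things stand out here (hedged, since the context-creation code isn't shown). First, glDrawElements is passed sizeof(indices), which is a byte count (96), not the index count (24), so it reads past the array. Second, with a core-profile context a client-memory index pointer is invalid; the commented-out GL_ELEMENT_ARRAY_BUFFER needs to come back, bound to its own buffer object rather than the vbo. A sketch of both fixes:

    ```cpp
    // Upload the indices into their own element buffer (recorded in the VAO)
    GLuint ebo;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // In the main loop: pass the index COUNT and a zero offset into the EBO
    glDrawElements(GL_LINES, sizeof(indices) / sizeof(GLuint),
                   GL_UNSIGNED_INT, 0);
    ```

    Even then, note that all eight vertices sit at +/-1 with no model-view-projection transform, so the cube's edges land exactly on the viewport border (and the depth-axis edges collapse to points); scaling the positions down, or adding an MVP uniform, makes the outline actually visible.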

    Read the article

  • How to avoid jumping to a solution when under pressure? [closed]

    - by GlenPeterson
    When under a particularly strict programming deadline (like an hour), if I panic at all, my tendency is to jump into coding without a real plan and hope I figure it out as I go along. Given enough time this can work, but in an interview it has been pretty unsuccessful, if not downright counterproductive. I'm not always comfortable sitting there thinking while the clock ticks away. Is there a checklist, or are there techniques, for recognizing when you understand the problem well enough to start coding? Maybe don't touch the keyboard for the first 5-10 minutes of the problem? At what point do you give up and code a brute-force solution, with the hope of reasoning out a better solution later? When is it most productive to think and design more, versus coding some experiments and figuring out the design later? Here is a list of techniques for taking a math test, and another for taking an oral exam. Is there a similar list of techniques for handling a programming problem under pressure? ANSWERS: I think this is a valid answer: How To Solve It. I found the link as an answer to "Steps to solve or approach towards a solution". There were also some really good tips at "Is thinking out loud during an interview really the best strategy?". A great and concise argument for TDD is the first answer to "TDD: Writing code vs. figuring out the answer to a problem?". My question may be a near-duplicate of that one.

    Read the article

  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that texture onto a quad. When I render the triangle, I clear the color to something blue. So when I render the texture on the quad from the FBO, everything renders blue, but the triangle doesn't show up. I can't figure out why this is happening. Can someone please help me out? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I set up the FBO (the setup code is at the end).

    ```cpp
    void FrameBufferObject::Render(unsigned int elapsedGameTime)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
        glClearColor(0.0, 0.6, 0.5, 1);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // adjust viewport and projection matrices to texture dimensions
        glPushAttrib(GL_VIEWPORT_BIT);
        glViewport(0, 0, m_FBOWidth, m_FBOHeight);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        DrawTriangle();

        glPopAttrib();

        // setting FrameBuffer back to window-specified Framebuffer
        glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind

        // back to normal viewport and projection matrix
        //glViewport(0, 0, 1280, 768);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.0, 1.33, 1.0, 1000.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glClearColor(0, 0, 0, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        render(elapsedGameTime);
    }

    void FrameBufferObject::DrawTriangle()
    {
        glPushMatrix();
        glBegin(GL_TRIANGLES);
            glColor3f(1, 0, 0);
            glVertex2d(0, 0);
            glVertex2d(m_FBOWidth, 0);
            glVertex2d(m_FBOWidth, m_FBOHeight);
        glEnd();
        glPopMatrix();
    }

    void FrameBufferObject::render(unsigned int elapsedTime)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, m_TextureID);
        glPushMatrix();
        glTranslated(0, 0, -20);
        glBegin(GL_QUADS);
            glColor4f(1, 1, 1, 1);
            glTexCoord2f(1, 1); glVertex3f( 1,  1, 1);
            glTexCoord2f(0, 1); glVertex3f(-1,  1, 1);
            glTexCoord2f(0, 0); glVertex3f(-1, -1, 1);
            glTexCoord2f(1, 0); glVertex3f( 1, -1, 1);
        glEnd();
        glPopMatrix();
        glBindTexture(GL_TEXTURE_2D, 0);
        glDisable(GL_TEXTURE_2D);
    }

    void FrameBufferObject::Initialize()
    {
        // Generate FBO
        glGenFramebuffers(1, &m_FBO);
        glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);

        // Add a depth buffer as a renderbuffer to the FBO
        glGenRenderbuffers(1, &m_DepthBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer);
        // allocate space for the depth buffer
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT,
                              m_FBOWidth, m_FBOHeight);
        // attach the depth buffer to the FBO at the depth attachment
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, m_DepthBuffer);

        // Add a texture to the FBO
        glGenTextures(1, &m_TextureID);
        glBindTexture(GL_TEXTURE_2D, m_TextureID);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // only allocating space
        glBindTexture(GL_TEXTURE_2D, 0);

        // attach the texture to the FBO
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, m_TextureID, 0);

        // Check FBO status
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            std::cout << "\n Error:: FrameBufferObject::Initialize() :: "
                         "FBO loading not complete \n";

        // switch back to the window-system framebuffer
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }
    ```

    Thanks!

    Read the article

  • Abstracting entity caching in XNA

    - by Grofit
    I am in a situation where I am writing a framework in XNA, and there will be quite a lot of static(ish) content which won't re-render that often. I am trying to take the same sort of approach I would use in non-game development, where I don't even think about caching until I have finished my application and realise there is a performance problem, and only then implement a layer of caching over whatever needs it, wrapped up so nothing is aware it's happening. However, in XNA the way we would usually cache is by drawing our objects to a texture and invalidating it after a change occurs. So assume an interface like this:

    ```csharp
    public interface IGameComponent
    {
        void Update(TimeSpan elapsedTime);
        void Render(GraphicsDevice graphicsDevice);
    }

    public class ContainerComponent : IGameComponent
    {
        public IList<IGameComponent> ChildComponents { get; private set; }

        // Assume constructor

        public void Update(TimeSpan elapsedTime)
        {
            // Update anything that needs it
        }

        public void Render(GraphicsDevice graphicsDevice)
        {
            foreach (var component in ChildComponents)
            {
                // draw every component
            }
        }
    }
    ```

    Then I was under the assumption that we just draw everything directly to the screen, and when performance becomes an issue we add a new implementation of the above, like so:

    ```csharp
    public class CacheableContainerComponent : IGameComponent
    {
        private Texture2D cachedOutput;
        private bool hasChanged;

        public IList<IGameComponent> ChildComponents { get; private set; }

        // Assume constructor

        public void Update(TimeSpan elapsedTime)
        {
            // Update anything that needs it
            // set hasChanged to true if required
        }

        public void Render(GraphicsDevice graphicsDevice)
        {
            if (hasChanged)
            {
                CacheComponents(graphicsDevice);
            }
            // Draw cached output
        }

        private void CacheComponents(GraphicsDevice graphicsDevice)
        {
            // Clean up the existing cache if needed
            var cachedOutput = new RenderTarget2D(...);
            graphicsDevice.SetRenderTarget(renderTarget);
            foreach (var component in ChildComponents)
            {
                // draw every component
            }
            graphicsDevice.SetRenderTarget(null);
        }
    }
    ```

    Now in this example you could inherit, but then your Update may become tricky without changing your base class to alert you when something has changed; it is up to each scenario to choose between inheritance/implementation and composition. Also, the above implementation re-caches within the rendering cycle, which may cause performance stutters, but it's just an example of the scenario... Ignoring those facts, as you can see, in this example you could use a cacheable component or a non-cacheable one, and the rest of the framework need not know. The problem is that if this component is drawn midway through the game's rendering, other items will already be in the default drawing buffer, so switching render targets here would discard them, unless I set the backbuffer to be persisted, which I hear is a big no-no on the Xbox. So is there a way to have my cake and eat it here? One simple solution is an ICacheable interface which exposes a Cache method, but then to make any use of that interface the rest of the framework would need to be cache-aware, check whether a component can cache, and do so. That means polluting and changing my main implementations to account for and deal with this cache... I am also employing dependency injection for a lot of high-level components, so these new cacheable objects would be spat out by the container, meaning nowhere in the actual game would they know they are caching... if that makes sense. (That is just in case anyone asks how I expect to stay cache-unaware when something needs to new up a cacheable entity.)
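
    One common way around the mid-frame discard is to split rendering into two passes: refresh any dirty caches first, before anything has been drawn to the backbuffer, then draw the frame sampling only finished textures. The framework stays cache-unaware apart from one extra hook. A sketch (PrepareRender is a hypothetical addition to the component contract, a no-op for non-caching components):

    ```csharp
    protected override void Draw(GameTime gameTime)
    {
        // Pass 1: cacheable components re-render their textures if dirty.
        // SetRenderTarget happens here, while the backbuffer is still empty.
        foreach (var component in rootComponents)
            component.PrepareRender(GraphicsDevice);

        // Pass 2: normal drawing; every cache is now just a Texture2D.
        GraphicsDevice.Clear(Color.CornflowerBlue);
        foreach (var component in rootComponents)
            component.Render(GraphicsDevice);
    }
    ```

    Because no render target is ever set after the frame proper begins, the backbuffer contents are never at risk, and Xbox's discard-on-switch behaviour stops mattering.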

    Read the article

  • Flowchart for solving programming problems

    - by nurne
    I noticed that every developer implements a somewhat different flowchart for solving programming problems. By flowchart I mean a defined system of techniques that the developer goes through in a certain sequence, trying to solve the problem at hand. Some examples of techniques:

    - Google "how to..." or "... tutorial".
    - Search the Java/MSDN/Apple/etc. API docs for the specific class or method.
    - Search Stack Overflow for the exact problem, with tags like [iphone]/[java].
    - Take a nap and let the subconscious work.
    - Debug.
    - Draw the algorithm or system.
    - Google the logged error message.
    - Ask a colleague or manager.
    - Ask a new question on Stack Overflow.

    From your experience, what is the best flowchart for solving a programming problem?

    Read the article

  • Rendering shadow sprites in cocos2d-x

    - by lukeluke
    I am writing a 2D game with cocos2d-x. I want to draw a "shadow" sprite over a background sprite using the equation max(0, Cd*1 - Cs*S), where Cd is the destination color (that is, a background pixel), Cs is the source color (the shadow pixel), and S is a scale factor between 0 and 1. The max() is used to avoid negative results. This is a lighting effect: where the shadow sprite's pixel is 0, there is no effect on the background pixel; otherwise, the background pixel becomes darker. The only way that comes to mind is to change the blending equation to GL_FUNC_SUBTRACT, but that doesn't compile with cocos2d-x (the symbol isn't found)... I would subclass CCSprite and override its draw() method to change the blending equation when needed, call the original draw(), and restore the blending equation to its previous state at the end of the method. So my questions are: how do I use glBlendEquation() with cocos2d-x (keep in mind that I am writing a game for iPhone/Android/Windows)? And are shadows handled this way in 2D games? Thx
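
    A sketch of the subclassing idea, assuming cocos2d-x 2.x (where glBlendEquation is available through the GL ES 2 headers the engine already uses). GL_FUNC_REVERSE_SUBTRACT computes destination minus source, which matches Cd*1 - Cs*S when the source factor is the sprite's alpha (playing the role of S) and the destination factor is 1; the hardware clamp at 0 supplies the max():

    ```cpp
    class ShadowSprite : public cocos2d::CCSprite
    {
    public:
        virtual void draw()
        {
            ccBlendFunc f = { GL_SRC_ALPHA, GL_ONE };  // src*S, dst*1
            setBlendFunc(f);
            glBlendEquation(GL_FUNC_REVERSE_SUBTRACT); // dst - src
            CCSprite::draw();
            glBlendEquation(GL_FUNC_ADD); // restore the default for other nodes
        }
    };
    ```

    As for whether 2D games do shadows this way: subtractive blending is a legitimate cheap trick, though many games simply draw a black sprite with ordinary alpha blending, which gives a similar darkening without touching the blend equation state.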

    Read the article

  • Rendering CV template with XeLaTex

    - by jacob
    1. Installed Kubuntu on Thursday.
    2. Installed LaTeX on my Kubuntu machine, using the full install.
    3. Compiled an old document and it worked fine.
    4. Downloaded a CV template from http://www.latextemplates.com/template/two-column-one-page-cv
    5. Compiled it and got this error:

       Fatal fontspec error: "cannot-use-pdftex"
       The fontspec package requires either XeTeX or LuaTeX to function.
       You must change your typesetting engine to, e.g., "xelatex" or "lualatex"
       instead of plain "latex" or "pdflatex". See the fontspec documentation
       for further information. For immediate help type H.

    6. Installed XeLaTeX using this guide: http://ledgersmb.org/faq/xelatex
    7. Installed texlive-xetex, which includes xelatex:

       apt-get install texlive-xetex
       apt-get install liblatex-{driver,encode,table}-perl
       apt-get install libtemplate-plugin-latex-perl

    8. Compiled the CV template again; it still did not work.

    Related: No XeLaTeX in TeX Live 2012. Excuse me if my question is not clear enough, I'm new to Linux.
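
    For what it's worth, installing texlive-xetex only provides the engine; the document still has to be compiled with it. If the build command or editor profile is still invoking latex/pdflatex, fontspec will keep raising "cannot-use-pdftex". From a terminal (the filename is a placeholder):

    ```
    xelatex cv_template.tex
    ```

    In editors such as Kile or TeXworks, this corresponds to switching the build tool/engine from pdfLaTeX to XeLaTeX in the settings.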

    Read the article

  • Tool to identify Internet Explorer rendering differences with css

    - by Bakaburg
    I develop websites using Chrome on Mac OS as my development environment. Since my audience is pretty specific, I don't feel the need for much backward compatibility with IE8 and lower. However, to my great dismay, even IE9 looks totally broken... I would like to know if there is a tool on the web that could tell me what probably went wrong in IE, that is, a web app that parses the rendered CSS and checks which rules are likely broken in IE9.

    Read the article

  • What's the standard location of a 3D clipping box?

    - by Kendall Frey
    The way I understand 3D rendering, polygons are transformed using several matrices, and they are then clipped if they are not inside a certain box, before the result is projected onto the screen. Before transformation, the visible area is typically a frustum; after transformation, I am guessing it's a cube, which makes the clipping math easier than a frustum would. My question is: what is the "standard" location/size for this clipping box? I can think of three possibilities: (0,0,0)-(1,1,1), (-0.5,-0.5,-0.5)-(0.5,0.5,0.5), (-1,-1,-1)-(1,1,1). Or is there no standard?
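
    For reference, the conventional clip volumes after the perspective divide (normalized device coordinates) are API-specific:

    $$ \text{OpenGL: } -1 \le x, y, z \le 1 \qquad\qquad \text{Direct3D: } -1 \le x, y \le 1,\; 0 \le z \le 1 $$

    so the third option is the OpenGL convention, and Direct3D uses a half-depth variant of it. In practice, clipping is usually performed in homogeneous clip space before the divide (e.g. testing $-w \le x \le w$), which avoids dividing by $w$ for vertices that get clipped anyway.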

    Read the article

  • Polygons vs sprites rendering performance in Unity for windows phone 8

    - by Géry Arduino
    I'm currently building a Windows Phone 8 game with Unity, with 111 (no more, no less) sprites being updated each frame. I have a strong overhead in the profiler (70% to 90% minimum). I tried the following to get a higher frame rate: running at minimum quality settings, and disabling and enabling V-Sync. I finally managed to get 60 FPS, but I still have a large overhead. I believe I should get more than 60 FPS for so few sprites. Moreover, I still have to implement the game logic on top of this, so I'd like some headroom in my frame budget to work with. I was wondering whether it would be better, in terms of performance, to use polygons instead of sprites, as sprites are quite new in Unity (that would give me around 222 triangles). Has anyone checked the performance difference between sprites and actual mesh renderers in Unity on phones? If so, what is the best option in that case? FYI: I'm using the Windows Phone 8 emulator in Visual Studio on a compliant computer, so it should roughly reflect the behavior of a real phone (expecting some differences, but still...). EDIT: To clarify my question, I wonder which is more efficient on Windows Phone 8: sprites or mesh renderers?

    Read the article
