Search Results

Search found 3366 results on 135 pages for 'deferred rendering'.


  • Can anyone point me to some open source directX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principles of operation of game engines and rendering engines. That being said, I want to do some experiments in rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene graph algorithms and control the rendering pipeline very specifically. My ideal candidate engine would be either a rendering engine or game engine with a modular design that might be ready to go out of the box but would be simple enough in case I need to rip out some of the guts in the rendering management and implement my own. It's a tough call because I'm right at the level where it's almost better to go from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms, etc. I just want to work on rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported and less likely for me to run into some obscure, poorly documented problem. But OpenGL might be acceptable if the recommended framework were definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (part of adding to the density of geometry detail).

    Read the article

  • Bitmap font rendering, UV generation and vertex placement

    - by jack
    I am generating a bitmap; however, I am not sure how to handle the UVs and placement. I had a thread like this once before, but it was too loosely worded as to what I was looking to do. What I am doing right now is creating a large 1024x1024 image with characters evenly placed every 64 pixels. Here is an example of what I mean. I then save the bitmap X/Y information to a file (which is all multiples of 64). However, I am not sure how to properly use this information and bitmap to render. This falls into two different categories, UV generation and kerning. Now I believe I know how to do both of these; however, when I attempt to couple them together I get horrendous results. For example, I am trying to render two different text arrays, "123" and "njfb". While ignoring the texture quality (I will be increasing the texture to provide more detail once I fix this issue), here is what it looks like when I try to render them: http://img64.imageshack.us/img64/599/badfontrendering.png Now for the algorithm. I am doing my letter placement with both GetABCWidth and GetKerningPairs. I am using GetABCWidth for the width of the characters, then I am getting the kerning information to adjust the characters. Does anyone have any suggestions on how I can implement my own bitmap font renderer? I am trying to do this without using external libraries such as the Angel bitmap tool or FreeType. I also want to stick to the way the bitmap font sheet is generated so I can do extra effects in the future.

    Rendering algorithm:

        for(U32 c = 0, vertexID = 0, i = 0; c < numberOfCharacters; ++c, vertexID += 4, i += 6)
        {
            ObtainCharInformation(fontName, m_Text[c]);
            letterWidth = (charInfo.A + charInfo.B + charInfo.C) * scale;

            if(c != 0)
            {
                DWORD BytesReq = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, 0, 0, &mat);
                U8 * glyphImg = new U8[BytesReq];
                DWORD r = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, BytesReq, glyphImg, &mat);

                for (int k = 0; k < nKerningPairs; k++)
                {
                    if ((kerningpairs[k].wFirst == previousCharIndex) && (kerningpairs[k].wSecond == m_Text[c]))
                    {
                        letterBottomLeftX += (kerningpairs[k].iKernAmount * scale);
                        break;
                    }
                }
                letterBottomLeftX -= (gm.gmCellIncX * scale);
            }

            SetVertex(letterBottomLeftX, 0.0f, zFight, vertexID);
            SetVertex(letterBottomLeftX, letterHeight, zFight, vertexID + 1);
            SetVertex(letterBottomLeftX + letterWidth, letterHeight, zFight, vertexID + 2);
            SetVertex(letterBottomLeftX + letterWidth, 0.0f, zFight, vertexID + 3);
            zFight -= 0.001f;

            float BottomLeftX = (F32)(charInfo.bitmapXOrigin) / (float)m_BitmapWidth;
            float BottomLeftY = (F32)(charInfo.bitmapYOrigin + charInfo.charBitmapHeight) / (float)m_BitmapWidth;
            float TopLeftX = BottomLeftX;
            float TopLeftY = (F32)(charInfo.bitmapYOrigin) / (float)m_BitmapWidth;
            float TopRightX = (F32)(charInfo.bitmapXOrigin + charInfo.B - charInfo.C) / (float)m_BitmapWidth;
            float TopRightY = TopLeftY;
            float BottomRightX = TopRightX;
            float BottomRightY = BottomLeftY;

            SetTextureCoordinate(TopLeftX, TopLeftY, vertexID + 1);
            SetTextureCoordinate(BottomLeftX, BottomLeftY, vertexID + 0);
            SetTextureCoordinate(BottomRightX, BottomRightY, vertexID + 3);
            SetTextureCoordinate(TopRightX, TopRightY, vertexID + 2);

            /// index setting

            letterBottomLeftX += letterWidth;
            previousCharIndex = m_Text[c];
        }
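    Since the glyphs sit on a fixed 64-pixel grid inside a 1024x1024 sheet, the UVs for a cell can be derived from the cell's column and row alone. A minimal, language-agnostic sketch of that mapping (written in Java here; the 16-cells-per-row layout and the method names are assumptions, not taken from the code above):

        // Maps a cell index in a 1024x1024 atlas with 64 px cells (16 cells per row) to UVs.
        class GridAtlasUV {
            static final float ATLAS_SIZE = 1024.0f;
            static final float CELL_SIZE = 64.0f;
            static final int CELLS_PER_ROW = 16;

            // Returns {u0, v0, u1, v1}: the top-left and bottom-right corners of the cell.
            static float[] uvForCell(int cellIndex) {
                int col = cellIndex % CELLS_PER_ROW;
                int row = cellIndex / CELLS_PER_ROW;
                float u0 = (col * CELL_SIZE) / ATLAS_SIZE;
                float v0 = (row * CELL_SIZE) / ATLAS_SIZE;   // v is divided by the atlas HEIGHT
                float u1 = u0 + CELL_SIZE / ATLAS_SIZE;
                float v1 = v0 + CELL_SIZE / ATLAS_SIZE;
                return new float[] { u0, v0, u1, v1 };
            }
        }

    Note that u is divided by the atlas width and v by the atlas height; on a square 1024x1024 sheet the two happen to coincide, but they diverge as soon as the sheet becomes non-square.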

    Read the article

  • Deferred vs Immediate execution in Linq

    - by Jalpesh P. Vadgama
    In this post, we are going to learn about deferred vs. immediate execution in LINQ. There is an interesting variation in how LINQ operators execute, and in this post we are going to learn about both deferred execution and immediate execution. What is deferred execution? With deferred execution, the query is executed and evaluated at the time the query variable is actually used. Let's take an example to understand deferred execution better. Example: following is the code for that.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var customers = new List<Customer>(
                        new[]
                        {
                            new Customer { FirstName = "Jalpesh", LastName = "Vadgama" },
                            new Customer { FirstName = "Vishal", LastName = "Vadgama" },
                            new Customer { FirstName = "Tushar", LastName = "Maru" }
                        }
                    );

                    var newCustomers = customers.Where(c => c.LastName == "Vadgama");

                    customers.Add(new Customer { FirstName = "Teerth", LastName = "Vadgama" });

                    foreach (var c in newCustomers)
                    {
                        Console.WriteLine(c.FirstName);
                    }
                }

                public class Customer
                {
                    public string FirstName { get; set; }
                    public string LastName { get; set; }
                }
            }
        }

    More on my personal blog @www.dotnetjalps.com

    Read the article

  • Open GL stars are not rendering

    - by Darestium
    I doing Nehe's Open GL Lesson 9. I'm using SFML for windowing, the strange thing is no stars are rendering. #include <SFML/System.hpp> #include <SFML/Window.hpp> #include <SFML/Graphics.hpp> #include <iostream> void processEvents(sf::Window *app); void processInput(sf::Window *app); void renderGlScene(sf::Window *app); void init(); int loadResources(); const int NUM_OF_STARS = 50; float triRot = 0.0f; float quadRot = 0.0f; bool twinkle = false; bool tKey = false; float zoom = 15.0f; float tilt = 90.0f; float spin = 0.0f; unsigned int loop; unsigned int texture_handle[1]; typedef struct { int r, g, b; float distance; float angle; } stars; stars star[NUM_OF_STARS]; int main() { sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 9"); app.UseVerticalSync(false); init(); if (loadResources() == -1) { return EXIT_FAILURE; } while (app.IsOpened()) { processEvents(&app); processInput(&app); renderGlScene(&app); app.Display(); } return EXIT_SUCCESS; } int loadResources() { sf::Image img_data; // Load Texture if (!img_data.LoadFromFile("data/images/star.bmp")) { std::cout << "Could not load data/images/star.bmp"; return -1; } // Generate 1 texture glGenTextures(1, &texture_handle[0]); // Linear filtering glBindTexture(GL_TEXTURE_2D, texture_handle[0]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img_data.GetWidth(), img_data.GetHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img_data.GetPixelsPtr()); return 0; } void processInput(sf::Window *app) { const sf::Input& input = app->GetInput(); if (input.IsKeyDown(sf::Key::T) && !tKey) { tKey = true; twinkle = !twinkle; } if (!input.IsKeyDown(sf::Key::T)) { tKey = false; } if (input.IsKeyDown(sf::Key::Up)) { tilt -= 0.05f; } if (input.IsKeyDown(sf::Key::Down)) { tilt += 0.05f; } if (input.IsKeyDown(sf::Key::PageUp)) { zoom -= 0.02f; } if (input.IsKeyDown(sf::Key::Up)) { zoom += 0.02f; } } void init() { glClearDepth(1.f); glClearColor(0.f, 0.f, 0.f, 0.f); // Enable texturing glEnable(GL_TEXTURE_2D); //glDepthMask(GL_TRUE); // Setup a perpective projection glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.f, 1.f, 1.f, 500.f); glShadeModel(GL_SMOOTH); glBlendFunc(GL_SRC_ALPHA, GL_ONE); glEnable(GL_BLEND); for (loop = 0; loop < NUM_OF_STARS; loop++) { star[loop].distance = (float)loop / NUM_OF_STARS * 5.0f; // Calculate distance from the centre // Give stars random rgb value star[loop].r = rand() % 256; star[loop].g = rand() % 256; star[loop].b = rand() % 256; } } void processEvents(sf::Window *app) { sf::Event event; while (app->GetEvent(event)) { if (event.Type == sf::Event::Closed) { app->Close(); } if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape) { app->Close(); } } } void renderGlScene(sf::Window *app) { app->SetActive(); // Clear color depth buffer glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Apply some transformations glMatrixMode(GL_MODELVIEW); glLoadIdentity(); // Select texture glBindTexture(GL_TEXTURE_2D, texture_handle[0]); for (loop = 0; loop < NUM_OF_STARS; loop++) { glLoadIdentity(); // Reset The View Before We Draw Each Star glTranslatef(0.0f, 0.0f, zoom); // Zoom Into The Screen (Using The Value In 'zoom') glRotatef(tilt, 1.0f, 0.0f, 0.0f); // Tilt The View (Using The Value In 'tilt') glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f); // Rotate To The Current Stars Angle glTranslatef(star[loop].distance, 0.0f, 0.0f); // Move Forward On The X Plane 
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f); // Cancel The Current Stars Angle glRotatef(-tilt,1.0f,0.0f,0.0f); // Cancel The Screen Tilt if (twinkle) { glColor4ub(star[(NUM_OF_STARS - loop) - 1].r, star[(NUM_OF_STARS - loop)-1].g, star[(NUM_OF_STARS - loop) - 1].b, 255); glBegin(GL_QUADS); // Begin Drawing The Textured Quad glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f); glEnd(); // Done Drawing The Textured Quad } glRotatef(spin,0.0f,0.0f,1.0f); // Rotate The Star On The Z Axis // Assign A Color Using Bytes glColor4ub(star[loop].r, star[loop].g, star[loop].b, 255); glBegin(GL_QUADS); // Begin Drawing The Textured Quad glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,-1.0f, 0.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,-1.0f, 0.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f); glEnd(); // Done Drawing The Textured Quad spin += 0.01f; // Used To Spin The Stars star[loop].angle += (float)loop / NUM_OF_STARS; // Changes The Angle Of A Star star[loop].distance -= 0.01f; // Changes The Distance Of A Star if (star[loop].distance < 0.0f) { star[loop].distance += 5.0f; // Move The Star 5 Units From The Center star[loop].r = rand() % 256; // Give It A New Red Value star[loop].g = rand() % 256; // Give It A New Green Value star[loop].b = rand() % 256; // Give It A New Blue Value } } } I've looked over the code atleast 10 times now and I can't figure out the problem. Any help would be much appreciated.

    Read the article

  • Smooth terrain rendering

    - by __dominic
    I'm trying to render a smooth terrain with Direct3D. I've got a 50*50 grid with all y values = 0, and a set of 3D points that indicate the location on the grid and the depth or height of the "valley" or "hill". I need to make the y values of the grid vertices higher or lower depending on how close they are to each 3D point, so that in the end I have a smooth terrain renderer. I'm not at all sure how to do this. I've tried changing the height of the vertices based on the distance to each point just using this basic formula: dist = a² + b² + c², where a, b and c are the x, y, and z distances from a vertex to a 3D point. The result I get with this is not smooth at all. I'm thinking there is probably a better way. Here is a screenshot of what I've got for the moment: https://dl.dropbox.com/u/2562049/terrain.jpg
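    A common way to get a smooth result is to treat each control point as a radial bump with a smooth falloff (for example a Gaussian) and sum the weighted contributions at every grid vertex, using only the horizontal distance. A minimal sketch of that idea in Java; the grid layout, the falloff radius and the ControlPoint class are illustrative assumptions, not part of the original code:

        // Hypothetical control point: a grid position plus the desired peak height (+) or depth (-).
        class ControlPoint {
            float x, z, height;
            ControlPoint(float x, float z, float height) { this.x = x; this.z = z; this.height = height; }
        }

        class TerrainDeformer {
            // Raises or lowers heights[row][col] using a Gaussian falloff around each control point.
            // 'radius' controls how far a hill or valley spreads; a larger radius gives a smoother blend.
            static void apply(float[][] heights, ControlPoint[] points, float radius) {
                float twoSigmaSq = 2.0f * radius * radius;
                for (int z = 0; z < heights.length; z++) {
                    for (int x = 0; x < heights[z].length; x++) {
                        float y = 0.0f;
                        for (ControlPoint p : points) {
                            float dx = x - p.x;
                            float dz = z - p.z;
                            float distSq = dx * dx + dz * dz;   // horizontal distance only, not the y difference
                            y += p.height * (float) Math.exp(-distSq / twoSigmaSq);
                        }
                        heights[z][x] = y;
                    }
                }
            }
        }

    Because exp(-distSq / twoSigmaSq) falls off continuously with distance, neighbouring vertices get similar heights and the surface stays smooth, whereas the raw squared distance alone has no bounded falloff.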

    Read the article

  • Libgdx - 2D Mesh rendering overlap glitch

    - by user46858
    I am trying to render a 2D circle segment mesh (quarter circle)using Libgdx/Opengl ES 2.0 but I seem to be getting an overlapping issue as seen in the picture attached. I cant seem to find the cause of the problem but the overlapping disappears/reappears if I drag and resize the window to random sizes. The problem occurs on both pc and android. The strange thing is the first two segments atleast dont seem to be causing any overlapping only the third and/or forth segment.......even though they are all rendered using the same mesh object..... I have spent ages trying to find the cause of the problem before posting here for help so ANY help/advice in finding the cause of this problem would be really appreciated. public class MyGdxGame extends Game { private SpriteBatch batch; private Texture texture; private OrthographicCamera myCamera; private float w; private float h; private ShaderProgram circleSegShader; private Mesh circleScaleSegMesh; private Stage stage; private float TotalSegments; Vector3 virtualres; @Override public void create() { w = Gdx.graphics.getWidth(); h = Gdx.graphics.getHeight(); batch = new SpriteBatch(); ViewPortsize = new Vector2(); TotalSegments = 4.0f; virtualres = new Vector3(1280.0f, 720.0f, 0.0f); myCamera = new OrthographicCamera(); myCamera.setToOrtho(false, w, h); texture = new Texture(Gdx.files.internal("data/libgdx.png")); texture.setFilter(TextureFilter.Linear, TextureFilter.Linear); circleScaleSegMesh = createCircleMesh_V3(0.0f,0.0f,200.0f, 30.0f,3, (360.0f /TotalSegments) ); circleSegShader = loadShaderFromFile(new String("circleseg.vert"), new String("circleseg.frag")); shaderProgram.pedantic = false; stage = new Stage(); stage.setViewport(new ExtendViewport(w, h)); Gdx.input.setInputProcessor(stage); } @Override public void render() { .... 
//render renderInit(); renderCircleScaledSegment(); } @Override public void resize(int width, int height) { stage.getViewport().update(width, height, true); myCamera.position.set( virtualres.x/2.0f, virtualres.y/2.0f, 0.0f); myCamera.update(); } public void renderInit(){ Gdx.gl20.glClearColor(1.0f, 1.0f, 1.0f, 0.0f); Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT); batch.setShader(null); batch.setProjectionMatrix(myCamera.combined); } public void renderCircleScaledSegment(){ Gdx.gl20.glEnable(GL20.GL_DEPTH_TEST); Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA); Gdx.gl20.glEnable(GL20.GL_BLEND); batch.begin(); circleSegShader.begin(); Matrix4 modelMatrix = new Matrix4(); Matrix4 cameraMatrix = new Matrix4(); Matrix4 cameraMatrix2 = new Matrix4(); Matrix4 cameraMatrix3 = new Matrix4(); Matrix4 cameraMatrix4 = new Matrix4(); cameraMatrix = myCamera.combined.cpy(); modelMatrix.idt().rotate(new Vector3(0.0f,0.0f,1.0f), 0.0f - ((360.0f /TotalSegments)/ 2.0f)).trn(virtualres.x/2.0f,virtualres.y/2.0f, 0.0f); cameraMatrix.mul(modelMatrix); cameraMatrix2 = myCamera.combined.cpy(); modelMatrix.idt().rotate(new Vector3(0.0f,0.0f,1.0f), 0.0f - ((360.0f /TotalSegments)/ 2.0f) +(360.0f /TotalSegments) ).trn(virtualres.x/2.0f,virtualres.y/2.0f, 0.0f); cameraMatrix2.mul(modelMatrix); cameraMatrix3 = myCamera.combined.cpy(); modelMatrix.idt().rotate(new Vector3(0.0f,0.0f,1.0f), 0.0f - ((360.0f /TotalSegments)/ 2.0f) +(2*(360.0f /TotalSegments))).trn(virtualres.x/2.0f,virtualres.y/2.0f, 0.0f); cameraMatrix3.mul(modelMatrix); cameraMatrix4 = myCamera.combined.cpy(); modelMatrix.idt().rotate(new Vector3(0.0f,0.0f,1.0f),0.0f - ((360.0f /TotalSegments)/ 2.0f) +(3*(360.0f /TotalSegments)) ).trn(virtualres.x/2.0f,virtualres.y/2.0f, 0.0f); cameraMatrix4.mul(modelMatrix); Vector3 box2dpos = new Vector3(0.0f, 0.0f, 0.0f); circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix); circleSegShader.setUniformf("u_box2dpos", box2dpos); circleSegShader.setUniformi("u_texture", 0); texture.bind(); circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES); circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix2); circleSegShader.setUniformf("u_box2dpos", box2dpos); circleSegShader.setUniformi("u_texture", 0); texture.bind(); circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES); circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix3); circleSegShader.setUniformf("u_box2dpos", box2dpos); circleSegShader.setUniformi("u_texture", 0); texture.bind(); circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES); circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix4); circleSegShader.setUniformf("u_box2dpos", box2dpos); circleSegShader.setUniformi("u_texture", 0); texture.bind(); circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES); circleSegShader.end(); batch.flush(); batch.end(); Gdx.gl20.glDisable(GL20.GL_DEPTH_TEST); Gdx.gl20.glDisable(GL20.GL_BLEND); } public Mesh createCircleMesh_V3(float cx, float cy, float r_out, float r_in, int num_segments, float segmentSizeDegrees){ float theta = (float) (2.0f * MathUtils.PI / (num_segments * (360.0f / segmentSizeDegrees))); float c = MathUtils.cos(theta);//precalculate the sine and cosine float s = MathUtils.sin(theta); float t,t2; float x = r_out;//we start at angle = 0 float y = 0; float x2 = r_in;//we start at angle = 0 float y2 = 0; float[] meshCoords = new float[num_segments *2 *3 *7]; int arrayIndex = 0; //array for triangles without indices for(int ii = 0; ii < num_segments; ii++) { 
meshCoords[arrayIndex] = x2+cx; meshCoords[arrayIndex +1] = y2+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; meshCoords[arrayIndex] = x+cx; meshCoords[arrayIndex +1] = y+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; t = x; x = c * x - s * y; y = s * t + c * y; meshCoords[arrayIndex] = x+cx; meshCoords[arrayIndex +1] = y+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; meshCoords[arrayIndex] = x2+cx; meshCoords[arrayIndex +1] = y2+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; meshCoords[arrayIndex] = x+cx; meshCoords[arrayIndex +1] = y+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; t2 = x2; x2 = c * x2 - s * y2; y2 = s * t2 + c * y2; meshCoords[arrayIndex] = x2+cx; meshCoords[arrayIndex +1] = y2+cy; meshCoords[arrayIndex +2] = 0.0f; meshCoords[arrayIndex +3] = 63.0f/255.0f; meshCoords[arrayIndex +4] = 139.0f/255.0f; meshCoords[arrayIndex +5] = 217.0f/255.0f; meshCoords[arrayIndex +6] = 0.7f; arrayIndex = arrayIndex + 7; } Mesh myMesh = new Mesh(VertexDataType.VertexArray, false, meshCoords.length, 0, new VertexAttribute(VertexAttributes.Usage.Position, 3, "a_position"), new VertexAttribute(VertexAttributes.Usage.Color, 4, "a_color")); myMesh.setVertices(meshCoords); return myMesh; } }

    Read the article

  • Rendering an image from an embedded Web Browser (C# WPF application)

    - by The Official Microsoft IIS Site
    How it all started: So this week I was working on an extension for WebMatrix; Luke Sampson of http://StudioStyle.es just integrated a cool piece of code from Matt MCElheny. The news is that the studiostyle.es website now supports converting the over 1,000 themes uploaded for Visual Studio 2010 into the WebMatrix format, and hence we automatically got a very large load of themes to choose from. Still, we aspired to an even better experience; currently the WebMatrix user will have to install the ColorThemeEditor...(read more)

    Read the article

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain with a lot of polygons. To render this terrain I thought of the following techniques:
    - Using a height-map instead of raw meshes: Yes, but I want to create a lot of caves and stuff that simply won't work with height-maps.
    - Using voxels: Yes, but I think that this would be too much, since I don't even want to support changing terrain.
    - Splitting the terrain into multiple chunks and doing some sort of LOD with the mesh: Yes, but how would I do that? Tessellation usually creates more detail, not less.
    - Precomputing the same mesh in lower-poly versions (like Mudbox does) and, depending on the distance, rendering one of those meshes: Graphics memory is limited, and uploading only the chunks won't solve that problem since the traffic would be too high.
    IMO the last one sounds really good, but imagine the following process: upload and render the chunks depending on the current player position [no problem]; the player walks straight forward; now we may have to swap one of the low-poly chunks for the high-poly one; so remove the low-poly chunk and load the high-poly chunk [already too much traffic here, I think]. I am not very experienced in graphics programming and maybe the above process is totally okay, but somehow I think it is too much. And what about the disk space it would require? I think 3 levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)
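    For the precomputed-LOD idea, the usual pattern is to pick a level per chunk from the camera distance each frame and only touch GPU memory when a chunk's level actually changes. A rough sketch of that selection logic (Java, with hypothetical Chunk/Mesh types, not tied to any particular API):

        class Mesh { /* placeholder for a GPU-resident mesh handle */ }

        class Chunk {
            float centerX, centerY, centerZ;
            Mesh[] lodMeshes;        // index 0 = highest detail, last = lowest detail
            int currentLod = -1;     // -1 = nothing uploaded yet

            void update(float camX, float camY, float camZ, float[] lodDistances) {
                float dx = centerX - camX, dy = centerY - camY, dz = centerZ - camZ;
                float dist = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);

                // Choose the first distance band the chunk falls into.
                int wanted = lodMeshes.length - 1;
                for (int i = 0; i < lodDistances.length && i < lodMeshes.length; i++) {
                    if (dist < lodDistances[i]) { wanted = i; break; }
                }

                // Upload traffic only happens on a level change, not every frame.
                if (wanted != currentLod) {
                    // unloadFromGpu(currentLod); uploadToGpu(wanted);   // hypothetical helpers
                    currentLod = wanted;
                }
            }
        }

    Adding a little hysteresis to the thresholds (switch up and switch down at slightly different distances) keeps a chunk from thrashing between two levels when the player hovers near a boundary.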

    Read the article

  • Rendering Unity across multiple monitors

    - by N0xus
    At the moment I am trying to get Unity to run across 2 monitors. I've done some research and know that this is, strictly speaking, possible. There is a workaround where you basically have to fluff your window size in order to get Unity to render across both monitors. What I've done is create a new custom screen resolution that takes in the width of both of my monitors, as seen in the following image; it's the 3840 x 1080 one. However, when I go to run my Unity game exe, that size isn't available. All I get is the following: my custom size should be at the very bottom, but isn't. Is there something I haven't done, or missed, that will get Unity to take in my custom screen size when it comes to running my game through its exe? Oddly enough, inside the Unity editor my custom screen size is picked up and I can set my game window to it. Is there something that I have forgotten to do when I build and run the game from the file menu? Has someone ever beaten this issue before?

    Read the article

  • Slick2D Rendering Lots of Polygons

    - by Hazzard
    I'm writing a little isometric game using Slick. The world terrain is made up of lots of quadrilaterals. In a small world that is 128 by 128 squares, over 16,000 quadrilaterals need to be rendered. This puts my pretty powerful computer down to 30 fps. I've thought about caching "chunks" of the world so only single chunks would ever need updating at a time, but I don't know how to do this, and I am sure there are other ways to optimize it besides that. Maybe I'm doing the whole thing wrong; surely fancy 3D games that run fine on my machine are more intensive than this. My question is: how can I improve the FPS, and am I doing something wrong? Or does it actually take that much power to render those polygons? Here is the source code for the render method in my game state. It iterates through a 2D array of heights and draws polygons based on the height.

        public void render(GameContainer container, StateBasedGame game, Graphics gfx) throws SlickException {
            gfx.translate(offsetX * d + container.getWidth() / 2, offsetY * d + container.getHeight() / 2);
            gfx.scale(d, d);
            for (int y = 0; y < placeholder.length; y++) { // x & y are isometric diag
                for (int x = 0; x < placeholder[0].length; x++) {
                    Polygon poly;
                    int hor = TestState.TILE_WIDTH * (x - y); // hor and ver are orthagonal
                    int W = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x]; // points to go off of
                    int S = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x + 1];
                    int E = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x + 1];
                    int N = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x];
                    if (placeholder[y][x] == null) {
                        poly = new Polygon(); // Create actual surface polygon
                        poly.addPoint(-TestState.TILE_WIDTH + hor, W);
                        poly.addPoint(hor, S + TestState.TILE_HEIGHT);
                        poly.addPoint(TestState.TILE_WIDTH + hor, E);
                        poly.addPoint(hor, N - TestState.TILE_HEIGHT);
                        float z = ((float) heights[y][x + 1] - heights[y + 1][x]) / 32 + 0.5f;
                        placeholder[y][x] = new Tile(poly, new Color(z, z, z));
                        //ShapeRenderer.fill(placeholder[y][x]);
                    }
                    if (true) { // ONLY draw tile if it's on screen
                        gfx.setColor(placeholder[y][x].getColor());
                        ShapeRenderer.fill(placeholder[y][x]);
                        //gfx.fill(placeholder[y][x]);
                        //placeholder[y][x].
                        // DRAW EDGES
                        if (y + 1 == placeholder.length) { // draw South foundation edges
                            gfx.setColor(Color.gray);
                            Polygon found = new Polygon();
                            found.addPoint(-TestState.TILE_WIDTH + hor, W);
                            found.addPoint(hor, S + TestState.TILE_HEIGHT);
                            found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1));
                            found.addPoint(-TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y));
                            gfx.fill(found);
                        }
                        if (x + 1 == placeholder[0].length) { // north
                            gfx.setColor(Color.darkGray);
                            Polygon found = new Polygon();
                            found.addPoint(TestState.TILE_WIDTH + hor, E);
                            found.addPoint(hor, S + TestState.TILE_HEIGHT);
                            found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1));
                            found.addPoint(TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y));
                            gfx.fill(found);
                        } //*/
                    }
                }
            }
        }
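    For the "cache chunks" idea, one way to do it in Slick2D is to render a block of tiles once into an offscreen Image via Image.getGraphics(), then draw that single cached image every frame and only rebuild it when a tile inside the chunk changes. A rough sketch, assuming that offscreen-image route; the chunk size and the ChunkCache/TileSource names are made up for illustration:

        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import org.newdawn.slick.SlickException;

        class ChunkCache {
            static final int CHUNK_PIXELS = 512;          // assumed chunk size in pixels
            private final Image[][] chunkImages;

            ChunkCache(int chunksX, int chunksY) {
                chunkImages = new Image[chunksY][chunksX];
            }

            Image getOrBuild(int cx, int cy, TileSource tiles) throws SlickException {
                if (chunkImages[cy][cx] == null) {
                    Image img = new Image(CHUNK_PIXELS, CHUNK_PIXELS);
                    Graphics g = img.getGraphics();       // offscreen Graphics for this image
                    tiles.drawChunk(g, cx, cy);           // hypothetical hook into the existing polygon drawing
                    g.flush();                            // make sure the drawing ends up in the image
                    chunkImages[cy][cx] = img;
                }
                return chunkImages[cy][cx];
            }

            void invalidate(int cx, int cy) {             // call when a tile inside the chunk changes
                chunkImages[cy][cx] = null;
            }
        }

        interface TileSource {                            // abstraction over the current per-tile drawing code
            void drawChunk(Graphics g, int chunkX, int chunkY) throws SlickException;
        }

    Each frame then draws at most a handful of chunk images instead of 16,000 polygons, and a changed height only forces the one containing chunk to be rebuilt.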

    Read the article

  • Realtime rendering using a ray tracing engine

    - by Keyhan Asghari
    I want to render an object that has a mesh with one million hexagonal elements (100 * 100 * 100). Lights, shadows and textures are not important, and each element has a solid color. Finally, the only actions I want are simply rotating the object, zooming and panning. I am wondering which ray tracing engine is better for my conditions, or do I have to take another approach? Any help will be appreciated.

    Read the article

  • XNA Rendering vertices that only appear within the cameras view

    - by user1157885
    I'm making a game in XNA, and I recall hearing that professionally made games use a technique to only render the polygons that appear within the camera's projection. I've been trying to find something on this so I can do something similar in my game; could anyone point me in the right direction? Right now all I have is a plane/grid of vertices whose X/Y you can set, which is drawn using DrawUserIndexedPrimitives, but I plan to make a bunch of props as scenery items and I can imagine myself running into issues later on if I don't address this now. Thanks
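    The technique in question is usually called frustum culling: before issuing a draw call, test a cheap bounding volume for the object against the six planes of the camera's view frustum and skip it if it lies entirely outside (in XNA, BoundingFrustum together with BoundingSphere/BoundingBox covers this test). A language-agnostic sketch of the sphere-versus-frustum check, written in Java here with the plane extraction left out:

        // Plane in the form n·p + d = 0, with the normal pointing into the frustum.
        class Plane {
            float nx, ny, nz, d;
            float signedDistance(float x, float y, float z) {
                return nx * x + ny * y + nz * z + d;
            }
        }

        class FrustumCuller {
            // Six planes extracted from the view-projection matrix (left, right, top, bottom, near, far).
            Plane[] planes = new Plane[6];

            // Conservative test: if the sphere is completely behind any single plane, it cannot be visible.
            boolean isVisible(float cx, float cy, float cz, float radius) {
                for (Plane p : planes) {
                    if (p.signedDistance(cx, cy, cz) < -radius) {
                        return false;   // entirely outside this plane, so cull the object
                    }
                }
                return true;            // inside or intersecting every plane, so draw it
            }
        }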

    Read the article

  • Rendering only a part of the screen in high detail

    - by Bart van Heukelom
    If graphics are rendered for a large viewing angle (e.g. a very large TV or a VR headset), the viewer can't actually focus on the entire image, just a part of it. (Actually, this is the case for regular sized screens as well.) Combined with a way to track the viewer's eyes, you could theoretically exploit this and render the graphics away from the viewer's focus with progressively less details and resolution, gaining performance, without losing perceived quality. Are there any techniques for this available or under development today?

    Read the article

  • Map rendering Libgdx Java

    - by user3165683
    Ok, so I am trying to create a 2D non-movable random tiled map. This is what I have so far:

        private void generateTile() {
            System.out.print("tiletry1");
            while (loadedTiles != 8100) {
                System.out.print("tiletry");
                Texture currentTile = null;
                int tileX = 0;
                int tileY = 0;
                if (tileX == 120);
                tileY = 16;
                tileX = 0;
                game.batch.begin();
                switch (MathUtils.random(2)) {
                    case 0:
                        //game.batch.draw(tile1, tileX, tileY);
                        System.out.print("tile1");
                        currentTile = tile1;
                        break;
                    case 1:
                        //game.batch.draw(tile2, tileX, tileY);
                        System.out.print("tile2");
                        currentTile = tile2;
                        break;
                    case 2:
                        //game.batch.draw(tile3, tileX, tileY);
                        System.out.print("tile3");
                        currentTile = tile3;
                        break;
                }
                tileX += 16;
                loadedTiles++;
                game.batch.draw(currentTile, tileX, tileY);
                game.batch.end();
            }
        }

    However, I can't see any of the tiles and the screen just looks green. This method is above my render method, which I have:

        camera.update();
        batch.setProjectionMatrix(camera.combined);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        game.batch.begin();
        //other render stuff

    Why am I not able to see the tiles?
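    For comparison, the pattern most libGDX tile examples follow is to pick the random tiles once up front (for example into a 2D int array) and then draw the whole grid every frame between a single batch.begin()/batch.end() pair, with the clear call happening before drawing. A rough sketch; the field names (tileChoices, map) are placeholders rather than names from the code above:

        // Assumed fields:  Texture[] tileChoices = { tile1, tile2, tile3 };  SpriteBatch batch;  int[][] map;

        private void generateMap(int cols, int rows) {
            map = new int[rows][cols];
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                    map[y][x] = MathUtils.random(2);   // 0, 1 or 2, chosen once and remembered
                }
            }
        }

        private void drawMap() {
            batch.begin();
            for (int y = 0; y < map.length; y++) {
                for (int x = 0; x < map[y].length; x++) {
                    batch.draw(tileChoices[map[y][x]], x * 16, y * 16);   // 16 px tiles, as in the post
                }
            }
            batch.end();
        }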

    Read the article

  • Entity System and rendering

    - by hayer
    Okay, here is what I know so far: the entity contains a component (data storage) which holds information like:
    - Texture/sprite
    - Shader
    - etc.
    And then I have a renderer system which draws all of this. But what I don't understand is how the renderer should be designed. Should I have one component for each "visual type": one component without a shader, one with a shader, etc.? I just need some input on what the "correct way" to do this is, plus tips and pitfalls to watch out for.
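    One common arrangement (a sketch of one option, not the single "correct way") is a single render component whose optional fields drive the renderer, so the system branches on the data instead of multiplying component types. Hypothetical names throughout:

        // One render component; the shader is simply optional data.
        class RenderComponent {
            Texture texture;
            Shader shader;      // null means "draw with the default pipeline"
            float x, y;
        }

        class RenderSystem {
            // Called once per frame with every entity that has a RenderComponent.
            void render(Iterable<RenderComponent> renderables, Renderer renderer) {
                for (RenderComponent r : renderables) {
                    if (r.shader != null) {
                        renderer.drawWithShader(r.texture, r.shader, r.x, r.y);
                    } else {
                        renderer.draw(r.texture, r.x, r.y);
                    }
                }
            }
        }

        // Minimal stand-ins so the sketch is self-contained; a real engine supplies these.
        class Texture {}
        class Shader {}
        interface Renderer {
            void draw(Texture t, float x, float y);
            void drawWithShader(Texture t, Shader s, float x, float y);
        }

    A typical pitfall to plan for is draw-call ordering: sorting the renderables by shader and texture before drawing keeps state changes down, which usually matters more than how the components themselves are split up.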

    Read the article

  • jQuery - deferred versus promise

    - by fletchsod
    What is the difference between Deferred and Promise other than the jQuery versions? What should I use for my need? I only want to call the fooExecute(). I only need the fooStart() and fooEnd() to toggle the html div status for example.

        //I'm using jQuery v2.0.0

        function fooStart() { /* Start Notification */ }
        function fooEnd() { /* End Notification */ }
        function fooExecute() { /* Execute the scripts */ }

        $('#button1').on('click', function() {
            var deferred1 = $.Deferred();
            var promise1 = $.Promise();

            deferred1.???
            promise1.???
        });

    Read the article

  • Rendering Machine Configuration [closed]

    - by user1184598
    I would like to buy a computer for rendering purposes. The rendering is based on DirectX on Windows 7. It needs to support multi-threaded rendering, which is highly demanding (around 20 threads doing rendering and connecting to an MS SQL Server job simultaneously). Could I have some suggestions on the configuration of the computer? E.g. what kind of CPU, GPU and motherboard should be used? The budget is around US$1000. Thank you very much.

    Read the article

  • How can I support the creation and rendering of both interior and exterior environments?

    - by Nick
    Say I already have a renderer that can support outdoor terrain and environment rendering. How would I go about adding support for interior environments (for example, like World of Warcraft's dungeons) to my game? I'm interested both in how I should fit the interiors into my content creation process (for example, I thought about leaving holes in the terrain mesh into which I can "paste" the interior dungeon mesh at runtime) and how to render them (it seems like I'd want a different rendering flow other than a blended texture rendering phase that terrain uses).

    Read the article

  • Two graphical entities, smooth blending between them (e.g. asphalt and grass)

    - by Gabriel Conrad
    Suppose that in a scene there are, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture, and its model is a long triangle strip that might bifurcate at some point into other tinier strips; suppose also that the meadow is covered with grass. What can be done to make the two graphical entities seem less like they were cut out from a photo and just pasted one on top of the other at the edges? To better understand the problem, picture a strip of asphalt and a plane covered with grass. The grass texture should also "enter" the tarmac strip a little bit at the edges (i.e. a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, but that involves a serious restriction on how the strip is modeled and how its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light, which could be a terrible failure to achieve correct results. So, is there a better, or at least a more obvious, way that's widely used in the game dev industry?

    Read the article

  • How to prevent "underwater sight" in games

    - by CPP_Person
    In many games where the player can go underwater, when you position the camera so that the top half of the screen is in the air and the bottom half of the screen is in the water, it's almost like the water doesn't exist and the player is just... flying slowly with water sounds. Is there a logical way to solve this? An algorithm? It doesn't seem like any solution has come up yet, since many games still have this. I don't want to make the same mistake.

    Read the article

  • What are some of the more commonly used projectile rendering techniques?

    - by KlashnikovKid
    Couldn't find a duplicate question (a bit surprising to me), but anywho, I'm starting to get near implementing the rendering of projectiles for my game. My question is: what are some good techniques for efficiently rendering projectiles? I would like emphasis on techniques that leave room for the projectiles to be "rich" and dynamic (cool to look at!). I'm also using DX11 for my rendering engine, so bleeding-edge techniques that can make use of that would be much appreciated too. Thanks!

    Read the article

  • Lazy/deferred loading of a CollectionViewSource?

    - by Shimmy
    When you create a CollectionViewSource in the Resources section, is the Source you set loaded when the resources are initialized (i.e. when the Resources holder is initialized), or when the data is bound? Is there a XAML way to make a CollectionViewSource lazy-load? Deferred-load? Explicit-load?

    Read the article

  • Postfix - suspend domain from which deferred status was received?

    - by Al Bundy
    Is there a possibility to make Postfix stop trying (for a period of time) to send emails to a domain from which it received a deferred response? Currently my Postfix goes through each address in the queue. Please see the below example. At 09:48:32 the status=deferred appears. After this Postfix should stop trying to send stuff to the yahoo.com domain. Jun 6 09:48:20 mailer postfix/smtp[8644]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[98.138.112.35]:25, delay=37163, delays=36519/638/1.2/4.9, dsn=2.0.0, status=sent (250 ok dirdel 5/0) Jun 6 09:48:20 mailer postfix/smtp[8644]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[98.138.112.35]:25, delay=37163, delays=36519/638/1.2/4.9, dsn=2.0.0, status=sent (250 ok dirdel 5/0) Jun 6 09:48:20 mailer postfix/smtp[8644]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[98.138.112.35]:25, delay=37163, delays=36519/638/1.2/4.9, dsn=2.0.0, status=sent (250 ok dirdel 5/0) Jun 6 09:48:30 mailer postfix/smtp[8643]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[63.250.192.46]:25, delay=37173, delays=36519/645/1.4/7.4, dsn=2.0.0, status=sent (250 ok dirdel 5/0) Jun 6 09:48:30 mailer postfix/smtp[8643]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[63.250.192.46]:25, delay=37173, delays=36519/645/1.4/7.4, dsn=2.0.0, status=sent (250 ok dirdel 5/0) Jun 6 09:48:30 mailer postfix/smtp[8643]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[63.250.192.46]:25, delay=37173, delays=36519/645/1.4/7.4, dsn=2.0.0, status=sent (250 ok dirdel 5/0) Jun 6 09:48:30 mailer postfix/smtp[8643]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[63.250.192.46]:25, delay=37173, delays=36519/645/1.4/7.4, dsn=2.0.0, status=sent (250 ok dirdel 5/0) Jun 6 09:48:30 mailer postfix/smtp[8643]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[63.250.192.46]:25, delay=37173, delays=36519/645/1.4/7.4, dsn=2.0.0, status=sent (250 ok dirdel 5/0) Jun 6 09:48:32 mailer postfix/smtp[8644]: C779A233C0: host mta6.am0.yahoodns.net[98.138.112.38] said: 421 4.7.0 [TS01] Messages from x.x.x.250 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html (in reply to MAIL FROM command) Jun 6 09:48:32 mailer postfix/smtp[8644]: C779A233C0: lost connection with mta6.am0.yahoodns.net[98.138.112.38] while sending RCPT TO Jun 6 09:48:33 mailer postfix/smtp[8644]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[98.138.112.35]:25, delay=37176, delays=36519/655/2.5/0.18, dsn=4.7.0, status=deferred (host mta7.am0.yahoodns.net[98.138.112.35] said: 421 4.7.0 [TS01] Messages from x.x.x.250 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html (in reply to MAIL FROM command)) Jun 6 09:48:33 mailer postfix/smtp[8644]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[98.138.112.35]:25, delay=37176, delays=36519/655/2.5/0.18, dsn=4.7.0, status=deferred (host mta7.am0.yahoodns.net[98.138.112.35] said: 421 4.7.0 [TS01] Messages from x.x.x.250 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html (in reply to MAIL FROM command)) Jun 6 09:48:34 mailer postfix/smtp[8644]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[98.138.112.35]:25, delay=37176, delays=36519/655/2.5/0.18, dsn=4.7.0, status=deferred (host mta7.am0.yahoodns.net[98.138.112.35] said: 421 4.7.0 [TS01] Messages from x.x.x.250 
temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html (in reply to MAIL FROM command)) Jun 6 09:48:34 mailer postfix/smtp[8644]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[98.138.112.35]:25, delay=37176, delays=36519/655/2.5/0.18, dsn=4.7.0, status=deferred (host mta7.am0.yahoodns.net[98.138.112.35] said: 421 4.7.0 [TS01] Messages from x.x.x.250 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html (in reply to MAIL FROM command)) Jun 6 09:48:34 mailer postfix/smtp[8644]: C779A233C0: to=<[email protected]>, relay=mta7.am0.yahoodns.net[98.138.112.35]:25, delay=37176, delays=36519/655/2.5/0.18, dsn=4.7.0, status=deferred (host mta7.am0.yahoodns.net[98.138.112.35] said: 421 4.7.0 [TS01] Messages from x.x.x.250 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html (in reply to MAIL FROM command)) Jun 6 09:48:34 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37177, delays=36519/658/0/0.07, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO) Jun 6 09:48:34 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37177, delays=36519/658/0/0.18, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO) Jun 6 09:48:34 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37177, delays=36519/658/0/0.35, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO) Jun 6 09:48:34 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37177, delays=36519/658/0/0.4, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO) Jun 6 09:48:34 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37177, delays=36519/658/0/0.46, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO) Jun 6 09:48:35 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37179, delays=36519/660/0/0.16, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO) Jun 6 09:48:35 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37179, delays=36519/660/0/0.22, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO) Jun 6 09:48:36 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37179, delays=36519/660/0/0.31, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO) Jun 6 09:48:36 mailer postfix/error[8661]: C779A233C0: to=<[email protected]>, relay=none, delay=37179, delays=36519/660/0/0.36, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.138.112.35] while sending RCPT TO)

    Read the article

  • Google Web Toolkit Deferred Binding Issue

    - by snctln
    I developed a web app using GWT about 2 years ago, since then the application has evolved. In its current state it relies on fetching a single XML file and parsing the information from it. Overall this works great. A requirement of this app is that it needs to be able to be ran from the filesystem (file:///..) as well as the traditional model of running from a webserver (http://...) Fetching this file from a webserver works exactly as expected using a RequestBuilder object. When running the app from the filesystem Firefox, Opera, Safari, and Chrome all behave as expected. When running the app from the filesystem using IE7 or IE8 the RequestBuilder.send() call fails, the information about the error suggests that there is a problem accessing the file due to violating the same origin policy. The app worked as expected in IE6 but not in IE7 or IE8. So I looked at the source code of RequestBuilder.java and saw that the actual request was being executed with an XMLHttpRequest GWT object. So I looked at the source code for XMLHttpRequest.java and found out some information. Here is the code (starts at line 83 in XMLHttpRequest.java) public static native XMLHttpRequest create() /*-{ if ($wnd.XMLHttpRequest) { return new XMLHttpRequest(); } else { try { return new ActiveXObject('MSXML2.XMLHTTP.3.0'); } catch (e) { return new ActiveXObject("Microsoft.XMLHTTP"); } } }-*/; So basically if an XMLHttpRequest cannot be created (like in IE6 because it is not available) an ActiveXObject is used instead. I read up a little bit more on the IE implementation of XMLHttpRequest, and it appears that it is only supported for interacting with files on a webserver. I found a setting in IE8 (Tools-Internet Options-Advanced-Security-Enable native XMLHTTP support), when I uncheck this box my app works. I assume this is because I am more of less telling IE to not use their implementation of XmlHttpRequest, so GWT just uses an ActiveXObject because it doesn't think the native XmlHttpRequest is available. This fixes the problem, but is hardly a long term solution. I can currently catch a failed send request and verify that it was trying to fetch the XML file from the filesystem using normal GWT. What I would like to do in this case is catch the IE7 and IE8 case and have them use a ActiveXObject instead of a native XmlHttpRequest object. There was a posting on the GWT google group that had a supposed solution for this problem (link). Looking at it I can tell that it was created for an older version of GWT. I am using the latest release and think that this is more or less what I would like to do (use GWT deferred binding to detect a specific browser type and run my own implementation of XMLHttpRequest.java in place of the built in GWT implementation). 
    Here is the code that I am trying to use:

        package com.mycompany.myapp.client;

        import com.google.gwt.xhr.client.XMLHttpRequest;

        public class XMLHttpRequestIE7or8 extends XMLHttpRequest {

            // commented out the "override" so that eclipse and the ant build script don't throw errors
            //@Override
            public static native XMLHttpRequest create() /*-{
                try {
                    return new ActiveXObject('MSXML2.XMLHTTP.3.0');
                } catch (e) {
                    return new ActiveXObject("Microsoft.XMLHTTP");
                }
            }-*/;

            // have an empty protected constructor so the ant build script doesn't throw errors
            // the actual XMLHttpRequest constructor is empty as well so this shouldn't cause any problems
            protected XMLHttpRequestIE7or8() {
            }
        };

    And here are the lines that I added to my module xml:

        <replace-with class="com.mycompany.myapp.client.XMLHttpRequestIE7or8">
            <when-type-is class="com.google.gwt.xhr.client.XMLHttpRequest"/>
            <any>
                <when-property-is name="user.agent" value="ie7" />
                <when-property-is name="user.agent" value="ie8" />
            </any>
        </replace-with>

    From what I can tell this should work, but my code never runs. Does anyone have any idea of what I am doing wrong? Should I not do this via deferred binding and just use native javascript when I catch the fail case instead? Is there a different way of approaching this problem that I have not mentioned? All replies are welcome.

    Read the article

  • Use depth bias for shadows in deferred shading

    - by cubrman
    We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera, and the second one stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the shadowed areas ready for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve this is to use slope-scale depth bias and depth offsets; however, as we are doing things in a deferred way, we cannot employ this algorithm. Any attempts to set a depth bias when capturing the light's view depth produced no results or unsatisfying ones. So here is my question. The MSDN article has a convoluted explanation of the slope-scale bias:

    bias = (m × SlopeScaleDepthBias) + DepthBias

    where m is the maximum depth slope of the triangle being rendered, defined as:

    m = max( abs(delta z / delta x), abs(delta z / delta y) )

    Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?
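    One way this is commonly implemented by hand (a suggestion, not taken from the post) is to move the bias into the shadow-comparison shader itself. Shader models that expose screen-space derivatives (ddx/ddy in HLSL) let you estimate the depth slope directly from the light-space depth you are about to compare, so the MSDN formula can be reproduced per pixel:

    m ≈ max( abs(ddx(lightSpaceDepth)), abs(ddy(lightSpaceDepth)) )
    bias = (m × SlopeScaleDepthBias) + DepthBias
    inShadow = (lightSpaceDepth - bias) > shadowMapDepth

    An alternative that fits deferred pipelines well, since the G-buffer already stores normals, is to scale the bias by the angle between the surface normal N and the light direction L, e.g. bias = DepthBias + SlopeScaleDepthBias × tan(acos(N · L)), clamped to a sensible maximum; surfaces nearly parallel to the light rays get a larger offset, which is exactly where the acne shows up.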

    Read the article
