Search Results

Search found 751 results on 31 pages for 'quad'.

Page 5 of 31

  • GLSL Bokeh using Quads and Textures

    - by Notoriousaur
    I'm trying to create a depth-of-field effect with bokeh sprites in GLSL. Specifically, what I would like to do, for each pixel, is: see if the pixel is out of the focal range; if it is, draw a quad and apply a texture to provide a bokeh sprite. This kind of implementation is seen in the Unreal Engine and in Matt Pettineo's work; however, both implementations are in DX11 and I'm using OpenGL. I'm a bit stuck on the part where I draw a quad and apply a texture. Does anyone know how I can do this, or can you provide any relevant links? Thanks
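
    One common way to get a textured sprite per out-of-focus sample in OpenGL, without building quad geometry on the CPU, is to draw point sprites and sample the bokeh texture with gl_PointCoord in the fragment shader. The sketch below is only an illustration of that idea in LWJGL-style Java with an embedded GLSL string; the names and values are assumptions, not taken from the question.

        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL20;

        public class BokehSprites {
            // Fragment shader sketch: gl_PointCoord runs 0..1 across each point sprite,
            // so the bokeh texture can be sampled as if the point were a small textured quad.
            static final String BOKEH_FRAGMENT_SHADER =
                  "uniform sampler2D u_BokehTexture;\n"
                + "void main() {\n"
                + "    gl_FragColor = texture2D(u_BokehTexture, gl_PointCoord);\n"
                + "}";

            static void enablePointSprites(float spriteSizePixels) {
                GL11.glEnable(GL20.GL_POINT_SPRITE);   // expand each GL_POINTS vertex into a screen-aligned quad
                GL11.glTexEnvi(GL20.GL_POINT_SPRITE, GL20.GL_COORD_REPLACE, GL11.GL_TRUE);
                GL11.glPointSize(spriteSizePixels);    // sprite size in pixels
                // Then draw one GL_POINTS vertex per pixel judged outside the focal range,
                // typically with additive blending so overlapping bokeh sprites accumulate.
            }
        }

    A geometry shader that expands one point per sample into a real quad is the closer analogue of the DX11 implementations mentioned, but point sprites are the simpler starting point.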

    Read the article

  • IPSec Offload support in 82576GB controller for Linux

    - by Rodrigo Leal
    Due to a migration of servers to cloud computing, we bought several NICs that support mechanisms like SR-IOV and VMDq. Furthermore, as security risk was also a concern and we did not want to create more overhead on the processor, IPSec Offload support was essential. The model chosen was the Intel Gigabit ET2 Quad Port Server Adapter (with the 82576GB controller): http://ark.intel.com/products/49187/intel-gigabit-et2-quad-port-server-adapter However, we were unable to configure IPSec Offload on Linux. We tried to test on another server we have, one running Windows Server 2012 R2, but again without success. It seems that the driver for this controller is not available for Windows Server 2012 R2 or Linux. The test on Windows would be only for verification purposes; we will not use that platform. Could someone confirm this lack of support for Linux?

    Read the article

  • 64 bit Ubuntu sees half my RAM

    - by koehn
    This is on my AMD FX(tm)-4100 Quad-Core Processor (according to /proc/cpuinfo) on a machine with two 4GB RAM DIMMs. BIOS shows 8GB RAM installed. Any help would be appreciated. RAM: Extreme Performance Sector 5 G Series 8GB DDR3-1333 (PC3-1066) Enhanced Latency Dual Channel Desktop Memory Kit (Two 4GB Memory Modules) MB: GA-78LMT-S2P Socket AM3+ 760G mATX AMD Motherboard CPU: FX 4100 Black Edition 3.6GHz Quad-Core Socket AM3+ Boxed Processor Here's what the software says: $ free total used free shared buffers cached Mem: 3515100 3293656 221444 0 19260 2670352 -/+ buffers/cache: 604044 2911056 Swap: 3650556 90916 3559640 $ uname -a Linux mythbuntu 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux From lshw: *-memory description: System Memory physical id: 20 slot: System board or motherboard size: 4GiB *-bank:0 description: DIMM 1066 MHz (0.9 ns) product: None vendor: None physical id: 0 serial: None slot: A0 size: 2GiB width: 64 bits clock: 1066MHz (0.9ns) *-bank:1 description: DIMM 1066 MHz (0.9 ns) product: None vendor: None physical id: 1 serial: None slot: A1 size: 2GiB width: 64 bits clock: 1066MHz (0.9ns)

    Read the article

  • OpenGL ES 2. How do I Create a Basic Fading Streak Effect?

    - by dugla
    For the iPad app I am writing using OpenGL ES 2, I have a single quad - shaded using GLSL - that is dragged around the screen. Very basic. This works fine, but it is rather boring. I want to increase the coolness a bit in the following way: when the user drags the quad, it leaves a streak behind that fades over time. Continuous dragging would look a bit like a comet streaking across the night sky. What is the simplest way to implement this? Thanks.
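
    A common approach (sketched here without assuming anything about the asker's iOS setup) is to render into an offscreen texture that is never cleared - in practice two textures used in ping-pong fashion, since a texture cannot be sampled while it is being rendered to. Each frame, draw the previous frame's texture through a shader that scales alpha down, draw the quad on top at its new position, and then draw the result to the screen. The fade pass can be as small as the fragment shader below, shown as an embedded string purely for illustration; the uniform and varying names are made up.

        public class StreakShader {
            // Hypothetical fade-pass fragment shader (GLSL ES): every frame it redraws the
            // accumulated streak texture slightly more transparent, so old positions fade out.
            static final String FADE_FRAGMENT_SHADER =
                  "precision mediump float;\n"
                + "uniform sampler2D u_PreviousFrame;\n"
                + "varying vec2 v_TexCoord;\n"
                + "void main() {\n"
                + "    vec4 c = texture2D(u_PreviousFrame, v_TexCoord);\n"
                + "    c.a *= 0.92;\n"              // a little dimmer each frame -> fading streak
                + "    gl_FragColor = c;\n"
                + "}";
        }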

    Read the article

  • Confusion on HLSL Samplers. Can I Set Samplers Inside Functions?

    - by Kyle Connors
    I'm trying to create a system where I can instance a quad to the screen; however, I've run into a problem. Like I said, I'm trying to instance the quad, so I'm trying to use the same geometry several times, and I'm trying to do it in one draw call. The issue is that I want some quads to use different textures, but I can't figure out how to get the data into a sampler so I can use it in the pixel shader. I figured that since we can simply pass in the 4 bytes of our IDirect3DTexture9* to set the global texture, I can do so when passing in my dynamic buffer (which also stores each object's world matrix and UV data). Now that I'm sending the data, I can't figure out how to get it into the sampler, and I really want to assume that it's simply not possible. Is there any way I could achieve this?

    Read the article

  • FBO rendering different results between Galaxy S2 and S3

    - by BruceJones
    I'm working on a pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. This proceeds as follows: bind texture A to the framebuffer, draw the balls, bind texture B to the framebuffer, draw texture A using the fade shader on a fullscreen quad, bind the screen as the framebuffer, draw texture B using a normal textured-quad shader. Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. See below for the fade shader. Fade Shader: private final String fragmentShaderCode = "precision highp float;" + "uniform sampler2D u_Texture;" + "varying vec2 v_TexCoordinate;" + "vec4 color;" + "void main(void)" + "{" + " color = texture2D(u_Texture, v_TexCoordinate);" + " color.a *= 0.8;" + " gl_FragColor = color;" + "}"; This works fine on the Samsung Galaxy S3/Note 2, but causes a strange effect and doesn't work on the Galaxy S2 or Note 1. See the screenshots in the original post (Galaxy S3/Note 2 vs Galaxy S2/Note). Can anyone explain the difference?

    Read the article

  • What are the semantics of glRotate and glTranslate's parameters?

    - by Zarkopafilis
    I have been trying to play with OpenGL after watching some tutorials, and I don't understand how the glTranslatef and glRotatef functions work. I believe a simple picture would help me. I understand that glTranslatef changes the position of the "camera" (but does it change the position in which the shapes are drawn?). However, I don't understand the rotation concept at all. If I do glRotatef(1,0,0,1), it makes my quad spin around. If I just do glRotatef(1,0,0,0), it makes the quad smaller (further away), but if I try to rotate around the X or Y axis, I get a black screen. I don't understand the angle either. Help would be appreciated.
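
    For reference: both calls multiply the current matrix, so they transform whatever is drawn after them rather than moving a separate camera. glRotatef(angle, x, y, z) rotates by angle degrees about the axis (x, y, z), so glRotatef(1, 0, 0, 1) spins around the Z axis one degree at a time, while glRotatef(1, 0, 0, 0) passes a zero-length axis, which has no defined result. A minimal LWJGL-style sketch of the usual ordering (the identifiers and values are illustrative):

        import org.lwjgl.opengl.GL11;

        public class TransformDemo {
            // Transforms apply to everything submitted afterwards: with this ordering the quad's
            // vertices are rotated about +Y around the origin and then pushed away from the viewer.
            static void placeQuad(float angleDegrees) {
                GL11.glMatrixMode(GL11.GL_MODELVIEW);
                GL11.glLoadIdentity();                           // start from a clean matrix each frame
                GL11.glTranslatef(0.0f, 0.0f, -6.0f);            // move subsequent geometry into the screen
                GL11.glRotatef(angleDegrees, 0.0f, 1.0f, 0.0f);  // angle in degrees, axis = (0, 1, 0)
                // ... submit the quad's vertices here
            }
        }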

    Read the article

  • Task Manager: CPU usage history

    - by Nezdet
    I recently bought a server with 2 x X5550 CPUs; they are quad-core (4 cores each), 8 cores in total. If I check Task Manager, the CPU usage history shows 16 graphs. Shouldn't it be 8, since I have 2 quad-core processors? Or do the graphs maybe show the threads of the CPU?

    Read the article

  • The most efficient method of drawing multiple quads in OpenGL

    - by CPatton
    I'm not very experienced with OpenGL, and I was wondering if someone could give me some insight on this. I'm a 'seasoned' programmer and I've read the Red Book about VBOs and the like, but I was hoping a more experienced person could tell me the best/most efficient way of achieving this. I've been producing a 2D tile-based game engine to be used in several projects. I have a class called "ScreenObject" which is mainly composed of a Dictionary<Point, Tile>. The Point key says where to render the Tile on the screen, and the Tile contains one or more textures to be drawn at that point. This ScreenObject is where the tiles will be modified, deleted, added, etc. My original method of drawing the tiles in the testing I've done was to iterate through the ScreenObject and draw each quad at each location separately. From what I've read, this is a massive waste of resources. It wasn't horribly slow in testing, but after I've completed the animation and effect classes, I'm sure it would be extremely slow. And one last thing, if you wouldn't mind: as I said before, the Tile class can contain multiple textures to be drawn at the Point location on the screen. I see two possible options here: either add a quad at that location for each texture to be drawn, or somehow use multiple textures for the same quad (if that's possible). Even if each tile contained only one texture, that would be 64 quads to be drawn on the screen. Most of the tiles will contain 2-5 textures, so the total number of quads would increase dramatically with this method. Would it be feasible to add a quad for each new texture, or am I ignoring a better way to do this? Just need some help understanding this, if you don't mind. I've tried to be as concise as possible, and I'd greatly appreciate any responses, and even some criticism. Programming is a learning process, and a developer never seems to stop learning. Thanks for your time.
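
    The usual answer is batching: pack every visible tile's quad (positions plus texture coordinates into a shared atlas) into one vertex array or VBO and issue a single draw call, instead of one draw per quad; a tile with several layers simply contributes several quads to the batch. The asker's engine appears to be C#, so the LWJGL-style Java below is only a sketch of the idea, with made-up names:

        import java.nio.FloatBuffer;
        import org.lwjgl.BufferUtils;
        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL15;

        public class TileBatch {
            // interleavedQuadData holds 4 vertices per quad, each vertex = 2 position floats + 2 texcoord floats.
            static void drawTiles(float[] interleavedQuadData, int quadCount, int vboId) {
                FloatBuffer data = BufferUtils.createFloatBuffer(interleavedQuadData.length);
                data.put(interleavedQuadData).flip();

                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
                GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, GL15.GL_DYNAMIC_DRAW); // re-upload when tiles change

                GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
                GL11.glEnableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
                GL11.glVertexPointer(2, GL11.GL_FLOAT, 4 * 4, 0);       // stride = 4 floats per vertex
                GL11.glTexCoordPointer(2, GL11.GL_FLOAT, 4 * 4, 2 * 4); // texcoords follow the 2 position floats

                GL11.glDrawArrays(GL11.GL_QUADS, 0, quadCount * 4);     // one call for the whole tile map

                GL11.glDisableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
                GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
            }
        }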

    Read the article

  • How to display a two-row bracket in LaTeX?

    - by niko
    Hi, does anyone know how to modify the following string in order to display a bracket spanning both lines? str = '$$c_i =\{\begin{array}{l l} 1 \quad L\left(Q_i\right) < 0 \\ 0 \quad L\left(Q_i\right) \geq 0 \\ \end{array}$$'; The current output is shown in the image in the original post. The '{' sign has to embrace both rows (1 and 0).
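
    The brace stays small because a bare \{ is a fixed-size delimiter; pairing \left\{ with an invisible \right. lets it grow to the height of the array. A sketch in plain LaTeX (whether the interpreter that renders the string accepts \left/\right may vary):

        % \left\{ ... \right. gives one brace spanning both rows; \right. is the invisible closing partner
        $$c_i = \left\{ \begin{array}{l l}
          1 & \quad L\left(Q_i\right) < 0 \\
          0 & \quad L\left(Q_i\right) \geq 0
        \end{array} \right.$$

    With amsmath available, \begin{cases} ... \end{cases} produces the same layout with less markup.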

    Read the article

  • Android - OpenGL cube is not shown on the display

    - by Marc Ortiz
    I'm trying to display a square on my display and i can't. Whats my problem? How can I display it on the screen (center of the screen)? I let my code below! Here's my render class: public class GLRenderEx implements Renderer { private GLCube cube; Context c; GLCube quad; // ( NEW ) // Constructor public GLRenderEx(Context context) { // Set up the data-array buffers for these shapes ( NEW ) quad = new GLCube(); // ( NEW ) } // Call back when the surface is first created or re-created. @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { // NO CHANGE - SKIP } // Call back after onSurfaceCreated() or whenever the window's size changes. @Override public void onSurfaceChanged(GL10 gl, int width, int height) { // NO CHANGE - SKIP } // Call back to draw the current frame. @Override public void onDrawFrame(GL10 gl) { // Clear color and depth buffers using clear-values set earlier gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset model-view matrix ( NEW ) gl.glTranslatef(-1.5f, 0.0f, -6.0f); // Translate left and into the // screen ( NEW ) // Translate right, relative to the previous translation ( NEW ) gl.glTranslatef(3.0f, 0.0f, 0.0f); quad.draw(gl); // Draw quad ( NEW ) } } And here is my square class: public class GLCube { private FloatBuffer vertexBuffer; // Buffer for vertex-array private float[] vertices = { // Vertices for the square -1.0f, -1.0f, 0.0f, // 0. left-bottom 1.0f, -1.0f, 0.0f, // 1. right-bottom -1.0f, 1.0f, 0.0f, // 2. left-top 1.0f, 1.0f, 0.0f // 3. right-top }; // Constructor - Setup the vertex buffer public GLCube() { // Setup vertex array buffer. Vertices in float. A float has 4 bytes ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); // Use native byte order vertexBuffer = vbb.asFloatBuffer(); // Convert from byte to float vertexBuffer.put(vertices); // Copy data into buffer vertexBuffer.position(0); // Rewind } // Render the shape public void draw(GL10 gl) { // Enable vertex-array and define its buffer gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); // Draw the primitives from the vertex-array directly gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } Thanks!!
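
    One thing that stands out in the posted renderer is that onSurfaceCreated and onSurfaceChanged are left empty, so no viewport or projection matrix is ever set up before the quad is translated to z = -6 and drawn. Below is a hedged sketch of what a typical GL10 onSurfaceChanged body looks like; the values are illustrative, not a guaranteed fix.

        import javax.microedition.khronos.opengles.GL10;
        import android.opengl.GLU;

        // A method of this shape would go in GLRenderEx: set the viewport and a perspective
        // projection so geometry placed a few units down -Z actually lands on the screen.
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            if (height == 0) height = 1;                      // guard against divide by zero
            gl.glViewport(0, 0, width, height);               // use the whole surface
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            GLU.gluPerspective(gl, 45.0f, (float) width / height, 0.1f, 100.0f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);               // back to modelview for drawing
            gl.glLoadIdentity();
        }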

    Read the article

  • OpenGL stars are not rendering

    - by Darestium
    I doing Nehe's Open GL Lesson 9. I'm using SFML for windowing, the strange thing is no stars are rendering. #include <SFML/System.hpp> #include <SFML/Window.hpp> #include <SFML/Graphics.hpp> #include <iostream> void processEvents(sf::Window *app); void processInput(sf::Window *app); void renderGlScene(sf::Window *app); void init(); int loadResources(); const int NUM_OF_STARS = 50; float triRot = 0.0f; float quadRot = 0.0f; bool twinkle = false; bool tKey = false; float zoom = 15.0f; float tilt = 90.0f; float spin = 0.0f; unsigned int loop; unsigned int texture_handle[1]; typedef struct { int r, g, b; float distance; float angle; } stars; stars star[NUM_OF_STARS]; int main() { sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 9"); app.UseVerticalSync(false); init(); if (loadResources() == -1) { return EXIT_FAILURE; } while (app.IsOpened()) { processEvents(&app); processInput(&app); renderGlScene(&app); app.Display(); } return EXIT_SUCCESS; } int loadResources() { sf::Image img_data; // Load Texture if (!img_data.LoadFromFile("data/images/star.bmp")) { std::cout << "Could not load data/images/star.bmp"; return -1; } // Generate 1 texture glGenTextures(1, &texture_handle[0]); // Linear filtering glBindTexture(GL_TEXTURE_2D, texture_handle[0]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img_data.GetWidth(), img_data.GetHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img_data.GetPixelsPtr()); return 0; } void processInput(sf::Window *app) { const sf::Input& input = app->GetInput(); if (input.IsKeyDown(sf::Key::T) && !tKey) { tKey = true; twinkle = !twinkle; } if (!input.IsKeyDown(sf::Key::T)) { tKey = false; } if (input.IsKeyDown(sf::Key::Up)) { tilt -= 0.05f; } if (input.IsKeyDown(sf::Key::Down)) { tilt += 0.05f; } if (input.IsKeyDown(sf::Key::PageUp)) { zoom -= 0.02f; } if (input.IsKeyDown(sf::Key::Up)) { zoom += 0.02f; } } void init() { glClearDepth(1.f); glClearColor(0.f, 0.f, 0.f, 0.f); // Enable texturing glEnable(GL_TEXTURE_2D); //glDepthMask(GL_TRUE); // Setup a perpective projection glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.f, 1.f, 1.f, 500.f); glShadeModel(GL_SMOOTH); glBlendFunc(GL_SRC_ALPHA, GL_ONE); glEnable(GL_BLEND); for (loop = 0; loop < NUM_OF_STARS; loop++) { star[loop].distance = (float)loop / NUM_OF_STARS * 5.0f; // Calculate distance from the centre // Give stars random rgb value star[loop].r = rand() % 256; star[loop].g = rand() % 256; star[loop].b = rand() % 256; } } void processEvents(sf::Window *app) { sf::Event event; while (app->GetEvent(event)) { if (event.Type == sf::Event::Closed) { app->Close(); } if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape) { app->Close(); } } } void renderGlScene(sf::Window *app) { app->SetActive(); // Clear color depth buffer glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Apply some transformations glMatrixMode(GL_MODELVIEW); glLoadIdentity(); // Select texture glBindTexture(GL_TEXTURE_2D, texture_handle[0]); for (loop = 0; loop < NUM_OF_STARS; loop++) { glLoadIdentity(); // Reset The View Before We Draw Each Star glTranslatef(0.0f, 0.0f, zoom); // Zoom Into The Screen (Using The Value In 'zoom') glRotatef(tilt, 1.0f, 0.0f, 0.0f); // Tilt The View (Using The Value In 'tilt') glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f); // Rotate To The Current Stars Angle glTranslatef(star[loop].distance, 0.0f, 0.0f); // Move Forward On The X Plane 
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f); // Cancel The Current Stars Angle glRotatef(-tilt,1.0f,0.0f,0.0f); // Cancel The Screen Tilt if (twinkle) { glColor4ub(star[(NUM_OF_STARS - loop) - 1].r, star[(NUM_OF_STARS - loop)-1].g, star[(NUM_OF_STARS - loop) - 1].b, 255); glBegin(GL_QUADS); // Begin Drawing The Textured Quad glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f); glEnd(); // Done Drawing The Textured Quad } glRotatef(spin,0.0f,0.0f,1.0f); // Rotate The Star On The Z Axis // Assign A Color Using Bytes glColor4ub(star[loop].r, star[loop].g, star[loop].b, 255); glBegin(GL_QUADS); // Begin Drawing The Textured Quad glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,-1.0f, 0.0f); glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,-1.0f, 0.0f); glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.0f); glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.0f); glEnd(); // Done Drawing The Textured Quad spin += 0.01f; // Used To Spin The Stars star[loop].angle += (float)loop / NUM_OF_STARS; // Changes The Angle Of A Star star[loop].distance -= 0.01f; // Changes The Distance Of A Star if (star[loop].distance < 0.0f) { star[loop].distance += 5.0f; // Move The Star 5 Units From The Center star[loop].r = rand() % 256; // Give It A New Red Value star[loop].g = rand() % 256; // Give It A New Green Value star[loop].b = rand() % 256; // Give It A New Blue Value } } } I've looked over the code atleast 10 times now and I can't figure out the problem. Any help would be much appreciated.

    Read the article

  • 2D Rendering with OpenGL ES 2.0 on Android (matrices not working)

    - by TranquilMarmot
    So I'm trying to render two moving quads, each at different locations. My shaders are as simple as possible (vertices are only transformed by the modelview-projection matrix, there's only one color). Whenever I try and render something, I only end up with slivers of color! I've only done work with 3D rendering in OpenGL before so I'm having issues with 2D stuff. Here's my basic rendering loop, simplified a bit (I'm using the Matrix manipulation methods provided by android.opengl.Matrix and program is a custom class I created that just calls GLES20.glUniformMatrix4fv()): Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1); program.setUniformMatrix4f("Projection", projection); At this point, I render the quads (this is repeated for each quad): Matrix.setIdentityM(modelview, 0); Matrix.translateM(modelview, 0, quadX, quadY, 0); program.setUniformMatrix4f("ModelView", modelview); quad.render(); // calls glDrawArrays and all I see is a sliver of the color each quad is! I'm at my wits end here, I've tried everything I can think of and I'm at the point where I'm screaming at my computer and tossing phones across the room. Anybody got any pointers? Am I using ortho wrong? I'm 100% sure I'm rendering everything at a Z value of 0. I tried using frustumM instead of orthoM, which made it so that I could see the quads but they would get totally skewed whenever they got moved, which makes sense if I correctly understand the way frustum works (it's more for 3D rendering, anyway). If it makes any difference, I defined my viewport with GLES20.glViewport(0, 0, windowWidth, windowHeight); Where windowWidth and windowHeight are the same values that are pased to orthoM It might be worth noting that the android.opengl.Matrix methods take in an offset as the second parameter so that multiple matrices can be shoved into one array, so that'w what the first 0 is for For reference, here's my vertex shader code: uniform mat4 ModelView; uniform mat4 Projection; attribute vec4 vPosition; void main() { mat4 mvp = Projection * ModelView; gl_Position = vPosition * mvp; } I tried swapping Projection * ModelView with ModelView * Projection but now I just get some really funky looking shapes... EDIT Okay, I finally figured it out! (Note: Since I'm new here (longtime lurker!) I can't answer my own question for a few hours, so as soon as I can I'll move this into an actual answer to the question) I changed Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1); to float ratio = windowWwidth / windowHeight; Matrix.orthoM(projection, 0, 0, ratio, 0, 1, -1, 1); I then had to scale my projection matrix to make it a lot smaller with Matrix.scaleM(projection, 0, 0.05f, 0.05f, 1.0f);. I then added an offset to the modelview translations to simulate a camera so that I could center on my action (so Matrix.translateM(modelview, 0, quadX, quadY, 0); was changed to Matrix.translateM(modelview, 0, quadX + camX, quadY + camY, 0);) Thanks for the help, all!

    Read the article

  • Texture errors in CubeMap

    - by shade4159
    I am trying to apply this texture as a cubemap. This is my result: Clearly I am doing something with my texture coordinates, but I cannot for the life of me figure out what. I don't even see a pattern to the texture fragments. They just seem like a jumble of different faces. Can anyone shed some light on this? Vertex shader: #version 400 in vec4 vPosition; in vec3 inTexCoord; smooth out vec3 texCoord; uniform mat4 projMatrix; void main() { texCoord = inTexCoord; gl_Position = projMatrix * vPosition; } My fragment shader: #version 400 smooth in vec3 texCoord; out vec4 fColor; uniform samplerCube textures void main() { fColor = texture(textures,texCoord); } Vertices of cube: point4 worldVerts[8] = { vec4( 15, 15, 15, 1 ), vec4( -15, 15, 15, 1 ), vec4( -15, 15, -15, 1 ), vec4( 15, 15, -15, 1 ), vec4( -15, -15, 15, 1 ), vec4( 15, -15, 15, 1 ), vec4( 15, -15, -15, 1 ), vec4( -15, -15, -15, 1 ) }; Cube rendering: void worldCube(point4* verts, int& Index, point4* points, vec3* texVerts) { quadInv( verts[0], verts[1], verts[2], verts[3], 1, Index, points, texVerts); quadInv( verts[6], verts[3], verts[2], verts[7], 2, Index, points, texVerts); quadInv( verts[4], verts[5], verts[6], verts[7], 3, Index, points, texVerts); quadInv( verts[4], verts[1], verts[0], verts[5], 4, Index, points, texVerts); quadInv( verts[5], verts[0], verts[3], verts[6], 5, Index, points, texVerts); quadInv( verts[4], verts[7], verts[2], verts[1], 6, Index, points, texVerts); } Backface function (since this is the inside of the cube): void quadInv( const point4& a, const point4& b, const point4& c, const point4& d , int& Index, point4* points, vec3* texVerts) { quad( a, d, c, b, Index, points, texVerts, a.to_3(), b.to_3(), c.to_3(), d.to_3()); } And the quad drawing function: void quad( const point4& a, const point4& b, const point4& c, const point4& d, int& Index, point4* points, vec3* texVerts, const vec3& tex_a, const vec3& tex_b, const vec3& tex_c, const vec3& tex_d) { texVerts[Index] = tex_a.normalized(); points[Index] = a; Index++; texVerts[Index] = tex_b.normalized(); points[Index] = b; Index++; texVerts[Index] = tex_c.normalized(); points[Index] = c; Index++; texVerts[Index] = tex_a.normalized(); points[Index] = a; Index++; texVerts[Index] = tex_c.normalized(); points[Index] = c; Index++; texVerts[Index] = tex_d.normalized(); points[Index] = d; Index++; } Edit: I forgot to mention, in the image, the camera is pointed directly at the back face of the cube. You can kind of see the diagonals leading out of the corners, if you squint.

    Read the article

  • Z axis trouble with glTranslatef(...) - LWJGL

    - by Zarkopafilis
    Here is the code: private static boolean up = true , down = false , left = false , right = false, reset = false, in = false , out = false; public void start() { try { Display.setDisplayMode(new DisplayMode(800,600)); Display.create(); } catch (LWJGLException e) { e.printStackTrace(); System.exit(0); } GL11.glMatrixMode(GL11.GL_PROJECTION); GL11.glLoadIdentity(); GL11.glOrtho(0, 800, 0, 600, 0.00001f, 1000); GL11.glMatrixMode(GL11.GL_MODELVIEW); Keyboard.enableRepeatEvents(true); while (!Display.isCloseRequested()) { GL11.glClear(GL11.GL_COLOR_BUFFER_BIT); input(); if(up){ GL11.glTranslatef(0,0.1f,0); } if(down){ GL11.glTranslatef(0,-0.1f,0); } if(left){ GL11.glTranslatef(-0.1f,0,0); } if(right){ GL11.glTranslatef(0.1f,0,0); } if(in){ GL11.glTranslatef(0, 0, 1f); } if(out){ GL11.glTranslatef(0, 0, -1f); } if(reset){ GL11.glLoadIdentity(); } GL11.glBegin(GL11.GL_QUADS); GL11.glColor3f(255, 255, 255); GL11.glVertex3f(800/2, 600/2, 0); GL11.glVertex3f(800/2 + 200, 600/2, 0); GL11.glVertex3f(800/2 + 200, 600/2 + 200, 0); GL11.glVertex3f(800/2, 600/2 + 200, 0); GL11.glColor3f(0, 255, 0); GL11.glVertex3f(800/2, 600/2, 1); GL11.glVertex3f(800/2 + 200, 600/2, 1); GL11.glVertex3f(800/2 + 200, 600/2 + 200, 1); GL11.glVertex3f(800/2, 600/2 + 200, 1); GL11.glEnd(); Display.update(); } Display.destroy(); } public static void main(String[] argv){ new main().start(); } public void input(){ up = false; down = false; left = false; right = false; reset = false; in = false; out = false; if(Keyboard.isKeyDown(Keyboard.KEY_SPACE)){ reset = true; } if(Keyboard.isKeyDown(Keyboard.KEY_W)){ up = true; } if(Keyboard.isKeyDown(Keyboard.KEY_S)){ down = true; } if(Keyboard.isKeyDown(Keyboard.KEY_A)){ left = true; } if(Keyboard.isKeyDown(Keyboard.KEY_D)){ right = true; } if(Keyboard.isKeyDown(Keyboard.KEY_Q)){ in = true; } if(Keyboard.isKeyDown(Keyboard.KEY_E)){ out = true; } } As you can see I am creating 2 quads , a white one at z 0 and a green one at z 1. WASD keys function correctly. Also when I hit SPACEBAR the white quad is being shown. If I then press E , I can see the green quad. But if I press Q afterwards , I dont see the white one again!(Space Works everytime). Also if I render the green one at Z = -1 everything works perfectly BUT you may need up to 3 key presses Q/E to see the other quad. Why is that happening?
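
    One detail that may be relevant: glOrtho's zNear/zFar are measured along the negative Z axis, so with glOrtho(0, 800, 0, 600, 0.00001f, 1000) only eye-space Z values between roughly -0.00001 and -1000 survive clipping, which puts quads at z = 0 and z = +1 at or outside the view volume. A small sketch of a projection that keeps both visible (the depth range is an arbitrary choice, not a statement about what the asker should use):

        import org.lwjgl.opengl.GL11;

        public class OrthoSetup {
            // With zNear = -10 and zFar = 10 the visible eye-space Z range is roughly [-10, 10],
            // so geometry drawn at z = 0 and z = 1 stays inside the clip volume.
            static void setupProjection() {
                GL11.glMatrixMode(GL11.GL_PROJECTION);
                GL11.glLoadIdentity();
                GL11.glOrtho(0, 800, 0, 600, -10, 10);
                GL11.glMatrixMode(GL11.GL_MODELVIEW);
                GL11.glLoadIdentity();
            }
        }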

    Read the article

  • 2D vector value replacement using classes; genetic algorithm mutation

    - by gcolumbus
    I have a 2D vector as defined by the classes below. Note that I've used classes because I'm trying to program a genetic algorithm such that many, many 2D vectors will be created and they will all be different. class Quad: public std::vector<int> { public: Quad() : std::vector<int>(4,0) {} }; class QuadVec : public std::vector<Quad> { }; An important part of my algorithm, however, is that I need to be able to "mutate" (randomly change) particular values in a certain number of randomly chosen 2D vectors. This has me stumped. I can easily write code to randomly select the value within the 2D vector that will be selected for "mutation" but how do I actually enact that change using classes? I understand how this would be done with one 2D vector that has already been initialized but how do I do this if it hasn't? Please let me know if I haven't provided enough info or am not clear as I tend to do that. Thanks for your time and help!
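
    The mutation step itself usually comes down to three random choices: which genome, which gene inside it, and what new value to write there. The asker's container is a C++ std::vector of Quad objects, so the Java sketch below only illustrates the shape of the operation, with invented names and bounds:

        import java.util.Random;

        public class Mutator {
            private static final Random RNG = new Random();

            // population[i] plays the role of one Quad: a fixed-length row of four ints.
            static void mutate(int[][] population, int maxAlleleValue) {
                int genome = RNG.nextInt(population.length);             // pick a random "Quad"
                int gene = RNG.nextInt(population[genome].length);       // pick one of its four slots
                population[genome][gene] = RNG.nextInt(maxAlleleValue);  // overwrite it with a random value
            }
        }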

    Read the article

  • Help me choose a desktop: which of these two should I buy?

    - by Sammy
    I just want the more powerful of the two: Choice 1: http://www.bestbuy.com/site/Gateway+-+Desktop+with+AMD+Phenom%26%23153%3B+II+Quad-Core+Processor/9698936.p?id=1218153428687&skuId=9698936 Choice 2: /site/HP+-+Pavilion+Desktop+with+AMD+Phenom%26%23153%3B+II+Quad-Core+Processor/9694506.p?id=1218150609828&skuId=9694506 I can't post more than one hyperlink since I am a new user, so please add bestbuy domain name before choice 2. The latter choice is a bit more expensive but not by much so I don't care about that. As for what I intend to use my machine for, just regular web surfing, light gaming, web development related work, etc. But that doesn't really matter, of these two I just want to know which is the better more powerful system and which you would buy if you were in my position.

    Read the article

  • Mac OS X Assembly Language Esoteria

    - by veryfoolish
    I've been playing around with assembly and object files in general on Mac OS ? and was wondering if somebody could provide some edification. Specifically, I'm wondering what the extra code GCC generates when compiling the C file in the following example does. I have a toy C program so I can comprehend the assembly output. int main() { int a = 5; int b = 5; int c = a + b; } Running this through gcc -S creates the following assembly: .text .globl _main _main: LFB2: pushq %rbp LCFI0: movq %rsp, %rbp LCFI1: movl $5, -4(%rbp) movl $5, -8(%rbp) movl -8(%rbp), %eax addl -4(%rbp), %eax movl %eax, -12(%rbp) leave ret LFE2: .section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support EH_frame1: .set L$set$0,LECIE1-LSCIE1 .long L$set$0 LSCIE1: .long 0x0 .byte 0x1 .ascii "zR\0" .byte 0x1 .byte 0x78 .byte 0x10 .byte 0x1 .byte 0x10 .byte 0xc .byte 0x7 .byte 0x8 .byte 0x90 .byte 0x1 .align 3 LECIE1: .globl _main.eh _main.eh: LSFDE1: .set L$set$1,LEFDE1-LASFDE1 .long L$set$1 LASFDE1: .long LASFDE1-EH_frame1 .quad LFB2-. .set L$set$2,LFE2-LFB2 .quad L$set$2 .byte 0x0 .byte 0x4 .set L$set$3,LCFI0-LFB2 .long L$set$3 .byte 0xe .byte 0x10 .byte 0x86 .byte 0x2 .byte 0x4 .set L$set$4,LCFI1-LCFI0 .long L$set$4 .byte 0xd .byte 0x6 .align 3 LEFDE1: .subsections_via_symbols The LCFI1 section seems to contain the actual logic for the program, but I'm not sure what the misc. other stuff is for... also, is there any scheme these labels are following? I'm sorry this is such a vague question. I'd appreciate anything, including being pointed to a resource where I can find out more about this. Thanks!

    Read the article

  • How can I make OpenGL textures scale without becoming blurry?

    - by adorablepuppy
    I'm using OpenGL through LWJGL. I have a 16x16 textured quad rendering at 16x16. When I change it's scale amount, the quad grows, then becomes blurrier as it gets larger. How can I make it scale without becoming blurry, like in Minecraft. Here is the code inside my RenderableEntity object: public void render(){ Color.white.bind(); this.spriteSheet.bind(); GL11.glBegin(GL11.GL_QUADS); GL11.glTexCoord2f(0,0); GL11.glVertex2f(this.x, this.y); GL11.glTexCoord2f(1,0); GL11.glVertex2f(getDrawingWidth(), this.y); GL11.glTexCoord2f(1,1); GL11.glVertex2f(getDrawingWidth(), getDrawingHeight()); GL11.glTexCoord2f(0,1); GL11.glVertex2f(this.x, getDrawingHeight()); GL11.glEnd(); } And here is code from my initGL method in my game class GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glClearColor(0.46f,0.46f,0.90f,1.0f); GL11.glViewport(0,0,width,height); GL11.glOrtho(0,width,height,0,1,-1); And here is the code that does the actual drawing public void start(){ initGL(800,600); init(); while(true){ GL11.glClear(GL11.GL_COLOR_BUFFER_BIT); for(int i=0;i<entities.size();i++){ ((RenderableEntity)entities.get(i)).render(); } Display.update(); Display.sync(100); if(Display.isCloseRequested()){ Display.destroy(); System.exit(0); } } }
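
    The blur at larger scales is the GL_LINEAR magnification filter interpolating between texels; the crisp, Minecraft-like look usually comes from switching to GL_NEAREST. A minimal sketch, meant to be called once while the sprite-sheet texture is bound (it is not tied to the asker's particular texture loader):

        import org.lwjgl.opengl.GL11;

        public class PixelArtFilter {
            // GL_NEAREST picks the single closest texel instead of blending neighbours,
            // so a 16x16 sprite scales up into hard-edged blocks rather than a blur.
            static void useNearestFiltering() {
                GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
                GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            }
        }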

    Read the article

  • How to watch 3D on Acer "3D Ready" projector?

    - by glenneroo
    We have here an Acer P1200 DLP projector which is "3D Ready" (using the BrilliantColor™ DarkChip™ 3); documentation was not included in the packaging. We don't have a Blu-ray player and have no intention of purchasing one in the near future, so I'm looking for a way to view encoded or streamed content. My question is: how is it possible to watch 3D content? What extra hardware/software will I need? EDIT: Found this information in the user manual: DLP 3D function: Choose while using DLP 3D glasses, quad buffer (NVIDIA/ATI…) graphic card and HQFS format file or DVD with corresponding SW player. 3D Sync L/R: If you see a discrete or overlapping image while wearing DLP 3D glasses, you may need to execute "Invert" to get best match of left/right image sequence to get the correct image (for DLP 3D). ...but I'm still at a loss. What is a quad-buffer graphics card?

    Read the article

  • Dual-headed graphics card choice: DVI & VGA with 512MB or DVI & DVI with 256MB

    - by TimH
    Which would you choose? Some more detail: I can choose between (1) a dual-headed card with both heads DVI but only 256MB of memory, or (2) a dual-headed card with one VGA and one DVI, with 512MB of memory. Both monitors are 1600x1200. I'll be doing mostly business app development on the computer. No gameplay or advanced graphics work. It's running Win7 and is a quad-core i5. I'm thinking of going with the 256MB one, just so both displays are DVI and I don't have to shift between sharp and blurry when I look from one screen to the other. But I'm not sure if the additional RAM would be a huge boon for some reason (Win7 GPU acceleration, for example? But with a quad-core, who cares?).

    Read the article

  • Geometry Shader: distortions

    - by Christophe Lionet
    This is a cross-question from Stack Overflow, I thought it would be more appropriate here. There is a lot of code I could be posting. To avoid overloading the page with code, I will post any part of the code if requested. I am working from the ParticleGS DirectX10 sample, to build a geometry shader based particle system in DirectX 11. Using the sample code, and changing it to my liking, I am able to draw a single quad (which is essentially one particle constantly recreating itself). However, I noticed a problem which was similar to one I once had: the rendered shape is distorted. Here is a video showcasing what is happening. http://youtu.be/6NY_hxjMfwY Now, I used to have this issue when using several effects together, when I realised that I needed to explicitely set the geometry shader to null for the other effects. I solved this problem, as you can see in the video, as the rest of the scene is drawing properly. Note that some sides are being culled somehow, although I turned off culling in my main render state. The texturing is fine too, the texture draws with appropriate proportions relative to the quad. I really don't see what I could be doing wrong here... what would cause the geometry shader to behave in such a way? Again, I will post any piece code you will request.

    Read the article
