Search Results

Search found 11979 results on 480 pages for 'game of thrones'.


  • Best way to detect if vec3 is between vec3(x) and vec3(y) in glsl

    - by elect
    As titled: I am sampling from a texture, and if the color is roughly gray (between vec3(.8) and vec3(.9)) and a uniform is 1, I need to substitute that color with another one. I am not a GLSL veteran, but I am pretty sure there is a more elegant and compact (not to mention faster) way than this: vec3 textureColor = texture(texture0, oUV); if(settings.w == 1 && textureColor.r > .8 && textureColor.r < .9 && textureColor.g > .8 && textureColor.g < .9 && textureColor.b > .8 && textureColor.b < .9)
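
    One more compact option, sketched here under the assumption that the same open interval (.8, .9) and the same settings.w flag are intended, is to use GLSL's component-wise comparison built-ins together with all():

        // Sketch: component-wise range test with GLSL built-ins (same bounds as above).
        vec3 textureColor = texture(texture0, oUV).rgb;
        bool isGray = all(greaterThan(textureColor, vec3(0.8))) &&
                      all(lessThan(textureColor, vec3(0.9)));
        if (settings.w == 1.0 && isGray)
        {
            // substitute the color here
        }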

    Read the article

  • Workaround the flip queue (AKA pre-rendered frames) in OpenGL?

    - by user41500
    It appears that some drivers implement a "flip queue" such that, even with vsync enabled, the first few calls to swap buffers return immediately (queuing those frames for later use). It is only after this queue is filled that buffer swaps will block to synchronize with vblank. This behavior is detrimental to my application. It creates latency. Does anyone know of a way to disable it or a workaround for dealing with it? The OpenGL Wiki on Swap Interval suggests a call to glFinish after the swap but I've had no such luck with that trick.

    Read the article

  • Moving the player in the direction the camera is facing

    - by Samurai Fox
    I have a 3rd person camera which can rotate around the player. My problem is that wherever the camera is facing, the player's forward is always the same direction. For example, when the camera is facing the right side of the player and I press the button to move forward, I want the player to turn to the left and make that the "new forward". My camera script so far: using UnityEngine; using System.Collections; public class PlayerScript : MonoBehaviour { public float RotateSpeed = 150, MoveSpeed = 50; float DeltaTime; void Update() { DeltaTime = Time.deltaTime; transform.Rotate(0, Input.GetAxis("LeftX") * RotateSpeed * DeltaTime, 0); transform.Translate(0, 0, -Input.GetAxis("LeftY") * MoveSpeed * DeltaTime); } } public class CameraScript : MonoBehaviour { public GameObject Target; public float RotateSpeed = 170, FollowDistance = 20, FollowHeight = 10; float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight, CurrentRotationAngle, CurrentHeight, Yaw, Pitch; Quaternion CurrentRotation; void LateUpdate() { RotateSpeedPerTime = RotateSpeed * Time.deltaTime; DesiredRotationAngle = Target.transform.eulerAngles.y; DesiredHeight = Target.transform.position.y + FollowHeight; CurrentRotationAngle = transform.eulerAngles.y; CurrentHeight = transform.position.y; CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0); CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0); CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0); transform.position = Target.transform.position; transform.position -= CurrentRotation * Vector3.forward * FollowDistance; transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z); Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime; Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime; transform.Translate(new Vector3(Yaw, -Pitch, 0)); transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z); transform.LookAt(Target.transform); } }
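
    A common approach to camera-relative movement (a sketch meant for something like PlayerScript.Update(); the use of Camera.main and the small threshold are assumptions) is to flatten the camera's forward and right vectors onto the ground plane, build the move direction from them, and rotate the player toward that direction before translating:

        // Sketch: camera-relative movement. Camera.main is used for brevity; a serialized reference works too.
        Transform camT = Camera.main.transform;
        Vector3 camForward = camT.forward; camForward.y = 0f; camForward.Normalize();
        Vector3 camRight = camT.right; camRight.y = 0f; camRight.Normalize();

        // World-space move direction from the left stick, relative to where the camera looks.
        Vector3 moveDir = camForward * -Input.GetAxis("LeftY") + camRight * Input.GetAxis("LeftX");

        if (moveDir.sqrMagnitude > 0.001f)
        {
            // Turn the player toward the new "forward" and move along it.
            transform.rotation = Quaternion.RotateTowards(transform.rotation,
                Quaternion.LookRotation(moveDir), RotateSpeed * Time.deltaTime);
            transform.Translate(moveDir.normalized * MoveSpeed * Time.deltaTime, Space.World);
        }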

    Read the article

  • Hardware instancing for voxel engine

    - by Menno Gouw
    I just did the tutorial on Hardware Instancing from this source: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/. Somewhere between 900,000 and 1,000,000 instances of the cube I get this error: "XNA Framework HiDef profile supports a maximum VertexBuffer size of 67108863." It still runs smoothly at 900k. That is slightly less than 100x100x100, which is exactly a million. Now I have seen voxel engines with very "tiny" voxels; you easily get to 1,000,000 cubes in view with rough terrain and a decent far plane. Obviously I can optimize a lot in the geometry buffer method, like rendering only the visible faces of a cube or using larger faces covering multiple cubes if the area is flat. But is a vertex buffer of roughly 67 MB the maximum I can work with, or can I create multiple buffers?
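
    The HiDef cap applies to a single VertexBuffer, and nothing prevents creating several of them; a sketch (with hypothetical buffer and variable names) that splits the instance data into chunks and issues one instanced draw per chunk:

        // Sketch: split instance data across several buffers so each stays under the HiDef cap.
        // cubeGeometryBuffer/cubeIndexBuffer, instanceBuffers[] and instances[] are hypothetical names.
        const int MaxInstancesPerBuffer = 500000;
        int chunk = 0;
        for (int offset = 0; offset < instances.Length; offset += MaxInstancesPerBuffer)
        {
            int count = Math.Min(MaxInstancesPerBuffer, instances.Length - offset);
            instanceBuffers[chunk].SetData(instances, offset, count);

            GraphicsDevice.SetVertexBuffers(
                new VertexBufferBinding(cubeGeometryBuffer, 0, 0),
                new VertexBufferBinding(instanceBuffers[chunk], 0, 1)); // frequency 1 = per-instance data
            GraphicsDevice.Indices = cubeIndexBuffer;
            GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList,
                0, 0, cubeVertexCount, 0, cubeIndexCount / 3, count);
            chunk++;
        }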

    Read the article

  • DirectX and open libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries, more to get an idea of what exists than how they compare. For example: we have DirectX (graphics) and its open counterpart OpenGL, and DirectX (sound) and OpenAL. But there are other DirectX libraries that I can't find open alternatives to, such as DirectInput, DXGI, Direct2D, and DirectWrite. Does anyone have any lists or comparisons between DirectX and its open counterparts?

    Read the article

  • Keep 3d model facing the camera at all angles

    - by Sparky41
    I'm trying to keep a 3D plane facing the camera at all angles, but while I have had some success with this: Vector3 gunToCam = cam.cameraPosition - getWorld.Translation; Vector3 beamRight = Vector3.Cross(torpDirection, gunToCam); beamRight.Normalize(); Vector3 beamUp = Vector3.Cross(beamRight, torpDirection); shipeffect.beamWorld = Matrix.Identity; shipeffect.beamWorld.Forward = (torpDirection) * 1f; shipeffect.beamWorld.Right = beamRight; shipeffect.beamWorld.Up = beamUp; shipeffect.beamWorld.Translation = shipeffect.beamPosition; (Note: the logic wasn't written by me; I just found it rather useful.) It seems to only face the camera at certain angles. For example, if I place the camera behind the plane, you can see that it only rolls around the axis, like this: http://i.imgur.com/FOKLx.png (imagine you are looking from behind where you have fired from). Any idea what the problem is? (Angles are not my specialty.) shipeffect is an object that holds these class variables: public Texture2D altBeam; public Model beam; public Matrix beamWorld; public Matrix[] gunTransforms; public Vector3 beamPosition;
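
    Since this looks like XNA, one option worth trying (a sketch; whether the beam should be a full billboard or stay aligned to its travel direction is an assumption) is to let the framework build the matrix instead of assembling Forward/Right/Up by hand:

        // Sketch: full billboard that always faces the camera.
        shipeffect.beamWorld = Matrix.CreateBillboard(
            shipeffect.beamPosition,   // object position
            cam.cameraPosition,        // camera position
            Vector3.Up,                // camera up vector
            null);                     // optional camera forward, used only in degenerate cases

        // If the beam must stay aligned with its travel direction and only spin toward the camera:
        // shipeffect.beamWorld = Matrix.CreateConstrainedBillboard(
        //     shipeffect.beamPosition, cam.cameraPosition, torpDirection, null, null);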

    Read the article

  • Get coordinates of arraylist

    - by opiop65
    Here's my map class: public class map{ public static final int CLEAR = 0; public static final ArrayList<Integer> STONE = new ArrayList<Integer>(); public static final int GRASS = 2; public static final int DIRT = 3; public static final int WIDTH = 32; public static final int HEIGHT = 24; public static final int TILE_SIZE = 25; // static int[][] map = new int[WIDTH][HEIGHT]; ArrayList<ArrayList<Integer>> map = new ArrayList<ArrayList<Integer>>(WIDTH * HEIGHT); enum tiles { air, grass, stone, dirt } Image air, grass, stone, dirt; Random rand = new Random(); public Map() { /* default map */ /*for(int y = 0; y < WIDTH; y++){ map[y][y] = (rand.nextInt(2)); System.out.println(map[y][y]); }*/ /*for (int y = 18; y < HEIGHT; y++) { for (int x = 0; x < WIDTH; x++) { map[x][y] = STONE; } } for (int y = 18; y < 19; y++) { for (int x = 0; x < WIDTH; x++) { map[x][y] = GRASS; } } for (int y = 19; y < 20; y++) { for (int x = 0; x < WIDTH; x++) { map[x][y] = DIRT; } }*/ for (int y = 0; y < HEIGHT; y++) { for(int x = 0; x < WIDTH; x++){ map.set(x * WIDTH + y, STONE); } } try { init(null, null); } catch (SlickException e) { e.printStackTrace(); } render(null, null, null); } public void init(GameContainer gc, StateBasedGame sbg) throws SlickException { air = new Image("res/air.png"); grass = new Image("res/grass.png"); stone = new Image("res/stone.png"); dirt = new Image("res/dirt.png"); } public void render(GameContainer gc, StateBasedGame sbg, Graphics g) { for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { switch (map.get(x * WIDTH + y)) { case CLEAR: air.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case STONE: stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case GRASS: grass.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case DIRT: dirt.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; } } } } public static boolean blocked(float x, float y) { return map[(int) x][(int) y] == STONE; } public static Rectangle blockBounds(int x, int y) { return (new Rectangle(x, y, TILE_SIZE, TILE_SIZE)); } } Specifically I am looking at this: for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { switch (map.get(x * WIDTH + y).intValue()) { case CLEAR: air.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case STONE: stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case GRASS: grass.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; case DIRT: dirt.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; } } } How can I access the coordinates of my arraylist map and then draw the tiles to the screen? Thanks!
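
    If the map is flattened into a single list of tile ids (a plain ArrayList<Integer> rather than a list of lists, which is an assumption about the intent here), the (x, y)-to-index mapping can live in a pair of helpers. Note that for a WIDTH x HEIGHT grid addressed as [x][y] the stride has to be HEIGHT, and the list must be pre-filled (e.g. with Collections.nCopies) before set() can be called:

        // Sketch: flat-index helpers for a WIDTH x HEIGHT grid stored in ArrayList<Integer> map.
        // map = new ArrayList<Integer>(Collections.nCopies(WIDTH * HEIGHT, CLEAR));

        private int index(int x, int y) {
            return x * HEIGHT + y;          // stride is HEIGHT when addressing tiles as (x, y)
        }

        private int getTile(int x, int y) {
            return map.get(index(x, y));
        }

        private void setTile(int x, int y, int tile) {
            map.set(index(x, y), tile);
        }

        // In render(): look each tile up by its coordinates and draw it at (x * TILE_SIZE, y * TILE_SIZE),
        // e.g. switch (getTile(x, y)) { case STONE: stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break; ... }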

    Read the article

  • 2D Rectangle Collision Response with Multiple Rectangles

    - by Justin Skiles
    Similar to: Collision rectangle response I have a level made up of tiles where the edges of the level are made up of collidable rectangles. The player's collision box is represented by a rectangle as well. The player can move in 8 directions. The player's velocity is equal in X and Y directions and constant. Each update, I am checking the player's collision against all tiles that are a certain distance away. When the player collides with a rectangle, I am finding the intersection depth and resolving along the most shallow axis followed by the other axis. This resolution happens for both axes simultaneously. See below for two examples of situations where I am having trouble. Moving up-left against the left wall In the scenario below, the player is colliding with two tiles. The tile intersection depth is equal on both axes for the top tile and more shallow in the X axis for the middle tile. Because the player is moving up the wall, the player should slide in an upward direction along the wall. This works properly as long as the rectangle with the more shallow depth is evaluated first. If the equal intersection depth rectangle is evaluated first, there is a chance the player becomes stuck. Moving up-left against the top wall Here is an identical scenario with the exception that the collision is with the top wall. The same problem occurs at the corners when intersection depth is equal for both axes. I guess my overall question is: How can I ensure that collision response occurs on tiles that have non-equal intersection depth before tiles that have equal intersection depth in order to get around the weirdness that occurs at these corners. Sean's answer in the linked question was good, but his solution required having different velocity components in a certain direction. My situation has equal velocities, so there's no good way to tell which direction to resolve at corners. I hope I have made my explanation clear.
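
    One way to make the order deterministic (a sketch with hypothetical GetOverlappingTiles/GetIntersectionDepth helpers): resolve the contact with the smallest shallow-axis depth first, then re-query before handling the next, so the equal-depth corner tile is only processed if it still overlaps after the shallower contacts have pushed the player out:

        // Sketch: shallowest-first resolution, re-testing overlap after every correction.
        for (int i = 0; i < 8; i++)                        // small safety cap on iterations
        {
            var hits = GetOverlappingTiles(player.Bounds); // hypothetical world query
            if (hits.Count == 0)
                break;

            // Pick the contact whose shallow axis is smallest.
            var best = hits.OrderBy(t =>
            {
                Vector2 d = GetIntersectionDepth(player.Bounds, t.Bounds);
                return Math.Min(Math.Abs(d.X), Math.Abs(d.Y));
            }).First();

            Vector2 depth = GetIntersectionDepth(player.Bounds, best.Bounds);
            if (Math.Abs(depth.X) < Math.Abs(depth.Y))
                player.Position += new Vector2(depth.X, 0f);   // resolve along X only
            else
                player.Position += new Vector2(0f, depth.Y);   // resolve along Y only
        }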

    Read the article

  • SWF file not playing after being published

    - by rsquare
    I'm trying to run the "connector" example that comes bundled with the SmartFoxServer 2X downloads.. There it connects to the server and loads the correct configuration file. When I run it in Adobe Flash Professional 5, it runs correctly and connects to the server but after being published as SWF movie, it doesnt work. It loads the configuration file but can't connect and gives an error connection failure: ERROR 2048. This is the example I'm talking about.

    Read the article

  • Need help with a complex 3d scene (using Ogre and bullet)

    - by Matthias
    In my setup there is a box with a hole on one side, and a freely movable "stick" (or bar, or tube). This stick can be inserted/moved through the hole into the box. The hole is exactly as wide as the diameter of the stick. In reality, if you held the end of the stick in your hand and moved the hand left/right or up/down, the other end of the stick, which is inside the box, would move in the opposite direction of your hand movement (because the stick pivots at the point where it enters the box through the hole). (I hope you understand what I mean so far.) Now I need to simulate such a setup in a 3D program. I have already successfully developed an Ogre3D framework for this application, including Bullet. But what I don't know is how to implement in my program what I have described above. This application must include two more features: The scene camera is attached to the end of the stick that is inserted into the box, so when the user moves the mouse (to control "his" end of the stick outside the box), the camera attached to the stick moves in the opposite direction, as described above. The stick has some length, and the user can push it further into the box or pull it closer to him again. That of course means that the maximum radius on which the end of the stick inside the box can move depends on how far the stick is pushed into the box. Thus, the more the stick is pushed into the box, the larger the maximum radius of this end of the stick with the camera will be. I understand this may be quite a complex thing, so I don't expect any real source code here. I already have the Ogre and Bullet part, as said, up and running, as well as a camera attached to the stick. This works fine. What I don't know, though, is how I can simulate the setup described above, especially the requirement that the stick is affixed at the position of the hole in the box where it is inserted. Any ideas how I could approach implementing the described setup?

    Read the article

  • Typical collision detection

    - by marcg11
    I would like to know how the typical collision detection of most games works. For example, you control a character who can move in two dimensions (but not up and down). Now let's assume he walks into a wall: most games, depending on the character's angle and the bounding box's face normal, will only stop the player along one axis, but let him keep moving along the other axis, sliding along the wall. How is that done? I've only managed to stop the character from going through the wall by setting the position back to the one from the previous frame whenever the new position collides with the bounding box. But this just makes the player stop sharply and unrealistically.
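
    The usual trick (a sketch in C-style pseudocode; Collides() and Bounds() are hypothetical helpers) is to apply and test the movement one axis at a time, so a blocked X step still lets the Y step go through, which is exactly the sliding-along-the-wall behavior:

        // Sketch: move axis-by-axis; only the blocked component is cancelled.
        void MovePlayer(Player& p, float dx, float dy)
        {
            p.x += dx;
            if (Collides(p.Bounds()))   // hypothetical world query against the wall boxes
                p.x -= dx;              // undo only the X step

            p.y += dy;
            if (Collides(p.Bounds()))
                p.y -= dy;              // undo only the Y step
        }
        // Refinement: instead of undoing the whole step, clamp the position to the wall face,
        // which removes the small gap the undo can leave at higher speeds.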

    Read the article

  • I'm looking to learn how to apply traditional animation techniques to my graphics engine - are there any tutorials or online resources that can help?

    - by blueberryfields
    There are many traditional animation techniques - such as motion blur, motion along an elliptical curve rather than a straight line, counter-motion before the beginning of a movement - which help create the appearance of a realistic 3D animated character. I'm looking to incorporate tools and shortcuts for some of these into my graphics engine, to make it easier for my end users to use these techniques in their animations. Is there a good resource listing the techniques and the principles behind them, especially how they might apply to a graphics engine or 3D animation?

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in a shader code. I know that these messages are vendor specific but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have video card from different vendor than AMD, could you please give me the output of following C++ program: #include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> using namespace std; #define STRINGIFY(X) #X static const char* fs = STRINGIFY( out vec4 out_Color; mat4 m; void main() { vec3 v3 = vec3(1.0); vec2 v2 = v3; out_Color = vec4(5.0 * v2.x, 1.0); vec3 k = 3.0; float = 5; } ); static const char* vs = STRINGIFY( in vec3 in_Position; void main() { vec3 v(5); gl_Position = vec4(in_Position, 1.0); } ); void printShaderInfoLog(GLint shader) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog); cout << "Log:\n" << infoLog << endl; delete [] infoLog; } } void printProgramInfoLog(GLint program) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog); cout << "Program log:\n" << infoLog << endl; delete [] infoLog; } } void initShaders() { GLuint v = glCreateShader(GL_VERTEX_SHADER); GLuint f = glCreateShader(GL_FRAGMENT_SHADER); GLint vlen = strlen(vs); GLint flen = strlen(fs); glShaderSource(v, 1, &vs, &vlen); glShaderSource(f, 1, &fs, &flen); GLint compiled; glCompileShader(v); bool succ = true; glGetShaderiv(v, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Vertex shader not compiled." << endl; succ = false; } printShaderInfoLog(v); glCompileShader(f); glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Fragment shader not compiled." << endl; succ = false; } printShaderInfoLog(f); GLuint p = glCreateProgram(); glAttachShader(p, v); glAttachShader(p, f); glLinkProgram(p); glUseProgram(p); printProgramInfoLog(p); if (!succ) { exit(-1); } delete [] vs; delete [] fs; } int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); glutInitWindowSize(600, 600); glutCreateWindow("Triangle Test"); glewInit(); GLenum err = glewInit(); if (GLEW_OK != err) { cout << "glewInit failed, aborting." << endl; exit(1); } cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl; const GLubyte* renderer = glGetString(GL_RENDERER); const GLubyte* vendor = glGetString(GL_VENDOR); const GLubyte* version = glGetString(GL_VERSION); const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION); GLint major, minor; glGetIntegerv(GL_MAJOR_VERSION, &major); glGetIntegerv(GL_MINOR_VERSION, &minor); cout << "GL Vendor : " << vendor << endl; cout << "GL Renderer : " << renderer << endl; cout << "GL Version : " << version << endl; cout << "GL Version : " << major << "." << minor << endl; cout << "GLSL Version : " << glslVersion << endl; initShaders(); return 0; } On my video card it gives: Status: Using GLEW 1.7.0 GL Vendor : ATI Technologies Inc. GL Renderer : ATI Radeon HD 4250 GL Version : 3.3.11631 Compatibility Profile Context GL Version : 3.3 GLSL Version : 3.30 Vertex shader not compiled. 
Log: Vertex shader failed to compile with the following errors: ERROR: 0:1: error(#132) Syntax error: '5' parse error ERROR: error(#273) 1 compilation errors. No code generated Fragment shader not compiled. Log: Fragment shader failed to compile with the following errors: WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2. ERROR: 0:1: error(#174) Not enough data provided for construction constructor WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3. ERROR: 0:1: error(#132) Syntax error: '=' parse error ERROR: error(#273) 2 compilation errors. No code generated Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed. Or, if you like, you could give me other compiler messages than the ones proposed by me. To summarize, the question is: What are the GLSL compiler message formats (INFOs, WARNINGs, ERRORs) for different vendors? Please give me examples or an explanation of the patterns. EDIT: OK, it seems that this question is too broad, so in short: How do NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this: ERROR: <position>:<line_number>: <message> WARNING: <position>:<line_number>: <message> (examples are above).
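
    For what it's worth, NVIDIA drivers typically report lines in the form 0(12) : error C1038: ..., i.e. a file id followed by the line number in parentheses (an assumption based on commonly seen driver output, not a specification), and Intel's wording has varied across driver generations, so parsing is best kept permissive. A hedged C++ sketch of pulling line numbers out of either style:

        // Sketch: extract (line, message) pairs from AMD-style and NVIDIA-style info logs.
        // The NVIDIA pattern is assumed from typical driver output and may differ per driver version.
        #include <regex>
        #include <sstream>
        #include <string>
        #include <vector>

        struct LogEntry { int line; std::string message; };

        std::vector<LogEntry> parseInfoLog(const std::string& log)
        {
            std::vector<LogEntry> entries;
            static const std::regex amd(R"((?:ERROR|WARNING):\s*\d+:(\d+):\s*(.*))");   // ERROR: 0:1: ...
            static const std::regex nv(R"(\d+\((\d+)\)\s*:\s*((?:error|warning).*))");  // 0(12) : error C1038: ...

            std::istringstream lines(log);
            std::string line;
            std::smatch m;
            while (std::getline(lines, line)) {
                if (std::regex_search(line, m, amd) || std::regex_search(line, m, nv))
                    entries.push_back({ std::stoi(m[1].str()), m[2].str() });
            }
            return entries;
        }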

    Read the article

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128/128/128 in size and glDispatchCompute is called with (128/8,128/8,128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier on the C++ side set before the 3D texture is accessed). This version works fine: #version 430 layout (location = 0, rgba16f) uniform image3D ping; layout (location = 1, rgba16f) uniform image3D pong; layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in; void main() { ivec3 sampleCoord = gl_GlobalInvocationID.xyz; imageStore(pong, imageLoad(ping,sampleCoord)); } Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset: #version 430 layout (location = 0, rgba16f) uniform image3D ping; layout (location = 1, rgba16f) uniform image3D pong; layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in; void main() { ivec3 sampleCoord = gl_GlobalInvocationID.xyz; imageStore(pong, imageLoad(ping,sampleCoord+ivec3(1,0,0))); } The data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots on the texture. What am I doing wrong here?

    Read the article

  • Rotations and Origins

    - by Theodore Enderby
    I was hoping someone could explain to me, or help me understand, the math behind rotations and origins. I'm working on a little top-down space sim and I can rotate my ship just how I want it. Now, when I get my blasters going, it'd be nice if they shared the same rotation. Here's a picture, and here's some code: blast.X = ship.X+5; blast.Y = ship.Y; blast.RotationAngle = ship.RotationAngle; blast.Origin = new Vector2(ship.Origin.X,ship.Origin.Y); I add five so the sprite lines up when facing right. I tried adding five to the blast origin, but no go. Any help is much appreciated.
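
    The likely missing piece (a sketch; the 5-pixel offset and field names follow the snippet above) is that the muzzle offset has to be rotated by the ship's current angle instead of always being added along +X:

        // Sketch: rotate the fixed muzzle offset by the ship's rotation so it tracks the facing direction.
        Vector2 muzzleOffset = new Vector2(5, 0);   // offset when the ship faces right
        Vector2 rotatedOffset = Vector2.Transform(muzzleOffset,
                                                  Matrix.CreateRotationZ(ship.RotationAngle));

        blast.X = ship.X + rotatedOffset.X;
        blast.Y = ship.Y + rotatedOffset.Y;
        blast.RotationAngle = ship.RotationAngle;
        blast.Origin = new Vector2(ship.Origin.X, ship.Origin.Y);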

    Read the article

  • Scrolling a WriteableBitmap

    - by Skoder
    I need to simulate my background scrolling but I want to avoid moving my actual image control. Instead, I'd like to use a WriteableBitmap and use a blitting method. What would be the way to simulate an image scrolling upwards? I've tried various things but I can't seem to get my head around the logic: //X pos, Y pos, width, height Rect src = new Rect(0, scrollSpeed , 480, height); Rect dest = new Rect(0, 700 - scrollSpeed , 480, height); //destination rect, source WriteableBitmap, source Rect, blend mode wb.Blit(destRect, wbSource, srcRect, BlendMode.None); scrollSpeed += 5; if (scrollSpeed > 700) scrollSpeed = 0; If the height is 10, the image is quite fuzzy, and more so if the height is 1. If the height is taller, the image is clearer, but it only seems to do a one-to-one copy. How can I 'scroll' the image so that it looks like it's moving up in a continuous loop? (The height of the screen is 700.)
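
    For a continuous loop, one approach (a sketch using the same Blit extension and the 480x700 size from the question; scrollOffset is the running scroll position) is to treat the source bitmap as a ring and copy it in two slices per frame:

        // Sketch: wrap-around upward scroll of a 480x700 source into a 480x700 destination.
        int screenWidth = 480, screenHeight = 700;

        // Slice 1: from scrollOffset down to the bottom of the source -> top of the destination.
        int firstHeight = screenHeight - scrollOffset;
        wb.Blit(new Rect(0, 0, screenWidth, firstHeight), wbSource,
                new Rect(0, scrollOffset, screenWidth, firstHeight), BlendMode.None);

        // Slice 2: the top of the source wraps into the bottom of the destination.
        wb.Blit(new Rect(0, firstHeight, screenWidth, scrollOffset), wbSource,
                new Rect(0, 0, screenWidth, scrollOffset), BlendMode.None);

        scrollOffset = (scrollOffset + 5) % screenHeight;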

    Read the article

  • Render full-screen gradient or texture

    - by Filip Skakun
    What's the simplest way to fill the background of the screen with a gradient or a texture in Direct3D 10/11? I'm building a Windows 8 Metro app in which the camera never moves; I render some content in D3D, but I need to fill the background with something other than a solid color. Do I need to figure out the size and position of a rectangle and position it in 3D space, or is there some simpler solution? I don't care about depth at all, and I don't use any depth buffer since all my content is sorted back to front, so I could just start by drawing to the background.
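
    One simple option that needs no world-space rectangle at all (a sketch; the entry-point names are made up) is a single full-screen triangle generated from SV_VertexID, drawn with Draw(3, 0) and no vertex buffer or input layout, with the gradient (or a texture fetch) computed in the pixel shader:

        // Sketch: full-screen triangle, no vertex buffer needed. Draw with context->Draw(3, 0).
        struct VSOut
        {
            float4 pos : SV_POSITION;
            float2 uv  : TEXCOORD0;
        };

        VSOut FullScreenVS(uint id : SV_VertexID)
        {
            VSOut o;
            // Generates UVs (0,0), (2,0), (0,2): one triangle that covers the whole screen.
            o.uv  = float2((id << 1) & 2, id & 2);
            o.pos = float4(o.uv * float2(2.0, -2.0) + float2(-1.0, 1.0), 0.0, 1.0);
            return o;
        }

        float4 GradientPS(VSOut i) : SV_TARGET
        {
            // Simple vertical gradient; sample a texture here instead if desired.
            return lerp(float4(0.1, 0.2, 0.4, 1), float4(0.0, 0.0, 0.1, 1), i.uv.y);
        }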

    Read the article

  • Displaying possible movement tiles

    - by Ash Blue
    What's the fastest way to highlight all possible movement tiles for a player on a square grid? Players can only move up, down, left, right. Tiles can cost more than one movement, multiple levels are available to move, and players can be larger than one tile. Think of games like Fire Emblem, Front Mission, and XCOM. My first thought was to recursively search for connecting tiles. This quickly demonstrated many shortcomings when blockers, movement costs, and other features were added into the mix. My second thought was to use an A* pathfinding algorithm to check all tiles presumed valid. Presumed valid tiles would come from an algorithm that generates a diamond of tiles from the player's speed (see example here http://jsfiddle.net/truefreestyle/Suww8/9/). Problem is this seems a little slow and expensive. Is there a faster way? Edit: In Lua for Corona SDK, I integrated the following movement generation controller. I've linked to a Gist here because the solution is around 90 lines of code. https://gist.github.com/ashblue/5546009
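
    For reference, the per-tile A* can be replaced by a single Dijkstra-style flood fill from the unit's tile that stops expanding once the accumulated cost exceeds the movement points; every tile it reaches is highlightable. A sketch in Lua (getNeighbors/getCost are hypothetical, and tiles are assumed to be unique ids usable as table keys):

        -- Sketch: cost-limited flood fill (Dijkstra without a goal). Helpers are hypothetical.
        local function reachableTiles(start, movePoints)
            local best = { [start] = 0 }          -- cheapest known cost per tile
            local frontier = { start }

            while #frontier > 0 do
                local tile = table.remove(frontier, 1)
                for _, n in ipairs(getNeighbors(tile)) do       -- up/down/left/right only
                    local cost = best[tile] + getCost(n)        -- terrain cost of entering n
                    if cost <= movePoints and (best[n] == nil or cost < best[n]) then
                        best[n] = cost
                        table.insert(frontier, n)
                    end
                end
            end
            return best                            -- keys are the tiles to highlight
        end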

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling - I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers, it seems like it's a precision error. I don't know what specifically causes the precision to fail - it doesn't seem like it's just the size of geometry that causes this distortion, because simply scaling vertex position passed to the the vertex shader does not solve the issue. Here are some examples of the texture distortion: Distorted Texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png The texture issue is limited to small scale geometry on OpenGL ES 2.0, otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin of XYZ(0,0,0) These texture issues do not occur on desktop OpenGL (works fine under Windows XP, Windows 7, and Mac OS X) I've only seen the problem occur on Android, iPhone, or WebGL(which is similar to OpenGL ES 2.0) All textures are power of 2 but the problem still occurs Scaling the vertex data - The values of a vertex's X Y Z location are in the range of: -65536 to +65536 floating point I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating point precision, but this didn't fix or lessen the texture distortion issue Scaling the modelview or scaling the projection matrix does not help Changing texture filtering options does not help Disabling mipmapping, or using GL_NEAREST/GL_LINEAR does nothing Enabling/disabling anisotropic does nothing The banding effect still occurs even when using GL_CLAMP Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader, also does not work precision highp sampler2D, highp float, highp int - in the fragment or the vertex shader didn't change anything (lowp/mediump did not work either) I'm thinking this problem has to have been solved at one point - Seeing that OpenGL ES 2.0 -based games have been able to render large-scale, highly detailed geometry

    Read the article

  • Euler angles to Cartesian Coordinates for use with gluLookAt

    - by notrodash
    I have searched all of the internet but just couldn't find the answer. I am using LibGDX and this is part of my code that loops over and over: public void render() { GL11 gl = Gdx.gl11; float centerX = (float)Math.cos(yaw) * (float)Math.cos(pitch); float centerY = (float)Math.sin(yaw) * (float)Math.cos(pitch); float centerZ = (float)Math.sin(pitch); System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z); Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0); if(Gdx.input.isKeyPressed(Keys.A)) { yaw--; } if(Gdx.input.isKeyPressed(Keys.D)) { yaw++; } } I might just be bad at the math, but I dont get it. Does someone have a good explanation and an idea about how to deal with this? I am trying to make a first person camera. By the way, the camera is translated by +10 on the Z axis. Currently when I run the application, this is what I get: Watch video in browser | Download video (for those who cant download the video, everything shakes in a clockwise/anticlockwise action, depending on if I increase or decrease the Yaw value) -Thank you. [edit] and with this code: public void render() { GL11 gl = Gdx.gl11; float centerX = (float)(MathUtils.cosDeg(yaw)*4); float centerY = 0; float centerZ = (float)(MathUtils.sinDeg(yaw)*4); System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z); Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0); if(Gdx.input.isKeyPressed(Keys.A)) { yaw--; } if(Gdx.input.isKeyPressed(Keys.D)) { yaw++; } } it slowly swings from the left to the right. This approach worked for turning left and right for 2d games though. What am I doing wrong?
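
    One thing to double-check against the first snippet (a sketch assuming a Y-up convention and yaw/pitch in degrees): gluLookAt's center is a point to look at, not a direction, so the direction derived from yaw/pitch has to be added to the camera position:

        // Sketch: look-at target = camera position + direction from yaw/pitch (degrees -> radians).
        float yawRad = yaw * MathUtils.degreesToRadians;
        float pitchRad = pitch * MathUtils.degreesToRadians;

        float dirX = MathUtils.cos(yawRad) * MathUtils.cos(pitchRad);
        float dirY = MathUtils.sin(pitchRad);
        float dirZ = MathUtils.sin(yawRad) * MathUtils.cos(pitchRad);

        Gdx.glu.gluLookAt(gl,
            GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z,
            GDXRacing.camera.position.x + dirX,   // center = eye + direction
            GDXRacing.camera.position.y + dirY,
            GDXRacing.camera.position.z + dirZ,
            0, 1, 0);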

    Read the article

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two pass HLSL pixel shader. It can do anything really, but the main idea is to perform "ping ponging" to take the output of the first pass and then send it for the second pass. In my example I want to draw to the R channel and then draw to the G channel and produce a simple Venn Diagram in the shader, but need to detect overlap. I can currently detect one or the other but not overlap. There are a red and green circle overlapping, and I want to put a dynamic texture map in the overlap region. I can currently put it in either or. Below is how it looks in the shader. -------------------------------- Texture2D shaderTexture; SamplerState SampleType; ////////////// // TYPEDEFS // ////////////// struct PixelInputType { float4 position : SV_POSITION; float2 tex0 : TEXCOORD0; float2 tex1 : TEXCOORD1; float4 color : COLOR; }; //////////////////////////////////////////////////////////////////////////////// // Pixel Shader //////////////////////////////////////////////////////////////////////////////// float4 main(PixelInputType input) : SV_TARGET { float4 textureColor0; float4 textureColor1; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor0 = shaderTexture.Sample(SampleType, input.tex0); textureColor1 = shaderTexture.Sample(SampleType, input.tex1); if (input.color[0]==1.0f && input.color[1]==1.0f) // Requires multi-pass textureColor0 = textureColor1; return textureColor0; } Here is the calling code (that needs to be modified) m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets); m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT,0); m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); m_d3dContext->IASetInputLayout(m_inputLayout.Get()); m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0); m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf()); m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0); m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf()); m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
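
    On the API side, ping-ponging usually means two offscreen textures, each with a render-target view and a shader-resource view: pass 1 renders into texture A, then A is bound as the input SRV while B becomes the render target for pass 2. A sketch against the calling code above (rtvA, rtvB and srvA are hypothetical; the textures are created elsewhere):

        // Sketch: two-pass ping-pong with hypothetical rtvA/rtvB/srvA for offscreen textures A and B.
        ID3D11ShaderResourceView* nullSRV = nullptr;

        // Pass 1: draw the red circle into texture A, sampling the original texture.
        m_d3dContext->OMSetRenderTargets(1, rtvA.GetAddressOf(), nullptr);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->DrawIndexed(indexCount, 0, 0);

        // Switch targets before sampling A; a texture cannot be both RTV and SRV at once.
        m_d3dContext->OMSetRenderTargets(1, rtvB.GetAddressOf(), nullptr);
        m_d3dContext->PSSetShaderResources(0, 1, srvA.GetAddressOf());

        // Pass 2: draw the green circle, reading pass 1's output to detect the overlap region.
        m_d3dContext->DrawIndexed(indexCount, 0, 0);

        // Clear the SRV slot so the textures can swap roles next frame.
        m_d3dContext->PSSetShaderResources(0, 1, &nullSRV);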

    Read the article

  • How to import or "using" a custom class in a Unity script?

    - by Bobbake4
    I have downloaded the JSONObject plugin for parsing JSON in Unity, but when I use it in a script I get an error indicating that JSONObject cannot be found. My question is: how do I use a custom object class defined inside another class? I know I need a using directive to solve this, but I am not sure of the path to these custom objects I have imported. They are in the root project folder inside a JSONObject folder, and the class is called JSONObject. Thanks
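
    For reference, a using directive names a namespace, not a folder; where the script sits under Assets doesn't matter to the compiler. A generic sketch with a hypothetical namespace and constructor (check the plugin source for the real ones, or for no namespace at all, in which case no using is needed and a "not found" error usually means the plugin file isn't inside the Assets folder):

        // Sketch: hypothetical namespace and constructor - check the plugin source for the actual ones.
        using SomePluginNamespace;      // only needed if JSONObject is declared inside a namespace
        using UnityEngine;

        public class MyBehaviour : MonoBehaviour
        {
            void Start()
            {
                // If JSONObject is nested inside another class, qualify it instead: OuterClass.JSONObject
                JSONObject json = new JSONObject();
                Debug.Log(json);
            }
        }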

    Read the article

  • Are there any preexisting maps for a Minecraft-like level I could use in my engine?

    - by Rishav Sharan
    I am working on a tiny cube-based engine like Minecraft. I was wondering if there is a way for me to get large blocky terrain in a text format that I can use for rendering in my engine? I don't want to start on procedural generation now; I just want a resource where I can get the coordinate list for a pretty-looking terrain. Alternatively, is it possible for me to parse the Minecraft world files and use that data to generate terrain/buildings in my code?

    Read the article

  • Setting the values of a struct array from JS to GLSL

    - by mikidelux
    I've been trying to make a structure that will contain all the lights of my WebGL app, and I'm having troubles setting up it's values from JS. The structure is as follows: struct Light { vec4 position; vec4 ambient; vec4 diffuse; vec4 specular; vec3 spotDirection; float spotCutOff; float constantAttenuation; float linearAttenuation; float quadraticAttenuation; float spotExponent; float spotLightCosCutOff; }; uniform Light lights[numLights]; After testing LOTS of things I made it work but I'm not happy with the code I wrote: program.uniform.lights = []; program.uniform.lights.push({ position: "", diffuse: "", specular: "", ambient: "", spotDirection: "", spotCutOff: "", constantAttenuation: "", linearAttenuation: "", quadraticAttenuation: "", spotExponent: "", spotLightCosCutOff: "" }); program.uniform.lights[0].position = gl.getUniformLocation(program, "lights[0].position"); program.uniform.lights[0].diffuse = gl.getUniformLocation(program, "lights[0].diffuse"); program.uniform.lights[0].specular = gl.getUniformLocation(program, "lights[0].specular"); program.uniform.lights[0].ambient = gl.getUniformLocation(program, "lights[0].ambient"); ... and so on I'm sorry for making you look at this code, I know it's horrible but I can't find a better way. Is there a standard or recommended way of doing this properly? Can anyone enlighten me?

    Read the article

  • How to retain the animated position in OpenGL ES 2.0

    - by Arun AC
    I am doing frame based animation for 300 frames in opengl es 2.0 I want a rectangle to translate by +200 pixels in X axis and also scaled up by double (2 units) in the first 100 frames Then, the animated rectangle has to stay there for the next 100 frames. Then, I want the same animated rectangle to translate by +200 pixels in X axis and also scaled down by half (0.5 units) in the last 100 frames. I am using simple linear interpolation to calculate the delta-animation value for each frame. Pseudo code: The below drawFrame() is executed for 300 times (300 frames) in a loop. float RectMVMatrix[4][4] = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 }; // identity matrix int totalframes = 300; float translate-delta; // interpolated translation value for each frame float scale-delta; // interpolated scale value for each frame // The usual code for draw is: void drawFrame(int iCurrentFrame) { // mySetIdentity(RectMVMatrix); // comment this line to retain the animated position. mytranslate(RectMVMatrix, translate-delta, X_AXIS); // to translate the mv matrix in x axis by translate-delta value myscale(RectMVMatrix, scale-delta); // to scale the mv matrix by scale-delta value ... // opengl calls glDrawArrays(...); eglswapbuffers(...); } The above code will work fine for first 100 frames. in order to retain the animated rectangle during the frames 101 to 200, i removed the "mySetIdentity(RectMVMatrix);" in the above drawFrame(). Now on entering the drawFrame() for the 2nd frame, the RectMVMatrix will have the animated value of first frame e.g. RectMVMatrix[4][4] = { 1.01, 0, 0, 2, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };// 2 pixels translation and 1.01 units scaling after first frame This RectMVMatrix is used for mytranslate() in 2nd frame. The translate function will affect the value of "RectMVMatrix[0][0]". Thus translation affects the scaling values also. Eventually output is getting wrong. How to retain the animated position without affecting the current ModelView matrix? =========================================== I got the solution... Thanks to Sergio. I created separate matrices for translation and scaling. e.g.CurrentTranslateMatrix[4][4], CurrentScaleMatrix[4][4]. Then for every frame, I reset 'CurrentTranslateMatrix' to identity and call mytranslate( CurrentTranslateMatrix, translate-delta, X_AXIS) function. I reset 'CurrentScaleMatrix' to identity and call myscale(CurrentScaleMatrix, scale-delta) function. Then, I multiplied these 'CurrentTranslateMatrix' and 'CurrentScaleMatrix' to get the final 'RectMVMatrix' Matrix for the frame. 
Pseudo Code: float RectMVMatrix[4][4] = {0}; float CurrentTranslateMatrix[4][4] = {0}; float CurrentScaleMatrix[4][4] = {0}; int iTotalFrames = 300; int iAnimationFrames = 100; int iTranslate_X = 200.0f; // in pixels float fScale_X = 2.0f; float scaleDelta; float translateDelta_X; void DrawRect(int iTotalFrames) { mySetIdentity(RectMVMatrix); for (int i = 0; i< iTotalFrames; i++) { DrawFrame(int iCurrentFrame); } } void getInterpolatedValue(int iStartFrame, int iEndFrame, int iTotalFrame, int iCurrentFrame, float *scaleDelta, float *translateDelta_X) { float fDelta = float ( (iCurrentFrame - iStartFrame) / (iEndFrame - iStartFrame)) float fStartX = 0.0f; float fEndX = ConvertPixelsToOpenGLUnit(iTranslate_X); *translateDelta_X = fStartX + fDelta * (fEndX - fStartX); float fStartScaleX = 1.0f; float fEndScaleX = fScale_X; *scaleDelta = fStartScaleX + fDelta * (fEndScaleX - fStartScaleX); } void DrawFrame(int iCurrentFrame) { getInterpolatedValue(0, iAnimationFrames, iTotalFrames, iCurrentFrame, &scaleDelta, &translateDelta_X) mySetIdentity(CurrentTranslateMatrix); myTranslate(RectMVMatrix, translateDelta_X, X_AXIS); // to translate the mv matrix in x axis by translate-delta value mySetIdentity(CurrentScaleMatrix); myScale(RectMVMatrix, scaleDelta); // to scale the mv matrix by scale-delta value myMultiplyMatrix(RectMVMatrix, CurrentTranslateMatrix, CurrentScaleMatrix);// RectMVMatrix = CurrentTranslateMatrix*CurrentScaleMatrix; ... // opengl calls glDrawArrays(...); eglswapbuffers(...); } I maintained this 'RectMVMatrix' value, if there is no animation for the current frame (e.g. 101th frame onwards). Thanks, Arun AC

    Read the article
