Search Results

Search found 25550 results on 1022 pages for 'mere development'.

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadow-mapping shaders. The shadow-map rendering shader is fine and works very well. The world rendering shader is more problematic. The only value which seems definitely off is the pixel's position from the light's perspective, which I pass in parallel with the position:

        struct Pixel
        {
            float4 position  : SV_Position;
            float4 light_pos : TEXCOORD2;
            float3 normal    : NORMAL;
            float2 texcoord  : TEXCOORD;
        };

    The reason I used the semantic 'TEXCOORD2' on the light-space position is that I believe the problem lies in Direct3D's interpolation of values between shader stages, and I started trying random semantics and also forcing linear and noperspective interpolation. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than, the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code:

        struct Vertex
        {
            float3 position : POSITION;
            float3 normal   : NORMAL;
            float2 texcoord : TEXCOORD;
        };

        struct Pixel
        {
            float4 position  : SV_Position;
            float4 light_pos : TEXCOORD2;
            float3 normal    : NORMAL;
            float2 texcoord  : TEXCOORD;
        };

        cbuffer Camera : register(b0)
        {
            matrix world;
            matrix view;
            matrix projection;
        };

        cbuffer Light : register(b1)
        {
            matrix light_world;
            matrix light_view;
            matrix light_projection;
        };

        Pixel RenderVertexShader(Vertex input)
        {
            Pixel output;
            output.position  = mul(float4(input.position, 1.0f), world);
            output.position  = mul(output.position, view);
            output.position  = mul(output.position, projection);
            output.light_pos = mul(float4(input.position, 1.0f), world);
            output.light_pos = mul(output.light_pos, light_view);
            output.light_pos = mul(output.light_pos, light_projection);
            output.texcoord  = input.texcoord;
            output.normal    = input.normal;
            return output;
        }

    I suspect interpolation is the culprit, because I used the camera matrices in place of the light matrices in the vertex shader and had the same problem. The problem is evident: the same two vectors were passed from the VS to a pixel, but only one of them showed a change in the PS. I have already thoroughly debugged the validity of the matrices, the cbuffers, and the multiplications. I'm very stumped and have been trying to solve this for quite some time.

    Misc info: the light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH() with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f.

    Any ideas on what is happening? (This is a repost from my question on Stack Overflow.)

  • How do I render an animation where some frames appear twice?

    - by hustlerinc
    I am animating a sprite. The sprite has 7 different frames, but the animation is 10 frames long. This is because 3 of the original frames appear twice in the animation: 3 -> 4 -> 5 -> 6 -> 4 -> 3 -> 2 -> 1 -> 0 -> 2 Frames 2, 3 and 4 appear twice. This avoids having to store duplicate frames in the spritesheet. How can I render the animation in this sequence with repeated frames?
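
    One way this is often handled (a minimal C++ sketch; the frame duration and type names are illustrative, not from the post): store the playback order as indices into the 7-frame sheet and step through that list, so no frame pixels are duplicated.

        #include <vector>

        struct Animation {
            // Playback order from the post; indices into the 7-frame sprite sheet.
            std::vector<int> order = {3, 4, 5, 6, 4, 3, 2, 1, 0, 2};
            float frameTime = 0.1f;   // seconds per animation step (assumed)
            float elapsed   = 0.0f;

            // Advance by dt and return which sheet frame to draw this tick.
            int currentFrame(float dt) {
                elapsed += dt;
                int step = static_cast<int>(elapsed / frameTime)
                         % static_cast<int>(order.size());
                return order[step];
            }
        };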

  • How to solve problems with movement in a simple tile-based multiplayer game?

    - by Murlo
    I'm making a simple tile-based 2D multiplayer game in JavaScript using socket.io, where you can move one tile every 200 ms. The two solutions I've tried are as follows:

    1. The client sends "walk one tile north" every 200 ms.
       Problem: people can easily hack the client to send the action more often.

    2. The client sends "walking north" and "stopped walking".
       Problem: sometimes the player moves extra steps when "stopped walking" doesn't arrive in time.

    Do you know a way around these problems, or is there a better way to do it?

    EDIT: Regarding the first solution, I've tried adding validation on the server to check whether 200 ms have passed since the last movement. The problem is that latency still encourages people to spam the action as much as possible, giving them an unfair advantage.
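
    A sketch of one server-authoritative variant (illustrative C++ rather than socket.io code): the client only sends intents, and the server applies at most one move per 200 ms tick, so spamming gains nothing and a late "stopped walking" can't add extra steps.

        #include <chrono>
        #include <optional>

        enum class Dir { North, South, East, West };

        struct Player {
            int x = 0, y = 0;
            std::optional<Dir> pending;                       // newest intent wins
            std::chrono::steady_clock::time_point lastMove{};
        };

        void onMoveRequest(Player& p, Dir d) { p.pending = d; }  // network handler

        void tick(Player& p) {  // called every server frame
            using namespace std::chrono;
            auto now = steady_clock::now();
            if (p.pending && now - p.lastMove >= milliseconds(200)) {
                switch (*p.pending) {
                    case Dir::North: ++p.y; break;
                    case Dir::South: --p.y; break;
                    case Dir::East:  ++p.x; break;
                    case Dir::West:  --p.x; break;
                }
                p.lastMove = now;
                p.pending.reset();   // broadcast the new position here
            }
        }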

  • Issues implementing arcball viewer

    - by Pris
    My scene has a simple cube, and a camera built with the lookAt function (I'm using OpenGL). The scene renders fine, and I'm sure I have my model/view/projection matrices set up correctly. Now I'm trying to implement arcball rotation for my camera, but I'm having some trouble. I've got as far as calculating the angle/axis rotation for a virtual sphere in normalized screen coordinates: moving my mouse left to right gives me an angle around the Y axis, and moving it up/down gives me an angle about X. I'm not sure where to go from here: what do I need to do with my axis so I can apply the angle and simulate camera rotation about its viewpoint? If I directly apply the axis/angle rotation to the camera/view transform, I get what you'd expect: the view rotates about the world axes that the mouse movement over the virtual sphere corresponds to. So if I move the mouse up/down, the view rotates about the world's X axis (what I get reminds me of a first-person view)... but this isn't what I want. I think the axis I get needs to be transformed so it passes through the camera viewpoint and is oriented correctly relative to the camera, but I don't know if that's right or how to do it.
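
    A sketch of the missing step, assuming GLM (the function name is illustrative): the mouse-derived axis lives in eye space, so move it into world space with the inverse of the view rotation before building the rotation.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        glm::mat4 arcballRotate(const glm::mat4& view, glm::vec3 axisEye, float angleRad)
        {
            // The view matrix's upper-left 3x3 takes world -> eye, so its
            // inverse takes the mouse-derived axis back into world space.
            glm::mat3 eyeToWorld = glm::inverse(glm::mat3(view));
            glm::vec3 axisWorld  = glm::normalize(eyeToWorld * axisEye);
            // Apply the rotation in world space, before the existing view.
            return view * glm::rotate(glm::mat4(1.0f), angleRad, axisWorld);
        }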

  • How to manage Areas/Levels in an RPG?

    - by Hexlan
    I'm working on an RPG and I'm trying to figure out how to manage the different levels/areas in the game. Currently I create a new state (source file) for every area, defining its unique aspects. My concern is that as the game grows, the number of class files will become unmanageable with all the towns, houses, shops, dungeons, etc. that I need to keep track of. I would also prefer to separate my levels from the source code, because non-programmer members of the team will be creating levels, and I would like the engine to be as free from game-specific code as possible. I'm thinking of creating a class that provides all the functionality shared between levels/areas, with a unique member variable that can be used to look up level specifics from data. This way I only need to define a level/area once in the code, but can create multiple instances, each with its own unique aspects provided by data. Is this a good way to go about solving the issue? Is there a better way to handle a growing number of levels?
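
    A minimal sketch of the data-driven idea described above (file layout and type names are illustrative): one Level class in code, with per-area specifics read from data, so designers add areas by adding files rather than classes.

        #include <string>
        #include <vector>

        struct EnemySpawn { std::string type; int x, y; };

        class Level {
        public:
            explicit Level(const std::string& areaId) {
                loadFromFile("data/areas/" + areaId + ".lvl");
            }
            void update(float dt) { /* run this area's scripts, AI, timers */ }
            void draw()           { /* draw tilemap, entities */ }

        private:
            void loadFromFile(const std::string& path) {
                /* parse tilemap path, spawns, music, exits... */
            }
            std::vector<EnemySpawn> spawns;
            std::string tilemapPath, musicPath;
        };

        // Designers add areas by adding data files, not classes:
        // Level town("town_01"); Level dungeon("dungeon_03");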

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24 bpp / 32 bpp, compressed / raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats.

    My questions:

    1. What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    2. How should bitmaps be stored for high-performance algorithms, so that read/write times are fastest? (Fixed array? With/without padding? 24 bpp or 32 bpp?)
    3. How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage? (JPEG? Or a faster [de]compression algorithm?)

    Some possible methods:

    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB).
    - Use a form of "sparse" data storage, so each line of the bitmap is allocated separately, using more memory overall but requiring smaller contiguous memory segments.
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
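
    For the first option, a minimal sketch (illustrative; 32 bpp): one contiguous buffer plus a per-row pitch keeps pixel access a single multiply-add, which is usually what high-performance loops want.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct Bitmap32 {
            int width = 0, height = 0;
            int pitch = 0;                 // bytes per row (room for alignment padding)
            std::vector<uint8_t> data;     // one contiguous allocation

            Bitmap32(int w, int h)
                : width(w), height(h), pitch(w * 4),
                  data(static_cast<std::size_t>(pitch) * h) {}

            uint32_t& pixel(int x, int y) {  // packed 0xAARRGGBB, for example
                return *reinterpret_cast<uint32_t*>(&data[y * pitch + x * 4]);
            }
        };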

  • Rendering transparent textures in directX

    - by Vibhore Tanwer
    I am working on a DirectX application with WPF, and I'm facing a problem with videos and images that contain transparent pixels: I have to draw a color in the background and then a video/image over it. What I expect is that the background color shows through the video, with only the non-transparent pixels visible; what I get instead is a black background behind the video. I am using the following device settings to achieve alpha blending:

        device.RenderState.SourceBlend = Blend.SourceAlpha;
        device.RenderState.DestinationBlend = Blend.InvSourceAlpha;
        device.RenderState.AlphaBlendEnable = true;

    What am I missing here? What is the best approach to handling transparent videos? Any help will be of great value to me.

  • Multiplayer tile based movement synchronization

    - by Mars
    I have to synchronize the movement of multiple players over the Internet, and I'm trying to figure out the safest way to do that. The game is tile-based; you can only move in 4 directions, and every move shifts the sprite 32 px (over time, of course).

    Now, if I simply send each move action to the server, which broadcasts it to all players while the walk key is held down, I have to get each successive command to the server and out to all clients in time, or the movement won't be smooth anymore. I've seen this in other games, and it can get ugly pretty quickly, even without lag, so I wonder whether it's even a viable option. It does seem like a very good method for single-player, though, since it's easy and straightforward (just take the next movement action in time and add it to a list), and you can easily add mouse movement (clicking on some tile) by pushing a path onto a queue that's then walked along.

    The other idea that came to mind was sending the information that someone started moving in some direction, and again once they stopped or changed direction, together with the position, so that the sprite appears at the correct position, or rather so the position can be fixed if it's wrong. This should (hopefully) only cause problems if someone really is lagging, in which case it's to be expected. For this to work I'd need some kind of queue where incoming direction changes and so on are saved, so the sprite knows where to go after the current movement to the next tile is finished. This could actually work, but it sounds overcomplicated, although it might be the only way to do it without risking stuttering: if a stop or direction change is received on the client side, it's saved in a queue and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late, there'll be stuttering as well, of course...

    I'm having a hard time deciding on a method, and I couldn't really find any examples of this. My main problem is keeping the tile movement smooth, which is why other topics about synchronizing pixel-based movement don't help much. What is the "standard" way to do this?
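
    A sketch of the queued-movement idea from the post (illustrative C++): a remote sprite always finishes its current 32 px step and only then pops the next queued command, which keeps tile movement smooth at the cost of a little extra latency.

        #include <algorithm>
        #include <deque>

        enum class Cmd { North, South, East, West, Stop };

        struct RemotePlayer {
            int tileX = 0, tileY = 0;   // tile currently occupied (or being left)
            int dx = 0, dy = 0;         // direction of the step in progress
            float progress = 1.0f;      // 0..1 progress across the current step
            std::deque<Cmd> queue;      // commands received from the server

            void update(float dt) {
                if (progress < 1.0f) {                       // mid-step: keep walking
                    progress = std::min(1.0f, progress + dt / 0.2f);  // 200 ms per tile
                    if (progress >= 1.0f) { tileX += dx; tileY += dy; }
                    return;
                }
                if (queue.empty()) return;                   // idle until next command
                Cmd c = queue.front(); queue.pop_front();
                if (c == Cmd::Stop) { dx = dy = 0; return; }
                dx = (c == Cmd::East)  - (c == Cmd::West);
                dy = (c == Cmd::South) - (c == Cmd::North);
                progress = 0.0f;                             // begin the next 32 px step
            }
            // Render position: x = (tileX + progress * dx) * 32, likewise for y.
        };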

  • Random position between ranges

    - by blakey87
    Does anyone have a good algorithm for generating a random y position for spawning a block that takes into account a minimum and maximum height, so the player can always jump onto the block? Blocks will be spawned continually, so the player must always be able to jump onto the next block, bearing in mind the minimum position, which would be the ground, and the maximum, which would be the player's jump height, bearing in mind the ceiling.
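
    One possible formulation (a C++ sketch assuming a y-up coordinate system; all names are illustrative): clamp the random range so the next block is always reachable from the previous one.

        #include <random>

        float nextBlockY(float prevY, float groundY, float ceilingY, float maxJump) {
            float low  = groundY;                       // minimum: the ground
            float high = prevY + maxJump;               // maximum: one jump above last block
            if (high > ceilingY) high = ceilingY;       // never spawn above the ceiling
            if (high < low)      high = low;            // degenerate case: pin to ground
            static std::mt19937 rng{std::random_device{}()};
            std::uniform_real_distribution<float> dist(low, high);
            return dist(rng);
        }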

  • Frame Buffer Objects vs calling TexCoord2f?

    - by sensae
    I'm learning the basics of OpenGL with LWJGL currently, and following a guide I've got textured quads that can move around a scene. I've been reading about Frame Buffer Objects, and I'm not really clear on their purpose or their benefit. My understanding is that I'd create an FBO with the texture I'd like, load the FBO, draw a quad, then unload the FBO. What would the technique I'm currently using for texture management be called, and how does it differ from using FBOs? What are the benefits of using FBOs? How do they fit into the grand rendering scheme of things?
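
    For context, a raw-OpenGL sketch of what an FBO actually is (error and completeness checks omitted; LWJGL exposes the same calls): it is a render target you draw into, not a way of putting a texture on a quad, so it complements the texturing you are already doing rather than replacing it.

        GLuint fbo, colorTex;
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);
        // ...draw the scene here; it lands in colorTex instead of the screen...
        glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
        // colorTex can now be drawn onto a quad like any other texture.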

  • Handling player/background movements in 2D games

    - by lukeluke
    Suppose you have an animated character controlled by the player and a 2D world (like the old 2D side-scrolling games). When the user presses right on the keyboard, the background is moved to the right. If the path is always horizontal, this is simple to do (just increment/decrement the x-coordinate). But suppose that the path is instead a polygonal chain. My questions are: How do you move the background? And how do you move the background if the game objects are managed by a physics engine like Box2D?
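
    One common approach (illustrative C++ sketch; the chain is assumed non-empty): parameterize the polygonal chain by arc length, advance a distance as the player walks, and scroll the background from the returned point. With Box2D, the same point can instead drive a kinematic platform body.

        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        // Point that lies `dist` units along the chain from its first vertex.
        Vec2 pointAtDistance(const std::vector<Vec2>& chain, float dist) {
            for (size_t i = 0; i + 1 < chain.size(); ++i) {
                Vec2 a = chain[i], b = chain[i + 1];
                float seg = std::hypot(b.x - a.x, b.y - a.y);
                if (dist <= seg && seg > 0) {
                    float t = dist / seg;
                    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
                }
                dist -= seg;
            }
            return chain.back();   // walked past the end: clamp to the last vertex
        }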

  • Coordinate and positioning problem on iOS with cocos2d-x

    - by Vexille
    I'm using cocos2d-x alongside Marmalade, running some tests and tutorials before starting an actual project with them. So far things are working reasonably well on the Windows simulator, Android, and even BlackBerry's PlayBook, but on iOS devices (iPhone and iPad) the positioning seems to be off. To make things clearer, I put together a scene that just draws an image in the middle of the screen. It worked as expected everywhere else, but not on an iPhone. To get the coordinates for the center of the screen I'm using the VisibleRect class from the TestCpp sample; it just uses sharedOpenGLView to get the visible size and visible origin, and calculates the center from that.

        CCSprite* test = CCSprite::create("Ball.png", CCRectMake(0, 0, 80, 80));
        test->setPosition(ccp(VisibleRect::center().x, VisibleRect::center().y));
        this->addChild(test);

    I also have a noBorder policy set in AppDelegate:

        CCEGLView::sharedOpenGLView()->setDesignResolutionSize(designSize.width, designSize.height, kResolutionNoBorder);

    One funny thing is that I tried deploying the TestCpp sample project to some iOS devices, and it worked reasonably well on the iPhone, but on the iPad the application was drawn on only a small portion of the screen, just like what happened on the iPhone when I tried the ShowAll policy.

  • Cocos2d rotating sprite while moving with CCBezierBy

    - by marcg11
    I've set up my movement actions, which consist of sequences of CCBezierBy. However, I would like the sprite to rotate to follow the direction of the movement (like an airplane). How should I do this with cocos2d? I've done the following to test it out:

        CCSprite *green = [CCSprite spriteWithFile:@"enemy_green.png"];
        [green setPosition:ccp(50, 160)];
        [self addChild:green];

        ccBezierConfig bezier;
        bezier.controlPoint_1 = ccp(100, 200);
        bezier.controlPoint_2 = ccp(400, 200);
        bezier.endPosition = ccp(300, 160);
        [green runAction:[CCAutoBezier actionWithDuration:4.0 bezier:bezier]];

    In my subclass:

        @interface CCAutoBezier : CCBezierBy
        @end

        @implementation CCAutoBezier

        - (id)init
        {
            self = [super init];
            if (self) {
                // Initialization code here.
            }
            return self;
        }

        - (void)update:(ccTime)t
        {
            CGPoint oldpos = [self.target position];
            [super update:t];
            CGPoint newpos = [self.target position];
            // Note: atan2() returns radians, while cocos2d's setRotation:
            // expects degrees (clockwise positive), which may explain the
            // wrong orientation.
            float angle = atan2(newpos.y - oldpos.y, newpos.x - oldpos.x);
            [self.target setRotation:angle];
        }
        @end

    However, it rotates but doesn't follow the path...

  • Heightmap and Textures

    - by Robert
    I'm trying to find the "best way" to apply a texture to a heightmap with OpenGL 3.x. It's really hard to find anything on Google because the tutorials are old and they all use different methods; I'm really lost and I don't know what to use at all. Here is the (basic) code that generates the heightmap:

        float[] vertexes = null;
        float[] textureCoords = null;

        for (int x = 0; x < this.m_size.width; x++)
        {
            for (int y = 0; y < this.m_size.height; y++)
            {
                vertexes ~= [x, 1.0f, y];
                textureCoords ~= [cast(float)x / 50, cast(float)y / 50];
            }
        }

    As you can see, I don't know how to apply the texture at all (I was using / 50 for my tests). I would like to have something very basic. Edit: my texture size is 1024x1024.
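
    For the texture coordinates themselves, a common baseline (C++ sketch, illustrative): normalize the grid indices so the texture spans the terrain exactly once, then scale by a tiling factor if it should repeat.

        #include <vector>

        struct Vertex { float px, py, pz, u, v; };

        // width and height are the grid dimensions (both > 1).
        std::vector<Vertex> buildGrid(int width, int height, float tiling = 1.0f) {
            std::vector<Vertex> verts;
            for (int x = 0; x < width; ++x)
                for (int y = 0; y < height; ++y)
                    verts.push_back({ float(x), 1.0f, float(y),
                                      tiling * x / (width - 1),     // u in [0, tiling]
                                      tiling * y / (height - 1) }); // v in [0, tiling]
            return verts;
        }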

  • DirectX 11 Constant Buffers vs Effect Framework

    - by Alex
    I'm having some trouble understanding the difference between using constant buffers directly and using the DirectX 11 effects framework for updating shader constants. From what I understand they both do exactly the same thing, although from reading the documentation it appears that using effects is meant to be 'easier'. They seem the same to me; one uses VSSetConstantBuffers and the other GetConstantBufferByName. Is there something I'm missing here?

  • Knowing state of game in real time

    - by evthim
    I'm trying to code a tic-tac-toe game in Java, and I need help figuring out how to check efficiently, and without freezing the program, whether someone has won. I'm only in the design stages now; I haven't started programming anything, but I'm wondering how I would know the state of the game at all times and exactly when someone wins.

    Response to MarkR (note: I had to place this here, as it was too long for the comment section): It's not a homework problem. I'm trying to get more practice programming GUIs, which I've only done once as a freshman in my second introductory programming course. I understand I'll have a 2D array; I plan to have a 2D integer array where X would equal 1 and O would equal 0. However, won't it take too much time to check after every move whether someone has won? Is there a data structure or algorithm I can use so that the program knows the state of the game at all times (by "state" I mean not just every position on the board, which the int array takes care of, but knowing that player 1 will win if they place an X on this square) and thus knows automatically when someone has won?
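
    For scale, a sketch of the check itself (illustrative C++): testing for a win after each move only touches the lines through the last move, so it cannot freeze anything.

        // p is the player's mark (1 or 0), (r, c) the move just made.
        bool wins(const int b[3][3], int p, int r, int c) {
            bool row  = b[r][0] == p && b[r][1] == p && b[r][2] == p;
            bool col  = b[0][c] == p && b[1][c] == p && b[2][c] == p;
            bool diag = b[0][0] == p && b[1][1] == p && b[2][2] == p;
            bool anti = b[0][2] == p && b[1][1] == p && b[2][0] == p;
            return row || col || diag || anti;
        }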

  • PhysX Capsule Character Controller floating above ground

    - by Jannie
    I am using PhysX version 3.0.2 in the simulation package I'm working on, and I've encountered some bizarre behavior with the capsule character controller. When I set the controller's height and radius to the appropriate values (r = 0.25, h = 1.86), it behaves correctly (moving along the ground, colliding with other objects, and so on) except that the capsule itself floats above the ground. The actor then bumps his head when trying to get through a door, since the capsule is the correct height but is also floating above the ground. The rest of the scene has its collision bodies wrapped correctly; it's just the capsule that goes wrong. The stop-gap I've implemented is creating a smaller capsule and giving it an offset, but I need to implement ray picking for the controller next, so the capsule has to surround the character model properly. Here's my character creation code (with height = 1.86f and radius = 0.25f):

        NxController* D3DPhysXManager::CreateCharacterController( std::string l_stdsControllerName, float l_fHeight, float l_fRadius, D3DXVECTOR3 l_v3Position )
        {
            NxCapsuleControllerDesc l_CapsuleControllerDescription;
            l_CapsuleControllerDescription.height = l_fHeight;
            l_CapsuleControllerDescription.radius = l_fRadius;
            l_CapsuleControllerDescription.position.set( l_v3Position.x, l_v3Position.y, l_v3Position.z );
            l_CapsuleControllerDescription.callback = &this->m_ControllerHitReport;

            NxController* l_pController = this->m_pControllerManager->createController( this->m_pScene, l_CapsuleControllerDescription );
            this->m_pControllerMap.insert( l_ControllerValuePair( l_stdsControllerName, l_pController ) );
            return l_pController;
        }

    Any help at all would be appreciated; I just can't figure this one out!

    P.S. I've found a couple of (rather old) threads describing the same issue, but it seems they couldn't find a solution either:

        http://forum-archive.developer.nvidia.com/index.php?showtopic=6409
        http://forum-archive.developer.nvidia.com/index.php?showtopic=3272
        http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=23003

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and found no definitive answer. When would you normalize the vertex data in OpenGL using the following call?

        glVertexAttribPointer(index, size, type, normalize, stride, pointer);

    That is, when would normalize == GL_TRUE? In what situations, and why would you choose to let the GPU do the conversion instead of preprocessing it? Every example I have ever seen sets this to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).
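
    The classic use case is quantized attributes (a sketch; assumes a loaded GL context and <cstddef> for offsetof): store normals as signed bytes and let the GPU expand them to roughly [-1, 1] on fetch, saving memory and bandwidth over preprocessing to floats.

        struct PackedVertex {
            float  pos[3];
            GLbyte normal[3];   // quantized normal, -127..127
        };

        // Position: plain floats, so no normalization.
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(PackedVertex),
                              (void*)offsetof(PackedVertex, pos));
        // Normal: GL_TRUE maps -127..127 back to about -1.0..1.0 in the shader.
        glVertexAttribPointer(1, 3, GL_BYTE, GL_TRUE, sizeof(PackedVertex),
                              (void*)offsetof(PackedVertex, normal));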

  • What forms of non-interactive RPG battle systems exist?

    - by Landstander
    I am interested in systems that allow players to develop a battle plan or set up a strategy for the party or characters prior to entering battle. During the battle the player either cannot input commands or can choose not to.

    Rule based: the player sets up a list of rules in the form [condition - action], ordered by priority. Examples:

    - Gambits in Final Fantasy XII
    - Tactics in Dragon Age: Origins and Dragon Age II
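
    The rule-based form sketches out naturally as an ordered list of condition/action pairs (illustrative C++; Character stands in for the game's own type):

        #include <functional>
        #include <vector>

        struct Character { int hp = 100, maxHp = 100; };

        struct Rule {
            std::function<bool(const Character&)> condition;  // e.g. "HP below 50%?"
            std::function<void(Character&)>       action;     // e.g. "cast Heal"
        };

        void takeTurn(Character& self, const std::vector<Rule>& rules) {
            for (const auto& r : rules)          // priority = position in the list
                if (r.condition(self)) { r.action(self); return; }
            // No rule fired: fall back to a default action (attack, defend, ...).
        }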

  • I want to learn to program in SDL and C++; where do I start? I want to learn only what I need to start making 2D games [on hold]

    - by user2644399
    Lazy Foo of lazyfoo.net, author of the SDL 2D tutorials, wrote that in order for me to start game programming in SDL, I need to know these concepts well: operators, control structures, loops, functions, structures, arrays, references, pointers, classes, objects, how to use a template, and bitwise and/or. I want to know the fastest way to learn as much of basic C++ as I need to make 2D games. Thanks in advance.

  • Problem creating levels using inherited classes/polymorphism

    - by Adam
    I'm trying to write my level classes by having a base class that each level class inherits from... The base class uses pure virtual functions. My base class is only going to be used as a vector that'll have the inherited level classes pushed onto it. This is what my code looks like at the moment; I've tried various things and get the same result (a segmentation fault).

        // level.h
        class Level
        {
        protected:
            Mix_Music *music;
            SDL_Surface *background;
            SDL_Surface *background2;
            vector<Enemy> enemy;
            bool loaded;
            int time;

        public:
            Level();
            virtual ~Level();

            int bgX, bgY;
            int bg2X, bg2Y;
            int width, height;

            virtual void load();
            virtual void unload();
            virtual void update();
            virtual void draw();
        };

        // level.cpp
        Level::Level()
        {
            bgX = 0; bgY = 0;
            bg2X = 0; bg2Y = 0;
            width = 2048; height = 480;
            loaded = false;
            time = 0;
        }

        Level::~Level()
        {
        }

        // ...the virtual functions are empty...

    I'm not sure exactly what I'm supposed to include in the inherited class structure, but this is what I have at the moment:

        // level1.h
        class Level1 : public Level
        {
        public:
            Level1();
            ~Level1();

            void load();
            void unload();
            void update();
            void draw();
        };

        // level1.cpp
        Level1::Level1()
        {
        }

        Level1::~Level1()
        {
            enemy.clear();
            Mix_FreeMusic(music);
            SDL_FreeSurface(background);
            SDL_FreeSurface(background2);
            music = NULL;
            background = NULL;
            background2 = NULL;
            Mix_CloseAudio();
        }

        void Level1::load()
        {
            music = Mix_LoadMUS("music/song1.xm");
            background = loadImage("image/background.png");
            background2 = loadImage("image/background2.png");
            Mix_OpenAudio(48000, MIX_DEFAULT_FORMAT, 2, 4096);
            Mix_PlayMusic(music, -1);
        }

        void Level1::unload()
        {
        }

        // ...the other functions have level-specific code in them...

    Right now, for testing purposes, I just have the main loop create a Level1 level1; and use its functions, but when I run the game I get a segmentation fault. This is the first time I've tried writing inherited classes, so I know I'm doing something wrong, but I can't seem to figure out what exactly.

  • How do the rectangle bounds (x, y, width, height) in libgdx work?

    - by JG22
    I can't work out how to use the rectangle bounds in libgdx. I am currently using the superJumper example, which has two or three instances like these:

        pauseBounds = new Rectangle(320 - 64, 480 - 64, 64, 64);   // pause button, top right corner
        resumeBounds = new Rectangle(160 - 96, 240, 192, 36);      // resume button, centered in the pause menu

    Basically my question is about the "320 - 64" and "160 - 96", because I don't know why they are used. I need to create one rectangle that covers the left side of the screen and one that covers the right, because I want to create on-screen buttons. I have already written the code for these buttons and managed to get them to work, but I can't move the rectangles to where I want them. Thank you if you can help.
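
    For orientation, a plain C++ sketch mirroring libgdx's Rectangle(x, y, width, height), where x, y is the bottom-left corner: "320 - 64" right-aligns a 64-wide button on a 320-wide target screen, and half-screen touch areas follow the same pattern.

        struct Rect {
            float x, y, w, h;
            bool contains(float px, float py) const {
                return px >= x && px < x + w && py >= y && py < y + h;
            }
        };

        const float SCREEN_W = 320, SCREEN_H = 480;  // superJumper's target size

        Rect pauseBtn  {SCREEN_W - 64, SCREEN_H - 64, 64, 64};    // top-right corner
        Rect leftSide  {0,            0, SCREEN_W / 2, SCREEN_H}; // left half button
        Rect rightSide {SCREEN_W / 2, 0, SCREEN_W / 2, SCREEN_H}; // right half button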

  • OpenGL + Allegro. Moving from software drawing at X/Y coordinates to OpenGL is confusing

    - by Aaron
    Having a fair bit of trouble. I'm used to Allegro and drawing sprites on a bitmap buffer at X, Y coordinates. Now I've started a test project with OpenGL, and it's weird. As far as I know, there are many ways to draw things in OpenGL. At the moment I think I'm creating a quad, giving it a texture made from a bitmap, and drawing that:

        GLuint gl_image;
        bitmap = load_bitmap("cat.bmp", NULL);
        gl_image = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, bitmap, GL_RGBA);

        glBindTexture(GL_TEXTURE_2D, gl_image);
        glBegin(GL_QUADS);
            glColor4ub(255, 255, 255, 255);
            glTexCoord2f(0, 0); glVertex3f(-0.5,  0.5, 0);
            glTexCoord2f(1, 0); glVertex3f( 0.5,  0.5, 0);
            glTexCoord2f(1, 1); glVertex3f( 0.5, -0.5, 0);
            glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0);
        glEnd();

    So I have a few questions: Is this the best way of drawing a sprite? Is it suitable? And the big question: does anyone know any tutorials on this weird coordinate system? It's vastly different from X/Y, but I want to learn it. I was thinking I could learn how this positioning works and then write a function to translate it to X and Y coordinates. That's about it; I'm still trying to figure it all out on my own, but any contributions would be greatly appreciated. Thanks!
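
    A sketch of the usual fix for 2D work, assuming Allegro has already given you a working GL context: an orthographic projection makes vertex coordinates plain pixels again, with a top-left origin like Allegro's software buffer.

        const double SCREEN_W = 640, SCREEN_H = 480;   // illustrative resolution
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, SCREEN_W, SCREEN_H, 0, -1, 1);      // left, right, bottom, top, near, far
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // glVertex3f(x, y, 0) now takes pixel coordinates; the -0.5..0.5 quad in
        // the post was being drawn in the default -1..1 clip-space-style range.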

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light: when I move the camera away from (0.0, 0.0, 0.0), it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code.

    Vertex shader:

        #version 410 core

        in vec3 vf_normal;
        in vec3 vf_bitangent;
        in vec3 vf_tangent;
        in vec2 vf_textureCoordinates;
        in vec3 vf_vertex;

        out vec3 tc_normal;
        out vec3 tc_bitangent;
        out vec3 tc_tangent;
        out vec2 tc_textureCoordinates;
        out vec3 tc_vertex;

        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform float vf_te_inner;
        uniform float vf_te_outer;

        void main()
        {
            tc_normal = vf_normal;
            tc_bitangent = vf_bitangent;
            tc_tangent = vf_tangent;
            tc_textureCoordinates = vf_textureCoordinates;
            tc_vertex = vf_vertex;
            gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0);
        }

    Tessellation control shader:

        #version 410 core

        layout (vertices = 3) out;

        in vec3 tc_normal[];
        in vec3 tc_bitangent[];
        in vec3 tc_tangent[];
        in vec2 tc_textureCoordinates[];
        in vec3 tc_vertex[];

        out vec3 te_normal[];
        out vec3 te_bitangent[];
        out vec3 te_tangent[];
        out vec2 te_textureCoordinates[];
        out vec3 te_vertex[];

        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        #define ID gl_InvocationID

        float getTessLevelInner(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner);
        }

        float getTessLevelOuter(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer);
        }

        void main()
        {
            te_normal[gl_InvocationID] = tc_normal[gl_InvocationID];
            te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID];
            te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID];
            te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID];
            te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID];

            float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz);
            float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz);
            float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz);

            gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2);
            gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0);
            gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1);
            gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0);
        }

    Tessellation evaluation shader:

        #version 410 core

        layout (triangles, equal_spacing, cw) in;

        in vec3 te_normal[];
        in vec3 te_bitangent[];
        in vec3 te_tangent[];
        in vec2 te_textureCoordinates[];
        in vec3 te_vertex[];

        out vec3 g_normal;
        out vec3 g_bitangent;
        out vec4 g_patchDistance;
        out vec3 g_tangent;
        out vec2 g_textureCoordinates;
        out vec3 g_vertex;

        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_displace;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
        {
            return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
        }

        vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
        {
            return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2*d*d);
            return d;
        }

        float getDisplacement(vec2 t0, vec2 t1, vec2 t2)
        {
            float displacement = 0.0;
            vec2 textureCoordinates = interpolate2D(t0, t1, t2);
            vec2 vector = ((t0 + t1 + t2) / 3.0);
            float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y));
            sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0);
            displacement += texture(vf_t_displace, textureCoordinates).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x;
            return (displacement / 5.0);
        }

        void main()
        {
            g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2]));
            g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2]));
            g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y));
            g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2]));
            g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]);

            float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w);
            d2 = amplify(d2, 50, -0.5);
            g_vertex += g_normal * displacement * 0.1 * d2;
            gl_Position = vf_m_mvp * vec4(g_vertex, 1.0);
        }

    Geometry shader:

        #version 410 core

        layout (triangles) in;
        layout (triangle_strip, max_vertices = 3) out;

        in vec3 g_normal[3];
        in vec3 g_bitangent[3];
        in vec4 g_patchDistance[3];
        in vec3 g_tangent[3];
        in vec2 g_textureCoordinates[3];
        in vec3 g_vertex[3];

        out vec3 f_tangent;
        out vec3 f_bitangent;
        out vec3 f_eyeDirection;
        out vec3 f_lightDirection;
        out vec3 f_normal;
        out vec4 f_patchDistance;
        out vec4 f_shadowCoordinates;
        out vec2 f_textureCoordinates;
        out vec3 f_vertex;

        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        void main()
        {
            int index = 0;
            while (index < 3)
            {
                vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]);
                vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent);
                vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent);
                mat3 TBN = transpose(mat3(
                    vertexTangent_cameraspace,
                    vertexBitangent_cameraspace,
                    vertexNormal_cameraspace
                ));
                vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz);
                f_eyeDirection = TBN * eyeDirection;
                f_lightDirection = TBN * lightDirection;
                f_normal = normalize(g_normal[index]);
                f_patchDistance = g_patchDistance[index];
                f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0);
                f_textureCoordinates = g_textureCoordinates[index];
                f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                gl_Position = gl_in[index].gl_Position;
                EmitVertex();
                index++;
            }
            EndPrimitive();
        }

    Fragment shader:

        #version 410 core

        in vec3 f_bitangent;
        in vec3 f_eyeDirection;
        in vec3 f_lightDirection;
        in vec3 f_normal;
        in vec4 f_patchDistance;
        in vec4 f_shadowCoordinates;
        in vec3 f_tangent;
        in vec2 f_textureCoordinates;
        in vec3 f_vertex;

        out vec4 fragColor;

        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 poissonDisk[16] = vec2[](
            vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725),
            vec2(-0.09418410, -0.92938870), vec2( 0.34495938,  0.29387760),
            vec2(-0.91588581,  0.45771432), vec2(-0.81544232, -0.87912464),
            vec2(-0.38277543,  0.27676845), vec2( 0.97484398,  0.75648379),
            vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420),
            vec2(-0.26496911, -0.41893023), vec2( 0.79197514,  0.19090188),
            vec2(-0.24188840,  0.99706507), vec2(-0.81409955,  0.91437590),
            vec2( 0.19984126,  0.78641367), vec2( 0.14383161, -0.14100790)
        );

        float random(vec3 seed, int i)
        {
            vec4 seed4 = vec4(seed, i);
            float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673));
            return fract(sin(dot_product) * 43758.5453);
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2.0 * d * d);
            return d;
        }

        void main()
        {
            vec3 lightColor = vf_l_color.xyz;
            float lightPower = vf_l_color.w;

            vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz;
            vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor;
            vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz;

            vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0);
            vec3 l = normalize(f_lightDirection);
            float cosTheta = clamp(dot(n, l), 0.0, 1.0);

            vec3 E = normalize(f_eyeDirection);
            vec3 R = reflect(-l, n);
            float cosAlpha = clamp(dot(E, R), 0.0, 1.0);

            float visibility = 1.0;
            float bias = 0.005 * tan(acos(cosTheta));
            bias = clamp(bias, 0.0, 0.01);
            for (int i = 0; i < 4; i++)
            {
                float shading = (0.5 / 4.0);
                int index = i;
                visibility -= shading * (1.0 - texture(vf_t_shadow, vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0, (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w)));
            }

            fragColor.xyz = materialAmbientColor
                          + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta
                          + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5);
            fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w;
        }

    The screenshots (one before moving the camera, one after moving it a little, and one after moving it to the center of the scene) should be enough to give you an idea of the problem.

  • Bridge made out of blocks at an angle

    - by Pozzuh
    I'm having a bit of trouble with the math behind my project. I want the player to be able to select two points (vectors), and a floor should then be created between them. When the points are parallel to the x-axis it's easy: just calculate the number of blocks needed with a simple division, loop over that count (in x and y), and keep increasing the coordinate by the size of a block. The trouble starts when the two points aren't parallel to an axis, for example at an angle of 45 degrees. How do I handle the math behind this? If I wasn't completely clear, I made a drawing in Paint to demonstrate what I want to achieve; the two red dots are the player-selected locations (the blocks indeed aren't square): http://i.imgur.com/pzhFMEs.png
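
    One way to set up the math (illustrative C++ sketch): step from one endpoint toward the other in block-length increments along the direction vector, yawing each block to face the second point.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        void buildBridge(Vec3 a, Vec3 b, float blockLen,
                         void (*placeBlock)(Vec3 center, float yaw)) {
            float dx = b.x - a.x, dz = b.z - a.z;
            float len = std::sqrt(dx * dx + dz * dz);
            float yaw = std::atan2(dz, dx);             // rotate blocks to face b
            int count = static_cast<int>(len / blockLen + 0.5f);  // blocks needed
            if (count < 1) count = 1;
            for (int i = 0; i < count; ++i) {
                float t = (i + 0.5f) / count;           // center of block i along a->b
                placeBlock({ a.x + t * dx, a.y, a.z + t * dz }, yaw);
            }
        }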
