  • Can I use copyrighted images and music in my game? [duplicate]

    - by Aman Grover
    This question already has an answer here: Using modified copyrighted music in non-commercial games (3 answers). I am working on my first-ever project, and I have used some copyrighted images (images taken from other well-known games). Is it possible to edit the images a little and then use them as royalty-free images? If not, can anyone point me to websites where I can get royalty-free images?

  • What game systems exist which use camera input?

    - by Marc Pilgaard
    My group and I are in the middle of a semester project, and we are currently researching game systems that use a camera as input or as an interactive medium. We would like some help listing such systems, as it seems hard to find further examples. Currently we know of webcam browser games (Newgrounds webcam games) and the Xbox Kinect. I know this question seems rather vague, but I still hope someone is able to help.

  • How many achievements should I include, and of what challenge?

    - by stephelton
    I know this question is fairly broad and subjective, but I'm wondering whether there has been any published research into the optimal number of achievements and the kind of challenge they should present. The game this question directly relates to is a shoot-em-up, but an ideal answer is fairly theoretical. If there are too few achievements, or they are not challenging, I would expect them to fail in their goal of keeping people playing. If there are too many, or they are unreasonably difficult, I would expect people to quickly give up. I personally witnessed the latter in Starcraft 2: a section of the achievements would have you win hundreds of games against AI opponents (boring!).

  • Finding inspiration / help for making up (weapon) names

    - by Rookie
    I'm really bad with words, especially English words. Currently I'm struggling to come up with good weapon names for my game; each name also needs to convey the weapon's behaviour (weak/strong/fast/ballistic, etc.). For example, the best weapon in a (futuristic) game cannot just be called "Laser"; that's too boring, right? Are there any tools, websites, or anything else that could help me find good names for weapons (or for anything similar)? I was thinking of using scientific names, but noticed that they are really hard to write and get very long, and I also lack knowledge of science; I only know I could use the names of subatomic particles in the weapons, for example. How do I get started with becoming good at making up names? (This could apply generally to any naming problem.)

  • Does an open console have any chance of strengthening the indie game world?

    - by jokoon
    I have heard about the GPX, but I don't really think the embedded market is mature enough in terms of performance. What about the home console market, though? I'm not talking about last-generation graphics, because that would be economically impossible, but what about hardware as fast as a PlayStation 2, Xbox, or GameCube? For games, the trick would be to ask some publishers to recompile their best sellers for the new machine: since those games date from the PSX age or even older console generations, I think this would be a very low-cost job and they could still make a good profit. But I need to know whether this is technically doable, considering that the architectures can be quite exotic. Do you think it would be a viable project to pitch to investors?

  • Fast determination of whether objects are onscreen in 2D

    - by Ben Ezard
    So currently, I have this in each object's renderer's update method:

        float a = transform.position.x * Main.scale;
        float b = transform.position.y * Main.scale;
        float c = Camera.main.transform.position.x * Main.scale;
        float d = Camera.main.transform.position.y * Main.scale;
        onscreen = a + width - c > 0 && a - c < GameView.width &&
                   b + height - d > 0 && b - d < GameView.height;

    transform.position is a 2D vector containing the game engine's definition of where the object is; this is multiplied by Main.scale to translate that coordinate into actual screen space. Similarly, Camera.main.transform.position is the in-engine representation of where the main camera is, and this is also multiplied by Main.scale. The problem is that, as my game is tile-based, thousands of these updates get called every frame just to determine whether or not each object should be drawn. How can I improve this?
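
    A minimal sketch of one common fix (C#; all names here are hypothetical, not from the asker's engine): compute the camera rectangle once per frame instead of once per object, and bucket objects into a coarse uniform grid so that each frame only the cells under the camera are even visited.

        using System;
        using System.Collections.Generic;

        // Uniform-grid culling sketch: objects are bucketed by cell once (or
        // when they move); each frame only the cells overlapped by the camera
        // rectangle are visited, so the per-object test runs on a small
        // neighbourhood rather than on every tile in the world.
        class CullingGrid
        {
            const float CellSize = 256f;    // tune to roughly one screen of tiles
            readonly Dictionary<(int, int), List<int>> cells =
                new Dictionary<(int, int), List<int>>();

            public void Insert(int objectId, float screenX, float screenY)
            {
                var key = ((int)MathF.Floor(screenX / CellSize),
                           (int)MathF.Floor(screenY / CellSize));
                if (!cells.TryGetValue(key, out var list))
                    cells[key] = list = new List<int>();
                list.Add(objectId);
            }

            // Yields the ids of objects in every cell touched by the camera rect.
            public IEnumerable<int> QueryVisible(float camX, float camY,
                                                 float viewW, float viewH)
            {
                int x0 = (int)MathF.Floor(camX / CellSize);
                int x1 = (int)MathF.Floor((camX + viewW) / CellSize);
                int y0 = (int)MathF.Floor(camY / CellSize);
                int y1 = (int)MathF.Floor((camY + viewH) / CellSize);
                for (int gx = x0; gx <= x1; gx++)
                    for (int gy = y0; gy <= y1; gy++)
                        if (cells.TryGetValue((gx, gy), out var list))
                            foreach (int id in list)
                                yield return id;
            }
        }

    Even without the grid, hoisting c and d (the camera terms) out of the per-object method removes two multiplications and two property-chain lookups from every single test.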

  • Getting a texture from a renderbuffer in OpenGL?

    - by Rushyo
    I've got a renderbuffer (DepthStencil) in an FBO, and I need to get a texture from it. It seems I can't have both a DepthComponent texture and a DepthStencil renderbuffer in the FBO, so I need some way to convert the renderbuffer into a DepthComponent texture after I'm done with it, for use later down the pipeline. I've tried plenty of techniques to grab the depth component from the renderbuffer over a period of weeks, but I always come out with junk. All I want at the end is the same texture I'd get from an FBO if I weren't using a renderbuffer. Can anyone post some comprehensive instructions or code that covers this seemingly simple operation? EDIT: link to an extracted version of the code: http://dl.dropbox.com/u/9279501/fbo.cs Screenshot of the depth-of-field effect + FBO, without depth(!): http://i.stack.imgur.com/Hj9Oe.jpg Screenshot without the depth-of-field effect + FBO, depth working fine: http://i.stack.imgur.com/boOm1.jpg
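
    One answer-shaped sketch, not taken from the question itself: with GL 3.0 / ARB_framebuffer_object you can skip the renderbuffer entirely and attach a packed Depth24Stencil8 texture as the FBO's depth-stencil attachment, which leaves the depth directly sampleable later. Shown with OpenTK-style C# bindings (an assumption, chosen to match the linked fbo.cs); fbo, width, and height are presumed to already exist.

        using System;
        using OpenTK.Graphics.OpenGL;

        // Create a depth-stencil *texture* and attach it to the FBO in place
        // of the renderbuffer, so the depth component can be sampled later.
        static int AttachDepthStencilTexture(int fbo, int width, int height)
        {
            int depthTex = GL.GenTexture();
            GL.BindTexture(TextureTarget.Texture2D, depthTex);
            GL.TexImage2D(TextureTarget.Texture2D, 0,
                PixelInternalFormat.Depth24Stencil8, width, height, 0,
                PixelFormat.DepthStencil, PixelType.UnsignedInt248, IntPtr.Zero);
            GL.TexParameter(TextureTarget.Texture2D,
                TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
            GL.TexParameter(TextureTarget.Texture2D,
                TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);

            GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);
            GL.FramebufferTexture2D(FramebufferTarget.Framebuffer,
                FramebufferAttachment.DepthStencilAttachment,
                TextureTarget.Texture2D, depthTex, 0);
            return depthTex;   // bind as a normal texture when reading depth
        }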

  • How should I determine direction from a phone's orientation & accelerometer?

    - by Manoj Kumar
    I have an Android application that moves a ball based on the orientation of the phone. I've been using the following code to extract the sensor data, but how do I use it to determine which direction the ball should actually travel in?

        public void onSensorChanged(int sensor, float[] values) {
            synchronized (this) {
                Log.d("HIIIII :- ", "onSensorChanged: " + sensor
                        + ", x: " + values[0] + ", y: " + values[1]
                        + ", z: " + values[2]);
                if (sensor == SensorManager.SENSOR_ORIENTATION) {
                    System.out.println("Orientation X: " + values[0]);
                    System.out.println("Orientation Y: " + values[1]);
                    System.out.println("Orientation Z: " + values[2]);
                }
                if (sensor == SensorManager.SENSOR_ACCELEROMETER) {
                    System.out.println("Accel X: " + values[0]);
                    System.out.println("Accel Y: " + values[1]);
                    System.out.println("Accel Z: " + values[2]);
                }
            }
        }
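
    As a hedged illustration of the missing step (the mapping itself, not Android API specifics): with the phone lying flat, the accelerometer's X/Y values are the components of gravity along the screen axes, i.e. a direct measure of tilt, so the ball's velocity can simply integrate them. Sketched in C#; the arithmetic drops straight into the onSensorChanged handler, and the sensitivity constant and both signs are assumptions that depend on screen orientation, so treat them as values to calibrate.

        // Tilt-to-motion sketch: integrate the gravity components into the
        // ball's velocity each sensor tick. 'sensitivity' and both signs are
        // assumptions to tune against your device and orientation.
        class BallController
        {
            public float VelX, VelY;    // pixels per second

            public void OnAccelerometer(float ax, float ay, float dt)
            {
                const float sensitivity = 50f;  // px/s^2 per m/s^2 of tilt
                VelX += -ax * sensitivity * dt; // flip if the ball rolls the wrong way
                VelY +=  ay * sensitivity * dt; // likewise for the vertical axis
            }
        }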

  • How to blend the sprite into the background?

    - by optimisez
    I'm trying to blend the character into the game, but I still cannot remove the blue colour from the sprite sheet, and I've discovered that the white area of the sprite is semi-transparent. Before, the colour D3DCOLOR_XRGB(255, 255, 255) was set in D3DXCreateTextureFromFileEx, and you could see the fireball through the sprite. After I changed the colour to D3DCOLOR_XRGB(0, 255, 255), the result was: (screenshot) Now I am trying to remove the blue colour of the sprite sheet; my expected result is something like this: (screenshot) Until now I still cannot figure out how to do that. Any ideas?

        void initPlayer() {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44,
                D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(0, 255, 255),
                NULL, NULL, &player);
        }

        void renderPlayer() {
            sprite->Draw(player, &playerRect, NULL,
                &D3DXVECTOR3(playerDest.X, playerDest.Y, 0),
                D3DCOLOR_XRGB(255, 255, 255));
        }

        void initFireball() {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512,
                D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255),
                NULL, NULL, &fireball);
        }

        void renderFireball() {
            sprite->Draw(fireball, &fireballRect, NULL,
                &D3DXVECTOR3(fireballDest.X, fireballDest.Y, 0),
                D3DCOLOR_XRGB(255, 255, 255));
        }
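
    Two hedged notes rather than a definitive fix. First, D3DX's ColorKey argument must exactly equal the sheet's background texel value with full alpha (e.g. D3DCOLOR_XRGB of the sampled blue), and alpha blending must be enabled when drawing (sprite->Begin(D3DXSPRITE_ALPHABLEND)), or the keyed alpha is ignored. Second, the same effect can be done as a one-off preprocessing pass; here is a minimal sketch in XNA-style C# (a different API from the question's D3DX, used only to show the idea), with the key colour itself an assumption you'd sample from the sheet:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Replace every texel that matches the sheet's background colour with
        // fully transparent black, so ordinary alpha blending hides it.
        static void ApplyColorKey(Texture2D sheet, Color key)
        {
            var pixels = new Color[sheet.Width * sheet.Height];
            sheet.GetData(pixels);                 // read texels back to the CPU
            for (int i = 0; i < pixels.Length; i++)
                if (pixels[i].R == key.R && pixels[i].G == key.G && pixels[i].B == key.B)
                    pixels[i] = Color.Transparent; // alpha = 0
            sheet.SetData(pixels);                 // upload the keyed texels
        }
        // Usage: ApplyColorKey(playerTexture, new Color(0, 0, 255)); // sampled blue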

  • Why is my collision detection not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the character's legs sink into the top of the wall, yet there is no clipping issue when I hit the wall from below. How should I fix it so that he stands still on top of the wall?

        void initPlayer() {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44,
                D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255),
                NULL, NULL, &player);
            playerRect.left = playerRect.top = 0;
            playerRect.right = 29;
            playerRect.bottom = 36;
            playerDest.X = 0;
            playerDest.Y = 564;
            playerDest.length = playerRect.right - playerRect.left;
            playerDest.height = playerRect.bottom - playerRect.top;
        }

        void initBox() {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132,
                D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255),
                NULL, NULL, &box);
            boxRect.left = 33;
            boxRect.top = 0;
            boxRect.right = 63;
            boxRect.bottom = 30;
            boxDest.X = boxDest.Y = 300;
            boxDest.length = boxRect.right - boxRect.left;
            boxDest.height = boxRect.bottom - boxRect.top;
        }

        bool spriteCollide(Entity player, Entity target) {
            float left1 = player.X,  left2 = target.X;
            float right1 = player.X + player.length, right2 = target.X + target.length;
            float top1 = player.Y,   top2 = target.Y;
            float bottom1 = player.Y + player.height, bottom2 = target.Y + target.height;
            if (bottom1 < top2) return false;
            if (top1 > bottom2) return false;
            if (right1 < left2) return false;
            if (left1 > right2) return false;
            return true;
        }

        void collideWithBox() {
            if (spriteCollide(playerDest, boxDest) && keyArr[VK_UP])
                //playerDest.Y += 50;
                playerDest.Y = boxDest.Y + boxDest.height;
            else if (spriteCollide(playerDest, boxDest) && !keyArr[VK_UP])
                playerDest.Y = boxDest.Y - boxDest.height;
        }
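
    A hedged observation plus a generic sketch (not a drop-in patch): the snap-on-top line uses boxDest.height where the player's height is needed. Standing on the box means player bottom = box top, i.e. playerDest.Y = boxDest.Y - playerDest.height; with a 36-pixel player and a 30-pixel box, the feet sink by exactly the difference. Keying the resolution on VK_UP rather than on which side the player came from is also fragile. A common alternative resolves along the axis of least overlap:

        using System;

        // Mirrors the question's fields: X/Y is the top-left corner, Y grows
        // downward, as in the posted code.
        struct Entity { public float X, Y, length, height; }

        // Push the player out of the target along the axis of least overlap.
        static void Resolve(ref Entity player, Entity target)
        {
            float overlapX = Math.Min(player.X + player.length, target.X + target.length)
                           - Math.Max(player.X, target.X);
            float overlapY = Math.Min(player.Y + player.height, target.Y + target.height)
                           - Math.Max(player.Y, target.Y);
            if (overlapX <= 0 || overlapY <= 0)
                return;                               // no collision
            if (overlapX < overlapY)                  // resolve horizontally
                player.X += (player.X < target.X) ? -overlapX : overlapX;
            else if (player.Y < target.Y)             // came from above: stand on top
                player.Y = target.Y - player.height;  // player height, not box height
            else                                      // came from below: bump head
                player.Y = target.Y + target.height;
        }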

  • Switching between Discrete and Integrated GPUs

    - by void-pointer
    Hello everyone, I develop CUDA applications on my Alienware M17x portable back-breaker, which has two discrete GTX 285M GPUs and one integrated GeForce 9400M GPU. I can currently switch between them using NVIDIA's software, but I would like the ability to do so within my applications, for benchmarking purposes and general convenience. Apparently this requires the "NDA version" of NVIDIA's Driver API, which I don't know how to obtain. Would using this API be the only way to accomplish what I seek, and if so, how would I obtain it? A solution using Windows APIs would also be acceptable, though less preferable than one leveraging a cross-platform API. I have created a similar thread on NVIDIA's forum, which is down at the time of this writing. Thanks for reading my question; it is much appreciated!

  • Using Box2D DrawDebugData with a multi-layer scene?

    - by Mr.Gando
    In my game, a scene is composed of several layers, and each layer has different camera transformations. This way I can have a layer at z=3 (GUI), z=2 (monsters), and z=1 (scrolling background), and these three layers compose my whole scene. My render loop looks something like:

        renderLayer()
            applyTransformations()
            renderVisibleEntities()
            renderChildLayers()
        end

    If I call DrawDebugData() in the render loop, the whole b2World debug data is rendered once for each layer in my scene. This makes a mess: the "debug boxes" get duplicated, some of them get the camera transformations applied and some don't, and so on. What I would like is to make DrawDebugData draw only certain debug boxes, so that I could call something like b2world->DrawDebugDataForLayer(int layer_id) on each layer:

        renderLayer()
            applyTransformations()
            renderVisibleEntities()
            // Only render this layer's debug data
            b2world->DrawDebugDataForLayer(layer_id)
            renderChildLayers()
        end

    Is there a way to subclass b2World so I could add this functionality (specific to my game)? If not, what would be the best way to achieve this? (Cocos2d uses a similar scene-graph approach plus Box2D, but I'm not sure whether debug draw works in Cocos2d...) Thanks
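
    For what it's worth, a hedged sketch of the usual workaround: stock b2World::DrawDebugData has no filtering hook, but nothing forces you to go through it. You can walk the body list yourself and debug-draw only the fixtures whose user data carries the current layer, which needs no b2World subclass. Shown as C# with hypothetical, compilable stand-in types (Box2D itself is C++; b2World::GetBodyList(), b2Body::GetNext()/GetUserData()/GetFixtureList() and b2Draw are the real API to map onto):

        // Hypothetical stand-ins for a Box2D binding, just enough to compile.
        sealed class Fixture { public Fixture Next; public object Shape; }
        sealed class Body
        {
            public Body Next; public object UserData;
            public Fixture FixtureList; public object Transform;
        }
        sealed class World { public Body BodyList; }
        sealed class BodyTag { public int LayerId; }
        interface IDebugDraw { void DrawShape(object shape, object transform); }

        static class LayeredDebugDraw
        {
            // One pass per layer: draw only fixtures whose owning body is
            // tagged with this layer's id. No b2World subclass required.
            public static void DrawDebugDataForLayer(World world, IDebugDraw draw, int layerId)
            {
                for (Body body = world.BodyList; body != null; body = body.Next)
                {
                    var tag = body.UserData as BodyTag;
                    if (tag == null || tag.LayerId != layerId)
                        continue;                  // body lives on another layer
                    for (Fixture f = body.FixtureList; f != null; f = f.Next)
                        draw.DrawShape(f.Shape, body.Transform);
                }
            }
        }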

  • Non-unique display names?

    - by Davy8
    I know of at least one big-title game (Starcraft II) that doesn't require unique display names, so it would seem this can work in at least some circumstances. Under what situations does allowing non-unique display names work well? When does it not work well? Does it come down to whether impersonation of someone else is a problem? The reason I believe it works for Starcraft II is that there isn't any in-game trading of virtual goods, and other than "for kicks" there isn't much incentive to impersonate someone else in the game. There are also ladder rankings, so even trying to impersonate a pro is easily detectable unless you're at a similar skill level. What are some other cases where it makes sense to specifically allow or disallow duplicate display names? (I have no idea what to tag this as. I went with game-design because I needed at least one tag and I don't have the rep to create new ones yet.)

  • Algorithm to simplify building/structural meshes

    - by morpheus
    I am looking for an algorithm to simplify the meshes of buildings or similar structures. EDIT: I had made a comment that Hoppe's algorithm tends to make meshes more and more spherical with simplification, but I am not sure about that, so I am deleting the comment. Buildings, in contrast, should tend to become more and more rectangular with increasing simplification. The D3DX extensions for D3D version 9.0 (d3dx9.lib) used to have classes for progressive mesh simplification. See: http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3dx/functions/mesh/d3dxgeneratepmesh.htm http://msdn.microsoft.com/en-us/library/windows/desktop/bb281243(v=vs.85).aspx

  • Normal vector of a face loaded from an FBX model during collision?

    - by Corey Ogburn
    I'm loading a simple six-sided cube from a UV-mapped FBX model, and I'm using a BoundingBox to test for collisions. Once I determine there's a collision, I want to use the normal vector of the collided surface to correct the movement of whatever collided with the cube. I suppose this is a two-part question:

    1) How can I determine which face of the cube was hit in a collision?
    2) How can I get the normal vector of that surface?
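
    To sketch part (2) with a fact of triangle geometry rather than anything from the asker's loader: for any triangle with counter-clockwise winding, the face normal is the normalised cross product of two edges, so once the hit triangle's three positions are known the normal follows in two lines (XNA-style C#, assuming the positions are already in world space):

        using Microsoft.Xna.Framework;

        // Face normal of triangle (v0, v1, v2) with counter-clockwise winding.
        static Vector3 FaceNormal(Vector3 v0, Vector3 v1, Vector3 v2)
        {
            Vector3 n = Vector3.Cross(v1 - v0, v2 - v0);
            n.Normalize();      // swap the cross operands if it points inward
            return n;
        }

    For part (1) on an axis-aligned BoundingBox, a common shortcut is to compare the contact point against the box's Min/Max planes and take the face whose plane is nearest.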

  • Rotation matrix for a 3D vector

    - by Shashwat
    I have a direction vector to which I have to apply a rotation to align it with the positive z-axis. To use XNA's Matrix.CreateRotationX(angle), I need the angle, for which I'd have to compute an inverse cosine or tangent; that seems a needlessly roundabout task, especially since the angle is eventually converted back into sin(angle) and cos(angle) inside the matrix anyway. Is there any built-in way to create a rotation matrix from a 3D vector? I can write the function myself, but I'm asking in case one already exists.
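
    One way to avoid hand-deriving the angle, sketched rather than prescribed: build the matrix from an axis-angle pair directly, since XNA does expose Matrix.CreateFromAxisAngle. The axis is the cross product of the vector with +Z, the angle comes from the dot product, and the two clamped branches guard the degenerate parallel cases.

        using System;
        using Microsoft.Xna.Framework;

        // Rotation taking direction v onto +Z. CreateFromAxisAngle computes
        // the sin/cos internally, so the only inverse-trig call is one Acos.
        static Matrix AlignToPositiveZ(Vector3 v)
        {
            v.Normalize();
            float cos = Vector3.Dot(v, Vector3.UnitZ);
            if (cos > 0.9999f)
                return Matrix.Identity;                       // already +Z
            if (cos < -0.9999f)
                return Matrix.CreateRotationX(MathHelper.Pi); // exactly -Z
            Vector3 axis = Vector3.Cross(v, Vector3.UnitZ);
            axis.Normalize();
            return Matrix.CreateFromAxisAngle(axis, (float)Math.Acos(cos));
        }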

  • Has a multiplayer graphic adventure* ever been made?

    - by Petruza
    By graphic adventure, I mean point & click LucasArts-type games. Those games are mostly linear in structure and usually don't offer as many variations as other game types such as action, RPG, or strategy, which makes a multiplayer feature difficult to implement for this genre. I'd like to know whether there have been any attempts at such a thing, and whether it would be viable, given that players going offline or leaving a game in the middle would significantly affect the other players' games.

  • Buffer System For Items

    - by Ohmages
    I am going to reference this image of what I want to accomplish in JavaScript: the Diablo buffer (inventory) system. This question may be a bit advanced (or possibly not even allowed), but I was wondering how you might go about implementing this type of system in a JavaScript game. How to implement such a system currently escapes me, so I am turning to SO for suggestions, ideas, and hopefully some insight into how I could accomplish this without it being too costly on the CPU. My initial thought was to:

    - create DIVs within a DIV that hold each position of the inventory;
    - go through each item you own in a container and see which DIV it belongs to;
    - make that item's image the DIV's image.

    This type of system might work if ALL items were 1x1, but in this case that isn't going to hold. I am at a complete loss for ideas on how to accomplish this. Rendering directly to the canvas and checking mouse coordinates could work, but there would be a huge annoyance in checking whether items overlap each other (meaning you can't place an item down, or you swap it with the item on the cursor). That said, what am I left with? Do I need to makeshift my own hacky system with messy code, or is there some source out there (that I don't know about) that has replicated this type of system in a game? I would be very grateful for replies on how you might go about doing this, and will accept answers that can logically explain how you might implement such a system (code is not required). P.S. I'd like to use pure JavaScript and nothing else (even though it might be "reinventing the wheel", I also like to learn).
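
    Independent of DOM-versus-canvas, the data structure underneath a Diablo-style inventory is just a 2D occupancy grid: an item of w x h cells fits at (x, y) iff every covered cell is free, and cursor hit-testing is integer division of mouse coordinates by the cell size. A minimal sketch (written in C# here; the logic ports line-for-line to JavaScript with arrays of arrays):

        // Occupancy grid for multi-cell items: each cell stores the id of the
        // item covering it, or 0 when empty, so placement and overlap checks
        // are O(w*h) per attempt.
        class InventoryGrid
        {
            readonly int[,] cells;      // 0 = empty, otherwise an item id

            public InventoryGrid(int cols, int rows) { cells = new int[cols, rows]; }

            public bool CanPlace(int x, int y, int w, int h)
            {
                if (x < 0 || y < 0 ||
                    x + w > cells.GetLength(0) || y + h > cells.GetLength(1))
                    return false;                    // out of bounds
                for (int i = x; i < x + w; i++)
                    for (int j = y; j < y + h; j++)
                        if (cells[i, j] != 0)
                            return false;            // another item is here
                return true;
            }

            public void Place(int itemId, int x, int y, int w, int h)
            {
                for (int i = x; i < x + w; i++)
                    for (int j = y; j < y + h; j++)
                        cells[i, j] = itemId;
            }
        }

    Swapping with the cursor item then falls out naturally: if exactly one distinct id occupies the target cells, move it to the cursor and place the held item.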

  • OpenGL ES 2.0 gluUnProject

    - by secheung
    I've spent more time than I should trying to get my ray-picking program working. I'm pretty confident my math is solid with respect to line-plane intersection, but I believe the problem lies in converting the mouse touch on the screen into 3D world space. Here's my code:

        public void passTouchEvents(MotionEvent e) {
            int[] viewport = { 0, 0, viewportWidth, viewportHeight };
            float x = e.getX(), y = viewportHeight - e.getY();
            float[] pos1 = new float[4];
            float[] pos2 = new float[4];
            GLU.gluUnProject(x, y, 0.0f, mViewMatrix, 0,
                    mProjectionMatrix, 0, viewport, 0, pos1, 0);
            GLU.gluUnProject(x, y, 1.0f, mViewMatrix, 0,
                    mProjectionMatrix, 0, viewport, 0, pos2, 0);
        }

    Just as a reference, I tried transforming the coordinates (0, 0, 0) and got an offset. It would be appreciated if you would answer using OpenGL ES 2.0 code.
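
    Two hedged pointers rather than a confirmed diagnosis: Android's GLU.gluUnProject writes a homogeneous result, so each output should be divided by its w component (pos[3]) before use; skipping that divide is a classic source of constant offsets. After that, the two points define the pick ray, and the line-plane step looks like this generic sketch (C# with a hypothetical helper, not the asker's code):

        using System;
        using System.Numerics;

        // Build the pick ray from the near/far unprojected points and
        // intersect it with the plane dot(n, p) + d = 0.
        static bool RayPlane(Vector3 nearPt, Vector3 farPt,
                             Vector3 n, float d, out Vector3 hit)
        {
            Vector3 dir = Vector3.Normalize(farPt - nearPt);
            float denom = Vector3.Dot(n, dir);
            hit = default;
            if (MathF.Abs(denom) < 1e-6f)
                return false;             // ray parallel to the plane
            float t = -(Vector3.Dot(n, nearPt) + d) / denom;
            if (t < 0)
                return false;             // plane lies behind the ray
            hit = nearPt + dir * t;
            return true;
        }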

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a vertex buffer, and preferably an index buffer, but now that I'm testing the code I'm getting the error:

        The current vertex declaration does not include all the elements
        required by the current vertex shader. Position0 is missing.

    That makes absolutely no sense to me, as my VertexDeclaration is:

        public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
            new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
        );

    which clearly contains the position information. I am attempting to draw with the following lines:

        VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
            c.VertexList.Count, BufferUsage.WriteOnly);
        IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
            vb.VertexCount, 0, c.IndexList.Count / 3);

    where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else is relevant to this problem. Note that the large commented sections are from an old version of the drawing code.
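
    A hedged reading of the symptom: nothing in the posted path ever binds vb or ib, and in XNA 4 the device takes its vertex declaration from the currently set vertex buffer, so DrawIndexedPrimitives runs against whatever buffer (and declaration) was bound last, which would explain a "missing Position0" complaint despite a correct declaration. A sketch of the bound version ('effect' is an assumption, e.g. a BasicEffect; note too that typeof(int) indices require the HiDef profile, since Reach allows only 16-bit indices):

        // Bind both buffers so the draw call uses *this* declaration and data.
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());

        GraphicsDevice.SetVertexBuffer(vb);  // declaration travels with the buffer
        GraphicsDevice.Indices = ib;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();                    // commit shader state before drawing
            GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);
        }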

  • What causes the iOS OpenGLES driver to allocate extra memory?

    - by Martin Linklater
    I'm trying to optimize the memory usage of our iOS game, and I'm puzzled about when and why the iOS GLES driver allocates extra memory at runtime. When I run our game through Instruments with the OpenGL ES Driver instrument, the gartUsedBytes value can fluctuate quite wildly. We preload all our textures and build the buffer objects up front, so it's not the game engine requesting extra memory from GL. Currently we manually request around 50MB of GL memory, yet gartUsedBytes sits at around 90MB most of the time, peaking at 125MB from time to time. It seems to be linked to what you render each frame: our PVS only submits VBOs for visible meshes. Can anyone shed some light on what the driver is doing in the background? Like I said, all our game-engine allocations are done at level load, so in theory there shouldn't be any fluctuation in GL memory usage while the level is running. Thanks.

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of material like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to reduce draw calls. What I can't seem to understand is how chunks actually save on draw calls, on the basis that the logic would be something like this:

    Without chunks:

        foreach voxel in myvoxels
            DrawIfVisible()

    With chunks:

        foreach chunk in mychunks
            DrawIfVisible()

    which then does:

        foreach voxel in myvoxels
            DrawIfVisible()

    So surely you saved nothing?! You still make a draw call for each visible voxel in either scenario; a visible voxel needs a draw call either way. The only real saving I can see is that the logic evaluating a chunk can determine whether a large number of voxels are visible at once, saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me: the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it matters.
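
    The point the linked article leaves implicit, sketched here in hedged XNA-style C#: a chunk's "DrawIfVisible" is not a loop of per-voxel draws at render time. The per-voxel loop runs once, when the chunk is (re)built, baking every exposed face into a single vertex buffer; per frame, a visible chunk then costs exactly one draw call no matter how many voxels it contains.

        using Microsoft.Xna.Framework.Graphics;

        // Chunk rendering sketch: the mesh is built once (on creation or on
        // edit, in a RebuildMesh step omitted here), then every frame costs a
        // single DrawPrimitives call per visible chunk.
        class Chunk
        {
            VertexBuffer vb;    // filled at build time with exposed faces only
            int triangleCount;  // 0 when the chunk is empty or fully occluded

            public void Draw(GraphicsDevice device)
            {
                if (triangleCount == 0) return;  // nothing visible to submit
                device.SetVertexBuffer(vb);
                device.DrawPrimitives(PrimitiveType.TriangleList, 0, triangleCount);
            }
        }
        // Per frame: foreach visible chunk -> chunk.Draw(device);
        // That is N(chunks) draw calls, not N(voxels).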

  • Story and news-feed ideas for social network games

    - by arpine
    I am currently working on an educational-and-fun 2-in-1 game. As I am not a professional, I need advice on story and news feed. The goal is simple: get richer. The story is about a worker who is trying to get over his/her financial problems and become rich. Throughout the game there is a news feed (every in-game day brings a couple of fresh news items about what is going on). The news items are fresh and individual, so I need to write about 2000 of them for two years of play, maybe more. The problem is that I am not sure whether repetitive news can hold interest in this game. What can be done to make the news-writing process easier, while keeping it from becoming boring from the player's point of view?

  • What forms of non-interactive RPG battle systems exist?

    - by Landstander
    I am interested in systems that allow players to develop a battle plan or set up a strategy for the party or characters prior to entering battle. During the battle the player either cannot input commands or can choose not to.

    Rule-based: in this kind of system the player sets up a list of rules of the form [Condition - Action], which are then ordered by priority. Examples:

    - Gambits in Final Fantasy XII
    - Tactics in Dragon Age: Origins and Dragon Age II
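
    As a hedged illustration of the rule-based form described above (generic C# with hypothetical types, not tied to either cited game): the whole system is a priority-ordered scan in which the first rule whose condition holds supplies the character's action for that AI tick.

        using System;
        using System.Collections.Generic;

        class Character { public float HpFraction = 1f; }

        // A "gambit" pairs a predicate with the action it authorises.
        sealed class Rule
        {
            public Func<Character, bool> Condition;
            public Action<Character> Act;
        }

        static class GambitRunner
        {
            // Rules come pre-sorted by priority; the first match wins the tick.
            public static void Run(Character self, IReadOnlyList<Rule> rules)
            {
                foreach (Rule rule in rules)
                    if (rule.Condition(self)) { rule.Act(self); return; }
            }
        }
        // e.g. "HpFraction < 0.3 -> heal self" placed above "always -> attack".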

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light: when I move the camera away from (0.0, 0.0, 0.0), it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code.

    Vertex shader:

        #version 410 core

        in vec3 vf_normal;
        in vec3 vf_bitangent;
        in vec3 vf_tangent;
        in vec2 vf_textureCoordinates;
        in vec3 vf_vertex;

        out vec3 tc_normal;
        out vec3 tc_bitangent;
        out vec3 tc_tangent;
        out vec2 tc_textureCoordinates;
        out vec3 tc_vertex;

        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform float vf_te_inner;
        uniform float vf_te_outer;

        void main()
        {
            tc_normal = vf_normal;
            tc_bitangent = vf_bitangent;
            tc_tangent = vf_tangent;
            tc_textureCoordinates = vf_textureCoordinates;
            tc_vertex = vf_vertex;
            gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0);
        }

    Tessellation Control shader:

        #version 410 core

        layout (vertices = 3) out;

        in vec3 tc_normal[];
        in vec3 tc_bitangent[];
        in vec3 tc_tangent[];
        in vec2 tc_textureCoordinates[];
        in vec3 tc_vertex[];

        out vec3 te_normal[];
        out vec3 te_bitangent[];
        out vec3 te_tangent[];
        out vec2 te_textureCoordinates[];
        out vec3 te_vertex[];

        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        #define ID gl_InvocationID

        float getTessLevelInner(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner);
        }

        float getTessLevelOuter(float distance0, float distance1)
        {
            float avgDistance = (distance0 + distance1) / 2.0;
            return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer);
        }

        void main()
        {
            te_normal[gl_InvocationID] = tc_normal[gl_InvocationID];
            te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID];
            te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID];
            te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID];
            te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID];

            float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz);
            float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz);
            float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz);

            gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2);
            gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0);
            gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1);
            gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0);
        }

    Tessellation Evaluation shader:

        #version 410 core

        layout (triangles, equal_spacing, cw) in;

        in vec3 te_normal[];
        in vec3 te_bitangent[];
        in vec3 te_tangent[];
        in vec2 te_textureCoordinates[];
        in vec3 te_vertex[];

        out vec3 g_normal;
        out vec3 g_bitangent;
        out vec4 g_patchDistance;
        out vec3 g_tangent;
        out vec2 g_textureCoordinates;
        out vec3 g_vertex;

        uniform float vf_te_inner;
        uniform float vf_te_outer;
        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_displace;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
        {
            return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
        }

        vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
        {
            return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2 * d * d);
            return d;
        }

        float getDisplacement(vec2 t0, vec2 t1, vec2 t2)
        {
            float displacement = 0.0;
            vec2 textureCoordinates = interpolate2D(t0, t1, t2);
            vec2 vector = ((t0 + t1 + t2) / 3.0);
            float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y));
            sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0);
            displacement += texture(vf_t_displace, textureCoordinates).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance,  sampleDistance)).x;
            displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x;
            return (displacement / 5.0);
        }

        void main()
        {
            g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2]));
            g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2]));
            g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y));
            g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2]));
            g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]);

            float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]);
            float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w);
            d2 = amplify(d2, 50, -0.5);
            g_vertex += g_normal * displacement * 0.1 * d2;
            gl_Position = vf_m_mvp * vec4(g_vertex, 1.0);
        }

    Geometry shader:

        #version 410 core

        layout (triangles) in;
        layout (triangle_strip, max_vertices = 3) out;

        in vec3 g_normal[3];
        in vec3 g_bitangent[3];
        in vec4 g_patchDistance[3];
        in vec3 g_tangent[3];
        in vec2 g_textureCoordinates[3];
        in vec3 g_vertex[3];

        out vec3 f_tangent;
        out vec3 f_bitangent;
        out vec3 f_eyeDirection;
        out vec3 f_lightDirection;
        out vec3 f_normal;
        out vec4 f_patchDistance;
        out vec4 f_shadowCoordinates;
        out vec2 f_textureCoordinates;
        out vec3 f_vertex;

        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat3 vf_m_normal;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        void main()
        {
            int index = 0;
            while (index < 3)
            {
                vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]);
                vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent);
                vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent);
                mat3 TBN = transpose(mat3(
                    vertexTangent_cameraspace,
                    vertexBitangent_cameraspace,
                    vertexNormal_cameraspace
                ));
                vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz);
                f_eyeDirection = TBN * eyeDirection;
                f_lightDirection = TBN * lightDirection;
                f_normal = normalize(g_normal[index]);
                f_patchDistance = g_patchDistance[index];
                f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0);
                f_textureCoordinates = g_textureCoordinates[index];
                f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz;
                gl_Position = gl_in[index].gl_Position;
                EmitVertex();
                index++;
            }
            EndPrimitive();
        }

    Fragment shader:

        #version 410 core

        in vec3 f_bitangent;
        in vec3 f_eyeDirection;
        in vec3 f_lightDirection;
        in vec3 f_normal;
        in vec4 f_patchDistance;
        in vec4 f_shadowCoordinates;
        in vec3 f_tangent;
        in vec2 f_textureCoordinates;
        in vec3 f_vertex;

        out vec4 fragColor;

        uniform vec4 vf_l_color;
        uniform vec3 vf_l_position;
        uniform mat4 vf_m_depthBias;
        uniform mat4 vf_m_model;
        uniform mat4 vf_m_mvp;
        uniform mat4 vf_m_projection;
        uniform mat4 vf_m_view;
        uniform sampler2D vf_t_diffuse;
        uniform sampler2D vf_t_normal;
        uniform sampler2DShadow vf_t_shadow;
        uniform sampler2D vf_t_specular;

        vec2 poissonDisk[16] = vec2[](
            vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725),
            vec2(-0.09418410, -0.92938870), vec2( 0.34495938,  0.29387760),
            vec2(-0.91588581,  0.45771432), vec2(-0.81544232, -0.87912464),
            vec2(-0.38277543,  0.27676845), vec2( 0.97484398,  0.75648379),
            vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420),
            vec2(-0.26496911, -0.41893023), vec2( 0.79197514,  0.19090188),
            vec2(-0.24188840,  0.99706507), vec2(-0.81409955,  0.91437590),
            vec2( 0.19984126,  0.78641367), vec2( 0.14383161, -0.14100790)
        );

        float random(vec3 seed, int i)
        {
            vec4 seed4 = vec4(seed, i);
            float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673));
            return fract(sin(dot_product) * 43758.5453);
        }

        float amplify(float d, float scale, float offset)
        {
            d = scale * d + offset;
            d = clamp(d, 0, 1);
            d = 1 - exp2(-2.0 * d * d);
            return d;
        }

        void main()
        {
            vec3 lightColor = vf_l_color.xyz;
            float lightPower = vf_l_color.w;
            vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz;
            vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor;
            vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz;

            vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0);
            vec3 l = normalize(f_lightDirection);
            float cosTheta = clamp(dot(n, l), 0.0, 1.0);
            vec3 E = normalize(f_eyeDirection);
            vec3 R = reflect(-l, n);
            float cosAlpha = clamp(dot(E, R), 0.0, 1.0);

            float visibility = 1.0;
            float bias = 0.005 * tan(acos(cosTheta));
            bias = clamp(bias, 0.0, 0.01);
            for (int i = 0; i < 4; i++)
            {
                float shading = (0.5 / 4.0);
                int index = i;
                visibility -= shading * (1.0 - texture(vf_t_shadow,
                    vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0,
                         (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w)));
            }

            fragColor.xyz = materialAmbientColor
                + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta
                + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5);
            fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w;
        }

    The following images should be enough to give you an idea of the problem.
    Before moving the camera: (screenshot)
    Moving the camera just a little: (screenshot)
    Moving it to the center of the scene: (screenshot)
