Search Results



  • Looking for a royalty-free sci-fi-sounding song that's 1:00+ long and costs <= $5

    - by CyanPrime
    I'm looking for a royalty-free sci-fi-sounding song that's at least 1:00 long and costs $5 USD or less. I want a nice BGM for an engine demo I'm going to release for a game I'm planning to take commercial. I don't want to spend too much money on it, so my limit is $5 USD. Where should I look? Or even better, do you have a link to a song that meets the criteria?


  • Confusion with floats converted into ints during collision detection

    - by TheBroodian
    So in designing a 2D platformer, I decided to use a Vector2 to track the world location of my world objects, to retain some sub-pixel precision for slow-moving objects and other subtle nuances, while representing their bodies with Rectangles, because as far as collision detection and resolution are concerned, I don't need sub-pixel precision. I thought that the following line of thought would work smoothly:

        Vector2 wrldLocation;
        Point WorldLocation;
        Rectangle collisionRectangle;

        public void Update(GameTime gameTime)
        {
            Vector2 moveAmount = velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            wrldLocation += moveAmount;
            WorldLocation = new Point((int)wrldLocation.X, (int)wrldLocation.Y);
            collisionRectangle = new Rectangle(WorldLocation.X, WorldLocation.Y, genericWidth, genericHeight);
        }

    And I guess in theory it sort of works, until I try to use it in conjunction with my collision detection, which uses Rectangle.Offset() to project where collisionRectangle would supposedly end up after applying moveAmount to it. If a collision is found, I find the intersection and subtract the difference between the two intersecting sides from the given moveAmount, which would theoretically give a corrected moveAmount to apply to the object's world location that prevents it from passing through walls and such. The issue here is that Rectangle.Offset() only accepts ints, so I'm not really getting an accurate adjustment to moveAmount for a Vector2. If I leave wrldLocation out of my previous example and just use WorldLocation to keep track of my object's location, everything works smoothly, but then obviously if my object is given velocities of less than 1 pixel per update, the velocity value may as well be 0, which I feel I may regret further down the line. Does anybody have any suggestions about how I might go about resolving this?
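    One common way around Rectangle.Offset() truncating to ints is to keep the collision math in float space and only convert to integers for drawing. A minimal sketch, assuming XNA's Vector2/Rectangle types and a hypothetical FloatRect helper (not part of XNA):

        using System;
        using Microsoft.Xna.Framework;

        struct FloatRect
        {
            public float X, Y, Width, Height;

            public FloatRect(float x, float y, float w, float h)
            {
                X = x; Y = y; Width = w; Height = h;
            }

            // Same idea as Rectangle.Offset(), but without losing the fraction.
            public FloatRect Offset(Vector2 amount)
            {
                return new FloatRect(X + amount.X, Y + amount.Y, Width, Height);
            }

            public bool Intersects(FloatRect other)
            {
                return X < other.X + other.Width && other.X < X + Width &&
                       Y < other.Y + other.Height && other.Y < Y + Height;
            }

            // Integer rectangle only for drawing and broad-phase checks.
            public Rectangle ToRectangle()
            {
                return new Rectangle((int)Math.Floor(X), (int)Math.Floor(Y),
                                     (int)Math.Ceiling(Width), (int)Math.Ceiling(Height));
            }
        }

    The collision response then subtracts the float overlap from moveAmount directly, so sub-pixel velocities are never rounded away before they accumulate.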


  • lwjgl custom icon

    - by melchor629
    I have a little problem with the icon in LWJGL: it doesn't work. I've googled about it, but I haven't found anything that works for me yet. This is my code for now:

        PNGDecoder imageDecoder = new PNGDecoder(new FileInputStream("res/images/Icon.png"));
        ByteBuffer imageData = BufferUtils.createByteBuffer(4 * imageDecoder.getWidth() * imageDecoder.getHeight());
        imageDecoder.decode(imageData, imageDecoder.getWidth() * 4, PNGDecoder.Format.RGBA);
        imageData.flip();
        System.err.println(Display.setIcon(new ByteBuffer[]{imageData}) == 0 ? "No se ha creado el icono" : "Se ha creado el icono");

    The PNG file is 128x128 px with transparency. PNGDecoder is from the matthiasmann utility (de.matthiasmann.twl.utils). I'm using Mac OS 10.8.4 with LWJGL 2.9.0. Thanks :)


  • How to control an actor's movement in UDK

    - by Mikalichov
    This might be very basic, but I couldn't find anything relevant to what I need (see below). I am working on a very basic thing: a 3D environment with some buildings and actors walking inside it. I mainly want to have one actor standing around, idling, and another walking around the area. Right now, this is done through Matinee plus skeletal mesh groups, forcing a looped animation on the actors, but I realize this is super caveman-level. So I've built an AnimTree, linking the idling and directional animations to the corresponding nodes. But then I'm stuck. I added the AnimTree in the actors' properties, but nothing happens. I've tried MoveToActor, but with no success. Is there something to set that allows an actor to move? Also, I place the actors on the map manually (they are supposed to be unique); should I spawn them instead? Every tutorial I find explains how to use an AnimTree for the player character, which is not what I want. I need a way to move the actors. I tried to look for AI tutorials, but only found UT3 bot modifications, which is not what I need either. Since I have so much trouble finding how to do this through Kismet, I'm starting to suspect this has to be done through scripting/coding, but I would like to be sure there is no way to do it through Kismet before going that route. Any bit of an answer about how to tell an actor something along the lines of "go in that direction as much as you can, then when you hit a wall turn 45° and continue" would be awesome. I'll be happy to move/edit the question if there is any problem with it.


  • OpenGL Get Rotated X and Y of quad

    - by matejkramny
    I am developing a 2D game using the LWJGL library. So far I have a rotating box. I have done basic rectangle collision, but it doesn't work for rotated rectangles. Does OpenGL have a function that returns the vertices of a rotated rectangle? Or is there another way of doing this using trigonometry? Everything I found while researching this used some matrix that I don't understand, so I am asking whether there is another way of doing it. For clarification, I am trying to find out the true (rotated) X, Y of each point of the rectangle. Let's say the first point of the rectangle (top left) has x=10, y=10, and the width and height are 100 pixels. When I rotate the rectangle using glRotatef(), the x and y stay the same; the rotation happens inside OpenGL. I need to extract the x, y of each corner of the rotated rectangle so I can detect collisions properly.
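    For reference, the underlying trigonometry is just rotating each corner around the rectangle's center. A minimal sketch (written in C# for brevity; the same math applies in Java/LWJGL, and the angle convention should be matched to whatever you pass to glRotatef()):

        using System;

        static class RectRotation
        {
            // Returns the four corners (top-left, top-right, bottom-right, bottom-left)
            // of an axis-aligned rectangle at (x, y), rotated by angleDegrees around its center.
            public static (float X, float Y)[] RotatedCorners(
                float x, float y, float width, float height, float angleDegrees)
            {
                float cx = x + width / 2f;
                float cy = y + height / 2f;
                float a = angleDegrees * (float)Math.PI / 180f;
                float cos = (float)Math.Cos(a);
                float sin = (float)Math.Sin(a);

                var corners = new (float X, float Y)[]
                {
                    (x, y), (x + width, y), (x + width, y + height), (x, y + height)
                };

                for (int i = 0; i < corners.Length; i++)
                {
                    float dx = corners[i].X - cx;
                    float dy = corners[i].Y - cy;
                    corners[i] = (cx + dx * cos - dy * sin, cy + dx * sin + dy * cos);
                }
                return corners;
            }
        }

    With the rotated corners in hand, collision between two rotated rectangles is usually done with the separating axis theorem rather than an axis-aligned overlap test.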


  • Preventing item duplication?

    - by PuppyKevin
    For my game, there are two types of items: stackable and nonstackable. Nonstackable items get assigned a unique ID that stays with them forever. A character ID is associated with the item, as is a state (CHANGED, UNCHANGED, NEW, REMOVED). The character ID and state are used for item-saving purposes. Stackable items share one unique ID, as in the entire stack has one unique ID. For example, 5 potions (stacked on top of each other) have one unique ID. When dropping a nonstackable item, the state gets set to REMOVED, and the unique ID and character ID don't change. If it is picked up by another player, the state gets set to NEW, and the character ID gets changed to the new character's ID. When dropping all items in a stack of stackable items (for example, 5 potions out of 5), it behaves just like a nonstackable item. When dropping some of a stack of stackable items (for example, 3 potions out of 5)... I really have no clue what to do. The 3 dropped potions have the state REMOVED, but the same unique ID and character ID. If another player picks them up, they have no choice but to obtain a new unique ID, and their state gets changed to NEW and their character ID to the new one. If the dropping player picks them back up, they'd just be re-added to the stack. There are two issues with that, though:
    1. If the player who dropped the 3 potions picks them back up, there's no way to tell if they legitimately dropped the items, or if they're duped items.
    2. If another player picks up the 3 potions (assuming they're duped), there's no way to know if they're duped or not.
    My question is: how can I create a system that detects duplicated items for both nonstackable and stackable items?
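    One possible approach to the split case (a minimal sketch, not a drop-in for the system described above; all names are hypothetical): never let a dropped sub-stack keep the original unique ID. Mint a new ID at the moment of the split and write an audit record linking the two, so the server can always reconcile quantities and flag stacks whose history doesn't add up.

        using System;

        class ItemStack
        {
            public Guid Id { get; private set; } = Guid.NewGuid();
            public int ItemType;
            public int Quantity;
            public Guid OwnerId;

            // Split 'amount' items off this stack into a brand-new stack with its own ID.
            // logSplit(sourceId, newId, amount) is whatever server-side audit log you use.
            public ItemStack Split(int amount, Action<Guid, Guid, int> logSplit)
            {
                if (amount <= 0 || amount > Quantity)
                    throw new ArgumentOutOfRangeException(nameof(amount));

                Quantity -= amount;
                var dropped = new ItemStack
                {
                    ItemType = ItemType,
                    Quantity = amount,
                    OwnerId = OwnerId
                };
                logSplit(Id, dropped.Id, amount);
                return dropped;
            }
        }

    Because every stack that ever exists has a server-issued ID and a recorded parent, a duplicated stack shows up as a quantity that cannot be traced back to a legitimate split.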


  • How to blend the sprite into the background?

    - by optimisez
    I'm trying to blend the character into the game, but I still cannot remove the blue color in the sprite sheet, and I've discovered that the white area of the sprite is semi-transparent. Before that, the color D3DCOLOR_XRGB(255, 255, 255) was set in D3DXCreateTextureFromFileEx, and you could see the fireball through the sprite. After changing the color to D3DCOLOR_XRGB(0, 255, 255), the blue background of the sprite sheet still shows. Now I am trying to remove the blue color of the sprite sheet; my expected result is a sprite with no blue background showing. Until now, I still cannot figure out how to do that. Any ideas?

        void initPlayer()
        {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(0, 255, 255), NULL, NULL, &player);
        }

        void renderPlayer()
        {
            sprite->Draw(player, &playerRect, NULL, &D3DXVECTOR3(playerDest.X, playerDest.Y, 0), D3DCOLOR_XRGB(255, 255, 255));
        }

        void initFireball()
        {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &fireball);
        }

        void renderFireball()
        {
            sprite->Draw(fireball, &fireballRect, NULL, &D3DXVECTOR3(fireballDest.X, fireballDest.Y, 0), D3DCOLOR_XRGB(255, 255, 255));
        }


  • I Don't Understand Anything About Randomly Generated Worlds [closed]

    - by Alex Larsen
    What tools do I need to make a Minecraft-like generated world? I've heard about Perlin noise and Simplex noise, but I don't understand anything about them. So far all I found on the internet was a Simplex noise implementation for C#, and all it has is functions; this is what I get:

        Console.WriteLine(Noise.Generate(SomeNumber, SomeNumber, SomeNumber));

    It outputs random-looking floats. I'm really lost; I don't understand the whole randomly generated worlds concept. Can someone help me? And even if I use the noise function, I don't understand how to use it.
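    As a rough illustration of how that noise function typically gets turned into terrain (a minimal sketch; Noise.Generate stands in for whatever Simplex/Perlin implementation you already have, a 2D overload is assumed, and the constants are arbitrary):

        // A hypothetical world-generation step: one surface height per (x, z) column,
        // the way Minecraft-like terrain is usually seeded from 2D noise.
        static int[,] GenerateHeightmap(int width, int depth)
        {
            int seaLevel = 64, hillHeight = 32;
            float frequency = 0.01f;   // smaller frequency = smoother, wider hills
            int[,] surfaceHeight = new int[width, depth];

            for (int x = 0; x < width; x++)
            {
                for (int z = 0; z < depth; z++)
                {
                    // Sample the noise at scaled-down coordinates so neighbouring columns
                    // get similar values; the result is roughly in the range -1..1.
                    float n = Noise.Generate(x * frequency, z * frequency);
                    surfaceHeight[x, z] = seaLevel + (int)(n * hillHeight);
                }
            }
            // Fill each column with blocks from y = 0 up to surfaceHeight[x, z],
            // then carve caves, place ores, etc. with more noise.
            return surfaceHeight;
        }

    The key point is that noise is not random per call: the same coordinates always give the same value, and nearby coordinates give similar values, which is what makes coherent terrain possible.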


  • What are some good resources for creating a game engine in XNA?

    - by Glasser
    I'm currently a student game programmer working on an indie project. We have a team of eleven people (five programmers, four artists, and two audio designers) aboard, all working hard to help design this game. We've been meeting for months now, and so far we have a pretty fleshed-out game design document as well as a lot of audio/visual concept art. Our programmers are itching to make progress on our own end. Each person on our programming team is well versed in C++, but is very familiar with C#. We have enough experience and skill that we're confident we will be successful with our game, and we're looking to build our own game engine in XNA, as it seems like it would be worth our time and effort in the end. The game itself will be a 2D beat 'em up style game to be released over Xbox Live and on the PC. Its play style will be similar to that of Castle Crashers or Scott Pilgrim vs The World. We want to design the game engine to let us better implement our assets into the game as well as to simplify the creation of design elements/mechanics. Currently, between our programmers, we have books such as "XNA 4.0" and "Game Coding Complete, Third Edition," but we'd still like more information on both XNA and (especially) building a game engine from scratch. What other good books, websites, or resources could we use to further map out and program our game engine?


  • Random position between ranges

    - by blakey87
    Does anyone have a good algorithm for generating a random Y position for spawning a block, one that takes into account a minimum and maximum height so the player can always jump onto the block? Blocks will be spawned continually, so the player must always be able to jump onto the next block, bearing in mind that the minimum position would be the ground and the maximum would be limited by the player's jump height and the ceiling.
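    A minimal sketch of one way to do it (C#; assumes you know the previous block's Y and the player's maximum jump height, and that larger Y means higher; the names are hypothetical):

        using System;

        static class BlockSpawner
        {
            static readonly Random rng = new Random();

            // Pick the next block's Y so it is always reachable from the previous block.
            public static float NextBlockY(float previousY, float jumpHeight,
                                           float groundY, float ceilingY)
            {
                // The highest block still reachable with a jump from the previous block,
                // capped by the ceiling; the lowest allowed position is the ground.
                float maxReachable = Math.Min(previousY + jumpHeight, ceilingY);
                float minAllowed = groundY;

                // Uniform random value in [minAllowed, maxReachable].
                return minAllowed + (float)rng.NextDouble() * (maxReachable - minAllowed);
            }
        }

    If your Y axis points down (screen coordinates), swap the roles of ground and ceiling and subtract the jump height instead of adding it.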


  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question A component based system's goal is to solve the problems that derives from inheritance: for example the fact that some parts of the code (that are called components) are reused by very different classes that, hypothetically, would lie in a very different branch of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS try to solve with a very clean interface? (possibly with examples, there are a lot of abstract talks about the "perfect" design already). Context Here's an example I was going for before realizing I was just reinventing inheritance again: class Human { public: Position position; Movement movement; Sprite sprite; // other human specific components }; class Zombie { Position position; Movement movement; Sprite sprite; // other zombie specific components }; After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I've thought of polymorphism (move what systems do in a CBS design into class specific functions): class Entity { public: virtual void on_event(Event) {} // not pure virtual on purpose virtual void on_update(World) {} virtual void on_draw(Window) {} }; class Human : public Entity { private: Position position; Movement movement; Sprite sprite; public: virtual void on_event(Event) { ... } virtual void on_update(World) { ... } virtual void on_draw(Window) { ... } }; class Zombie : public Entity { private: Position position; Movement movement; Sprite sprite; public: virtual void on_event(Event) { ... } virtual void on_update(World) { ... } virtual void on_draw(Window) { ... } }; Which was nice, except for the fact that now the outside world would not even be able to know where a Human is positioned (it does not have access to its position member). That would be useful to track the player position for collision detection or if on_update the Zombie would want to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that both functionality were shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would have a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place). Meaning of "hacks" in the implementation I'm referring to I'm talking about the implementations that defines Entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-stylish: int last_id; Position* positions[MAX_ENTITIES]; Movement* movements[MAX_ENTITIES]; Where positions[i], movements[i], component[i], ... make up the entity. Or to more C++-style: int last_id; std::map<int, Position> positions; std::map<int, Movement> movements; From which systems can detect if an entity/id can have attached components.


  • Improving the efficiency of my bloom/glow shader

    - by user1157885
    I'm making a neon-style game where everything is glowing, but the glow I have is kinda small, and I want to know if there's an efficient way to increase its size other than increasing the number of pixel sample iterations. Right now I have something like this:

        float4 glowColor = tex2D(glowSampler, uvPixel);

        // Makes the initial lines brighter/closer to white
        if (glowColor.r != 0 || glowColor.g != 0 || glowColor.b != 0)
        {
            glowColor += 0.5;
        }

        // Loops over the weights and offsets and samples from the pixels based on those numbers
        for (int i = 0; i < 20; i++)
        {
            glowColor += tex2D(glowSampler, uvPixel + glowOffsets[i] + 0.0018) * glowWeights[i];
        }

        finalColor += glowColor;

    For the offsets it moves up, down, left and right (5 times each, so it loops 20 times), and the weights just lower the glow amount the further away it gets. The method I was using before to increase it was to raise the number of iterations from 20 to 40 and to grow the offset/weight arrays, but my computer started to have FPS drops when I did this, so I was wondering how I can make the glow bigger/more vibrant without making it so CPU/graphics-card intensive.


  • Using glReadBuffer/glReadPixels returns black image instead of the actual image only on Intel cards

    - by cloudraven
    I have this piece of code:

        glReadBuffer(GL_FRONT);
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);

    It works just perfectly on all the NVIDIA and AMD GPUs I have tried, but it fails on almost every single Intel built-in video chip that I have tried. It actually works on a very old 945GME, but fails on all the others. Instead of getting a screenshot, I am actually getting a black image. If it helps, I am working with the Doom 3 engine, and that code is derived from its built-in screen-capture code. By the way, even with the original game I cannot do screen capture on those Intel devices anyway. My guess is that they are not implementing the standard correctly or something. Is there a workaround for this?


  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadowmapping shaders. The shadow map rendering shader is fine, and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position. struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; The reason that I used the semantic 'TEXCOORD2' on the light's pixel position is because I believe that the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolations. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code : struct Vertex { float3 position : POSITION; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; cbuffer Camera : register (b0) { matrix world; matrix view; matrix projection; }; cbuffer Light : register (b1) { matrix light_world; matrix light_view; matrix light_projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), world); output.position = mul(output.position, view); output.position = mul(output.position, projection); output.world_pos = mul(float4(input.position, 1.0f), world); output.world_pos = mul(output.world_pos, light_view); output.world_pos = mul(output.world_pos, light_projection); output.texcoord = input.texcoord; output.normal = input.normal; return output; } I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader, and had the same problem. The problem is evident as both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the matrices' validity, the cbuffers' validity, and the multiplicative validity. I'm very stumped and have been trying to solve this for quite some time. Misc info : The light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost from my question on Stack Overflow)


  • GestureListener's fling method doesn't get called

    - by nosferat
    I'm using the SimpleDirectionGestureDetector from the libgdx-users wiki as my InputProcessor. I set it in the created() method:

        Gdx.input.setInputProcess(new SimpleDirectionGestureDetector(charController));

    charController is my class which implements the DirectionListener interface defined in the SimpleDirectionGestureDetector class, and it is responsible for moving the player character. However, the character doesn't change direction when I perform a fling in any direction. I've checked, and the fling() method in the SimpleDirectionGestureDetector class doesn't get called, and I have no idea why, since everything seems fine. What am I doing wrong?


  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code. Vertex shader: #version 410 core in vec3 vf_normal; in vec3 vf_bitangent; in vec3 vf_tangent; in vec2 vf_textureCoordinates; in vec3 vf_vertex; out vec3 tc_normal; out vec3 tc_bitangent; out vec3 tc_tangent; out vec2 tc_textureCoordinates; out vec3 tc_vertex; uniform mat3 vf_m_normal; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform float vf_te_inner; uniform float vf_te_outer; void main() { tc_normal = vf_normal; tc_bitangent = vf_bitangent; tc_tangent = vf_tangent; tc_textureCoordinates = vf_textureCoordinates; tc_vertex = vf_vertex; gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0); } Tessellation Control shader: #version 410 core layout (vertices = 3) out; in vec3 tc_normal[]; in vec3 tc_bitangent[]; in vec3 tc_tangent[]; in vec2 tc_textureCoordinates[]; in vec3 tc_vertex[]; out vec3 te_normal[]; out vec3 te_bitangent[]; out vec3 te_tangent[]; out vec2 te_textureCoordinates[]; out vec3 te_vertex[]; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; #define ID gl_InvocationID float getTessLevelInner(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner); } float getTessLevelOuter(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer); } void main() { te_normal[gl_InvocationID] = tc_normal[gl_InvocationID]; te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID]; te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID]; te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID]; te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID]; float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz); float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz); float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz); gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2); gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0); gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1); gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0); } Tessellation Evaluation shader: #version 410 core layout (triangles, equal_spacing, cw) in; in vec3 te_normal[]; in vec3 te_bitangent[]; in vec3 te_tangent[]; in vec2 te_textureCoordinates[]; in vec3 te_vertex[]; out vec3 g_normal; out vec3 g_bitangent; out vec4 g_patchDistance; out vec3 g_tangent; out vec2 g_textureCoordinates; out vec3 g_vertex; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; 
uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_displace; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2) { return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2; } vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2) { return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2; } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2*d*d); return d; } float getDisplacement(vec2 t0, vec2 t1, vec2 t2) { float displacement = 0.0; vec2 textureCoordinates = interpolate2D(t0, t1, t2); vec2 vector = ((t0 + t1 + t2) / 3.0); float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y)); sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0); displacement += texture(vf_t_displace, textureCoordinates).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x; return (displacement / 5.0); } void main() { g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2])); g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2])); g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y)); g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2])); g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]); float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w); d2 = amplify(d2, 50, -0.5); g_vertex += g_normal * displacement * 0.1 * d2; gl_Position = vf_m_mvp * vec4(g_vertex, 1.0); } Geometry shader: #version 410 core layout (triangles) in; layout (triangle_strip, max_vertices = 3) out; in vec3 g_normal[3]; in vec3 g_bitangent[3]; in vec4 g_patchDistance[3]; in vec3 g_tangent[3]; in vec2 g_textureCoordinates[3]; in vec3 g_vertex[3]; out vec3 f_tangent; out vec3 f_bitangent; out vec3 f_eyeDirection; out vec3 f_lightDirection; out vec3 f_normal; out vec4 f_patchDistance; out vec4 f_shadowCoordinates; out vec2 f_textureCoordinates; out vec3 f_vertex; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; void main() { int index = 0; while (index < 3) { vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]); vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent); vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent); mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); 
vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz; vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz); f_eyeDirection = TBN * eyeDirection; f_lightDirection = TBN * lightDirection; f_normal = normalize(g_normal[index]); f_patchDistance = g_patchDistance[index]; f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0); f_textureCoordinates = g_textureCoordinates[index]; f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz; gl_Position = gl_in[index].gl_Position; EmitVertex(); index ++; } EndPrimitive(); } Fragment shader: #version 410 core in vec3 f_bitangent; in vec3 f_eyeDirection; in vec3 f_lightDirection; in vec3 f_normal; in vec4 f_patchDistance; in vec4 f_shadowCoordinates; in vec3 f_tangent; in vec2 f_textureCoordinates; in vec3 f_vertex; out vec4 fragColor; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 poissonDisk[16] = vec2[]( vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725), vec2(-0.09418410, -0.92938870), vec2( 0.34495938, 0.29387760), vec2(-0.91588581, 0.45771432), vec2(-0.81544232, -0.87912464), vec2(-0.38277543, 0.27676845), vec2( 0.97484398, 0.75648379), vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420), vec2(-0.26496911, -0.41893023), vec2( 0.79197514, 0.19090188), vec2(-0.24188840, 0.99706507), vec2(-0.81409955, 0.91437590), vec2( 0.19984126, 0.78641367), vec2( 0.14383161, -0.14100790) ); float random(vec3 seed, int i) { vec4 seed4 = vec4(seed,i); float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673)); return fract(sin(dot_product) * 43758.5453); } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2.0 * d * d); return d; } void main() { vec3 lightColor = vf_l_color.xyz; float lightPower = vf_l_color.w; vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz; vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor; vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz; vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0); vec3 l = normalize(f_lightDirection); float cosTheta = clamp(dot(n, l), 0.0, 1.0); vec3 E = normalize(f_eyeDirection); vec3 R = reflect(-l, n); float cosAlpha = clamp(dot(E, R), 0.0, 1.0); float visibility = 1.0; float bias = 0.005 * tan(acos(cosTheta)); bias = clamp(bias, 0.0, 0.01); for (int i = 0; i < 4; i ++) { float shading = (0.5 / 4.0); int index = i; visibility -= shading * (1.0 - texture(vf_t_shadow, vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0, (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w))); }\n" fragColor.xyz = materialAmbientColor + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5); fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w; } The following images should be enough to give you an idea of the problem. Before moving the camera: Moving the camera just a little. Moving it to the center of the scene.


  • Best way to go for simple online multi-player games?

    - by Mr_CryptoPrime
    I want to create a trivia game for my website. The graphic design does not have to be too fancy, probably no more advanced than a typical Flash game. It needs to be secure, because I want users to be able to play for real money. It also needs to run fast, so users don't spend their time frustrated with the game freezing. Compatibility, as with almost all online products, is key because of the large target market. I am most acquainted with Java programming, but I don't want to do it in Java if there is something much better. I am assuming I will have to use a variety of different languages for everything to come together. If someone could point out the main structure of everything so I could get a good start, that would be great!
    1) Language choice for simple, secure online multiplayer games?
    2) Perhaps a database like MySQL, stored on a secure server, for the trivia questions?
    3) Free educational resources and even simpler projects to practice on?
    Any ideas or suggestions would be helpful... Thanks!


  • Repairing back-facing triangles without user input

    - by LTR
    My 3D application works with user-imported 3D models. Frequently, those models have a few faces pointing in the wrong direction (for example, a 3D roof where a few triangles face the inside of the building). I want to repair those automatically. We can make several assumptions about these models: they are completely closed without holes, and the camera is always on the outside. My idea: shoot 500 rays from every triangle outwards in all directions. From the back side of the triangle, all rays will hit another part of the model. From the front side, at least one ray will hit nothing. Is there a better algorithm? Are there any papers about something like this?
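    For comparison, a cheaper variant of the same idea uses a single ray and a parity test instead of 500 rays. This is only a minimal sketch under the stated assumptions (closed mesh, viewer outside); the Triangle type, its helpers, and rayIntersectsTriangle (any standard ray/triangle test such as Moller-Trumbore) are hypothetical:

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;   // Vector3; any vector type works

        static class MeshOrientation
        {
            // A point nudged off a triangle along its normal lies outside a closed solid
            // iff a ray from it crosses the surface an even number of times.
            public static void FixWinding(IList<Triangle> mesh,
                Func<Vector3, Vector3, Triangle, bool> rayIntersectsTriangle)
            {
                foreach (var tri in mesh)
                {
                    Vector3 origin = tri.Centroid() + tri.Normal() * 0.001f;
                    Vector3 direction = tri.Normal();

                    int crossings = 0;
                    foreach (var other in mesh)
                    {
                        if (!ReferenceEquals(other, tri) &&
                            rayIntersectsTriangle(origin, direction, other))
                        {
                            crossings++;
                        }
                    }

                    // Odd number of crossings: the start point was inside the solid,
                    // i.e. the normal points inward, so reverse the vertex order.
                    if (crossings % 2 == 1)
                        tri.FlipWinding();
                }
            }
        }

    Grazing hits along shared edges can throw the parity off, so in practice the ray direction is usually jittered slightly or a few rays are voted, which is where the original multi-ray idea still helps.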


  • How do the rectangle bounds (x, y, width, height) in libgdx work?

    - by JG22
    I can't work out how to use the rectangle bounds in libgdx. I am currently using the superJumper example, which has two or three examples of them, such as:

        pauseBounds = new Rectangle(320 - 64, 480 - 64, 64, 64);

    This is the pause button in the top-right corner.

        resumeBounds = new Rectangle(160 - 96, 240, 192, 36);

    This is a rectangular resume button in the middle of the screen, in the menu that comes up when the pause button is pressed. Basically, my question is aimed at the 320 - 64 and the 160 - 96, because I don't know why they are used. I need to create a rectangle that covers the left side of the screen and the same on the right, because I want to create on-screen buttons. I have already created the code for these buttons and I have managed to get them to work, but I can't move the rectangles to where I want them. Thank you if you can help.


  • Interaction using Kinect in XNA

    - by Sweta Dwivedi
    So I have written a program to play a sound file whenever my right hand joint touches the 3D model. It works, somehow, but not very accurately; for example, it will play the sound when my hand is slightly under my 3D object, not exactly on it. How do I make it more accurate? Here is the code (handX and handY are the values coming from the skeleton data, e.g. RightHand.Joint.X). Also, this calculation doesn't work with animated sprites, which I need it to do.

        foreach (_3DModel s in Solar)
        {
            float x = (float)Math.Floor(((handX * 0.5f) + 0.5f) * (resolution.X));
            float y = (float)Math.Floor(((handY * -0.5f) + 0.5f) * (resolution.Y));
            float z = (float)Math.Floor((handZ) / 4 * 20000);
            if (Math.Sqrt(Math.Pow(x - s.modelPosition.X, 2) + Math.Pow(y - s.modelPosition.Y, 2)) < 15)
            {
                //Exit();
                PlaySound("hyperspace_activate");
                Console.WriteLine("1" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
            }
            else
            {
                Console.WriteLine("2" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
            }
        }


  • How do I draw a scrolling background?

    - by droidmachine
    How can I draw a background tile in my 2D side-scrolling game? Is this loop reasonable for OpenGL ES? My tile is 2400x480. I also want to use parallax scrolling in my game.

        batcher.beginBatch(Assets.background);
        for (int i = 0; i < 100; i++)
            batcher.drawSprite(0 + 2400 * i, 240, 2400, 480, Assets.backgroundRegion);
        batcher.endBatch();

    UPDATE: and this is my onDrawFrame. I'm passing deltaTime for FPS control.

        public void onDrawFrame(GL10 gl) {
            GLGameState state = null;
            synchronized (stateChanged) {
                state = this.state;
            }
            if (state == GLGameState.Running) {
                float deltaTime = (System.nanoTime() - startTime) / 1000000000.0f;
                startTime = System.nanoTime();
                screen.update(deltaTime);
                screen.present(deltaTime);
            }
            if (state == GLGameState.Paused) {
                screen.pause();
                synchronized (stateChanged) {
                    this.state = GLGameState.Idle;
                    stateChanged.notifyAll();
                }
            }
            if (state == GLGameState.Finished) {
                screen.pause();
                screen.dispose();
                synchronized (stateChanged) {
                    this.state = GLGameState.Idle;
                    stateChanged.notifyAll();
                }
            }
        }
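    For the parallax part, the usual trick is to scroll each background layer by a fraction of the camera movement and wrap the tile so the seam never shows. A minimal sketch (written in C# for brevity; the idea carries straight over to Java, and drawTileAtX stands in for whatever draw call you use):

        using System;

        static class Parallax
        {
            // Draw one background layer. layerFactor < 1 makes the layer scroll
            // slower than the camera, which is what creates the depth illusion.
            public static void DrawLayer(float cameraX, float layerFactor,
                                         float tileWidth, float screenWidth,
                                         Action<float> drawTileAtX)
            {
                float offset = (cameraX * layerFactor) % tileWidth;
                if (offset < 0) offset += tileWidth;

                // Start at the wrapped offset and tile across the screen,
                // so the layer is covered no matter where the wrap point falls.
                for (float x = -offset; x < screenWidth; x += tileWidth)
                    drawTileAtX(x);
            }
        }

    Far layers typically get small factors (say 0.25) and near layers larger ones (0.75 or 1.0).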


  • Calculating the rotational force of a 2D sprite

    - by Jon
    I am wondering if someone has an elegant way of calculating the following scenario. I have an object made up of (n) squares (random shapes, really, but we will pretend they are all rectangles). We are dealing with no gravity, so consider the object in space, viewed from a top-down perspective. I am applying a force to the object at a specific square. How do I calculate the rotation, based on the force being applied and the location at which it is applied? If the force is applied at the center square, the object would go straight. How should it behave the further I move from the center? How do I calculate the rotational velocity?
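    A minimal sketch of the standard rigid-body treatment (C#; the mass, moment of inertia, and vector types are assumptions, not something from the question):

        using Microsoft.Xna.Framework;   // Vector2; any 2D vector type works

        class RigidBody2D
        {
            public Vector2 Center;          // center of mass of the whole object
            public float Mass;
            public float MomentOfInertia;   // depends on how the squares are arranged
            public Vector2 Velocity;
            public float AngularVelocity;   // radians per second

            // Apply a force at a world-space point for dt seconds.
            public void ApplyForce(Vector2 force, Vector2 applicationPoint, float dt)
            {
                // Linear part: F = m * a, no matter where the force is applied.
                Velocity += force / Mass * dt;

                // Rotational part: torque is the 2D cross product r x F,
                // where r is the offset of the application point from the center of mass.
                Vector2 r = applicationPoint - Center;
                float torque = r.X * force.Y - r.Y * force.X;

                // Angular acceleration = torque / moment of inertia.
                AngularVelocity += torque / MomentOfInertia * dt;
            }
        }

    Pushing exactly through the center of mass gives zero torque (the object only translates), and the torque, and with it the spin, grows with the perpendicular distance from the center, which matches the intuition in the question.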


  • How was 20Q made?

    - by Dan the Man
    Ever since I was a kid, I've wondered how they made the 20Q electronic game. In this game, which is its own device, you think of an object, thing, or animal (e.g. a potato or a donkey). Once you mentally choose your thing, the device goes through a series of questions such as: Is it larger than a loaf of bread? Is it found outdoors? Is it used for recreation? For each of the questions you can answer yes, no, maybe, or unknown. The way I've always imagined it working was with immense nested conditionals (if statements), but I don't think that is very likely, as it would be terribly difficult to understand while coding it. I'm not looking for a discussion, as SE doesn't allow it; I'm looking for concrete knowledge or solutions.
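    One classic way such guessers are built (a minimal sketch, not necessarily how 20Q itself works): keep a table of candidate objects against questions, score each candidate by how well the player's answers match, and guess the highest-scoring candidate. Choosing which question to ask next (for example, the one that best splits the remaining candidates) is a separate step on top of this.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Guesser
        {
            // knowledge[object][question] = expected answer: +1 yes, -1 no, 0 maybe/unknown
            readonly Dictionary<string, Dictionary<string, int>> knowledge;
            readonly Dictionary<string, double> scores;

            public Guesser(Dictionary<string, Dictionary<string, int>> knowledge)
            {
                this.knowledge = knowledge;
                scores = knowledge.Keys.ToDictionary(o => o, o => 0.0);
            }

            // answer: +1 yes, -1 no, 0 maybe/unknown. Candidates whose stored answer
            // agrees with the player's answer gain score; disagreeing ones lose it.
            public void RecordAnswer(string question, int answer)
            {
                foreach (var obj in knowledge.Keys)
                {
                    int expected;
                    if (knowledge[obj].TryGetValue(question, out expected))
                        scores[obj] += expected * answer;
                }
            }

            // The current best guess is simply the highest-scoring candidate.
            public string BestGuess()
            {
                return scores.OrderByDescending(kv => kv.Value).First().Key;
            }
        }

    Scoring instead of hard elimination is what makes "maybe", "unknown", and the occasional wrong answer survivable, and a real system would also update its table from finished games, which is how these guessers get better over time.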


  • Android 2D terrain scrolling

    - by Nikola Ninkovic
    I want to make infinite 2D terrain based on my algorithm, and then move it along the Y axis (to the left). This is how I did it:

        public class Terrain {
            Queue<Integer> _bottom;
            Paint _paint;
            Bitmap _texture;
            Point _screen;
            int _numberOfColumns = 100;
            int _columnWidth = 20;

            public Terrain(int screenWidth, int screenHeight, Bitmap texture) {
                _bottom = new LinkedList<Integer>();
                _screen = new Point(screenWidth, screenHeight);
                _numberOfColumns = screenWidth / 6;
                _columnWidth = screenWidth / _numberOfColumns;
                for (int i = 0; i <= _numberOfColumns; i++) {
                    // Generate terrain point and put it into _bottom queue
                }
                _paint = new Paint();
                _paint.setStyle(Paint.Style.FILL);
                _paint.setShader(new BitmapShader(texture, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT));
            }

            public void update() {
                _bottom.remove();
                // Algorithm calculates next point
                _bottom.add(nextPoint);
            }

            public void draw(Canvas canvas) {
                Iterator<Integer> i = _bottom.iterator();
                int counter = 0;
                Path path = new Path();
                path.moveTo(0, _screen.y);
                while (i.hasNext()) {
                    path.lineTo(counter, _screen.y - i.next());
                    counter += _columnWidth;
                }
                path.lineTo(_screen.x, _screen.y);
                path.lineTo(0, _screen.y);
                canvas.drawPath(path2, _paint);
            }
        }

    The problem is that the game is too 'fast', so I tried pausing the thread with Thread.sleep(50); in the run() method of my game thread, but then the scrolling looks too torn. Is there any way to slow down the drawing of my terrain?


  • Could someone explain why my world position reconstructed from depth is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum corner points in world space, interpolate them across fragments, and then interpolate from near to far by the depth:

        highp float depth = (2.0 * CameraPlanes.x) / (CameraPlanes.y + CameraPlanes.x - texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x));
        // Reconstruct the world position from the linear depth
        highp vec3 world = mix( nearWorldPos, farWorldPos, depth );

    CameraPlanes.x is the near plane and CameraPlanes.y is the far plane. Assuming that my frustum positions are correct and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D

    Update: screenshot of the world position: http://imgur.com/sSlHd
    You can see it looks nearly correct. However, as the camera moves, the colours (positions) change, which they shouldn't. I can get this to work if I write the following into the depth attachment in the previous pass:

        gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y;

    and then read the depth texture like so:

        depth = texture( depthTexture, textureCoord ).x

    However, this will kill the hardware z-buffer optimizations.

