Search Results

Search found 43935 results on 1758 pages for 'development process'.

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything, really, but the main idea is to perform "ping-ponging": take the output of the first pass and then send it in for the second pass. In my example I want to draw to the R channel, then draw to the G channel, and produce a simple Venn diagram in the shader, but I need to detect the overlap. I can currently detect one circle or the other, but not the overlap. There are a red and a green circle overlapping, and I want to put a dynamic texture map in the overlap region; I can currently put it in either one, but not the intersection. Below is how it looks in the shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0     : TEXCOORD0;
            float2 tex1     : TEXCOORD1;
            float4 color    : COLOR;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            // Sample the pixel color from the texture using the sampler
            // at this texture coordinate location.
            float4 textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            float4 textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // Requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
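
    For reference, a minimal ping-pong arrangement on the C++ side might look like the sketch below. This is an illustrative sketch only, not the asker's code: the members m_rtvA, m_srvA, m_backBufferRTV, m_pixelShaderPass1/2 and m_indexCount are hypothetical, and it assumes two offscreen textures were created, each with both a render target view and a shader resource view.

        // Pass 1: render the first circle into offscreen texture A.
        ID3D11RenderTargetView* rtvA = m_rtvA.Get();                 // hypothetical member
        const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        m_d3dContext->OMSetRenderTargets(1, &rtvA, nullptr);
        m_d3dContext->ClearRenderTargetView(rtvA, clearColor);
        m_d3dContext->PSSetShader(m_pixelShaderPass1.Get(), nullptr, 0);
        m_d3dContext->DrawIndexed(m_indexCount, 0, 0);

        // A texture cannot be a render target and a shader input at once,
        // so unbind it as a target before sampling it.
        ID3D11RenderTargetView* nullRTV = nullptr;
        m_d3dContext->OMSetRenderTargets(1, &nullRTV, nullptr);

        // Pass 2: read what pass 1 wrote, render into the back buffer.
        ID3D11ShaderResourceView* srvA = m_srvA.Get();               // hypothetical member
        m_d3dContext->OMSetRenderTargets(1, m_backBufferRTV.GetAddressOf(), nullptr);
        m_d3dContext->PSSetShaderResources(0, 1, &srvA);
        m_d3dContext->PSSetShader(m_pixelShaderPass2.Get(), nullptr, 0);
        m_d3dContext->DrawIndexed(m_indexCount, 0, 0);

        // Unbind the SRV so texture A can be rendered to again next frame.
        ID3D11ShaderResourceView* nullSRV = nullptr;
        m_d3dContext->PSSetShaderResources(0, 1, &nullSRV);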

  • Queries regarding Geometry Shaders

    - by maverick9888
    I am dealing with geometry shaders using the GL_ARB_geometry_shader4 extension. My code goes like this:

        GLfloat vertices[] = {
             0.5, 0.25, 1.0,
             0.5, 0.75, 1.0,
            -0.5, 0.75, 1.0,
            -0.5, 0.25, 1.0,

             0.6, 0.35, 1.0,
             0.6, 0.85, 1.0,
            -0.6, 0.85, 1.0,
            -0.6, 0.35, 1.0
        };

        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glLinkProgram(psId);
        glBindAttribLocation(psId, 0, "Position");
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    My vertex shader is:

        #version 150
        in vec3 Position;
        void main()
        {
            gl_Position = vec4(Position, 1.0);
        }

    The geometry shader is:

        #version 150
        #extension GL_EXT_geometry_shader4 : enable
        in vec4 pos[3];
        void main()
        {
            int i;
            vec4 vertex;
            gl_Position = pos[0];
            EmitVertex();
            gl_Position = pos[1];
            EmitVertex();
            gl_Position = pos[2];
            EmitVertex();
            gl_Position = pos[0] + vec4(0.3, 0.0, 0.0, 0.0);
            EmitVertex();
            EndPrimitive();
        }

    Nothing is rendered with this code. What exactly should the mode in glDrawArrays() be? How does the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter affect glDrawArrays()? What I expect is that 3 vertices will be passed on to the geometry shader, and using those we construct a primitive of size 4 (assuming GL_TRIANGLE_STRIP requires 4 vertices). Can somebody please throw some light on this?
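
    An editorial aside, offered as a hedged guess rather than a verified fix: two details in the listing stand out. glBindAttribLocation and the glProgramParameteriEXT settings only take effect at the next glLinkProgram, so they must come before the link, and under this extension the per-program GL_GEOMETRY_VERTICES_OUT_EXT limit (which defaults to zero, so nothing gets emitted) also has to be set before linking. A sketch of that ordering:

        // Program parameters and attribute bindings take effect at link time.
        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glProgramParameteriEXT(psId, GL_GEOMETRY_VERTICES_OUT_EXT, 4); // default is 0
        glBindAttribLocation(psId, 0, "Position");
        glLinkProgram(psId);

        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vertices);

        // With input type GL_TRIANGLES the draw mode must decompose into
        // triangles (GL_TRIANGLES, GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN).
        glDrawArrays(GL_TRIANGLES, 0, 3);

    Note also that the vertex shader only writes gl_Position; under GL_EXT_geometry_shader4 that arrives in the geometry shader as gl_PositionIn[], so a user-declared pos[3] that nothing writes to would read undefined values.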

  • Self learning automated movement

    - by Super1
    I am trying to make a small demo in JavaScript. I have a black border and a car; the car travels randomly, and a line is drawn as its trail. When the user clicks inside the area, it creates an object (we'll call this the wall). If the car hits the wall, it goes back 3 paces and tries a different route. When it has hit the wall, it needs to log that location so it does NOT make the same mistake again. Here is my example: http://jsfiddle.net/Jtq3E/ How can I get the car to move by itself and create a trail?
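
    One hedged way to structure the "remember the mistake" part, sketched here in C++ rather than the asker's JavaScript, with a hypothetical grid-based car: keep a set of known-bad cells and consult it before every random step.

        #include <cstdlib>
        #include <set>
        #include <utility>
        #include <vector>

        struct Car {
            int x = 0, y = 0;
            std::vector<std::pair<int, int>> trail;  // visited cells, for drawing the line
            std::set<std::pair<int, int>> blocked;   // cells where a wall was hit before

            // Take one random step, avoiding cells already known to be bad.
            void step() {
                static const int dirs[4][2] = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
                for (int tries = 0; tries < 8; ++tries) {
                    const int* d = dirs[std::rand() % 4];
                    std::pair<int, int> next{ x + d[0], y + d[1] };
                    if (blocked.count(next)) continue;   // don't repeat a known mistake
                    trail.emplace_back(x, y);
                    x = next.first;
                    y = next.second;
                    return;
                }
            }

            // Call on collision: log the bad cell, then back up 3 paces along the trail.
            void hitWall() {
                blocked.insert({ x, y });
                for (int i = 0; i < 3 && !trail.empty(); ++i) {
                    x = trail.back().first;
                    y = trail.back().second;
                    trail.pop_back();
                }
            }
        };

    Driving step() from the render loop (requestAnimationFrame in the JavaScript version) gives the self-moving car, and the trail vector holds the polyline to draw.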

  • In an Entity-Component-System engine, how do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on an Entity-Component-System (ES system) for my game engine. I've been reading articles (mainly T=Machine) and reviewing some source code, and I think I have enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other?

    Let me use an example. Assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this:

    Ship body: Movement, Rendering
    Cannon: Position (locked relative to the ship body), Tracking/Firing at the hero, Taking damage until disabled
    Core: Position (locked relative to the ship body), Tracking/Firing at the hero, Taking damage until disabled, Disabling (er... destroying) all other entities in the ship group

    My goal is something that can be identified (and manipulated) as a distinct game element without having to rewrite subsystems from the ground up every time I want to build a new aggregate element. How do I implement this kind of design in an ES system?

    Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers, and it makes the design feel more like OOP.

    Do I implement them as separate entities with some kind of connecting component (BossComponent) and a related system (BossSubSystem)? I can't help but think this will be hard to implement, since how components communicate seems to be a big bear trap.

    Do I implement them as one entity with a collection of components (ShipComponent, CannonComponents, CoreComponent)? This one seems to veer way off the ES system's intent (the components here seem too much like heavyweight entities), but I'm new to this, so I figured I would put it out there.

    Or do I implement them as something else I haven't mentioned?

    I know that this can be implemented very easily in OOP, but choosing ES over OOP is a choice I will stick with. If I need to break with pure ES theory to implement this design, I will (it's not as if I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with a bad design.

    For extra credit, consider the same design, but where each of the "boss entities" is connected to a larger "BigBoss entity" made of a main body, a main core and 3 "boss entities". That would let me see a solution for at least 3 levels (grandparent-parent-child), which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.
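
    As a hedged illustration of the first option (one common pattern, not the author's method): the parent link can itself be plain component data, so entities stay empty containers and an ordinary system resolves the hierarchy each frame. A sketch in C++, with hypothetical component storage as one map per component type:

        #include <unordered_map>

        using Entity = unsigned int;

        struct Transform { float x = 0, y = 0; };

        // Pure data: "my position is an offset from another entity's position".
        struct AttachedTo {
            Entity parent;
            float offsetX, offsetY;
        };

        // A system that snaps attached entities to their parents once per frame.
        void attachmentSystem(std::unordered_map<Entity, Transform>& transforms,
                              const std::unordered_map<Entity, AttachedTo>& attachments) {
            for (const auto& [entity, att] : attachments) {
                auto parent = transforms.find(att.parent);
                auto child  = transforms.find(entity);
                if (parent == transforms.end() || child == transforms.end())
                    continue;  // a missing parent just leaves the child where it is
                child->second.x = parent->second.x + att.offsetX;
                child->second.y = parent->second.y + att.offsetY;
            }
        }

    Deeper chains (BigBoss -> boss -> cannon) fall out of the same data as long as parents are resolved before children, e.g. by sorting attachments by depth or iterating until positions stabilise. The "core destroys the group" behaviour can likewise be a GroupMember component plus a system that removes every entity sharing a group id.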

  • Resultant Vector Algorithm for 2D Collisions

    - by John
    I am making a Pong-based game where a puck hits a paddle and bounces off. Both the puck and the paddles are circles. I came up with an algorithm to calculate the resultant vector of the puck once it meets a paddle. The game seems to function correctly, but I'm not entirely sure my algorithm is correct. Here are my variables for the algorithm:

    Given:
    velocity = the magnitude of the initial velocity of the puck before the collision
    x = the x coordinate of the puck
    y = the y coordinate of the puck
    moveX = the horizontal speed of the puck
    moveY = the vertical speed of the puck
    otherX = the x coordinate of the paddle
    otherY = the y coordinate of the paddle
    piece.horizontalMomentum = the horizontal speed of the paddle before it hits the puck
    piece.verticalMomentum = the vertical speed of the paddle before it hits the puck
    slope = the direction, in radians, of the puck's velocity
    distX = the horizontal distance between the center of the puck and the center of the paddle
    distY = the vertical distance between the center of the puck and the center of the paddle

    The algorithm solves for:
    impactAngle = the angle, in radians, of the impact
    newSpeedX = the speed of the resultant vector in the x direction
    newSpeedY = the speed of the resultant vector in the y direction

    Here is the code for my algorithm:

        int otherX = piece.x;
        int otherY = piece.y;
        double velocity = Math.sqrt((moveX * moveX) + (moveY * moveY));
        double slope = Math.atan(moveX / moveY);
        int distX = x - otherX;
        int distY = y - otherY;
        double impactAngle = Math.atan(distX / distY);
        double newAngle = impactAngle + slope;
        int newSpeedX = (int)(velocity * Math.sin(newAngle)) + piece.horizontalMomentum;
        int newSpeedY = (int)(velocity * Math.cos(newAngle)) + piece.verticalMomentum;

    For those who are not program-savvy, here it is simplified:

        velocity = √(moveX² + moveY²)
        slope = arctan(moveX / moveY)
        distX = x - otherX
        distY = y - otherY
        impactAngle = arctan(distX / distY)
        newAngle = impactAngle + slope
        newSpeedX = velocity * sin(newAngle) + piece.horizontalMomentum
        newSpeedY = velocity * cos(newAngle) + piece.verticalMomentum

    My question: Is this algorithm correct? Is there an easier/simpler way to do what I'm trying to do?
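
    For comparison, a common alternative (offered as a hedged sketch, not a correction of the above): for circle-on-circle bounces, reflect the puck's velocity about the collision normal. This sidesteps the quadrant problems that plain arctan(x/y) runs into when the signs of its arguments flip. Sketch in C++, reusing the question's naming conventions:

        #include <cmath>

        // Reflect the puck's velocity off a circular paddle.
        // (x, y): puck centre; (otherX, otherY): paddle centre;
        // (paddleVX, paddleVY): paddle velocity (the "momentum" terms above).
        void bounce(double x, double y, double& moveX, double& moveY,
                    double otherX, double otherY,
                    double paddleVX, double paddleVY) {
            // Collision normal: from the paddle centre to the puck centre.
            double nx = x - otherX, ny = y - otherY;
            double len = std::sqrt(nx * nx + ny * ny);
            if (len == 0.0) return;              // centres coincide; give up
            nx /= len; ny /= len;

            // Work relative to the paddle so a moving paddle imparts speed.
            double rvx = moveX - paddleVX, rvy = moveY - paddleVY;

            // Only bounce when the puck is moving into the paddle.
            double dot = rvx * nx + rvy * ny;
            if (dot >= 0.0) return;

            // Reflect: v' = v - 2(v·n)n, then return to world coordinates.
            moveX = rvx - 2.0 * dot * nx + paddleVX;
            moveY = rvy - 2.0 * dot * ny + paddleVY;
        }

    This treats the paddle as immovable (infinite mass), which is usually what a Pong-style game wants.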

  • Spherical harmonics lighting - what does it accomplish?

    - by TravisG
    From my understanding, spherical harmonics are sometimes used to approximate certain aspects of lighting (depending on the application). For example, it seems you can approximate the diffuse lighting caused by a directional light source on a surface point, or parts of it, by calculating the SH coefficients for all the bands you're using (for whatever accuracy you desire) in the direction of the surface normal and scaling them by whatever you need to scale them with (e.g. light color and intensity, dot(n,l), etc.). What I don't understand yet is what this is supposed to accomplish. What are the actual advantages of doing it this way, as opposed to evaluating the diffuse BRDF the normal way? Do you save calculations somewhere? Is there some additional information contained in the SH representation that you can't get out of the scalar results of the normal evaluation?
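
    For context, an editorial aside on where the payoff usually comes from: an entire lighting environment can be pre-projected onto a handful of SH coefficients, after which per-pixel irradiance collapses to a short weighted sum instead of an integral over all incoming directions. In the irradiance-environment-map formulation of Ramamoorthi and Hanrahan (9 coefficients for bands l ≤ 2):

        % Irradiance as a truncated SH expansion over the normal direction n:
        %   L_lm      : SH coefficients of incoming radiance, precomputed per environment
        %   \hat{A}_l : SH coefficients of the clamped-cosine lobe max(cos(theta), 0)
        E(\mathbf{n}) \;\approx\; \sum_{l=0}^{2} \sum_{m=-l}^{l} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{n})

    For a single directional light, evaluating dot(n, l) directly is indeed cheaper; the advantage appears once the light is an area source or a whole environment.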

  • Isometric Screen View to World View

    - by Sleepy Rhino
    I am having trouble working out the math to transform screen coordinates to grid coordinates. The code below is how far I have got, but it is totally wrong; any help or resources on fixing this would be great. I've had a complete mind block with this for some reason.

        private Point ScreenToIso(int mouseX, int mouseY)
        {
            int offsetX = WorldBuilder.STARTX;
            int offsetY = WorldBuilder.STARTY;
            Vector2 startV = new Vector2(offsetX, offsetY);
            int mapX = offsetX - mouseX;
            int mapY = offsetY - mouseY + (WorldBuilder.tileHeight / 2);
            mapY = -1 * (mapY / WorldBuilder.tileHeight);
            mapX = (mapX / WorldBuilder.tileHeight) + mapY;
            return new Point(mapX, mapY);
        }
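
    For reference, the standard inverse for a 2:1 diamond isometric layout, offered as a hedged sketch in C++: tileWidth/tileHeight and the origin are stand-ins for the WorldBuilder constants, and the axis conventions may need flipping to match the game.

        #include <cmath>

        struct Point { int x, y; };

        // Inverse of the usual diamond projection:
        //   screenX = originX + (mapX - mapY) * tileWidth  / 2
        //   screenY = originY + (mapX + mapY) * tileHeight / 2
        Point screenToIso(int mouseX, int mouseY,
                          int originX, int originY,
                          int tileWidth, int tileHeight) {
            // Express the mouse position in tile-sized units from the origin.
            double dx = (mouseX - originX) / (double)tileWidth;
            double dy = (mouseY - originY) / (double)tileHeight;

            // Solve the two projection equations for mapX and mapY.
            int mapX = (int)std::floor(dy + dx);
            int mapY = (int)std::floor(dy - dx);
            return { mapX, mapY };
        }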

  • XNA Diffuse Shader Issue. Edge lighting problem. Image Attached

    - by adtither
    As you can see in this image, the diffuse shading is working correctly in some places, but in other places, such as the bottom of the sphere, you can see the squares/triangles of the mesh. Any idea what would be causing this? Let me know if you need any more information related to the code; I can upload my normals calculations and shader effect if required. EDIT: Here's a link to the shader I'm using: http://pastebin.com/gymVc7CP Link to the normals calculations: http://pastebin.com/KnMGdzHP It seems to be an issue with edge lighting, but I can't see where I'm going wrong in the normals calculations.
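
    A common cause of faceting like this, offered as a guess rather than a diagnosis: face normals that are never averaged across shared vertices, or averaged normals that are not renormalised afterwards. A hedged C++ sketch of smooth normal generation for an indexed triangle mesh:

        #include <cmath>
        #include <vector>

        struct Vec3 { float x = 0, y = 0, z = 0; };

        static Vec3 sub(const Vec3& a, const Vec3& b) {
            return { a.x - b.x, a.y - b.y, a.z - b.z };
        }
        static Vec3 cross(const Vec3& a, const Vec3& b) {
            return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
        }

        // Accumulate each triangle's face normal into its three vertices, then normalise.
        std::vector<Vec3> smoothNormals(const std::vector<Vec3>& pos,
                                        const std::vector<int>& idx) {
            std::vector<Vec3> n(pos.size());
            for (size_t i = 0; i + 2 < idx.size(); i += 3) {
                Vec3 face = cross(sub(pos[idx[i + 1]], pos[idx[i]]),
                                  sub(pos[idx[i + 2]], pos[idx[i]]));
                for (int k = 0; k < 3; ++k) {
                    Vec3& v = n[idx[i + k]];
                    v.x += face.x; v.y += face.y; v.z += face.z;
                }
            }
            for (Vec3& v : n) {
                float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
                if (len > 0) { v.x /= len; v.y /= len; v.z /= len; }
            }
            return n;
        }

    The unnormalised cross product conveniently weights each face by its area, which usually looks right; renormalising in the pixel shader after interpolation is the other half of the fix.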

  • Creating practically solvable 15 puzzle inputs

    - by Ashwin
    I am now developing a 15-puzzle game. I know the method to detect unsolvable puzzles. But unlike the 8-puzzle, the 15-puzzle takes quite a long time to solve for some input states, while other input states can be solved within 5 seconds. The problem is that I cannot give the user (the player) a puzzle whose solution takes more than 10 seconds to compute (if he/she chooses to see the solution). So what I want is that when I initially shuffle the puzzle, I only present those puzzles which can be solved within 10 seconds. There must be some way to determine the hardness of a puzzle. I tried searching the net but could not find one. Does anyone know a way of determining the hardness of a puzzle? NOTE: I am using the A* algorithm to find the solution, on a computer with 3 GB of RAM and a 2.27 GHz processor.
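
    One hedged workaround rather than a true hardness measure: generate boards by applying k random legal moves to the solved state. The optimal solution is then at most k moves, so k indirectly caps the A* search time, and solvability comes for free. A sketch:

        #include <array>
        #include <random>
        #include <utility>

        using Board = std::array<int, 16>;  // 0 denotes the blank

        // Apply k random legal moves starting from the solved state,
        // never immediately undoing the previous move.
        Board shuffled(int k, std::mt19937& rng) {
            Board b;
            for (int i = 0; i < 16; ++i) b[i] = (i + 1) % 16;  // 1..15, blank last
            int blank = 15, prev = -1;
            const int dr[4] = { -1, 1, 0, 0 }, dc[4] = { 0, 0, -1, 1 };
            std::uniform_int_distribution<int> dir(0, 3);
            for (int moves = 0; moves < k; ) {
                int d = dir(rng);
                int r = blank / 4 + dr[d], c = blank % 4 + dc[d];
                if (r < 0 || r > 3 || c < 0 || c > 3) continue;
                int next = r * 4 + c;
                if (next == prev) continue;        // would undo the last move
                std::swap(b[blank], b[next]);
                prev = blank;
                blank = next;
                ++moves;
            }
            return b;
        }

    A cheaper filter on top of this: the sum of Manhattan distances of all tiles from their goal squares (the usual A* heuristic) is a lower bound on the solution length, so rejecting boards whose bound exceeds a threshold weeds out the provably long ones, though a small bound still doesn't guarantee a fast search.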

  • Making a perfect map (not tile-based)

    - by Sri Harsha Chilakapati
    I would like to make a map system like the one in GameMaker; the latest code is here. I've searched a lot on Google, and everything I found was tutorials about tile maps. As tile maps do not fit every type of game, and GameMaker uses tiles for a different purpose, I want to make a "sprite based" map. The major problem I experienced was collision detection being slow for large maps, so I wrote a QuadTree class (here), and collision detection is now fine up to 50000 objects in the map without pixel-perfect collision detection, and 30000 objects with pixel-perfect collisions enabled. Now I need to improve the method "isObjectCollisionFree(float x, float y, boolean solid, GObject obj)". The existing implementation is becoming slow in platformer games, and I need suggestions for improvement. The current implementation:

        /**
         * Checks if a specific position is collision free in the map.
         *
         * @param x      The x-position of the object
         * @param y      The y-position of the object
         * @param solid  Whether to check only for solid objects
         * @param object The object (used for width and height)
         * @return True if there is no collision and false if it collides.
         */
        public static boolean isObjectCollisionFree(float x, float y, boolean solid, GObject object) {
            boolean bool = true;
            Rectangle bounds = new Rectangle(Math.round(x), Math.round(y),
                                             object.getWidth(), object.getHeight());
            ArrayList<GObject> collidables = quad.retrieve(bounds);
            for (int i = 0; i < collidables.size(); i++) {
                GObject obj = collidables.get(i);
                if (obj.isSolid() == solid && obj != object) {
                    if (obj.isAlive()) {
                        if (bounds.intersects(obj.getBounds())) {
                            bool = false;
                            if (Global.USE_PIXELPERFECT_COLLISION) {
                                bool = !GUtil.isPixelPerfectCollision(
                                        x, y, object.getAnimation().getBufferedImage(),
                                        obj.getX(), obj.getY(),
                                        obj.getAnimation().getBufferedImage());
                            }
                            break;
                        }
                    }
                }
            }
            return bool;
        }

    Thanks.

  • Blender: How to "meshify" an object I made from Bezier curves

    - by capcom
    I made a star shape using Bezier curves and extruded it (see pic below). What I want to do is give it a rounder look - not just around the edges by using beveling; I want it to kind of look like this (well, that shape, anyway). How would I go about doing this? Please keep in mind that I am extremely new to Blender. I thought I could somehow turn this star into one of those default shapes that have tonnes of squares which I could pull out, and apply a mirror to it so that the same thing happens on both sides. I really don't know how to do it and would appreciate your help.

  • Where to store shaders

    - by Mark Ingram
    I have an OpenGL renderer which has a Scene member variable. The Scene object can contain N SceneObjects. I use these SceneObjects for storing the vertex positions and any transforms. My question is: where should shaders be stored in this arrangement? I guess they need to live in a central location, because multiple objects can use the same shader. But each object also needs access to its shader, because it needs to set attributes on it. Does anyone have any advice?
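
    One hedged arrangement (a common pattern, not the only answer): the renderer owns a shader cache keyed by name, and each SceneObject holds a shared handle to the shader it uses, setting its per-object uniforms through that handle just before its own draw call. A sketch:

        #include <map>
        #include <memory>
        #include <string>

        struct Shader {
            // program handle, bind(), setUniform(...), etc.
        };

        // Owned by the renderer; hands out shared references so a shader
        // compiled once can be used by any number of SceneObjects.
        class ShaderLibrary {
            std::map<std::string, std::shared_ptr<Shader>> cache;
        public:
            std::shared_ptr<Shader> get(const std::string& name) {
                auto it = cache.find(name);
                if (it != cache.end()) return it->second;
                auto shader = std::make_shared<Shader>(); // compile/link from 'name' here
                cache.emplace(name, shader);
                return shader;
            }
        };

    Sorting the draw list by shader then becomes straightforward as well, which keeps state changes down.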

  • OpenGL ES 2.0 example for JOGL

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL, but "by Jupiter!" it has been a total fail. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide (and at the same time looking at the WebGL example -- which worked fine), yet without any success. Are there any examples out there? If anyone else wants some extra help regarding this question, see this thread on the official JogAmp forum.

  • How to keep balance / Unlock items / achievement rules

    - by Mark Knol
    I'm working on an engine for a game, to learn JavaScript and just because it's fun. I'm a Flash developer and I know how to build websites; making games is a different challenge, JavaScript is a challenge, but I'd love to learn how to structure the code and which patterns are common. I don't mind if the game is never finished; I'm mostly interested in the programming part of it. I don't have a particular end result in mind, so I'll see where it takes me.

    I currently have a system where you can buy items. An item costs a specified amount of gold, silver, diamonds, etc. When you have selected and bought the item, it takes time before you get rewarded. When the time is over, you are rewarded with other properties (gold, energy, diamonds). For example, you can buy an apple for 50 gold; it takes a minute, and you get rewarded with 75 energy. Or you can take a run: it costs 50 energy, takes 5 minutes, and the reward is 25 gold and 25 silver. I call these definitions actions. I already have a system where this works, and I can define as many actions with as many properties as I want. The definitions I have look like this:

        {id:101, category:544, onInit:{gold:-75},                onComplete:{energy:75},            time:2000,  name:"Apple",   locked:false}
        {id:102, category:544, onInit:{gold:-135},               onComplete:{energy:145},           time:2000,  name:"Banana",  locked:false}
        {id:106, category:302, onInit:{energy:-50, power:-25},   onComplete:{gold:100, diamonds:2}, time:10000, name:"Run",     locked:false}
        {id:107, category:302, onInit:{energy:-70, silver:-55},  onComplete:{gold:100},             time:10000, name:"Dance",   locked:false}
        {id:108, category:302, onInit:{energy:-230, power:-355}, onComplete:{gold:70, silver:70},   time:10000, name:"Fitness", locked:false}

    Now I would love to add a system where I can lock/unlock actions using achievement rules. Let's say that if you buy 10 apples, you unlock a new action, like bananas, which cost more and reward more. In the future I may also want to restrict achievements and actions to levels. I am kind of stuck on how to structure this, and I have two questions:

    Which patterns are used to define achievements? How/where are they defined? Should an achievement be part of the action, or should it be a separate controller? Is it a good idea to register all completed actions with it? I think I want multiple types of achievement rules, and I'd love to hear some ideas on how to develop them.

    How do you create/find a good balance, so the user does not get stuck and cannot cheat by repeating a pattern of actions to gain too many rewards? I know there is no simple answer, and I'm lacking a solid game concept, but I wonder if anyone has made such a game and how you dealt and played with it.
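
    On the first question, one hedged sketch (in C++ for illustration, though the engine is JavaScript): keep achievements in a separate tracker that actions report into, with the rules themselves as plain data, just like the action definitions above. Nothing here is from the original post; the ids merely echo its examples.

        #include <map>
        #include <utility>
        #include <vector>

        // A data-driven unlock rule: "complete action X at least N times to unlock Y".
        struct AchievementRule {
            int actionId;       // which action to count (e.g. 101 = Apple)
            int requiredCount;  // how many completions are needed (e.g. 10)
            int unlocksId;      // action to unlock (e.g. 102 = Banana)
        };

        class AchievementTracker {
            std::map<int, int> completions;     // actionId -> times completed
            std::vector<AchievementRule> rules;
        public:
            explicit AchievementTracker(std::vector<AchievementRule> r)
                : rules(std::move(r)) {}

            // Call from an action's onComplete; returns ids that just unlocked.
            std::vector<int> onActionCompleted(int actionId) {
                int n = ++completions[actionId];
                std::vector<int> unlocked;
                for (const AchievementRule& rule : rules)
                    if (rule.actionId == actionId && n == rule.requiredCount)
                        unlocked.push_back(rule.unlocksId);
                return unlocked;
            }
        };

    Keeping the tracker separate means new rule types (counts per category, totals of gold earned, level gates) can be added without touching the actions themselves; each rule type is just another predicate over the recorded history.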

  • Staggered Isometric Map: Calculate map coordinates for point on screen

    - by Chris
    I know there are already a lot of resources about this, but I haven't found one that matches my coordinate system, and I'm having massive trouble adjusting any of those solutions to my needs. What I've learned is that the best way to do this is to use a transformation matrix. Implementing that is no problem, but I don't know in which way I have to transform the coordinate space. Here's an image that shows my coordinate system: How do I transform a point on the screen to this coordinate system?
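
    Since the image isn't reproduced here, a layout-agnostic recipe offered as a hedged sketch: write the forward map-to-screen transform as a 2x2 matrix plus an offset, then invert that matrix once and reuse it for every mouse position.

        struct Mat2 { double a, b, c, d; };  // | a b |
                                             // | c d |

        // Forward: screen = M * map + offset.  Inverse: map = M^-1 * (screen - offset).
        bool screenToMap(const Mat2& M, double offsetX, double offsetY,
                         double screenX, double screenY,
                         double& mapX, double& mapY) {
            double det = M.a * M.d - M.b * M.c;
            if (det == 0.0) return false;    // degenerate basis; no inverse exists
            double px = screenX - offsetX;
            double py = screenY - offsetY;
            mapX = ( M.d * px - M.b * py) / det;
            mapY = (-M.c * px + M.a * py) / det;
            return true;
        }

    One caveat for staggered (as opposed to diamond) isometric maps: the per-row stagger makes the mapping only piecewise affine, so the usual approach is to apply the affine inverse first and then correct the result by testing which tile diamond the point actually falls in.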

  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It has been frequently used in animated short subjects, such as those in the Looney Tunes and Merrie Melodies cartoon series, to signify the end of a story. When used in this manner, the iris wipe may be centered on a certain focal point and may be used as a device for a "parting shot" joke, a fourth-wall-breaching wink by a character, or other purposes. Example from flasheff.com. Your answer may or may not include a code sample; a language-agnostic explanation is considered enough.
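
    The core of the effect is a per-pixel comparison between the distance to a focal point and an animated radius. A hedged sketch as a plain C++ software loop over an ARGB buffer (in practice this usually lives in a fragment shader or uses a circular mask/stencil):

        #include <cmath>
        #include <cstdint>
        #include <vector>

        // Darkens everything outside a circle around (focusX, focusY).
        // Animate 'radius' from a screen-covering value down to 0 to close the iris.
        void irisWipe(std::vector<uint32_t>& argb, int width, int height,
                      float focusX, float focusY, float radius, float softEdge = 4.0f) {
            for (int y = 0; y < height; ++y) {
                for (int x = 0; x < width; ++x) {
                    float dx = x - focusX, dy = y - focusY;
                    float dist = std::sqrt(dx * dx + dy * dy);
                    // 1 inside the iris, 0 outside, smooth across softEdge pixels.
                    float t = (radius - dist) / softEdge;
                    float mask = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
                    uint32_t& p = argb[y * width + x];
                    uint32_t r = (uint32_t)(((p >> 16) & 0xFF) * mask);
                    uint32_t g = (uint32_t)(((p >> 8)  & 0xFF) * mask);
                    uint32_t b = (uint32_t)(( p        & 0xFF) * mask);
                    p = (p & 0xFF000000u) | (r << 16) | (g << 8) | b;
                }
            }
        }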

  • Most efficient AABB - Ray intersection algorithm for input/output distance calculation

    - by Tobbey
    Thanks to the following thread (most efficient AABB vs Ray collision algorithms), I have seen some very fast algorithms for computing the ray/AABB intersection point. Unfortunately, most of the recent algorithms are accelerated by omitting the "output" intersection point of the box. In my application, I would be interested in getting both the distance from the ray source to the input of the box (t0) and the distance from the ray source to the output of the box (t1). I have seen, for instance, that Eisemann designed a very fast version compared with the Plücker, Smits, and other tests, but it does not cover the case when both input and output distances must be computed; see: http://www.cg.cs.tu-bs.de/publications/Eisemann07FRA/ Does someone know where I can find more information on algorithm performance for this specific input/output problem? Thank you in advance.
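
    For reference, the classic slab test already yields both distances; a hedged sketch follows (it assumes IEEE floating point, so dividing by a zero direction component correctly produces infinities):

        #include <algorithm>

        struct Vec3 { double x, y, z; };

        // Returns true on a hit and fills t0 (entry) and t1 (exit) along the ray.
        // invDir holds 1/direction per component, precomputed by the caller.
        bool rayAABB(const Vec3& orig, const Vec3& invDir,
                     const Vec3& boxMin, const Vec3& boxMax,
                     double& t0, double& t1) {
            double tx1 = (boxMin.x - orig.x) * invDir.x;
            double tx2 = (boxMax.x - orig.x) * invDir.x;
            double tmin = std::min(tx1, tx2), tmax = std::max(tx1, tx2);

            double ty1 = (boxMin.y - orig.y) * invDir.y;
            double ty2 = (boxMax.y - orig.y) * invDir.y;
            tmin = std::max(tmin, std::min(ty1, ty2));
            tmax = std::min(tmax, std::max(ty1, ty2));

            double tz1 = (boxMin.z - orig.z) * invDir.z;
            double tz2 = (boxMax.z - orig.z) * invDir.z;
            tmin = std::max(tmin, std::min(tz1, tz2));
            tmax = std::min(tmax, std::max(tz1, tz2));

            if (tmax < std::max(tmin, 0.0)) return false;  // miss, or box behind ray
            t0 = tmin;
            t1 = tmax;
            return true;
        }

    The extra cost over an occlusion-only test is small, since tmin and tmax are computed along the way; the fast published variants largely differ in how they branch, not in whether those values exist.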

  • Embed Unity3D and load multiple games from a single app

    - by Rafael Steil
    Is it possible to export an entire Unity3D project/game as an AssetBundle and load it on iOS/Android/Windows in an app that doesn't know anything about that game beforehand? What I have in mind is something like what the web plugin does - it loads a series of .unity3d files over HTTP and renders them inline in the browser window. Is it even possible to do something similar for iOS/Android? I have read a lot of docs so far, but still can't be sure: http://floored.com/blog/2013/integrating-unity3d-within-ios-native-application.html http://docs.unity3d.com/Manual/LoadingResourcesatRuntime.html http://docs.unity3d.com/Manual/AssetBundlesIntro.html The code from the post at http://forum.unity3d.com/threads/112703-Override-Unity-Data-folder-path?p=749108&viewfull=1#post749108 works for Android, but how about iOS and other platforms?

  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working for a few months on a procedural spherical terrain generator with a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged-multifractal algorithm. For now I can only displace the vertices of the patches directly, using the output of the ridged multifractal. I don't understand how to generate height maps such that the vertices of a terrain patch can be mapped to pixels in the height map.

    The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating that position into (u, v) coordinates, determining the pixel position directly from the (u, v) coordinates, and setting the grayscale color based on the height. I feel as if this is the right approach, but a few other things may further complicate my problem. First of all, I intend to use height maps with a pixel resolution of 192x192, while the vertex "resolution" of each terrain patch is only 16x16 - meaning that for most of the pixels I don't have any vertices to sample for the RMF. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just displace the vertices directly, as I currently do).

    I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with:

    "Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the position map (RGB) - this will be used to calculate the normal map. The fourth channel (A) is used to store the height value itself, to be used in the height map."

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and which positions are sampled for the RMF to generate the pixels, if vertices alone cannot be used.
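
    A hedged sketch of the sampling idea (the helpers are hypothetical, and this is an interpretation of the paper's step rather than its actual code): because the fractal is defined everywhere in 3D, the height map can be filled per pixel instead of per vertex. Each pixel's (u, v) inside the patch is mapped to the cube face, projected to the sphere, and fed to the RMF directly, so the 16x16 vertex grid never enters into it.

        #include <cmath>
        #include <vector>

        struct Vec3 { double x, y, z; };

        // Hypothetical: map a face-local UV in [0,1]^2 onto the unit cube face.
        Vec3 faceUVToCube(int face, double u, double v);
        // Hypothetical: the 3D ridged-multifractal noise, defined for any point.
        double ridgedMultifractal(const Vec3& p);

        // Fill a height map whose resolution exceeds the patch's vertex grid.
        // (u0,v0)-(u1,v1) is this patch's rectangle on its cube face.
        std::vector<float> bakeHeightmap(int res, int face,
                                         double u0, double v0, double u1, double v1) {
            std::vector<float> heights(res * res);
            for (int j = 0; j < res; ++j) {
                for (int i = 0; i < res; ++i) {
                    double u = u0 + (u1 - u0) * (i + 0.5) / res;  // pixel centre
                    double v = v0 + (v1 - v0) * (j + 0.5) / res;
                    Vec3 c = faceUVToCube(face, u, v);
                    double len = std::sqrt(c.x * c.x + c.y * c.y + c.z * c.z);
                    Vec3 s = { c.x / len, c.y / len, c.z / len }; // cube -> sphere
                    heights[j * res + i] = (float)ridgedMultifractal(s);
                }
            }
            return heights;
        }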

  • C++ OpenGL wireframe cube rendering blank

    - by caleb.breckon
    I'm just trying to draw a bunch of lines that make up a "cube". I can't for the life of me figure out why this is producing a black screen. The debugger does not break at any point. I'm sure it's a problem with my pointers, as I'm only decent at them in regular C++, and in OpenGL it gets even worse.

        const char* vertexSource =
            "#version 150\n"
            "in vec3 position;"
            "void main() {"
            "   gl_Position = vec4(position, 1.0);"
            "}";

        const char* fragmentSource =
            "#version 150\n"
            "out vec4 outColor;"
            "void main() {"
            "   outColor = vec4(1.0, 1.0, 1.0, 1.0);"
            "}";

        int main()
        {
            initializeGLFW();

            // Initialize GLEW
            glewExperimental = GL_TRUE;
            glewInit();

            // Create Vertex Array Object
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            // Create a Vertex Buffer Object and copy the vertex data to it
            GLuint vbo;
            glGenBuffers(1, &vbo);

            float vertices[] = {
                 1.0f,  1.0f,  1.0f, // Vertex 0 (X, Y, Z)
                -1.0f,  1.0f,  1.0f, // Vertex 1 (X, Y, Z)
                -1.0f, -1.0f,  1.0f, // Vertex 2 (X, Y, Z)
                 1.0f, -1.0f,  1.0f, // Vertex 3 (X, Y, Z)
                 1.0f,  1.0f, -1.0f, // Vertex 4 (X, Y, Z)
                -1.0f,  1.0f, -1.0f, // Vertex 5 (X, Y, Z)
                -1.0f, -1.0f, -1.0f, // Vertex 6 (X, Y, Z)
                 1.0f, -1.0f, -1.0f  // Vertex 7 (X, Y, Z)
            };

            GLuint indices[] = {
                0, 1, 1, 2, 2, 3, 3, 0,
                4, 5, 5, 6, 6, 7, 7, 4,
                0, 4, 1, 5, 2, 6, 3, 7
            };

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
            //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo);
            //glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

            // Create and compile the vertex shader
            GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vertexShader, 1, &vertexSource, NULL);
            glCompileShader(vertexShader);

            // Create and compile the fragment shader
            GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
            glCompileShader(fragmentShader);

            // Link the vertex and fragment shader into a shader program
            GLuint shaderProgram = glCreateProgram();
            glAttachShader(shaderProgram, vertexShader);
            glAttachShader(shaderProgram, fragmentShader);
            glBindFragDataLocation(shaderProgram, 0, "outColor");
            glLinkProgram(shaderProgram);
            glUseProgram(shaderProgram);

            // Specify the layout of the vertex data
            GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
            glEnableVertexAttribArray(posAttrib);
            glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Main loop
            while (glfwGetWindowParam(GLFW_OPENED)) {
                // Clear the screen to black
                glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
                glClear(GL_COLOR_BUFFER_BIT);

                // Draw lines from 2 vertices
                glDrawElements(GL_LINES, sizeof(indices), GL_UNSIGNED_INT, indices);

                // Swap buffers
                glfwSwapBuffers();
            }

            // Clean up
            glDeleteProgram(shaderProgram);
            glDeleteShader(fragmentShader);
            glDeleteShader(vertexShader);
            //glDeleteBuffers(1, &ebo);
            glDeleteBuffers(1, &vbo);
            glDeleteVertexArrays(1, &vao);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }
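
    An editorial aside, offered as a hedged guess rather than a confirmed diagnosis: glDrawElements takes an element count, but sizeof(indices) yields a byte count (4x too many), and in a core profile with a VAO bound the indices should come from a GL_ELEMENT_ARRAY_BUFFER rather than a client-memory pointer. A sketch of just the draw path with both changes:

        // At setup: upload the indices into their own element buffer
        // (the commented-out code, but with a separate buffer object, not vbo).
        GLuint ebo;
        glGenBuffers(1, &ebo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        // In the main loop: pass the number of indices, not the number of bytes,
        // and an offset into the bound element buffer instead of a raw pointer.
        glDrawElements(GL_LINES, sizeof(indices) / sizeof(GLuint),
                       GL_UNSIGNED_INT, (void*)0);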

  • OpenGL fovx question

    - by Nick
    To boil my question down to its simplest form, I fear I am oversimplifying how mat4 perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL, fwiw). With a fovy of 45 and an aspect ratio of 2, I expect to have a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and the actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, Nick
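
    An editorial note on the arithmetic, which matches the observed periphery: the aspect ratio scales the tangent of the half-angle, not the angle itself, so

        \mathrm{fov}_x \;=\; 2\arctan\!\Bigl(\mathrm{aspect}\cdot\tan\tfrac{\mathrm{fov}_y}{2}\Bigr)
                       \;=\; 2\arctan\!\bigl(2\tan 22.5^\circ\bigr) \;\approx\; 79.3^\circ

        \text{half-width at } z = 50:\quad 50 \cdot 2\tan 22.5^\circ \;\approx\; 41.4

    which is why the edge of the view sits near x = 41 rather than x = 50.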

  • Turn-based JRPG battle system architecture resources

    - by BenoitRen
    The past months I've been busy programming a 2D JRPG (Japanese-style RPG) in C++ using the SDL library. The exploration mode is more or less done; now I'm tackling the battle mode. I have been unable to find any resources about how a classic turn-based JRPG battle system is structured; all I find are discussions about damage formulas. I've tried googling, searching gamedev.net's message board, and crawling through C++-related questions here on Stack Exchange. I've also tried reading the source code of existing open-source RPGs, but without a guide of some sort it's like trying to find a needle in a haystack. I'm not looking for a set of rules like D&D or anything similar; I'm talking purely about code and object structure design. A battle system asks the player for input using menus, then the battle turn is executed as the heroes and the enemies carry out their actions. Can anyone point me in the right direction? Thanks in advance.
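
    One hedged starting point (a common structure, not the only one): gather a command per combatant during the menu phase, then resolve the queue in turn order. A C++ sketch:

        #include <algorithm>
        #include <memory>
        #include <vector>

        struct Combatant { int hp, speed; bool isHero; /* ... */ };

        // A chosen action, bound to its actor and target.
        struct Command {
            Combatant* actor;
            Combatant* target;
            virtual void execute() = 0;     // Attack, Spell, Item, ... override this
            virtual ~Command() = default;
        };

        struct BattleTurn {
            std::vector<std::unique_ptr<Command>> queue;

            void run() {
                // Classic JRPG ordering: the fastest combatant acts first.
                std::sort(queue.begin(), queue.end(),
                          [](const auto& a, const auto& b) {
                              return a->actor->speed > b->actor->speed;
                          });
                for (auto& cmd : queue)
                    if (cmd->actor->hp > 0)  // dead actors forfeit their action
                        cmd->execute();
            }
        };

    The menu phase fills the queue (the enemy AI pushes its own commands the same way), run() plays the turn back, and a win/lose check between commands decides when to leave the loop. Separating "choose" from "execute" like this is what makes the classic input-then-resolve rhythm straightforward to implement.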

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128x128x128 in size, and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier on the C++ side set before the 3D texture is accessed.) This version works fine:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
        }

    Reading values from pong shows that it's just a copy, as intended. However, when I load the data from ping with an offset:

        void main()
        {
            ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1, 0, 0)));
        }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots of the texture. What am I doing wrong here?

  • Setting the values of a struct array from JS to GLSL

    - by mikidelux
    I've been trying to make a structure that will contain all the lights of my WebGL app, and I'm having trouble setting up its values from JS. The structure is as follows:

        struct Light {
            vec4 position;
            vec4 ambient;
            vec4 diffuse;
            vec4 specular;
            vec3 spotDirection;
            float spotCutOff;
            float constantAttenuation;
            float linearAttenuation;
            float quadraticAttenuation;
            float spotExponent;
            float spotLightCosCutOff;
        };

        uniform Light lights[numLights];

    After testing LOTS of things I made it work, but I'm not happy with the code I wrote:

        program.uniform.lights = [];
        program.uniform.lights.push({
            position: "",
            diffuse: "",
            specular: "",
            ambient: "",
            spotDirection: "",
            spotCutOff: "",
            constantAttenuation: "",
            linearAttenuation: "",
            quadraticAttenuation: "",
            spotExponent: "",
            spotLightCosCutOff: ""
        });
        program.uniform.lights[0].position = gl.getUniformLocation(program, "lights[0].position");
        program.uniform.lights[0].diffuse = gl.getUniformLocation(program, "lights[0].diffuse");
        program.uniform.lights[0].specular = gl.getUniformLocation(program, "lights[0].specular");
        program.uniform.lights[0].ambient = gl.getUniformLocation(program, "lights[0].ambient");
        ... and so on

    I'm sorry for making you look at this code; I know it's horrible, but I can't find a better way. Is there a standard or recommended way of doing this properly? Can anyone enlighten me?
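
    Since uniforms inside a struct array are addressed by the string "lights[i].field", the location table can be built in two loops instead of being written out by hand. A hedged sketch with the desktop C++ API, where glGetUniformLocation has the same semantics as WebGL's getUniformLocation (it assumes a GL loader header such as glad or GLEW is included):

        #include <map>
        #include <string>
        #include <vector>

        // Assumes a GL loader header (e.g. glad or GLEW) has been included.
        std::vector<std::map<std::string, GLint>>
        lightUniformLocations(GLuint program, int numLights) {
            static const char* fields[] = {
                "position", "ambient", "diffuse", "specular",
                "spotDirection", "spotCutOff", "constantAttenuation",
                "linearAttenuation", "quadraticAttenuation",
                "spotExponent", "spotLightCosCutOff"
            };
            std::vector<std::map<std::string, GLint>> locations(numLights);
            for (int i = 0; i < numLights; ++i) {
                for (const char* field : fields) {
                    // Struct-array members are queried as "lights[i].field".
                    std::string name =
                        "lights[" + std::to_string(i) + "]." + field;
                    locations[i][field] = glGetUniformLocation(program, name.c_str());
                }
            }
            return locations;
        }

    The same two nested loops translate directly back to JavaScript over an array of field-name strings.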

  • Simple project - make a 3D box tumble and fall to the ground [closed]

    - by Dominic Bou-Samra
    Possible Duplicate: Resources to learn programming rigid body simulation

    Hi guys, I want to try learning rigid-body dynamics simulation. I have done a fluid and a cloth simulation before, but never anything rigid. My maths knowledge is limited in that I don't know the notation that well. Are there any good cliff notes, tutorials, or guides on how I would accomplish a simple task like this? I don't want a super-complex PDF that's only a little relevant. Thanks.
