Search Results

Search found 13494 results on 540 pages for 'board game'.


  • SRV from UAV on the same texture in DirectX

    - by notabene
    I'm programming GPGPU raymarching (volumetric raytracing) in DirectX 11. I successfully run a compute shader and save the raymarched volume data to a texture. Then I want to use the same texture as an SRV in the normal graphics pipeline, but it doesn't work: the texture is not visible. The texture itself is fine; when I save it to a file it is what I expect. Texture rendering is fine too; when I render another SRV, it works. So the problem is only in the UAV-to-SRV handoff. I have also triple-checked that the pointers are valid. Please help, I'm going mad over this. Here is some code:

        // Before dispatch:
        D3D11_TEXTURE2D_DESC textureDesc;
        ZeroMemory( &textureDesc, sizeof( textureDesc ) );
        textureDesc.Width = xr;
        textureDesc.Height = yr;
        textureDesc.MipLevels = 1;
        textureDesc.ArraySize = 1;
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;
        textureDesc.Usage = D3D11_USAGE_DEFAULT;
        textureDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
        textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        D3DD->CreateTexture2D( &textureDesc, NULL, &pTexture );

        D3D11_UNORDERED_ACCESS_VIEW_DESC viewDescUAV;
        ZeroMemory( &viewDescUAV, sizeof( viewDescUAV ) );
        viewDescUAV.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        viewDescUAV.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
        viewDescUAV.Texture2D.MipSlice = 0;
        D3DD->CreateUnorderedAccessView( pTexture, &viewDescUAV, &pTextureUAV );

        // The getSRV function, after dispatch:
        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
        ZeroMemory( &srvDesc, sizeof( srvDesc ) );
        srvDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        D3DD->CreateShaderResourceView( pTexture, &srvDesc, &pTextureSRV );
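
    A likely culprit, offered as a hedged note: D3D11 does not allow the same subresource to be bound as a UAV and an SRV simultaneously. If pTextureUAV is still bound to the compute stage when the SRV is used, the runtime silently breaks one of the bindings (the debug layer reports this as a warning, so creating the device with D3D11_CREATE_DEVICE_DEBUG is worth doing). A minimal sketch of the unbind step, assuming pContext is the immediate context:

        // After Dispatch(), release the UAV slot so the texture can be read as an SRV.
        ID3D11UnorderedAccessView* nullUAV = nullptr;
        pContext->CSSetUnorderedAccessViews( 0, 1, &nullUAV, nullptr );

        // Now binding the SRV to the graphics pipeline is safe.
        pContext->PSSetShaderResources( 0, 1, &pTextureSRV );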

    Read the article

  • Object pools for efficient resource management

    - by GameDevEnthusiast
    How can I avoid using plain new() to create each object? My previous demo had very unpleasant framerate hiccups during dynamic memory allocations (usually when arrays are resized), and creating lots of small objects, each often containing a single pointer to some DirectX resource, seems like an awful lot of waste. I'm thinking about:

    - Creating a master look-up table that refers to objects by handles (for safety and ease of serialization), much like EntityList in the Source engine.
    - Creating a templated object pool which stores items contiguously (more cache-friendly, fast iteration, etc.); external systems access the stored elements via the global lookup table. The object pool will use the swap-with-last trick for fast removal (invoking the object's destructor first) and will update the corresponding indices in the global table accordingly when growing/shrinking/moving elements. The elements will be copied via plain memcpy().

    Is this a good idea? Will it be safe to store objects of non-POD types (e.g. with pointers or a vtable) in such containers? Related post: Dynamic Memory Allocation and Memory Management
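
    For reference, a minimal sketch of such a pool with swap-with-last removal (names are illustrative, not from any engine). Note that a raw-copy move is only safe for trivially copyable types; anything with a vtable, owning pointers, or pointers into itself must be relocated with its move constructor instead, so for those types the answer to the last question above is "no":

        #include <cassert>
        #include <type_traits>
        #include <vector>

        template <typename T>
        class ObjectPool {
            static_assert(std::is_trivially_copyable<T>::value,
                          "raw-copy pools require trivially copyable types");
        public:
            // Returns the dense index; the caller records it in the handle table.
            size_t add(const T& item) { items.push_back(item); return items.size() - 1; }

            // Swap-with-last removal: overwrite the victim with the last element
            // and shrink. The caller must patch the handle-table entry of the
            // element that moved from the back into 'index'.
            void remove(size_t index) {
                assert(index < items.size());
                items[index] = items.back();
                items.pop_back();
            }

            T&     operator[](size_t i) { return items[i]; }
            size_t size() const         { return items.size(); }

        private:
            std::vector<T> items; // contiguous: fast, cache-friendly iteration
        };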

    Read the article

  • Cocos2d update leaking memory

    - by Andrey Chernukha
    I have a weird issue: my app leaks memory on the device only, not in the simulator. It leaks if I schedule the update method anywhere, on any scene. It leaks even though the update method is empty; there's nothing inside it except an NSLog. How can that be? I have even scheduled update on the very first scene, where it seems there's nothing to leak, and scheduled another empty one, and it is leaking, or at least allocating something; either way, the memory consumed keeps increasing and my app soon crashes. I can detect the leak using Instruments (Memory - Activity Monitor) or with the help of the following function:

        void report_memory(void) {
            struct task_basic_info info;
            mach_msg_type_number_t size = sizeof(info);
            kern_return_t kerr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                           (task_info_t)&info, &size);
            if( kerr == KERN_SUCCESS ) {
                NSLog(@"Memory in use (in bytes): %u", info.resident_size);
            } else {
                NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
            }
        }

    Can anyone explain to me what's going on?

    Read the article

  • How to implement an explosion in OpenGL?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion, so could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z). I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing a particle (glutSolidCube) that moves along each vector for some period of time, say 1000 updates, after which it disappears. Is this approach feasible? A minimal example would be greatly appreciated.
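
    The approach is feasible; it is the classic particle burst. A minimal sketch of the bookkeeping, assuming a fixed-timestep loop (all names here are illustrative; draw a glutSolidCube or point at each particle's position every frame):

        #include <cstdlib>
        #include <vector>

        struct Particle {
            float pos[3];
            float vel[3];
            int   life; // remaining updates before the particle disappears
        };

        std::vector<Particle> particles;

        // Spawn a burst of particles at the explosion point (x, y, z).
        void spawnExplosion(float x, float y, float z, int count) {
            for (int i = 0; i < count; ++i) {
                Particle p;
                p.pos[0] = x; p.pos[1] = y; p.pos[2] = z;
                for (int k = 0; k < 3; ++k) // random direction in [-1, 1] per axis
                    p.vel[k] = 2.0f * std::rand() / RAND_MAX - 1.0f;
                p.life = 1000;
                particles.push_back(p);
            }
        }

        // Call once per update: integrate positions, retire dead particles.
        void updateParticles(float dt) {
            for (std::size_t i = 0; i < particles.size(); ) {
                Particle& p = particles[i];
                for (int k = 0; k < 3; ++k) p.pos[k] += p.vel[k] * dt;
                p.vel[1] -= 9.8f * dt; // optional gravity pulls the debris down
                if (--p.life <= 0) {
                    particles[i] = particles.back(); // swap-and-pop removal
                    particles.pop_back();
                } else {
                    ++i;
                }
            }
        }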

    Read the article

  • Is it safe to use StringToHash() in Unity?

    - by Sebastian Krysmanski
    I'm currently browsing through the Unity tutorials and saw that they recommend using Animator.StringToHash("some string") to create unique IDs for animation properties (see here). Since I'm a programmer, to me the word "hash" doesn't imply uniqueness. As the Java documentation for hashCode() states: "It is not required that if two objects are unequal [...], then calling the hashCode method on each of the two objects must produce distinct integer results." So, according to this (and my definition of "hash"), two strings may have the same hash value. (You can also argue that there are an infinite number of possible strings but only 2^32 possible int values.) So, is there a possibility that StringToHash() will give me an ID that actually belongs to another property (not the one I requested the hash for)?
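
    Collisions are possible in principle; the practical question is how likely. Under the usual uniform-hash assumption, the birthday bound estimates the probability that any two of n distinct names collide in a 32-bit space. A quick back-of-the-envelope computation (illustrative only; it says nothing about Unity's actual hash function):

        #include <cmath>
        #include <cstdio>

        int main() {
            const double buckets  = 4294967296.0; // 2^32 possible hash values
            const double counts[] = { 100.0, 1000.0, 100000.0 };
            for (double n : counts) {
                // Birthday approximation: P(collision) ~ 1 - exp(-n(n-1) / (2 * buckets))
                double p = 1.0 - std::exp(-n * (n - 1.0) / (2.0 * buckets));
                std::printf("%8.0f names -> collision probability %.2e\n", n, p);
            }
            return 0;
        }

    For the few hundred animator parameters of a typical project this works out to roughly one in a hundred thousand or better, so it is a theoretical rather than a practical worry.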

    Read the article

  • Biome Transition in a Grid & Borderless World

    - by API-Beast
    I have a universe: a list of "Systems", each with its own center, type and radius. A small part of such a universe could look like this. Systems:

    - Can be very close to a different system, e.g. overlap
    - Can be inside another, much bigger system
    - Can be very far away from any other system
    - Spawn system-specific entities and particles inside the system radius
    - Have some properties like background color

    So far so good. However, the player can fly around freely, inside and outside of systems, in real time. How do I interpolate and determine things like the background color, depending on the camera position? E.g. if you are halfway between a green and a red system you should see a background halfway between red and green, and if you are near the center of a lilac system and at the border of a green system you should get a mostly lilac background, etc.
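
    One workable answer is distance-based weighting: let every system contribute its color with a weight that is 1 at its center and falls to 0 at its radius, then normalize, blending any leftover weight with the empty-space color. A sketch under those assumptions (the types are illustrative):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Color  { float r, g, b; };
        struct System { float x, y, radius; Color color; };

        // Background color at the camera position.
        Color backgroundAt(const std::vector<System>& systems,
                           float camX, float camY, Color deepSpace) {
            Color sum = { 0, 0, 0 };
            float total = 0.0f;
            for (const System& s : systems) {
                float d = std::hypot(camX - s.x, camY - s.y);
                float w = std::max(0.0f, 1.0f - d / s.radius); // 1 at center, 0 at rim
                sum.r += w * s.color.r; sum.g += w * s.color.g; sum.b += w * s.color.b;
                total += w;
            }
            if (total < 1.0f) { // far from everything: fade toward empty space
                float w = 1.0f - total;
                sum.r += w * deepSpace.r; sum.g += w * deepSpace.g; sum.b += w * deepSpace.b;
                total = 1.0f;
            }
            return { sum.r / total, sum.g / total, sum.b / total };
        }

    Overlaps come out as the weight-normalized mix (so deep inside a lilac system with a green border nearby, lilac dominates), and systems beyond their radius contribute nothing.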

    Read the article

  • Estimated budget for Rock Band 3 character creator feature [closed]

    - by milesmeow
    I want to get an idea of the budget required to make something akin to the Rock Band 3 character creator. We won't have the same level of detail for creating the base character, i.e. no face feature customization. We essentially want some very basic body sizing and want a bunch of clothing and accessories that the characters can try on. The clothing and accessories need to adapt and fit to the body types/shapes. The character should also support movement...and the clothes should move with the body. I've asked a similar question here regarding the development effort but not necessarily a budget. Do you think it's a $1M job? Most of the effort will go into creating the character models and the clothing models. The rest of the effort will go towards the user interface to navigate the customization features.

    Read the article

  • One Step-Ahead A-Star

    - by Jonathan Dickinson
    I am attempting to create a server-centric RTS (as opposed to the usual parallel synchronised-simulation route of most RTS games today); however, I am still leveraging the discrete N-turns-ahead paradigm discussed by one of the AOE developers on Gamasutra. I have [possibly questionably?] decided that the pathfinding should only ever find the next cell the entity needs to move to, and was wondering if anyone has any clever ideas on how to optimize an algorithm for this specific scenario, or any other ideas on how to keep the pathfinding as lean as possible on the server. I have investigated a few possible algorithms but could only come up with one adaptation: tiered A-star, with relatively large tier-1 tiles, working out (and caching) each cell as you enter it. Other than that: doing the full A-star pass and caching the entire path, which might use too much memory if a large number of units are present. I know about the existence of naive progressive pathfinding algorithms (if you hit a block, turn in the direction closer to your target, etc.) but they suffer from infinite feedback loops, and from very poor pathing even if visited blocks are memorised. Not an option. Many thanks.
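
    One technique fits "only ever the next cell" exactly and is worth naming: a flow field. A single Dijkstra/BFS flood from the goal labels every cell with its best next cell, so the per-unit, per-turn query is O(1), and one field is shared by every unit heading to the same target, which suits RTS group orders well. A minimal sketch for a 4-connected, uniform-cost grid:

        #include <queue>
        #include <vector>

        struct Field { std::vector<int> nextCell; }; // per cell: index of the next step toward the goal

        // Breadth-first flood fill outward from the goal; walkable[] marks passable cells.
        Field buildFlowField(int w, int h, int goal, const std::vector<bool>& walkable) {
            Field f;
            f.nextCell.assign(w * h, -1);
            std::vector<bool> seen(w * h, false);
            std::queue<int> open;
            open.push(goal);
            seen[goal] = true;
            f.nextCell[goal] = goal;
            while (!open.empty()) {
                int c = open.front(); open.pop();
                int x = c % w, y = c / w;
                const int  nbr[4] = { c - 1, c + 1, c - w, c + w };
                const bool ok[4]  = { x > 0, x < w - 1, y > 0, y < h - 1 };
                for (int i = 0; i < 4; ++i) {
                    int n = nbr[i];
                    if (ok[i] && !seen[n] && walkable[n]) {
                        seen[n] = true;
                        f.nextCell[n] = c; // step toward the cell we flooded from
                        open.push(n);
                    }
                }
            }
            return f;
        }
        // Per unit, per turn: unit.cell = field.nextCell[unit.cell]; no search at all.

    The field costs one O(cells) pass per destination (recomputed when the map changes), which is usually far cheaper on a server than per-unit A-star every turn.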

    Read the article

  • I am going to start learning to develop games, and have a very important question

    - by levi s.
    So I am going to start learning to develop games soon, and I have already learned the basics of Java. Before I really go all out: am I making a bad choice of language? Should I stop now and move to C++ or C#? Will that hinder me? Is Java going to hinder me worse? I'm kind of having regrets about saying "oh hey, Minecraft was made in Java, it must be best!" Mainly I'm asking: what should I do?

    Read the article

  • Why am I getting a return value of zero from my position computation function?

    - by Hussain Murtaza
    OK, I have a function int x(), which is used in new Rectangle(x(), a, a, a) in the Draw method in XNA, but when I use it I get x() = 0 as the answer. Here is my code:

        int x() {
            int px = (128 * 5);
            int xx = 0;
            for (int i = 0; i < 6; i++) {
                if (Mouse.GetState().X > px) {
                    //xx = Mouse.GetState().X;
                    xx = px;
                    break;
                } else {
                    px -= 128;
                }
            }
            return xx;
        }

    Here is the Draw method code:

        if (set) {
            spriteBatch.Draw(texture, new Rectangle(x(), y(), texture.Width, texture.Height), Color.White);
            textpositionX = x();
            textpositionY = y();
            set = false;
            select = false;
            place = true;
        } else if (select) {
            spriteBatch.Draw(texture,
                new Rectangle(Mouse.GetState().X - texture.Width / 2,
                              Mouse.GetState().Y - texture.Height / 2,
                              texture.Width, texture.Height),
                Color.White);
        } else if (place) {
            spriteBatch.Draw(texture, new Rectangle(textpositionX, textpositionY, texture.Width, texture.Height), Color.White);
            select = false;
            set = false;
        }
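
    A hedged reading of the loop, for anyone comparing: px steps through 640, 512, 384, 256, 128, 0, and the final test is effectively Mouse X > 0, so the function returns 0 for any mouse position at x = 128 or less, which is easy to hit near the left edge of the window. The snap-to-column intent can be computed directly without the loop; sketched here in C++ (the original is C#/XNA, so mouseX stands in for Mouse.GetState().X):

        #include <algorithm>

        // Snap a mouse coordinate to the left edge of its 128-pixel column,
        // clamped to six columns (left edges 0..640).
        int snapToColumn(int mouseX) {
            int column = std::min(std::max(mouseX, 0) / 128, 5);
            return column * 128;
        }

    Sampling the mouse once per Update and passing the value in also keeps the two x() calls inside Draw from disagreeing with each other.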

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation over to Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and returns to a straight position over 24 frames. Following that, I selected everything and baked: all bones, the IK, the animation (by selecting everything in the graph editor), and even the cylinder. I saved the scene, then selected all and exported as FBX with "animation" and "bake" checked. In Unity I imported it, and in the preview I am able to see the animation. When I load the model into the scene and play (after assigning the controller), I can see the animation too. But now, when I try to script it and control the animation, nothing happens. As a test, I tried the following in the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation works");
        else
            Debug.Log("Animation not working");

    Neither branch is ever logged. My animation is called "bend", so as a further try I did the following, and nothing happens:

        animation.Play("bend");

    Can you please advise, based on my steps: am I missing something? Do I need to add the controller, or is that an unnecessary step? Did I mess up the Maya part or the Unity part? Thanks for the help.

    Read the article

  • Intersection points of plane set forming convex hull

    - by Toji
    Mostly looking for a nudge in the right direction here. Given a set of planes (each defined as a normal and a distance from the origin) that form a convex hull, I would like to find the intersection points that form the corners of that hull. More directly, I'm looking for a way to generate a point cloud appropriate to provide to Bullet. Bonus points if someone knows of a way I could give Bullet the plane list directly, since I somewhat suspect that's what it's building on the back end anyway.
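
    For the nudge: the corner candidates are the intersections of every triple of planes, kept only when the point lies on or behind all the other planes. Writing a plane as the set of points x with dot(n, x) = d, the triple intersection has a closed form. A sketch (GLM is an assumption for the vector type; any vec3 with dot and cross works):

        #include <cmath>
        #include <optional>
        #include <glm/glm.hpp>

        struct Plane { glm::vec3 n; float d; }; // points x with dot(n, x) == d

        // Intersection of three planes, when they meet in a single point.
        std::optional<glm::vec3> intersect(const Plane& a, const Plane& b, const Plane& c) {
            float denom = glm::dot(a.n, glm::cross(b.n, c.n));
            if (std::fabs(denom) < 1e-6f)
                return std::nullopt; // two of the planes are (nearly) parallel
            return (a.d * glm::cross(b.n, c.n) +
                    b.d * glm::cross(c.n, a.n) +
                    c.d * glm::cross(a.n, b.n)) / denom;
        }

    A candidate p is a real hull corner if dot(plane.n, p) <= plane.d + epsilon for every plane in the set. On the bonus question: the stock btConvexHullShape wants points, so this triple-intersection pass is the usual route from a plane list to something Bullet accepts.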

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • How do I simulate the mouse and keyboard using C# or C++?

    - by Art
    I want to start developing for Kinect, but the hardest part is how to send keyboard and mouse input to any application. In a previous question I got the advice to develop my own driver for these devices, but that will take a while. I imagine an application acting like a gate that can translate SendMessage calls into system-wide input, or a driver application with an API for sending these inputs. So I wonder: are there drivers or simulators that can interact with C# or C++? Small edit: SendMessage, PostMessage and keybd_event will work only on Windows applications with a common message loop. So I need a driver application that works at a low, kernel, level.
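
    Before committing to a kernel driver, it may be worth checking whether Win32 SendInput is enough: it injects events into the same system input queue the hardware feeds, so it reaches ordinary applications regardless of their message loop (elevated and secure-desktop targets excepted). A minimal C++ sketch that types the letter 'A' and recentres the mouse:

        #define WIN32_LEAN_AND_MEAN
        #include <windows.h>

        int main() {
            // Press and release the 'A' key as two keyboard events.
            INPUT key[2] = {};
            key[0].type = INPUT_KEYBOARD;
            key[0].ki.wVk = 'A';
            key[1].type = INPUT_KEYBOARD;
            key[1].ki.wVk = 'A';
            key[1].ki.dwFlags = KEYEVENTF_KEYUP;
            SendInput(2, key, sizeof(INPUT));

            // Move the cursor to the centre of the primary screen; absolute
            // coordinates are normalised to the range 0..65535.
            INPUT move = {};
            move.type = INPUT_MOUSE;
            move.mi.dx = 32768;
            move.mi.dy = 32768;
            move.mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;
            SendInput(1, &move, sizeof(INPUT));
            return 0;
        }

    The same function is callable from C# via P/Invoke, which is how most managed input simulators are built.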

    Read the article

  • Why is my model's scale changing after rotating it?

    - by justnS
    I have just started a simple flight simulator and have implemented roll and pitch. In the beginning, testing went very well; however, after about 15-20 seconds of constantly moving the thumbsticks in a random or circular motion, my model's scale begins to grow. At first I thought the model was moving closer to the camera, but I set breakpoints when it was happening and can confirm that the translation of my orientation matrix remains (0, 0, 0). Is this a result of gimbal lock? Does anyone see an obvious error in my code below?

        public override void Draw( Matrix view, Matrix projection )
        {
            Matrix[] transforms = new Matrix[Model.Bones.Count];
            Model.CopyAbsoluteBoneTransformsTo( transforms );

            Matrix translateMatrix = Matrix.Identity
                * Matrix.CreateFromAxisAngle( _orientation.Right, MathHelper.ToRadians( pitch ) )
                * Matrix.CreateFromAxisAngle( _orientation.Down, MathHelper.ToRadians( roll ) );
            _orientation *= translateMatrix;

            foreach ( ModelMesh mesh in Model.Meshes )
            {
                foreach ( BasicEffect effect in mesh.Effects )
                {
                    effect.World = _orientation * transforms[mesh.ParentBone.Index];
                    effect.View = view;
                    effect.Projection = projection;
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }
        }

        public void Update( GamePadState gpState )
        {
            roll = 5 * gpState.ThumbSticks.Left.X;
            pitch = 5 * gpState.ThumbSticks.Left.Y;
        }
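
    A hedged diagnosis: this is usually floating-point drift rather than gimbal lock. Multiplying a small incremental rotation into _orientation every frame accumulates rounding error, and the basis vectors slowly pick up scale and skew, which shows up as growth. The standard fix is to re-orthonormalize the rotation basis (or keep the orientation as a quaternion and normalize it) every frame or so. A sketch of the Gram-Schmidt step, written in C++ with GLM since the idea is API-agnostic (in XNA the same three vectors are the matrix's Right, Up and Forward):

        #include <glm/glm.hpp>

        // Re-orthonormalize the rotation part of an orientation matrix so that
        // accumulated rounding error cannot introduce scale or skew.
        void orthonormalize(glm::mat4& m) {
            glm::vec3 right = glm::normalize(glm::vec3(m[0]));
            glm::vec3 up    = glm::vec3(m[1]);
            up = glm::normalize(up - right * glm::dot(up, right)); // strip the part parallel to 'right'
            glm::vec3 forward = glm::cross(right, up);             // rebuild the third axis exactly
            m[0] = glm::vec4(right, 0.0f);
            m[1] = glm::vec4(up, 0.0f);
            m[2] = glm::vec4(forward, 0.0f);
        }

    Calling this on the orientation once per update should stop the growth.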

    Read the article

  • Move the location of the XYZ pivot point on a mesh in UDK

    - by WebDevHobo
    When working with any mesh, you get an XYZ pivot point somewhere on it. If you just want to move the mesh in any direction, it doesn't matter where this point is located. However, I want to rotate a door, which requires the point of rotation to be very specific. I can't find anywhere how to change the location of the point. Can anyone help? EDIT: solved. To change the pivot point, right-click on the mesh, go to "Pivot" and move it. Then right-click again and this time select "Save PrePivot to Pivot".

    Read the article

  • How to attach an object to a rotating circle?

    - by armands
    I am trying to make an object attach, at the collision point, to a rotating circle, and the player needs to attach at a fixed point on its own body. For example: the player moves back and forth, and when the user touches the screen the player jumps up; what I need is that when the player collides with the circle, it attaches its legs to it and continues rotating with the circle. How do I make this kind of collision joint in Cocos2d/Box2D?
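
    In Box2D the usual tool for "stick to what you hit and rotate with it" is a weld joint created from the contact information. One caveat worth naming: joints cannot be created inside BeginContact while the world is locked, so capture the contact point there and create the joint after the step. A minimal sketch, assuming world, playerBody and circleBody already exist:

        #include <Box2D/Box2D.h>

        // Called after the physics step, with the world-space collision point
        // captured in the BeginContact callback.
        b2Joint* attachPlayerToCircle(b2World* world, b2Body* playerBody,
                                      b2Body* circleBody, const b2Vec2& contactPoint) {
            b2WeldJointDef def;
            def.Initialize(playerBody, circleBody, contactPoint); // anchor at the feet
            return world->CreateJoint(&def);
        }

    Keep the returned pointer and call world->DestroyJoint() when the player lets go. If the legs must stay at one exact point on the player regardless of where the contact happened, pass that fixed body point (transformed to world space) as the anchor instead of the raw contact point.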

    Read the article

  • Optimized algorithm for line-sphere intersection in GLSL

    - by fernacolo
    Well, hello then! I need to find the intersection between a line and a sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way:

        // The line passes through p1 and p2:
        vec3 p1 = (...);
        vec3 p2 = (...);

        // Sphere center is p3, radius is r:
        vec3 p3 = (...);
        float r = ...;

        float x1 = p1.x; float y1 = p1.y; float z1 = p1.z;
        float x2 = p2.x; float y2 = p2.y; float z2 = p2.z;
        float x3 = p3.x; float y3 = p3.y; float z3 = p3.z;

        float dx = x2 - x1;
        float dy = y2 - y1;
        float dz = z2 - z1;

        float a = dx*dx + dy*dy + dz*dz;
        float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3));
        float c = x3*x3 + y3*y3 + z3*z3 + x1*x1 + y1*y1 + z1*z1
                - 2.0 * (x3*x1 + y3*y1 + z3*z1) - r*r;

        float test = b*b - 4.0*a*c;
        if (test >= 0.0) {
            // Hit (according to Treebeard, "a fine hit").
            float u = (-b - sqrt(test)) / (2.0 * a);
            vec3 hitp = p1 + u * (p2 - p1);
            // Now use hitp.
        }

    It works perfectly! But it seems slow, and I'm new to GLSL. You can answer this question in two ways: tell me there is no better solution, showing some proof or strong evidence; or tell me about GLSL features (vector APIs, primitive operations) that make the above algorithm faster, showing some example. Thanks a lot!
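
    The main win is replacing the per-component bookkeeping with built-in vector operations, which map directly onto what the hardware does anyway. The same quadratic, sketched here in C++ with GLM so it compiles standalone; every operation used (dot, sqrt, the vec3 arithmetic) exists verbatim in GLSL, so the translation back is mechanical:

        #include <cmath>
        #include <glm/glm.hpp>

        // Returns true on a hit and writes the nearer intersection point to hitp.
        bool lineSphere(glm::vec3 p1, glm::vec3 p2, glm::vec3 center, float r,
                        glm::vec3& hitp) {
            glm::vec3 d = p2 - p1;     // line direction (unnormalized)
            glm::vec3 m = p1 - center; // line start relative to the sphere
            float a = glm::dot(d, d);
            float b = 2.0f * glm::dot(d, m);
            float c = glm::dot(m, m) - r * r;
            float disc = b * b - 4.0f * a * c;
            if (disc < 0.0f) return false; // the line misses the sphere
            float u = (-b - std::sqrt(disc)) / (2.0f * a);
            hitp = p1 + u * d;
            return true;
        }

    Note that c here is dot(m, m) - r*r, algebraically identical to the long component expression in the original but with far fewer multiplies as written. If the direction can be pre-normalized, a becomes 1 and the divide drops out too.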

    Read the article

  • LWJGL - Mixing 2D and 3D

    - by nathan
    I'm trying to mix 2D and 3D using LWJGL. I have written two little methods that allow me to easily switch between 2D and 3D:

        protected static void make2D() {
            glEnable(GL_BLEND);
            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity();
            glOrtho(0.0f, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 0.0f, 1.0f);
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            GL11.glLoadIdentity();
        }

        protected static void make3D() {
            glDisable(GL_BLEND);
            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity(); // Reset the projection matrix
            GLU.gluPerspective(45.0f, ((float) SCREEN_WIDTH / (float) SCREEN_HEIGHT), 0.1f, 100.0f); // Calculate the aspect ratio of the window
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            glLoadIdentity();
        }

    Then in my rendering code I would do something like:

        make2D();
        // draw 2D stuff here
        make3D();
        // draw 3D stuff here

    What I'm trying to do is to draw a 3D shape (in my case a quad) and a 2D image. I found this example and took the code from TextureLoader, Texture and Sprite to load and render a 2D image. Here is how I load the image:

        TextureLoader loader = new TextureLoader();
        Sprite s = new Sprite(loader, "player.png");

    And how I render it:

        make2D();
        s.draw(0, 0);

    It works great. Here is how I render my quad:

        glTranslatef(0.0f, 0.0f, 30.0f);
        glScalef(12.0f, 9.0f, 1.0f);
        DrawUtils.drawQuad();

    Once again, no problem; the quad is properly rendered. DrawUtils is a simple class I wrote containing utility methods to draw primitive shapes. Now my problem comes when I want to mix both of the above: loading/rendering the 2D image and rendering the quad. When I try to load my 2D image with the following, my quad is not rendered anymore (I'm not even trying to render the 2D image at this point):

        s = new Sprite(loader, "player.png");

    Only the act of creating the texture causes the issue. After looking a bit at the code of Sprite and TextureLoader, I found that the problem appears after the call to glTexImage2D. In the TextureLoader class:

        glTexImage2D(target, 0, dstPixelFormat,
                     get2Fold(bufferedImage.getWidth()),
                     get2Fold(bufferedImage.getHeight()),
                     0, srcPixelFormat, GL_UNSIGNED_BYTE, textureBuffer);

    Commenting out this line makes the problem disappear. My question is then: why? Is there anything special to do after calling this function in order to do 3D? Does this function alter the render state, or the projection matrix?
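
    A hedged guess at the mechanism, since it bites many people mixing textured and untextured drawing in fixed-function GL: loading the sprite leaves GL_TEXTURE_2D enabled and the new texture bound, so the untextured quad is then drawn sampling that texture (with whatever texture coordinates happen to be current), which can easily come out invisible. Disabling texturing around untextured geometry is the usual fix; the calls are identical in C and in LWJGL's GL11 class:

        #include <GL/gl.h>

        void drawQuad(); // stand-in for DrawUtils.drawQuad()

        // Draw untextured geometry without inheriting the sprite loader's texture state.
        void drawUntextured() {
            glDisable(GL_TEXTURE_2D);        // stop sampling the last-bound texture
            glBindTexture(GL_TEXTURE_2D, 0);
            drawQuad();
            glEnable(GL_TEXTURE_2D);         // restore for the 2D sprite pass
        }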

    Read the article

  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly using a 4x4 matrix. This works for the vertex positions, but it does not work for the normals (and probably not the tangents either). Here is what I have:

        // Transform vertex positions - works like a charm!
        vertices = mesh.vertices;
        for (int i = 0; i < vertices.Length; ++i)
            vertices[i] = transform.MultiplyPoint(vertices[i]);

        // Does not work; lighting is messed up on the mesh
        normals = mesh.normals;
        for (int i = 0; i < normals.Length; ++i)
            normals[i] = transform.MultiplyVector(normals[i]);

    Note: the input matrix converts from local to world space and is needed to combine multiple meshes together.
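
    The standard rule: positions transform by M, but normals transform by the inverse transpose of M and need renormalizing afterwards. MultiplyVector applies plain M, which tilts normals off the surface as soon as M contains non-uniform scale or shear. A sketch of the math in C++ with GLM (in Unity the equivalent matrix is transform.inverse.transpose, applied with MultiplyVector and then normalized):

        #include <glm/glm.hpp>

        // Transform a position and its surface normal by the same model matrix.
        void transformVertex(const glm::mat4& m, glm::vec3& position, glm::vec3& normal) {
            position = glm::vec3(m * glm::vec4(position, 1.0f));
            // Inverse transpose of the upper-left 3x3 keeps normals perpendicular
            // to the surface even under non-uniform scale or shear.
            glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(m)));
            normal = glm::normalize(normalMatrix * normal);
        }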

    Read the article

  • How does gluLookAt work?

    - by Chan
    From my understanding,

        gluLookAt(
            eye_x, eye_y, eye_z,
            center_x, center_y, center_z,
            up_x, up_y, up_z );

    is equivalent to:

        glRotatef(B, 0.0, 0.0, 1.0);
        glRotatef(A, wx, wy, wz);
        glTranslatef(-eye_x, -eye_y, -eye_z);

    But when I print out the ModelView matrix, the call to glTranslatef() doesn't seem to work properly. Here is the code snippet:

        #include <stdlib.h>
        #include <stdio.h>
        #include <GL/glut.h>
        #include <iomanip>
        #include <iostream>
        #include <string>
        using namespace std;

        static const int Rx = 0;  static const int Ry = 1;  static const int Rz = 2;
        static const int Ux = 4;  static const int Uy = 5;  static const int Uz = 6;
        static const int Ax = 8;  static const int Ay = 9;  static const int Az = 10;
        static const int Tx = 12; static const int Ty = 13; static const int Tz = 14;

        void init() {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glEnable(GL_DEPTH_TEST);
            glShadeModel(GL_SMOOTH);
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            GLfloat lmodel_ambient[] = { 0.8, 0.0, 0.0, 0.0 };
            glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);
        }

        void displayModelviewMatrix(float MV[16]) {
            int SPACING = 12;
            cout << left;
            cout << "\tMODELVIEW MATRIX\n";
            cout << "--------------------------------------------------" << endl;
            cout << setw(SPACING) << "R" << setw(SPACING) << "U"
                 << setw(SPACING) << "A" << setw(SPACING) << "T" << endl;
            cout << "--------------------------------------------------" << endl;
            cout << setw(SPACING) << MV[Rx] << setw(SPACING) << MV[Ux]
                 << setw(SPACING) << MV[Ax] << setw(SPACING) << MV[Tx] << endl;
            cout << setw(SPACING) << MV[Ry] << setw(SPACING) << MV[Uy]
                 << setw(SPACING) << MV[Ay] << setw(SPACING) << MV[Ty] << endl;
            cout << setw(SPACING) << MV[Rz] << setw(SPACING) << MV[Uz]
                 << setw(SPACING) << MV[Az] << setw(SPACING) << MV[Tz] << endl;
            cout << setw(SPACING) << MV[3] << setw(SPACING) << MV[7]
                 << setw(SPACING) << MV[11] << setw(SPACING) << MV[15] << endl;
            cout << "--------------------------------------------------" << endl;
            cout << endl;
        }

        void reshape(int w, int h) {
            float ratio = static_cast<float>(w)/h;
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, ratio, 1.0, 425.0);
        }

        void draw() {
            float m[16];
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            gluLookAt(
                300.0f, 0.0f, 0.0f,
                0.0f, 0.0f, 0.0f,
                0.0f, 1.0f, 0.0f );
            glColor3f(1.0, 0.0, 0.0);
            glutSolidCube(100.0);
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            displayModelviewMatrix(m);
            glutSwapBuffers();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
            glutInitWindowSize(400, 400);
            glutInitWindowPosition(100, 100);
            glutCreateWindow("Demo");
            glutReshapeFunc(reshape);
            glutDisplayFunc(draw);
            init();
            glutMainLoop();
            return 0;
        }

    No matter what value I use for the eye vector, (300, 0, 0) or (0, 300, 0) or (0, 0, 300), the translation vector is the same, which doesn't make any sense to me: the code is applied in reverse order, so glTranslatef should run first, then the two rotations. Plus, the rotation matrix should be completely independent of the translation column in the ModelView matrix. So what would cause this weird behavior? Here is the output with the eye vector (0.0f, 300.0f, 0.0f):

            MODELVIEW MATRIX
        --------------------------------------------------
        R           U           A           T
        --------------------------------------------------
        0           0           0           0
        0           0           0           0
        0           1           0           -300
        0           0           0           1
        --------------------------------------------------

    I would expect the T column to be (0, -300, 0)!
    So could anyone help me explain this? The implementation of gluLookAt from http://www.mesa3d.org:

        void GLAPIENTRY
        gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez,
                  GLdouble centerx, GLdouble centery, GLdouble centerz,
                  GLdouble upx, GLdouble upy, GLdouble upz)
        {
            float forward[3], side[3], up[3];
            GLfloat m[4][4];

            forward[0] = centerx - eyex;
            forward[1] = centery - eyey;
            forward[2] = centerz - eyez;

            up[0] = upx;
            up[1] = upy;
            up[2] = upz;

            normalize(forward);

            /* Side = forward x up */
            cross(forward, up, side);
            normalize(side);

            /* Recompute up as: up = side x forward */
            cross(side, forward, up);

            __gluMakeIdentityf(&m[0][0]);
            m[0][0] = side[0];
            m[1][0] = side[1];
            m[2][0] = side[2];

            m[0][1] = up[0];
            m[1][1] = up[1];
            m[2][1] = up[2];

            m[0][2] = -forward[0];
            m[1][2] = -forward[1];
            m[2][2] = -forward[2];

            glMultMatrixf(&m[0][0]);
            glTranslated(-eyex, -eyey, -eyez);
        }
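
    Two observations, offered as a hedged explanation. First, note the order in the Mesa source itself: glMultMatrixf (the rotation) is applied before glTranslated(-eye), so the translation ends up expressed in eye coordinates. The T column of the final matrix is the rotated negative eye position, not the raw negative eye position:

        T = R\,(-e) = \begin{pmatrix} -\,s \cdot e \\ -\,u' \cdot e \\ \phantom{-}f \cdot e \end{pmatrix}

    where s, u' and f are the side, recomputed-up and forward vectors and e is the eye. A camera looking at the origin always sits 300 units down its own -z axis, so T_z = f . e = -300 no matter which world axis the eye lies on, which is exactly what the printout shows. Second, this particular call is degenerate: with eye (0, 300, 0) and up (0, 1, 0), forward = (0, -1, 0) is antiparallel to up, so side = forward x up = (0, 0, 0) and most of the matrix collapses to zeros, as printed. Choosing an up vector that is not parallel to the view direction (e.g. (0, 0, 1) for that eye) gives a well-formed matrix.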

    Read the article

  • cocos2d-x simple shader usage [on hold]

    - by Narek
    I want to obtain the color ramp effect from this tutorial: http://www.raywenderlich.com/10862/how-to-create-cool-effects-with-custom-shaders-in-opengl-es-2-0-and-cocos2d-2-x. Here is my code in cocos2d-x 3:

        bool HelloWorld::init()
        {
            // 1. super init first
            if ( !Layer::init() ) {
                return false;
            }

            Vec2 origin = Director::getInstance()->getVisibleOrigin();
            sprite = Sprite::create("HelloWorld.png");
            sprite->setAnchorPoint(Vec2(0, 0));
            sprite->setRotation(3);
            sprite->setPosition(origin);
            addChild(sprite);

            std::string str = FileUtils::getInstance()->getStringFromFile("CSEColorRamp.fsh");
            const GLchar* fragmentSource = str.c_str();
            GLProgram* p = GLProgram::createWithByteArrays(ccPositionTextureA8Color_vert, fragmentSource);
            p->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
            p->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORD);
            p->link();
            p->updateUniforms();
            sprite->setGLProgram(p);

            // 3
            colorRampUniformLocation = glGetUniformLocation(sprite->getGLProgram()->getProgram(), "u_colorRampTexture");
            glUniform1i(colorRampUniformLocation, 1);

            // 4
            colorRampTexture = Director::getInstance()->getTextureCache()->addImage("colorRamp.png");
            colorRampTexture->setAliasTexParameters();

            // 5
            sprite->getGLProgram()->use();
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, colorRampTexture->getName());
            glActiveTexture(GL_TEXTURE0);

            return true;
        }

    And here is the fragment shader, as in the tutorial:

        #ifdef GL_ES
        precision mediump float;
        #endif

        // 1
        varying vec2 v_texCoord;
        uniform sampler2D u_texture;
        uniform sampler2D u_colorRampTexture;

        void main()
        {
            // 2
            vec3 normalColor = texture2D(u_texture, v_texCoord).rgb;
            // 3
            float rampedR = texture2D(u_colorRampTexture, vec2(normalColor.r, 0)).r;
            float rampedG = texture2D(u_colorRampTexture, vec2(normalColor.g, 0)).g;
            float rampedB = texture2D(u_colorRampTexture, vec2(normalColor.b, 0)).b;
            // 4
            gl_FragColor = vec4(rampedR, rampedG, rampedB, 1);
        }

    As a result I get a black screen with 2 draw calls. What is wrong? Am I missing something?
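
    One hedged observation about ordering: glUniform1i only affects the program currently in use, and in the code above it runs before p->use() is ever called, so u_colorRampTexture most likely never receives the value 1 and keeps its default of 0, sampling the wrong unit and yielding black. The minimal reshuffle:

        // After link() and updateUniforms():
        p->use(); // make this program current *before* setting its uniforms
        colorRampUniformLocation = glGetUniformLocation(p->getProgram(), "u_colorRampTexture");
        glUniform1i(colorRampUniformLocation, 1); // sampler reads texture unit 1

    Since cocos2d-x rebinds state per draw, the more robust route in v3 is to go through GLProgramState (e.g. its setUniformTexture method), so the engine re-applies the sampler and the texture binding every frame rather than only once in init().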

    Read the article

  • Need to make animation whereby the character shatters into a bunch of pieces

    - by theprojectabot
    I would like to take a 3D character model, cut out a bunch of shapes (or a bunch of triangles in the shapes of the pieces I want), and then have the pieces separate from each other at the beginning of the animation and fall apart under gravity, so it looks like the model is breaking into shattered pieces. Is there a way to run a script on a mesh that cuts out these pieces, instantiates all of them as separate models, and then runs gravity on them during the simulation?

    Read the article

  • Directional and orientation problem

    - by Ahmed Saleh
    I have drawn 5 tentacles, shown in red. I drew the tentacles on a 2D circle and positioned them on 5 vertices of that circle. (By the way, the circle is never drawn; I only use it to simplify the problem.) Now I want to attach that circle, with the tentacles, underneath the jellyfish. There is a problem with the current code, but I don't know what it is. You can see that the circle is parallel to the base of the jellyfish; I want it shifted so that it sits inside the jellyfish, but I don't know how. I tried multiplying the direction vector to extend it, but that didn't work.

        // One tentacle is constructed from nodes.
        // Get the direction from node 0 to node 39 of the first tentacle:
        Vec3f dir = m_tentacle[0]->geNodesPos()[0] - m_tentacle[0]->geNodesPos()[39];

        // Draw the circle with the tentacles on it
        Vec3f pos = m_SpherePos;
        drawCircle(pos, dir, 30, m_tentacle.size());
        for (int i = 0; i < m_tentacle.size(); i++) {
            m_tentacle[i]->Draw();
        }

        // Draw the jellyfish and orient it on the 2D circle
        gl::pushMatrices();
        Quatf q; // quaternion to rotate the jellyfish around the tentacles
        q.set(Vec3f(0, -1, 0), Vec3f(dir.x, dir.y, dir.z));
        // translate it to the position of the whole creature every frame
        gl::translate(m_SpherePos.x, m_SpherePos.y, m_SpherePos.z);
        gl::rotate(q);
        // draw the jellyfish at center 0,0,0
        drawHemiSphere(Vec3f(0, 0, 0), m_iRadius, 90);
        gl::popMatrices();
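
    On the shift itself, a hedged note: multiplying the raw direction vector scales with the tentacle's current length, so the offset would change from frame to frame. Normalizing first gives a constant-depth offset along the ring's axis; something like this (sketched with GLM, but Cinder's Vec3f supports the same operations; flip the sign of depth if it moves the wrong way):

        #include <glm/glm.hpp>

        // Offset the tentacle ring's centre a fixed depth along its axis so it
        // sits up inside the bell instead of level with the bell's base.
        glm::vec3 ringCenter(const glm::vec3& spherePos, const glm::vec3& dir, float depth) {
            return spherePos + glm::normalize(dir) * depth;
        }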

    Read the article

  • Beginner question about vertex arrays in OpenGL

    - by MrDatabase
    Is there a special order in which vertices are entered into a vertex array? Currently I'm drawing single textures like this:

        glBindTexture(GL_TEXTURE_2D, texName);
        glVertexPointer(2, GL_FLOAT, 0, vertices);
        glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    where vertices has four "xy pairs". This is working fine. As a test I doubled the sizes of the vertices and coordinates arrays and changed the last line above to:

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);

    since vertices now contains eight "xy pairs". I do see two textures (the second is intentionally offset from the first); however, the textures are now distorted. I've tried passing GL_TRIANGLES to glDrawArrays instead of GL_TRIANGLE_STRIP, but this doesn't work either. I'm so new to OpenGL that I thought it best to just ask here :-) Cheers!
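
    The distortion is expected strip behaviour, noted here as the standard explanation: one GL_TRIANGLE_STRIP over 8 vertices does not make two separate quads, because every consecutive triple forms a triangle, so vertices 2-3-4 and 3-4-5 create unwanted "bridge" triangles between the quads. (GL_TRIANGLES alone doesn't fix it because each quad then needs 6 vertices, not 4.) The simplest fix keeps the data as-is and issues two draws:

        // Two independent 4-vertex strips from the same interleaved arrays:
        glBindTexture(GL_TEXTURE_2D, texName);
        glVertexPointer(2, GL_FLOAT, 0, vertices);
        glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // first quad: vertices 0..3
        glDrawArrays(GL_TRIANGLE_STRIP, 4, 4); // second quad: vertices 4..7

    For many quads in one call, the usual tricks are degenerate triangles between strips or indexed GL_TRIANGLES with 6 indices per quad.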

    Read the article
