Search Results

Search found 3366 results on 135 pages for 'deferred rendering'.


  • Data structures for storing finger/stylus movements in a drawing application?

    - by mattja?øb
    I have a general question about creating a drawing application; the language could be C++ or Objective-C with OpenGL. I would like to hear what the best methods and practices are for storing stroke data. Think of the many iPad apps that let you draw with your finger (or a stylus), or any similar feature in a desktop app. To summarize, the data structure must:

        - be highly responsive to the movement
        - store precise values (closely spaced in space/time)
        - be usable for rendering the strokes with complex textures (textures based on the dynamics of the stroke, etc.)
        - be exportable to a text file for saving/loading
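
    A minimal sketch of one possible layout in C++ (all names here are illustrative, not from the post): each stroke is an append-only array of timestamped samples, which covers responsiveness, precision, dynamics-based texturing and plain-text export.

        #include <cstdint>
        #include <vector>

        // One sampled input event: position, pressure and a timestamp, so
        // stroke dynamics (speed, acceleration) can be recovered later for
        // texturing.
        struct StrokeSample {
            float x, y;             // position in canvas coordinates
            float pressure;         // 0..1, stylus pressure if available
            std::uint64_t timeUs;   // capture time in microseconds
        };

        // A stroke is an append-only sequence of samples; std::vector gives
        // amortized O(1) appends, which keeps input handling responsive.
        struct Stroke {
            std::vector<StrokeSample> samples;
            void addSample(const StrokeSample& s) { samples.push_back(s); }
        };

    For export, each sample can be written as one whitespace-separated line of text, which keeps saving/loading trivial.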

    Read the article

  • Aspose.Words 9.0.0 Released! A word processing component for .NET applications

    What is new in this release? The long-awaited version 9.0.0 of Aspose.Words for .NET has been released. This new release of Aspose.Words includes plenty of new and notable features, such as updating/rebuilding tables of contents, handling embedded OLE objects, ISO 29500 Transitional support, footnote rendering, EPUB embedding and many more. The list of new and improved features in this release is given below: Table of Contents (TOC) fields are now updated/rebuilt...

    Read the article

  • Portal View/Projection Matrix near plane

    - by melak47
    For RenderToTexture/camera-based portal rendering, the basics seem simple enough. However, with a free camera, most of the time it is going to be looking at such portals at an angle. A regular near clipping plane will not always work here: it will either intersect the wall the portal is sitting on, or possibly objects in front of the wall. The desired near clipping plane would be aligned with the portal, producing a view volume more like this, or this in 3D. So here is my question: how does one construct or "truncate" a view/projection matrix to achieve such an off-camera-normal (near) clipping plane?
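
    One well-known construction for this is Eric Lengyel's oblique frustum clipping: the projection matrix is modified so that its near plane becomes an arbitrary view-space plane (here, the portal plane). A sketch using GLM, assuming the clip plane is given in view space with its normal pointing into the frustum:

        #include <glm/glm.hpp>

        // Replaces the near plane of 'proj' with the camera-space plane
        // (a, b, c, d). Depth precision degrades the more the plane tilts
        // away from the original near plane, which is the known trade-off.
        glm::mat4 obliqueProjection(glm::mat4 proj, const glm::vec4& clipPlane)
        {
            // Frustum corner opposite the plane, brought back from clip space.
            glm::vec4 q = glm::inverse(proj) *
                glm::vec4(glm::sign(clipPlane.x), glm::sign(clipPlane.y), 1.0f, 1.0f);

            glm::vec4 c = clipPlane * (2.0f / glm::dot(clipPlane, q));

            // Overwrite the third row (GLM is column-major: row i is m[col][i]).
            proj[0][2] = c.x - proj[0][3];
            proj[1][2] = c.y - proj[1][3];
            proj[2][2] = c.z - proj[2][3];
            proj[3][2] = c.w - proj[3][3];
            return proj;
        }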

    Read the article

  • Attributes and Behaviours in game object design

    - by Brukwa
    Recently I read some interesting slides about game object design by Marcin Chady, "Theory and Practice of the Game Object Component Architecture". I prototyped a quick sample that exercises the whole Attribute/Behaviour idea with some sample data. Now I have hit a little problem after adding a RenderingSystem to my prototype application. I created an object with a RenderBehaviour which listens for messages (an OnMessage function) such as MovedObject, in order to mark objects as invalid, and in the OnUpdate pass I insert a new renderable object into the renderer queue. I have noticed that rendering updates should be the last thing done in a frame, and this causes the RenderBehaviour to depend on every other Behaviour that changes an object's position (e.g. the PhysicsSystem and PhysicsBehaviour). I am not even sure if I am doing this the way it should be done. Do you have any clues that might put me on the right track?
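
    One way to express that ordering constraint, sketched with invented names: give every Behaviour an explicit update stage and let the object run its behaviours in stage order, so rendering always observes the positions physics produced in the same frame.

        #include <algorithm>
        #include <vector>

        enum class Stage { Input = 0, Physics = 1, Render = 2 };

        struct Behaviour {
            virtual ~Behaviour() = default;
            virtual Stage stage() const = 0;
            virtual void onUpdate(float dt) = 0;
        };

        struct GameObject {
            std::vector<Behaviour*> behaviours;
            void update(float dt) {
                // Stable sort keeps insertion order within a stage.
                std::stable_sort(behaviours.begin(), behaviours.end(),
                    [](const Behaviour* a, const Behaviour* b) {
                        return a->stage() < b->stage();
                    });
                for (Behaviour* b : behaviours) b->onUpdate(dt);
            }
        };

    Whether stages live per object or globally (all physics behaviours first, then all render behaviours) is a separate design choice; the global variant avoids per-object ordering bugs.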

    Read the article

  • debian (stable), google chrome, sencha and transparency [closed]

    - by Kim Alders
    I am a bit unsure where to post this, but I guess it should be here, as it has to do with rendering in a browser. We have a Sencha Touch 1.1 application running in Google Chrome. On my own computer (Ubuntu) there are no problems. On the touch screen attached to a computer running Debian, transparency is rendered as only semi-transparent. It happens on sliders. Sliders have a button on them; this button is rendered by Sencha with CSS, it is NOT an image. Does anyone have any idea how to get it looking right on Debian too?

    Read the article

  • MVC patterns for game development? [closed]

    - by davivid
    Possible Duplicate: MVC-like compartmentalization in games? I am thinking about the best way to structure my project and thought an MVC-style pattern would be appropriate. Would it be correct to have the model handle the majority of the work, basically being the game engine? Are there any standardised patterns recommended for simple game development? Roughly:

        Model / game engine: data (level design, chat feeds, etc.), game status (player, enemy and world status), engine (physics, collisions, AI)
        View: 3D (gameplay, camera, rendering), 2D (UI)
        Controller: player input, UI input
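
    For what it's worth, a bare-bones sketch of that split (hypothetical names, not a standard API): the model owns all game state and simulation, the view only reads it, and the controller turns raw input into model commands.

        // Model: advances the simulation, owns all state.
        struct GameModel {
            void applyInput(int command) { /* mutate player/world state */ }
            void step(float dt)          { /* physics, collisions, AI */ }
        };

        // View: reads (never writes) model state each frame.
        struct GameView {
            void render(const GameModel& model) { /* draw world + UI */ }
        };

        // Controller: translates device input into model commands.
        struct GameController {
            GameModel& model;
            void onKey(int key) { model.applyInput(key); }
        };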

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: the user can adjust parameters, including the time step between captured frames, and then initiate the simulation; the user waits for the simulation to complete and then watches the results; the user can increase or decrease the playback speed, where in slow motion more frames are used (i.e., you see higher-resolution time steps), and when the speed is increased you see lower-resolution time steps at a higher rate, while the number of frames flashing on the screen per second stays constant.
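
    A common non-real-time approach, sketched under the assumption that ffmpeg is installed and on the PATH: after each simulation step, read the finished frame back with glReadPixels and stream the raw pixels to an ffmpeg child process, which does the encoding.

        #include <cstdio>
        #include <vector>
        #include <GL/gl.h>

        // Streams raw RGB frames to ffmpeg through a pipe (POSIX popen;
        // use _popen/_pclose on Windows). The -vf vflip filter compensates
        // for OpenGL's bottom-up row order.
        class VideoWriter {
            FILE* pipe_ = nullptr;
            int w_, h_;
            std::vector<unsigned char> pixels_;
        public:
            VideoWriter(int w, int h, int fps)
                : w_(w), h_(h), pixels_(static_cast<size_t>(w) * h * 3)
            {
                char cmd[512];
                std::snprintf(cmd, sizeof(cmd),
                    "ffmpeg -y -f rawvideo -pixel_format rgb24 -video_size %dx%d "
                    "-framerate %d -i - -vf vflip -c:v libx264 out.mp4",
                    w_, h_, fps);
                pipe_ = popen(cmd, "w");
            }
            void captureFrame() {
                glReadPixels(0, 0, w_, h_, GL_RGB, GL_UNSIGNED_BYTE, pixels_.data());
                std::fwrite(pixels_.data(), 1, pixels_.size(), pipe_);
            }
            ~VideoWriter() { if (pipe_) pclose(pipe_); }
        };

    Because frames are captured at a fixed simulation time step rather than wall-clock time, the playback-speed requirement reduces to choosing the time step per captured frame before the run.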

    Read the article

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL. Right now I have finished the first chapter of the "OpenGL SuperBible". There were two examples. The first had the complete code and showed how to draw a simple triangle. The second example is supposed to show how to move a rectangle using SpecialKeys. The only code provided for this example was the SpecialKeys method. I still tried to implement it, but I had two problems. In the previous example I declared and instantiated vVerts in the SetupRC() method. Now, as it is also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the code. Is this proper C++ practice? I copied the part where vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. So now, the first time I press a key, the rectangle's upper-left vertex is moved to (-0.5, -0.5). This is because of

        GLfloat blockX = vVerts[0]; // Upper left X
        GLfloat blockY = vVerts[7]; // Upper left Y

    but I also think this is the reason why my rectangle is shifted in the beginning. After the first key press, everything works just fine. Here is my complete code; I hope you can help me with those two points.

        GLBatch squareBatch;
        GLShaderManager shaderManager;

        // Load up a rectangle
        GLfloat vVerts[] = { -0.5f,  0.5f, 0.0f,
                              0.5f,  0.5f, 0.0f,
                              0.5f, -0.5f, 0.0f,
                             -0.5f, -0.5f, 0.0f };

        // Window has changed size, or has just been created. We need to use
        // the window dimensions to set the viewport and the projection matrix.
        void ChangeSize(int w, int h)
        {
            glViewport(0, 0, w, h);
        }

        // Called to draw the scene.
        void RenderScene(void)
        {
            // Clear the window with the current clearing color
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

            GLfloat vRed[] = { 1.0f, 0.0f, 0.0f, 1.0f };
            shaderManager.UseStockShader(GLT_SHADER_IDENTITY, vRed);
            squareBatch.Draw();

            // Perform the buffer swap to display the back buffer
            glutSwapBuffers();
        }

        // This function does any needed initialization on the rendering
        // context. This is the first opportunity to do any OpenGL-related tasks.
        void SetupRC()
        {
            // Blue background
            glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
            shaderManager.InitializeStockShaders();
            squareBatch.Begin(GL_QUADS, 4);
            squareBatch.CopyVertexData3f(vVerts);
            squareBatch.End();
        }

        // Respond to arrow keys by moving the camera frame of reference
        void SpecialKeys(int key, int x, int y)
        {
            GLfloat stepSize = 0.025f;
            GLfloat blockSize = 0.5f;
            GLfloat blockX = vVerts[0]; // Upper left X
            GLfloat blockY = vVerts[7]; // Upper left Y

            if (key == GLUT_KEY_UP)    { blockY += stepSize; }
            if (key == GLUT_KEY_DOWN)  { blockY -= stepSize; }
            if (key == GLUT_KEY_LEFT)  { blockX -= stepSize; }
            if (key == GLUT_KEY_RIGHT) { blockX += stepSize; }

            // Recalculate vertex positions
            vVerts[0] = blockX;
            vVerts[1] = blockY - blockSize * 2;
            vVerts[3] = blockX + blockSize * 2;
            vVerts[4] = blockY - blockSize * 2;
            vVerts[6] = blockX + blockSize * 2;
            vVerts[7] = blockY;
            vVerts[9] = blockX;
            vVerts[10] = blockY;

            squareBatch.CopyVertexData3f(vVerts);
            glutPostRedisplay();
        }

        // Main entry point for GLUT based programs
        int main(int argc, char** argv)
        {
            // Sets the working directory. Not really needed
            gltSetWorkingDirectory(argv[0]);

            // Passes along the command-line parameters and initializes the
            // GLUT library.
            glutInit(&argc, argv);

            // Tells the GLUT library what type of display mode to use when
            // creating the window: double-buffered window, RGBA color mode,
            // depth buffer and stencil buffer available
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);

            // Window size
            glutInitWindowSize(800, 600);
            glutCreateWindow("MoveRect");
            glutReshapeFunc(ChangeSize);
            glutDisplayFunc(RenderScene);
            glutSpecialFunc(SpecialKeys);

            // Initialize the GLEW library
            GLenum err = glewInit();

            // Check that nothing went wrong with the driver initialization
            // before we try to do any rendering.
            if (GLEW_OK != err)
            {
                // Note: the original passed the function pointer itself;
                // glewGetErrorString must be called with the error code.
                fprintf(stderr, "Glew Error: %s\n", glewGetErrorString(err));
                return 1;
            }

            SetupRC();
            glutMainLoop();
            return 0;
        }

    Read the article

  • Many UI panels need interaction with the same object

    - by user877329
    I am developing a tool for simulating systems like the Gray-Scott model (that is, systems where the spatial distribution depends on time). The actual model is loaded from a DLL or shared object, and the simulation is performed by a Simulation object. There are at least two situations in which the simulation needs to be destroyed: the user loads a new model, or the user changes the size of the domain. To make sure nothing goes wrong, the current Model, Simulation and rendering Thread are all managed by an ApplicationState object. But the two cases above are initiated from two different UI objects. Is it then OK to distribute a reference to the ApplicationState object to all panels that need access to at least one of its methods? Another solution would be to use aggregation, so that the panel from which the user chooses the model knows about the simulation parameter panel. Also, the ApplicationState class seems somewhat clumsy, so I would like to have something else.
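
    Passing a reference around is a workable pattern here; a small sketch (names made up) in which every panel holds a reference to the single ApplicationState and goes through its methods, so the teardown/rebuild logic lives in one place:

        struct ApplicationState {
            void loadModel(const char* path) { /* stop thread, destroy Simulation, load new model */ }
            void resizeDomain(int w, int h)  { /* stop thread, rebuild Simulation */ }
        };

        struct ModelPanel {
            ApplicationState& state;
            void onModelChosen(const char* path) { state.loadModel(path); }
        };

        struct DomainPanel {
            ApplicationState& state;
            void onSizeChanged(int w, int h) { state.resizeDomain(w, h); }
        };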

    Read the article

  • Multiple weapons for an Android game

    - by Z3r0
    I am trying to make a 3D game for Android using the Rajawali engine to render the 3D graphics and Blender to design my models (exported as .md2), and I want my character to be able to change weapons, armor, helm, etc. Rendering every possible animation would be too much: if I had 10 different weapons, 10 armors and 10 helms, I would have to create 1000 animations for every possible combination of equipment, and if I add boots to the list it gets even worse. I read somewhere that you can use bones for this, but in Android I only get the object itself to work with. Does anyone have an idea how I can solve this? If I make the weapon a separate object, how do I parent it to my models in my game?
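
    The usual bone-attachment idea, sketched engine-agnostically in C++ with GLM (Rajawali itself is Java, so this only illustrates the math): keep the weapon as a separate object and recompute its model matrix from the animated hand-bone transform every frame.

        #include <glm/glm.hpp>

        // characterModel:       character model space -> world space
        // handBoneLocalToModel: animated bone space   -> character model space
        // weaponOffset:         fixed grip offset for this particular weapon
        glm::mat4 weaponModelMatrix(const glm::mat4& characterModel,
                                    const glm::mat4& handBoneLocalToModel,
                                    const glm::mat4& weaponOffset)
        {
            return characterModel * handBoneLocalToModel * weaponOffset;
        }

    With this, one animation set serves every weapon/armor/helm combination; only the attached meshes change.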

    Read the article

  • Can emscripten compile down to Canvas-based JS instead of WebGL?

    - by Sebastian Scholle
    I understand that emscripten compiles LLVM down to JS and converts OpenGL calls to WebGL. That's a fairly simple translation. Is there a way to tell emscripten to use some other graphics library (for example Pixi.js) for its rendering-code translations? Is the compiled JS code easy to update, or would it be better to merge in your own graphics API that handles WebGL/Canvas calls? That is: can we use a C++ graphics wrapper library that, when compiled to JS, simply plugs into our own JS graphics wrapper library? I'm assuming yes, but has anyone tried this? And if so, what would be your technique, as my C++ skills are basic.
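
    Emscripten does let C++ call into hand-written JavaScript, for example via the EM_JS macro, so a thin C++ wrapper can forward draw calls to your own JS graphics layer instead of WebGL. A sketch; Module.myGraphics.drawSprite is an assumed function in your own JS wrapper, not an emscripten API:

        #include <emscripten.h>

        // EM_JS defines a JavaScript function callable from C++; the body
        // runs in the page and can call any JS library you ship.
        EM_JS(void, js_draw_sprite, (int id, float x, float y), {
            Module.myGraphics.drawSprite(id, x, y);
        });

        void renderFrame() {
            js_draw_sprite(0, 10.0f, 20.0f);
        }

    Rather than patching the compiled output (which is hard to maintain), the cleaner route is to compile against your own C++ wrapper whose implementation is this kind of EM_JS/ccall glue into the JS library.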

    Read the article

  • Is it possible to use a spherical collision component in UDK?

    - by Almo
    I have an object in UDK, which has a SkeletalMesh. At certain times in the game, I want this object to continue rendering the SkeletalMesh, but I'd like it to use spherical collision temporarily. After reading a bunch about PrimitiveComponents, my understanding is that UDK supports cylindrical and box-like collision, but not spherical without using a static mesh. But it seems an attached static mesh will render, since it has no bHidden attribute. There must be a way to do this, but I don't know UDK well enough yet to understand all the pitfalls.

    Read the article

  • Can layer masks achieve this effect?

    - by Julian
    If you look at the image below, you will see the player surrounded by a dotted yellow box. The dotted yellow box is also part of the player and represents a portion of the player that is masked from both rendering and physics. My question is whether layer masks in Unity can achieve the following effect:

        - In Area 1, the red box/animations of the player are visible, and the rigidbody of this shape is affected by all physics.
        - Any portion of the player that enters Area 2 makes the larger yellow box within that area become visible (and affected by physics), and vice versa for any portion of the smaller red box that enters.
        - This persists when both entering and leaving either area, from any direction.

    Thank you for any help!

    Read the article

  • How do I build a matrix to translate one set of points to another?

    - by dotminic
    I've got 3 points in space that define a triangle. I've also got a vertex buffer made up of three vertices that also represent a triangle, which I will refer to as the "model". How can I find the matrix M that will transform the vertices in my buffer onto those 3 points in space? For example, let's say my three points A, B, C are at locations:

        A.x = 10, A.y = 16, A.z = 8
        B.x = 12, B.y = 11, B.z = 1
        C.x = 19, C.y = 12, C.z = 3

    Given these coordinates, how can I build a matrix that will translate and rotate my model so that both triangles occupy the exact same world space? That is, I want the first vertex in my triangle model to have the same coordinates as A, the second to have the same coordinates as B, and likewise for C. NB: I'm using instanced rendering, so I can't just give each vertex the same position as my 3 points. I have a set of three points defining a triangle, and only three vertices in my vertex buffer.
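
    Sketching the standard construction with GLM: build a 3x3 basis for each triangle from its two edge vectors plus their cross product, take dst * inverse(src) as the linear part, and wrap it in translations so the first model vertex lands on A.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Returns M such that M maps model vertices P0, P1, P2 onto A, B, C.
        glm::mat4 triangleToTriangle(glm::vec3 P0, glm::vec3 P1, glm::vec3 P2,
                                     glm::vec3 A,  glm::vec3 B,  glm::vec3 C)
        {
            // Columns: two edges and the normal of each triangle.
            glm::mat3 src(P1 - P0, P2 - P0, glm::cross(P1 - P0, P2 - P0));
            glm::mat3 dst(B - A,   C - A,   glm::cross(B - A,   C - A));
            glm::mat3 linear = dst * glm::inverse(src);

            // Affine map: move P0 to the origin, apply the linear part,
            // then move to A. M*P0 = A, M*P1 = B, M*P2 = C.
            return glm::translate(glm::mat4(1.0f), A)
                 * glm::mat4(linear)
                 * glm::translate(glm::mat4(1.0f), -P0);
        }

    The result is exactly the per-instance matrix needed for instanced rendering: one shared triangle mesh, one such matrix per instance.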

    Read the article

  • What sort of things can cause a whole system to appear to hang for 100s-1000s of milliseconds?

    - by Ogapo
    I am working on a Windows game, and while rendering, some computers experience intermittent pauses ("hitches" for lack of a better term). When profiled, they appear in seemingly random places in the code. Eventually I noticed that it wasn't just my process that was affected, but (seemingly) every process on the system. All of the threads in my application hitch at once. CPU utilization drops during these hitches, and it appears as if most processes make no progress. This leads me to believe it may be an operating system or driver issue, but it only occurs while playing the game (and only on some systems). What sort of operations might the operating system be doing that would require the kernel to pause all user threads and block? At first I thought of paging, but my impression is that would only affect a single process, no? Some specifics of the systems in use: Windows, DirectX (3D), NVIDIA cards (unknown whether it replicates on ATI), overlapped I/O for streaming.

    Read the article

  • Using PhysX, how can I predict where I will need to generate procedural terrain collision shapes?

    - by Sion Sheevok
    In this situation, I have terrain height values I generate procedurally. For rendering, I use the camera's position to generate an appropriately sized height map. For collision, however, I need to have height fields generated in areas where objects may intersect. My current potential solution, which may be naive, is to iterate over all "awake" physics actors, use their bounds/extents and velocities to generate spheres in which they may reside after a physics update, and then generate height values for ranges encompassing clustered groups of actors. Much of that data is likely already calculated by PhysX, however. Is there some API, maybe a set of queries, or even callbacks from the spatial system, that I could use to predict where terrain height values will be needed?

    Read the article

  • Depth Map resolution shifting

    - by user3669538
    The problem is with shadow mapping. As you can see, it actually works fine, but only on the condition that the depth map size equals the size of the rendering buffer. I use an infinite directional light, so if the window is 800x600 the depth map must also be 800x600. When I change the size of the shadow map to 900x600 the result starts to shift, and when its size is 1024x1024 it shifts until it disappears. The GLSL shadow function:

        float calcShadow(sampler2D Dmap, vec4 coor) {
            vec4 sh = vec4((coor.xyz / coor.w), 1);
            sh.z *= 0.9;
            return step(sh.z, texture2D(Dmap, sh.xy).r);
        }

    Here is the result when the depth map is the same size as the window: Colored result & Depth Map. And here is the shifted result; as you can see, the depth map is exactly like the previous one except for the white space added to the right. Colored result: http://goo.gl/5lYIFV Depth Map: http://goo.gl/7320Dd
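
    A likely culprit, offered as a guess from the symptoms: the viewport is still set to the window size while the depth pass renders into the shadow map, so the depth image only fills part of the attachment (hence the white strip on the right). A sketch of the usual two-pass structure; function and parameter names are placeholders:

        #include <GL/glew.h>

        void renderFrame(GLuint shadowFbo, int shadowW, int shadowH,
                         int windowW, int windowH)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
            glViewport(0, 0, shadowW, shadowH);   // match the shadow map, not the window
            // ... draw the depth pass ...

            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glViewport(0, 0, windowW, windowH);   // restore for the color pass
            // ... draw the color pass with the shadow map bound ...
        }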

    Read the article

  • World orientation in OpenGLES clarification

    - by Dev2rights
    I have a 3D tile map made up of individual billboards in OpenGL ES. Each is a two-triangle mesh and has a 3D vector to determine its position and another defining its rotation from the origin at (0,0,0). I'm trying to work out how to rotate the entire tile map around a point, be that the origin or some arbitrary point in space. I'm guessing I need to set up a model matrix for each tile instead, then set up a world matrix for the world. Then, on updating, I would translate the world matrix, change its orientation, and multiply it with each model matrix before rendering. Is this correct?
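
    That is essentially the right idea. A sketch of the pivot rotation with GLM; the world matrix multiplies on the left of each tile's model matrix, and rotating about an arbitrary pivot is translate, rotate, translate back:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        glm::mat4 worldMatrix(const glm::vec3& pivot, float angleRad,
                              const glm::vec3& axis)
        {
            return glm::translate(glm::mat4(1.0f), pivot)       // back into place
                 * glm::rotate(glm::mat4(1.0f), angleRad, axis) // spin
                 * glm::translate(glm::mat4(1.0f), -pivot);     // pivot to origin
        }

        // Per tile, per frame: finalTransform = worldMatrix(...) * tileModelMatrix;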

    Read the article

  • Aspose.Newsletter June 2010 Edition is out now

    The Aspose Newsletter for June 2010 has now been published. It highlights all the newly supported features offered in the recent releases of the JasperReports exporters, SQL Server rendering extensions, and the .NET, Java and SharePoint components. This month's technical article demonstrates the steps needed to recognize a barcode in a Word document using Aspose.BarCode for .NET and Aspose.Words for .NET, along with several examples for migrating code from InfoPath Forms Services to Aspose.Form for .NET...

    Read the article

  • How to update a mesh position based on a pressed key?

    - by steven166
    I have a mesh loaded from a file, like a tiger mesh. At first it is located at position A; then, if I press the left key, it moves to position B. The problem is that if I press the left key one more time, it moves from position B to position C. It means that the amount I want to move the mesh is based on the current position instead of the position at first render. I could do it if I had an array of vertices, because then I would just update the vertex buffer, but a mesh loaded from a file does not expose an array of vertices, so how do I do it? Anybody help me, please?
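
    One way around this, sketched with GLM and invented names: keep the mesh's position as separate state, update that state on key presses, and rebuild the world matrix each frame, leaving the loaded vertex data untouched.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        struct MeshInstance {
            glm::vec3 position{0.0f};

            // Each press moves relative to the current position; the step
            // size here is arbitrary.
            void onLeftKey()  { position.x -= 1.0f; }
            void onRightKey() { position.x += 1.0f; }

            glm::mat4 worldMatrix() const {
                return glm::translate(glm::mat4(1.0f), position);
            }
        };

    If the movement should instead always be measured from the first rendered position, keep that original position separately and compute each offset from it rather than accumulating.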

    Read the article

  • OpenGL: Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer in order to work on it (apply special filters) there. The result is then treated as a new texture, which is later displayed. This works fine except when the texture contains transparent or semi-transparent parts. My current guess is that, within the render buffer, the texture is merged with a kind of grey background; in that case it obviously affects the R, G, B components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are tainted by the grey background.
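
    If the offscreen target has no alpha channel, or is cleared to an opaque color, exactly this kind of grey contamination appears. A hedged sketch of the three pieces that usually matter: an RGBA color attachment, a fully transparent clear, and separate blend factors so destination alpha is preserved for later compositing.

        #include <GL/glew.h>

        GLuint makeRgbaTarget(int w, int h)
        {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            // GL_RGBA8 keeps an alpha channel in the render target itself.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            return tex;
        }

        void beginOffscreenPass()
        {
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);   // fully transparent, not grey
            glClear(GL_COLOR_BUFFER_BIT);
            glEnable(GL_BLEND);
            // Blend color normally, but accumulate alpha so the result
            // remains usable as a texture with correct transparency.
            glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                                GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        }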

    Read the article

  • How do I connect the seams in my terrain?

    - by gnomgrol
    I'm using C++ and D3D11, and I'm trying to create a (pretty) large terrain, let's say 4096x4096, maybe larger. I've got the basics of terrain creation down and have already split it up into chunks. But when I render them (every chunk has its own vertex and index buffer, as well as its own heightmap), there are still little pieces missing between them. I have read a lot about LOD (level of detail) and GMM (geometry mipmaps), but I can't really implement the theory I've read. At the moment, it looks like this: I could really use some help; everything is welcome. If you have good tutorials on any of this, please share them.
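
    One frequent cause of such gaps, offered as a guess given the per-chunk heightmaps: neighbouring chunks generate their border heights independently and disagree slightly, so their edge vertices do not coincide. Sampling every chunk from one global heightmap with shared border indices makes the edges match exactly. A sketch:

        #include <vector>

        // chunkSize counts quads, so the (chunkSize+1)-th vertex row/column
        // reuses the same global samples as the neighbouring chunk's first.
        std::vector<float> chunkHeights(const std::vector<float>& globalMap,
                                        int mapW, int chunkX, int chunkZ,
                                        int chunkSize)
        {
            std::vector<float> h((chunkSize + 1) * (chunkSize + 1));
            for (int z = 0; z <= chunkSize; ++z)
                for (int x = 0; x <= chunkSize; ++x) {
                    int gx = chunkX * chunkSize + x;   // identical indices on
                    int gz = chunkZ * chunkSize + z;   // shared borders
                    h[z * (chunkSize + 1) + x] = globalMap[gz * mapW + gx];
                }
            return h;
        }

    Once LOD enters the picture, the remaining cracks come from T-junctions between chunks at different detail levels, which is what geometry-mipmap stitching addresses.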

    Read the article

  • Framework to implement an in-game GUI editor

    - by momboco
    I need to build an in-game GUI editor. The game engine has its own widget elements, and I don't want a GUI library that substitutes for them. The most difficult task is implementing the functionality that makes it usable for artists and designers:

        - positioning
        - resizing
        - alignment between elements
        - multi-selection
        - relationships between children and parents
        - guides
        - magnet/snapping to place elements quickly
        - layers
        - undo / redo
        - ...

    I'm searching for a framework, or something like one, with these functionalities already implemented, and a way to attach my own engine so it can make use of them. Ideally it would be a mix between a tool like Photoshop and libRocket (rendering-engine independent).

    Read the article

  • C++ unmanaged inside a WinForm

    - by Gosso
    First: I am using C# and C++ on Windows 7. I have created a basic rendering engine in C++ with DirectX 10. It works well as a stand-alone application. But when I send the Form.Handle of the WinForm I want to render inside to the engine, it crashes during D3D10CreateDeviceAndSwapChain with the following error:

        HRESULT: 0x887a0001 (2289696769)
        Name: DXGI_ERROR_INVALID_CALL

    I get the handle from the WinForm while the form is loading:

        unsafe
        {
            void* ptr = m_view.Handle.ToPointer();
            uint v = (uint)ptr;      // note: this truncates the handle to 32 bits on x64
            lhandle = v.ToString();
        };

    Read the article

  • OpenGL Beginner question

    - by nobby
    I'm new to OpenGL programming, and I can't find a good book or tutorial to learn from. I've tried reading through the SuperBible, or whatever its name is, but it's kind of complicated for me. The tutorial at http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-2.3:-Rendering.html is pretty good, but it doesn't cover most of what I need, which is OpenGL math (projection matrices, view matrices, and so on). I'm fairly OK at C(++) (3+ years of experience; I don't know if you would call that "good"). What I basically want to do with OpenGL is make a simple game (preferably 2D as a start, not 3D). Please suggest a good e-book to read and learn from.

    Read the article
