Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.


  • ORA-22835 using JPA (Buffer too small)

    - by Kenneth
    I am trying to persist an Entity with a @Lob-annotated String field. The content of that field is bigger than the 40k buffer size limit. The first problem I had was related to the setString method used internally by the JPA implementation (Hibernate in my case) and the Oracle JDBC driver. That problem was solved by adding <property name="hibernate.connection.SetBigStringTryClob" value="true"/> to my persistence.xml file. Then the error changed to an ORA-22835 error (the buffer is too small). Is there any way for JPA to solve this problem without dropping to a low-level implementation? Any suggestions?
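
    A minimal sketch of one mapping-level workaround, under a loudly labeled assumption: the CLOB-backed string type is named org.hibernate.type.StringClobType in Hibernate 3.6-era releases, but the name varies across Hibernate versions, so check yours. Routing the field through a CLOB-backed type makes Hibernate use the JDBC LOB APIs instead of setString, which is the call that hits the buffer limit.

        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Lob;
        import org.hibernate.annotations.Type;

        @Entity
        public class Document {
            @Id
            private Long id;

            // Hypothetical mapping: @Type is Hibernate-specific, and the exact
            // CLOB-backed type name depends on the Hibernate version in use.
            @Lob
            @Type(type = "org.hibernate.type.StringClobType")
            private String content;  // may exceed the 40k setString limit
        }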

    Read the article

  • OpenGL Motion blur with the accumulation buffer in WxWidgets

    - by Klaus
    Hello, I'm trying to achieve a motion blur effect in my OpenGL application. I read somewhere this solution using the accumulation buffer: glAccum(GL_MULT, 0.90); glAccum(GL_ACCUM, 0.10); glAccum(GL_RETURN, 1.0); glFlush(); at the end of the render loop. But nothing happens... What am I missing? Additions after genpfault's answer: Indeed, I did not ask for an accumulation buffer when I initialized my context. So I tried to pass an array of attributes to the constructor of my wxGLCanvas, as described here: http://docs.wxwidgets.org/2.6/wx_wxglcanvas.html : int attribList[]={ WX_GL_RGBA , WX_GL_DOUBLEBUFFER , WX_GL_MIN_ACCUM_RED, WX_GL_MIN_ACCUM_GREEN, WX_GL_MIN_ACCUM_BLUE, 0} But all I get is a friendly seg fault. Does anyone understand how to use this? (No problems with int attribList[]={ WX_GL_RGBA , WX_GL_DOUBLEBUFFER , 0}.)
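
    A minimal sketch of the likely crash fix, assuming the wxWidgets convention that the WX_GL_MIN_ACCUM_* entries are value-taking attributes: each must be followed by the number of bits requested, otherwise the list parser walks past where the terminating 0 should be, which would explain the segfault.

        // each WX_GL_MIN_ACCUM_* entry is followed by the minimum bit depth
        // requested for that accumulation channel (8 is an illustrative value)
        int attribList[] = {
            WX_GL_RGBA,
            WX_GL_DOUBLEBUFFER,
            WX_GL_MIN_ACCUM_RED,   8,
            WX_GL_MIN_ACCUM_GREEN, 8,
            WX_GL_MIN_ACCUM_BLUE,  8,
            0
        };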

    Read the article

  • What vertex shader code should be used for a pixel shader used for simple 2D SpriteBatch drawing in XNA?

    - by Michael
    Preface: First of all, why is a vertex shader required for a SilverlightEffect (.slfx file) in Silverlight 5? I'm trying to port a simple 2D XNA game to Silverlight 5 RC, and I would like to use a basic pixel shader. This shader works great in XNA for Windows and Xbox, but I can't get it to compile with Silverlight as a SilverlightEffect. The MS blog for the Silverlight Toolkit says that "there is no difference between .slfx and .fx", but apparently this isn't quite true -- or at least SpriteBatch is working some magic for us in "regular XNA", and it isn't in "Silverlight XNA". If I try to directly copy my pixel shader file into a Silverlight project (and change it to the supported "Effect - Silverlight" importer/processor), when I try to compile I see the following error message: Invalid effect file. Unable to find vertex shader in pass "P0". Indeed, there isn't a vertex shader in my pixel shader file. I haven't needed one with my other 2D XNA apps, since I'm just doing basic SpriteBatch drawing. I tried adding a vertex shader to my shader file, using Remi Gillig's comment on this Shawn Hargreaves blog post for guidance, but it doesn't quite work. The shader file successfully compiles, and I see some semblance of my game on screen, but it's tiny, twisted, repeated, and all jumbled up. So clearly something's not quite right. The Real Question: So that brings me to my real question: since a vertex shader is required, is there a basic vertex shader function that works for simple 2D SpriteBatch drawing? And if the vertex shader requires world/view/projection matrices as parameters, what values am I supposed to use for a 2D game? Can any shader pros help? Thanks!
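
    For reference, a sketch adapted from the SpriteEffect source Shawn Hargreaves published for XNA 4; whether the same semantics carry over to the Silverlight 5 effect pipeline is an assumption. The vertex shader only has to multiply the incoming sprite vertices by a single projection matrix:

        float4x4 MatrixTransform;

        void SpriteVertexShader(inout float4 color    : COLOR0,
                                inout float2 texCoord : TEXCOORD0,
                                inout float4 position : POSITION0)
        {
            // sprite vertices arrive in screen space; one orthographic
            // matrix maps them to clip space
            position = mul(position, MatrixTransform);
        }

    On the CPU side, the matrix for a 2D game is an off-center orthographic projection of the viewport, e.g. Matrix.CreateOrthographicOffCenter(0, viewport.Width, viewport.Height, 0, 0, 1), composed with a half-pixel translation (-0.5, -0.5) on DirectX 9-class targets; no world or view matrices are needed.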

    Read the article

  • Qt 5.3 OpenGL - vertex buffer object drawing using the core profile

    - by user3700881
    I'm using Qt 5.3 to create a QWindow to do some basic rendering stuff. The QWindow is declared like this: class OpenGLWindow : public QWindow, protected QOpenGLFunctions_3_3_Core { Q_OBJECT ... } It is initialized in the constructor: OpenGLWindow::OpenGLWindow(QWindow *parent) : QWindow(parent) { QSurfaceFormat format; format.setVersion(3,3); format.setProfile(QSurfaceFormat::CoreProfile); this->setSurfaceType(OpenGLSurface); this->setFormat(format); this->create(); _context = new QOpenGLContext; _context->setFormat(format); _context->create(); _context->makeCurrent(this); this->initializeOpenGLFunctions(); ... } And this is the rendering code: void OpenGLWindow::render() { if(!isExposed()) return; _context->makeCurrent(this); glClear(GL_COLOR_BUFFER_BIT); glUseProgram(_shaderProgram); glBindBuffer(GL_ARRAY_BUFFER, _positionBufferObject); glEnableVertexAttribArray(0); glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glUseProgram(0); _context->swapBuffers(this); } I am trying to draw a simple triangle using a vertex and fragment shader. The problem is that the triangle does not show up when the core profile is set. It only shows up when I set the OpenGL version to 2.0 or use the compatibility profile. From my point of view that doesn't make any sense, because I am not using fixed functionality at all. What am I missing?
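
    A sketch of the usual missing piece (the member name _vao is illustrative): the core profile has no default vertex array object, and attribute setup plus glDrawArrays are invalid without one bound, while the compatibility profile quietly supplies VAO 0, which matches the works-only-in-compatibility symptom.

        // once, in the constructor after makeCurrent():
        _vao = new QOpenGLVertexArrayObject(this);
        _vao->create();
        _vao->bind();
        // ... create _positionBufferObject and record the attribute
        //     layout here, while the VAO is bound ...

        // in render(), bracket the draw with the same VAO:
        _vao->bind();
        glDrawArrays(GL_TRIANGLES, 0, 3);
        _vao->release();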

    Read the article

  • how to flush the console buffer?

    - by DoronS
    Hi all, I have some code that runs repeatedly: printf("do you want to continue? Y/N: \n"); keepplaying = getchar(); The next time my code runs, it doesn't wait for input. I found out that the second time around, getchar uses '\n' as the character. I'm guessing this is due to some buffer stdio has, in which it saves the last input, which was "Y\n" or "N\n". My question is: how do I flush the buffer before using getchar, so that getchar will wait for my answer?
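
    A minimal self-contained sketch of the usual fix: fflush(stdin) is undefined behaviour, so instead consume the rest of the line right after reading the answer, which leaves stdin empty for the next prompt.

        #include <stdio.h>

        int main(void)
        {
            int keepplaying, c;
            do {
                printf("do you want to continue? Y/N: \n");
                keepplaying = getchar();
                /* drain everything up to and including the '\n',
                   so the next getchar() really waits for input */
                while ((c = getchar()) != '\n' && c != EOF)
                    ;
            } while (keepplaying == 'Y' || keepplaying == 'y');
            return 0;
        }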

    Read the article

  • evaluating buffer in emacs python-mode on remote host

    - by Adrian
    Hello, I'm using Emacs 23 with TRAMP to modify Python scripts on a remote host. I found that when I start the Python shell within Emacs, it starts up Python on the remote host. My problem is that when I then try to call python-send-buffer via C-c C-c, it comes up with the error: Traceback (most recent call last): File "", line 1, in ? ImportError: No module named emacs Traceback (most recent call last): File "", line 1, in ? NameError: name 'emacs' is not defined Now, I must admit that I don't really know what's going on here. Is there a way for me to configure Emacs so that I can evaluate the buffer on the remote host? Many thanks.

    Read the article

  • snipMate only working on empty buffer?

    - by JesseBuesking
    I'm attempting to use snipMate with SQL files; however, it doesn't seem to work when editing an existing file. If I create a new empty buffer (no file; e.g. launch gVim from the start menu) and set the filetype to sql (:set ft=sql), it works. However, if I then try to open an SQL file (e.g. :e c:\blah.sql) and edit it, snipMate no longer works. What gives!? Setup: gVim (Vim 7.3), Windows 7, snipMate 0.84. Also, I do in fact have filetype plugin on in my .vimrc file. Edit: Apparently if I open an empty buffer, set the filetype to sql, then save to a file using :w c:\blah.sql, I now have an SQL file open AND snipMate continues to work. Edit: Here's a gist of my current .vimrc in case it helps: https://gist.github.com/3946877

    Read the article

  • buffer overflow with boost::program_options

    - by f4
    Hello, I have a problem using boost::program_options: this simple program, copy-pasted from Boost's documentation: #include <boost/program_options.hpp> int main( int argc, char** argv ) { namespace po = boost::program_options; po::options_description desc("Allowed options"); desc.add_options() ("help", "produce help message") ("compression", po::value<int>(), "set compression level") ; return 0; } fails with a buffer overflow. I have activated the "buffer security switch", and when I run it I get an "unknown exception (0xc0000409)" when I step over the line desc.add_options()... I use Visual Studio 2005 and Boost 1.43.0. By the way, it does run if I deactivate the switch, but I don't feel comfortable doing so... unless it's possible to deactivate it locally. So do you have a solution to this problem? EDIT: I found the problem: I was linking against libboost_program_options-vc80-mt.lib, which wasn't the right library.

    Read the article

  • How to signal a buffer full state between posix threads

    - by mikip
    Hi, I have two threads. The main thread, 'A', is responsible for message handling between a number of processes. When thread A gets a buffer-full message, it should inform thread B and pass a pointer to the buffer, which thread B will then process. When thread B has finished, it should inform thread A that it is done. How do I go about implementing this using POSIX threads, in C, on Linux? I have looked at condition variables; is this the way to go? I'm not experienced in multi-threaded programming and would like some advice on the best avenue to take. Thanks
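
    Condition variables are indeed the standard tool here. A minimal sketch of the handshake (names such as process() are assumed placeholders for the real work):

        #include <pthread.h>
        #include <stdbool.h>

        void process(char *buf);  /* assumed: thread B's actual work */

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
        static bool buffer_ready = false;     /* set by A, cleared by B */
        static bool processing_done = false;  /* set by B, cleared by A */
        static char *shared_buffer;

        /* Thread A: hand over the full buffer, wait until B has finished. */
        void submit_buffer(char *buf)
        {
            pthread_mutex_lock(&lock);
            shared_buffer = buf;
            buffer_ready = true;
            pthread_cond_signal(&cond);
            while (!processing_done)   /* while-loop guards against spurious wakeups */
                pthread_cond_wait(&cond, &lock);
            processing_done = false;
            pthread_mutex_unlock(&lock);
        }

        /* Thread B: wait for a buffer, process it, report completion. */
        void *thread_b(void *arg)
        {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&lock);
                while (!buffer_ready)
                    pthread_cond_wait(&cond, &lock);
                buffer_ready = false;
                process(shared_buffer);
                processing_done = true;
                pthread_cond_signal(&cond);
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }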

    Read the article

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. They have now bought a new server: 4 * 6 cores = 24 CPU cores, 48 GB RAM, 2 RAID controllers with 256 MB cache each, with 8 SAS 15K HDDs on each, running a 64-bit OS. I'm wondering what would be the fastest configuration: 1) FB 2.5 SuperServer with a huge buffer of 8192 bytes * 3,500,000 pages = 29 GB, or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

    Read the article

  • Simple Emacs keybindings

    - by User1
    I have three operations that I do all the time in Emacs: 1) create a new buffer and paste the clipboard [C-S-n]; 2) close the current buffer [C-S-w]; 3) switch to the last viewed buffer [C-TAB]. I feel like a keyboard acrobat when doing the first two. I think it would be worth trying some custom keybindings and macros (see the sketch below). A few questions about this customization: How would I make a macro for #1? Are these good keybindings? (I know this is a bit subjective, but they might be used by something popular that I don't use.) Has anyone found a Ctrl-Tab macro that will act like Alt-Tab in Linux/Windows? Specifically, I want it to keep a stack of buffers ordered by last-viewed timestamp (most recent on top), to keep cycling through the stack while I hold the Ctrl key, and, when the Ctrl key is released, to give the current buffer an updated position on the stack.
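
    For #1 and #2, a minimal Elisp sketch (the function names are made up; yank pulls from the kill ring, which includes the system clipboard when select-enable-clipboard, x-select-enable-clipboard in older Emacsen, is on):

        (defun my-new-buffer-from-clipboard ()
          "Create a fresh buffer and paste the clipboard into it."
          (interactive)
          (switch-to-buffer (generate-new-buffer "*pasted*"))
          (yank))

        (defun my-kill-current-buffer ()
          "Kill the current buffer without prompting for its name."
          (interactive)
          (kill-buffer (current-buffer)))

        (global-set-key (kbd "C-S-n") #'my-new-buffer-from-clipboard)
        (global-set-key (kbd "C-S-w") #'my-kill-current-buffer)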

    Read the article

  • Why don't I need to bind my vertex buffer object before calling glDrawArrays?

    - by valmo
    I'm a bit confused why this still renders. I thought you need to bind a vertex buffer object so that glDrawArrays knows which vertex buffer to use. Here is my initialisation code.. // Create and bind vertex array to store vertex attribute states. glGenVertexArraysOES(NUM_VERTEX_ARRAYS, &m_vertexArray); glBindVertexArrayOES(m_vertexArray); // Create and bind vertex buffer to store vertex data. glGenBuffers(NUM_VERTEX_BUFFERS, &m_vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 36, &m_vertices[0], GL_STATIC_DRAW); glEnableVertexAttribArray(VertexAttribPosition); glVertexAttribPointer(VertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0)); glEnableVertexAttribArray(VertexAttribNormal); glVertexAttribPointer(VertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12)); glBindBuffer(GL_ARRAY_BUFFER, 0); glBindVertexArrayOES(0); Here is my render code. I'm confused why glDrawArrays still works when I bind 0 to GL_ARRAY_BUFFER. glBindVertexArrayOES(m_vertexArray); glBindBuffer(GL_ARRAY_BUFFER, 0); glDrawArrays(GL_TRIANGLES, 0, 36); glBindVertexArrayOES(0);
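
    A likely explanation, shown as annotations on the render code: glVertexAttribPointer captures the buffer bound to GL_ARRAY_BUFFER at call time into the VAO's per-attribute state, so the association already lives in m_vertexArray by draw time.

        glBindVertexArrayOES(m_vertexArray);  // restores the recorded attribute->buffer links
        glBindBuffer(GL_ARRAY_BUFFER, 0);     // GL_ARRAY_BUFFER is not draw-time state,
                                              // so glDrawArrays never consults it
        glDrawArrays(GL_TRIANGLES, 0, 36);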

    Read the article

  • Socket receive buffer size

    - by Kanishka
    Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response, but I am not sure of the receive buffer size. IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"),20060); Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); server.Connect(ipep); String OutStr= "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50"; byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray(); int byteCount = server.Send(temp); byte[] bytes = new byte[255]; int res=0; res = server.Receive(bytes); return Encoding.UTF8.GetString(bytes);
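
    When the response length is unknown, the usual pattern is not to size a single buffer at all but to loop on Receive, accumulating until the server closes the connection (Receive returns 0). A hedged sketch (note that Socket.ReceiveBufferSize only reports the OS-level buffer, not the message size):

        using (var ms = new System.IO.MemoryStream())
        {
            var chunk = new byte[4096];
            int n;
            // Receive returns 0 once the peer shuts down its side;
            // until then, append whatever has arrived.
            while ((n = server.Receive(chunk)) > 0)
            {
                ms.Write(chunk, 0, n);
            }
            return Encoding.UTF8.GetString(ms.ToArray());
        }

    If the server keeps the connection open between messages, this blocks forever; in that case the protocol needs a length prefix or a delimiter to know where the response ends.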

    Read the article

  • Assigning unsigned char* buffer to a string

    - by CPPChase
    This question might have been asked before, but I couldn't find exactly what I need. My problem is, I have a buffer loaded with data downloaded from a web service. The buffer is in unsigned char* form, with no '\0' at the end. Then I have a Poco XML parser that needs a string. I tried assigning the buffer to a string, but now I realize it could cause problems such as leaks. Here is the code: DOMParser::DOMParser(unsigned char* consatData, int consatDataSize, unsigned char* lagData, int lagDataSize) { Poco::XML::DOMParser parser; std::string consat; consat.assign((const char*) consatData, consatDataSize); pDoc = parser.parseString(consat); ParseConsat(); } The Poco XML parser does have a parseMemory, which needs a const char* and the size of the data, but for some reason it just gives me a segmentation fault. So I think it's safer to turn the buffer into a string. Thanks in advance.
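
    For what it's worth, a sketch of the parseMemory route, assuming the Poco::XML::DOMParser::parseMemory(const char*, std::size_t) signature; no NUL terminator is needed because the size is explicit, so a crash there usually points at the pointer/size pair being wrong rather than at the API:

        Poco::XML::DOMParser parser;
        pDoc = parser.parseMemory(reinterpret_cast<const char*>(consatData),
                                  static_cast<std::size_t>(consatDataSize));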

    Read the article

  • Zsh: how to see all buffers?

    - by HH
    You can push things onto the buffer stack with ^Q and pop them with ESC-g. Alt+x vi-set-buffer changes the buffer somehow. How can I see all the buffers? Are they perhaps files I can look at?

    Read the article

  • Tie destruction of an object (sealed) to destruction of an unmanaged buffer

    - by testtestSO
    I'll explain my situation first: I'm interested in using the Bitmap constructor that takes scan0, stride and format, because I'm decoding tiled images and I'd like to choose my own stride so I can decode the tiles without caring about the bounds in the decoder part. Anyway, the problem is that the documentation says: The caller is responsible for allocating and freeing the block of memory specified by the scan0 parameter. However, the memory should not be released until the related Bitmap is released. I can't release the buffer easily, because the Bitmap is then passed to another class that will eventually destroy it, and I don't have control over that. Is there some way (hacky, I know) to tell the GC to also release my buffer when the Bitmap is destroyed? (Also, any alternative solution is welcome.)
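
    One hacky pattern that fits the request, sketched under two assumptions: .NET 4+ (for ConditionalWeakTable) and a buffer that came from Marshal.AllocHGlobal; all class and method names are illustrative. The keeper stays alive exactly as long as the Bitmap is reachable, so its finalizer frees the buffer only after the Bitmap has been collected. Caveat: this frees at GC time, not at Dispose time, and finalization order between the keeper and the Bitmap is not guaranteed.

        using System;
        using System.Runtime.CompilerServices;
        using System.Runtime.InteropServices;

        sealed class BufferKeeper
        {
            private IntPtr _scan0;
            public BufferKeeper(IntPtr scan0) { _scan0 = scan0; }
            ~BufferKeeper()
            {
                if (_scan0 != IntPtr.Zero)
                {
                    Marshal.FreeHGlobal(_scan0);  // assumes AllocHGlobal allocation
                    _scan0 = IntPtr.Zero;
                }
            }
        }

        static class BitmapBufferBinder
        {
            static readonly ConditionalWeakTable<object, BufferKeeper> Table =
                new ConditionalWeakTable<object, BufferKeeper>();

            // Ties the keeper's lifetime to the bitmap's reachability.
            public static void Bind(System.Drawing.Bitmap bmp, IntPtr scan0)
            {
                Table.Add(bmp, new BufferKeeper(scan0));
            }
        }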

    Read the article

  • OpenGL 3.0+ framebuffer to texture/images

    - by user827992
    I need a way to capture what is rendered on screen. I have read about glReadPixels, but it looks really slow. Can you suggest a more efficient, or just an alternative, way to copy what is rendered by OpenGL 3.0+ into local RAM, and in general to output it as an image or a data stream? How can I achieve the same goal with OpenGL ES 2.0? EDIT: I just forgot: with these OpenGL functions, how can I be sure that I'm actually reading a complete frame, meaning that there is no overlap between 2 frames or any nasty side effect, and that I'm actually reading the frame that comes right after the previous one, so I do not lose frames?
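
    A sketch of the standard OpenGL 3.0+ answer: asynchronous readback through a pixel buffer object, so glReadPixels returns immediately and the copy is collected a frame later (width/height are illustrative). Issuing the read after the frame's last draw call and before swapping also answers the completeness worry: at that point the back buffer holds exactly one finished frame.

        GLuint pbo;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);

        // frame N, after the last draw call and before swapping:
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glReadBuffer(GL_BACK);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);  // 0 = offset into the PBO

        // frame N+1 or later: the DMA transfer has had time to finish
        void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (src) {
            /* copy width*height*4 bytes to local RAM, encode, stream, ... */
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    OpenGL ES 2.0 has no pack-side PBOs (they arrive in ES 3.0), so there a plain glReadPixels into client memory, or a platform path such as EGL pixmaps, is the realistic option.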

    Read the article

  • Use depth bias for shadows in deferred shading

    - by cubrman
    We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera, and the second one stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the ready shadow areas for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve this is to use slope-scale depth bias and depth offsets; however, as we are doing things in a deferred way, we cannot employ this algorithm directly. Any attempt to set a depth bias when capturing the light's-view depth produced no or unsatisfying results. So here is my question: the MSDN article has a convoluted explanation of the slope-scale: bias = (m × SlopeScaleDepthBias) + DepthBias, where m is the maximum depth slope of the triangle being rendered, defined as: m = max( abs(delta z / delta x), abs(delta z / delta y) ). Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?
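
    One way to apply that formula manually, sketched for the deferred resolve shader: take m from the screen-space derivatives of the light-space depth. This approximates MSDN's per-triangle slope, and the names and constants are illustrative tuning values, not canonical ones.

        // lightDepth = this pixel's depth in the light's clip space,
        // reconstructed from the G-buffer during the shadow resolve pass
        float m    = max(abs(ddx(lightDepth)), abs(ddy(lightDepth)));
        float bias = m * SlopeScaleDepthBias + DepthBias;  // MSDN's formula, per pixel

        float storedDepth = tex2D(ShadowMapSampler, shadowTexCoord).r;
        float lit = (lightDepth - bias <= storedDepth) ? 1.0 : 0.0;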

    Read the article

  • Renderbuffer to GLSL shader?

    - by Dan
    I have software that performs volume rendering through a raycasting approach. The raycasting shader writes the raycasted volume depth, through gl_FragDepth, into a framebuffer object that I bind before calling the shader. The problem I have is that I would like to use this depth in another shader that I call later on. The only way I have figured out to do that is to bind the framebuffer once the raycasting has finished, read the depth map with something like glReadPixels(0, 0, m_winSize.x , m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); and write it to a 2D texture as usual with glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels), and then pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure that what I do is the proper way to do this. Is there any way to pass the framebuffer that I fill up in my raycasting shader to the other shader?
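
    There is a more direct route that skips the GPU-to-CPU round trip entirely: attach a depth texture, rather than a renderbuffer, to the FBO before the raycasting pass, and then bind that same texture as input to the later shader. A sketch, with illustrative names:

        GLuint depthTex;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, winW, winH, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);

        // later pass: the gl_FragDepth values written by the raycaster are
        // already in depthTex, so just sample it from the second shader
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, depthTex);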

    Read the article

  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using the render-to-texture concept with glFramebufferTexture1D. I am drawing a cube on a non-default FBO with all the vertices at -1,1 (maximum) in the X, Y and Z directions. Now I am setting the viewport to X while rendering on the non-default FBO. My background is blue and the cube is white. For the default FBO, I have created a 1-D texture and attached it to the above FBO as a color attachment. I am setting the width of the texture equal to width*height of the above FBO's viewport. Now, when I render this texture onto another cube, I can see continuous white color at the start or end of each face of the cube. That means part of the face is white and the rest is blue. I am not sure whether this behavior is correct or not. I expect all the texels to be white, as I am using -1 and 1 coordinates for the cube rendered on the non-default FBO. Code: #define WIDTH 3 #define HEIGHT 3 GLfloat vertices8[]={ 1.0f,1.0f,1.0f, -1.0f,1.0f,1.0f, -1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,//face 1 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 2 1.0f,1.0f,1.0f, 1.0f,-1.0f,1.0f, 1.0f,-1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 3 -1.0f,1.0f,1.0f, -1.0f,1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f,//face 4 1.0f,1.0f,1.0f, 1.0f,1.0f,-1.0f, -1.0f,1.0f,-1.0f, -1.0f,1.0f,1.0f,//face 5 -1.0f,-1.0f,1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f//face 6 }; GLfloat vertices[]= { 0.5f,0.5f,0.5f, -0.5f,0.5f,0.5f, -0.5f,-0.5f,0.5f, 0.5f,-0.5f,0.5f,//face 1 0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 2 0.5f,0.5f,0.5f, 0.5f,-0.5f,0.5f, 0.5f,-0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 3 -0.5f,0.5f,0.5f, -0.5f,0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f,//face 4 0.5f,0.5f,0.5f, 0.5f,0.5f,-0.5f, -0.5f,0.5f,-0.5f, -0.5f,0.5f,0.5f,//face 5 -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f, 0.5f,-0.5f,0.5f//face 6 }; GLuint indices[] = { 0, 2, 1, 0, 3, 2, 4, 5, 6, 4, 6, 7, 8, 9, 10, 8, 10, 11, 12, 15, 14, 12, 14, 13, 16, 17, 18, 16, 18, 19, 20, 23, 22, 20, 22, 21 }; GLfloat texcoord[] = { 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0 }; glGenTextures(1, &id1); glBindTexture(GL_TEXTURE_1D, id1); glGenFramebuffers(1, &Fboid); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT , 0, GL_RGBA, GL_UNSIGNED_BYTE,0); glBindFramebuffer(GL_FRAMEBUFFER, Fboid); glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_1D,id1,0); draw_cube(); glBindFramebuffer(GL_FRAMEBUFFER, 0); draw(); } draw_cube() { glViewport(0, 0, WIDTH, HEIGHT); glClearColor(0.0f, 0.0f, 0.5f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(temp.psId,"position")); glVertexAttribPointer(glGetAttribLocation(temp.psId,"position"), 3, GL_FLOAT, GL_FALSE, 0,vertices8); glDrawArrays (GL_TRIANGLE_FAN, 0, 24); } draw() { glClearColor(1.0f, 0.0f, 0.0f, 1.0f); glClearDepth(1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"tk_position")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"tk_position"), 3, GL_FLOAT, GL_FALSE, 0,vertices); nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0,vertices);")); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"inputtexcoord")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0,texcoord); glBindTexture(*target11, id1); glDrawElements ( GL_TRIANGLES, 36,GL_UNSIGNED_INT, indices ); When I change WIDTH=HEIGHT=2 and call glReadPixels with height and width equal to 4 in draw_cube(), I can see the first 2 pixels white, the next two blue (glClearColor), the next two white, and then blue, and so on. Now, when I change the width parameter in glTexImage1D to 16, then ideally I should see alternate patches of white and blue, right? But that's not the case here. Why so?

    Read the article

  • When does depth testing happen?

    - by Utkarsh Sinha
    I'm working with 2D sprites, and I want to do 3D-style depth testing with them. When writing a pixel shader for them, I get access to the semantic DEPTH0. Would writing to this value help? It seems it doesn't. Maybe it's done before the pixel shader step? Or is depth testing only done when drawing 3D things (I'm using SpriteBatch)? Any links/articles/topics to read or search for would be appreciated.
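
    In XNA 4, SpriteBatch disables the depth test by default (DepthStencilState.None), which is why writing DEPTH0 appears to do nothing. A sketch of opting in; the test then runs against the layerDepth passed to Draw:

        spriteBatch.Begin(SpriteSortMode.Immediate,
                          BlendState.AlphaBlend,
                          SamplerState.LinearClamp,
                          DepthStencilState.Default,   // enable depth test + write
                          RasterizerState.CullCounterClockwise);

        // the last argument (0..1) becomes the sprite's depth value
        spriteBatch.Draw(texture, position, null, Color.White,
                         0f, Vector2.Zero, 1f, SpriteEffects.None, 0.5f);

        spriteBatch.End();

    Conceptually the depth test sits after the pixel shader in the pipeline; hardware may run it early as an optimization, but a shader that writes DEPTH0 forces it back to the late position.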

    Read the article

  • Get Specific depth values in Kinect (XNA)

    - by N0xus
    I'm currently trying to do hand/finger tracking with a Kinect in XNA. For this, I need to be able to specify the depth range I want my program to render. I've looked around, and I cannot see how this is done. As far as I can tell, the Kinect's depth values only work with the preset ranges found in the DepthStream. What I would like to do is make it modular so that I can change the depth range my Kinect renders. I know this has been done before, but I can't find anything online that can show me how to do this. Could someone please help me out? I have made it possible to render the standard depth view with the Kinect, and the method that I have made for converting the depth frame is as follows (I have a feeling it's something in here I need to set): private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream, int depthFrame32Length) { int tooNearDepth = depthStream.TooNearDepth; int tooFarDepth = depthStream.TooFarDepth; int unknownDepth = depthStream.UnknownDepth; byte[] depthFrame32 = new byte[depthFrame32Length]; for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4) { int player = depthFrame[i16] & DepthImageFrame.PlayerIndexBitmask; int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth; // transform 13-bit depth information into an 8-bit intensity appropriate // for display (we disregard information in most significant bit) byte intensity = (byte)(~(realDepth >> 8)); if (player == 0 && realDepth == 00) { // white depthFrame32[i32 + RedIndex] = 255; depthFrame32[i32 + GreenIndex] = 255; depthFrame32[i32 + BlueIndex] = 255; } // omitted other if statements; they simply changed the color of the pixels if they went outside the pre-set depth values else { // tint the intensity by dividing by per-player values depthFrame32[i32 + RedIndex] = (byte)(intensity >> IntensityShiftByPlayerR[player]); depthFrame32[i32 + GreenIndex] = (byte)(intensity >> IntensityShiftByPlayerG[player]); depthFrame32[i32 + BlueIndex] = (byte)(intensity >> IntensityShiftByPlayerB[player]); } } return depthFrame32; } I have a strong hunch it's something I need to change in the int player and int realDepth values, but I can't be sure.
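
    A sketch of the modular-range idea inside the loop above: choose your own window in millimetres and stretch it onto the 8-bit intensity range, treating everything outside as background. minDepth/maxDepth are hypothetical fields you would expose as settings:

        int minDepth = 800, maxDepth = 1200;   // user-adjustable window, in mm

        if (realDepth < minDepth || realDepth > maxDepth)
        {
            // outside the window: draw as black background
            depthFrame32[i32 + RedIndex]   = 0;
            depthFrame32[i32 + GreenIndex] = 0;
            depthFrame32[i32 + BlueIndex]  = 0;
        }
        else
        {
            // stretch [minDepth, maxDepth] onto the full 0-255 range
            byte windowed = (byte)(255 * (realDepth - minDepth) / (maxDepth - minDepth));
            depthFrame32[i32 + RedIndex]   = windowed;
            depthFrame32[i32 + GreenIndex] = windowed;
            depthFrame32[i32 + BlueIndex]  = windowed;
        }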

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and the extent. Simply transforming all occluded vertices to the depths as defined by the 'hull' is fairly effective, and has good performance, but it suffers the problem of not preserving the features of the mesh and requires extensive culling to avoid false-positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need to have complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but do not know where to start doing this. Can anyone suggest a suitable filter or algorithm for generating deformers? Or to put it another way 'compressing' a depth map? (*Push because its fitting to a convex 'bulgy' humanoid so transforms are likely to be 'spherical' from the POV of the surface.)

    Read the article

  • using heightmap to simulate 3d in an isometric 2d game

    - by VaTTeRGeR
    I saw a video of a 2.5D engine that used heightmaps to do z-buffering. Is this hard to do? I have more or less no idea of OpenGL (LWJGL) and that stuff. I could imagine that you compare each pixel and its depth-map value against the depth map of the already-drawn background to determine whether it gets drawn or not. Are there any tutorials on how to do this? Is this a common problem? It would already be awesome if somebody knew the names of the OpenGL commands, so that I can go through some general tutorials on them. Greets! (Great 2.5D engine with the needed effect: please go to the last 30 seconds of the video.) Edit: I just realised that my question wasn't expressed quite clearly: how can I tell OpenGL to compare the existing depth buffer with a grayscale texture, to determine whether a pixel should get drawn or not?
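
    One common way to do exactly that with LWJGL-era GLSL (all names are made up): write the grayscale height/depth texture into gl_FragDepth, and the ordinary hardware depth test, enabled with glEnable(GL_DEPTH_TEST), then compares it per pixel against whatever is already in the depth buffer.

        // fragment shader for an isometric sprite with a depth map
        uniform sampler2D spriteTexture;  // the sprite's colors
        uniform sampler2D depthTexture;   // grayscale "how far back" per pixel
        uniform float baseDepth;          // depth of the sprite's anchor, in [0,1]
        varying vec2 uv;

        void main() {
            vec4 color = texture2D(spriteTexture, uv);
            if (color.a < 0.01) discard;  // transparent pixels never occlude
            // offset the anchor depth by the per-pixel height value
            gl_FragDepth = baseDepth + texture2D(depthTexture, uv).r * 0.1;
            gl_FragColor = color;
        }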

    Read the article

  • How do i use GraphMLReader2 in Jung?

    - by askus
    I want to use class GraphMLReader to read a Undirected Graph from graphML with JUNG2.0. The code is as follow: import edu.uci.ics.jung.io.*; import edu.uci.ics.jung.io.graphml.*; import java.io.*; import java.util.*; import org.apache.commons.collections15.Transformer; import edu.uci.ics.jung.graph.*; class Vertex{ int id; String type; String value; } class Edge{ int id ; String type; String value; } public class Loader{ static String src = "test.xsl"; public static void Main( String[] args){ Reader reader = new FileReader(src ); Transformer<NodeMetadata, Vertex> vtrans = new Transformer<NodeMetadata,Vertex>(){ public Vertex transform(NodeMetadata nmd ){ Vertex v = new Vertex() ; v.type = nmd.getProperty("type"); v.value = nmd.getProperty("value"); v.id = Integer.valueOf( nmd.getId() ); return v; } }; Transformer<EdgeMetadata, Edge> etrans = new Transformer<EdgeMetadata,Edge>(){ public Edge transform( EdgeMetadata emd ){ Edge e = new Edge() ; e.type = emd.getProperty("type"); e.value = emd.getProperty("value"); e.id = Integer.valueOf( emd.getId() ); return e; } }; Transformer<HyperEdgeMetadata, Edge> hetrans = new Transformer<HyperEdgeMetadata,Edge>(){ public Edge transform( HyperEdgeMetadata emd ){ Edge e = new Edge() ; e.type = emd.getProperty("type"); e.value = emd.getProperty("value"); e.id = Integer.valueOf( emd.getId() ); return e; } }; Transformer< GraphMetadata , UndirectedSparseGraph> gtrans = new Transformer<GraphMetadata,UndirectedSparseGraph>(){ public UndirectedSparseGraph<Vertex,Edge> transform( GraphMetadata gmd ){ return new UndirectedSparseGraph<Vertex,Edge>(); } }; GraphMLReader2< UndirectedSparseGraph<Vertex,Edge> , Vertex , Edge> gmlr = new GraphMLReader2< UndirectedSparseGraph<Vertex,Edge> ,Vertex, Edge>( reader, gtrans, vtrans, etrans, hetrans); UndirectedSparseGraph<Vertex,Edge> g = gmlr.readGraph(); return ; } } However, compiler alert that: Loader.java:60: cannot find symbol symbol : constructor GraphMLReader2(java.io.Reader,org.apache.commons.collections15.Transformer<edu.uci.ics.jung.io.graphml.GraphMetadata,edu.uci.ics.jung.graph.UndirectedSparseGraph>,org.apache.commons.collections15.Transformer<edu.uci.ics.jung.io.graphml.NodeMetadata,Vertex>,org.apache.commons.collections15.Transformer<edu.uci.ics.jung.io.graphml.EdgeMetadata,Edge>) location: class edu.uci.ics.jung.io.graphml.GraphMLReader2<edu.uci.ics.jung.graph.UndirectedSparseGraph<Vertex,Edge>,Vertex,Edge> new GraphMLReader2< UndirectedSparseGraph<Vertex,Edge> ,Vertex, Edge>( ^ 1 error How can i solve this problem? Thanks.
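
    A hedged reading of that error: the compiler prints the graph transformer's type as Transformer<GraphMetadata, UndirectedSparseGraph>, a raw type, because the declaration of gtrans omits the type parameters, so no constructor overload matches. Declaring it fully parameterized should line the types up:

        Transformer<GraphMetadata, UndirectedSparseGraph<Vertex, Edge>> gtrans =
            new Transformer<GraphMetadata, UndirectedSparseGraph<Vertex, Edge>>() {
                public UndirectedSparseGraph<Vertex, Edge> transform(GraphMetadata gmd) {
                    return new UndirectedSparseGraph<Vertex, Edge>();
                }
            };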

    Read the article
