Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 57/151 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Implementing invisible bones

    - by DeadMG
    I suddenly have the feeling that I have absolutely no idea how to implement invisible objects/bones. Right now, I use hardware instancing to store the world matrix of every bone in a vertex buffer, and then send them all to the pipeline. But frustum culling, or having bones set to invisible by my simulation for other reasons, means that some of them will be invisible at any given time. Does this mean I effectively need to re-fill the buffer from scratch every frame with only the visible units' matrices? This seems to me like it would involve a lot of wasted bandwidth.
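
    For illustration, the per-frame "refill with only the visible matrices" approach mentioned above can be sketched on the CPU side in C++ as follows. This is only a rough sketch: the Mat4 and Instance types are hypothetical, and the buffer-update call named in the closing comment is just one common option, not necessarily the API the question is using.

        #include <vector>

        // Hypothetical types for illustration only.
        struct Mat4 { float m[16]; };
        struct Instance { Mat4 world; bool visible; };

        // Gather only the world matrices of visible instances into a contiguous
        // array; this array is what gets re-uploaded to the instance buffer.
        std::vector<Mat4> CompactVisible(const std::vector<Instance>& instances)
        {
            std::vector<Mat4> visible;
            visible.reserve(instances.size());
            for (const Instance& inst : instances)
                if (inst.visible)
                    visible.push_back(inst.world);
            return visible;
        }

        // Each frame the compacted vector is copied into a dynamic buffer, e.g. via
        // glBufferSubData or a Map(DISCARD)/memcpy/Unmap, and only visible.size()
        // instances are drawn, so the upload cost is proportional to what is visible.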

    Read the article

  • Cube chunk via list ToArray()

    - by Christian Frantz
    I've created a list of vertices that I add to for each cube made in my array "cubes". When each cube is created, SetUpVertices is called, a method that stores the 8 vertices of my cube. At the end of my list creation, I create a vertex buffer and set the data of the list that contains the vertices of all 25 cubes to that vertex buffer, effectively creating a "chunk" of cubes. The problem is that an InvalidOperationException ("The array is not the correct size for the amount of data requested.") is thrown at the line vertices.ToArray(). I don't have an array for this, as the number of cubes will be changing and arrays aren't dynamic. What could be the cause of this? for (int x = 0; x < 5; x++) { for (int z = 0; z < 5; z++) { SetUpVertices(); cubes.Add(new Cube(device, new Vector3(x, map[x, z], z), color)); } } vertexBuffer = new VertexBuffer(device, typeof(VertexPositionColor), 8, BufferUsage.WriteOnly); vertexBuffer.SetData<VertexPositionColor>(vertices.ToArray());

    Read the article

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function.) I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions. Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights are going to be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define in a shader some number of lights to be used, right? So if I want to stick with 8, should I write my general purpose shader to have 8 lights and then use uniforms to tell it how many / which lights to use? Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors." If I've got a fragment shader being used on some number of fragments in a given triangle, I assume they must each have their own stack to work on. Are read-only variables copied here, or read when needed? My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix and apply all my translations, rotations, and scales to this matrix, then provide the matrix to the vertex shader so that it can make very quick vertex translations. Is this approach sound? Edit: My original intention was to ask if there was a tutorial that would explain the bare minimum necessary to jump from fixed-function to using shaders. Thanks!
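
    For reference, the "fixed-size light array plus a count uniform" idea and the "CPU-built matrix handed to the vertex shader" idea both boil down to a couple of glUniform calls per draw on the OpenGL ES 2.0 side. A minimal sketch, assuming an already-linked program handle prog and hypothetical uniform names that would have to match whatever the shader actually declares:

        #include <GLES2/gl2.h>

        // 'prog' is an already linked GLSL program; the uniform names below are
        // made up for illustration.
        void SetPerDrawUniforms(GLuint prog, const float mvp[16], int numActiveLights)
        {
            glUseProgram(prog);

            // Hand the matrix built on the CPU-side matrix stack to the vertex shader.
            GLint mvpLoc = glGetUniformLocation(prog, "u_ModelViewProjection");
            glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);

            // Tell the shader how many entries of its fixed-size light array to use.
            GLint countLoc = glGetUniformLocation(prog, "u_NumLights");
            glUniform1i(countLoc, numActiveLights);
        }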

    Read the article

  • How do I use the latest PulseAudio in 11.10?

    - by YumYumYum
    On Ubuntu 11.04 I had PulseAudio compiled from source (git versions); I used it to learn and it always worked. But since moving to Ubuntu 11.10, I can install it but I can no longer use it the way I did in 11.04. Every time I play something it throws this: $ speaker-test speaker-test 1.0.24.2 Playback device is default Stream parameters are 48000Hz, S16_LE, 1 channels Using 16 octaves of pink noise Rate set to 48000Hz (requested 48000Hz) Buffer size range from 192 to 2097152 Period size range from 64 to 699051 Using max buffer size 2097152 Periods = 4 ALSA lib pcm_pulse.c:746:(pulse_prepare) PulseAudio: Unable to create stream: Invalid argument Unable to set hw params for playback: Input/output error Setting of hwparams failed: Input/output error How do I make PulseAudio built from source work in 11.10?

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum: using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   BitmapToDisplay.Lock(); _ColorFrame.CopyConvertedFrameDataToIntPtr( BitmapToDisplay.BackBuffer, Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ), ColorImageFormat.Bgra ); BitmapToDisplay.AddDirtyRect( new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) ); BitmapToDisplay.Unlock(); } With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage. 
That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks were enough for most cases. This yielded the following code: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) ); }   private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) { if( null == e.FrameReference ) return;   // If you do not dispose of the frame, you never get another one... using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   Task.Factory.StartNew( () => { ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   Application.Current.Dispatcher.Invoke( () => { BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } ); } ); } } This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. This code looked like this: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? 
byte.MaxValue : p_ValueToClip ) ); }   void CompositionTarget_Rendering( object sender, EventArgs e ) { using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } }

    Read the article

  • Low Graphics and laggy screen after update to 13.10 from 13.04

    - by Wh0RU
    After updating from 13.04 to 13.10, much of the Unity 3D stuff has stopped working. I am using a Samsung Series 3 (NP350V5X) laptop which has switchable Intel and AMD Radeon HD 7670M graphics. xserver-xorg-video-ati does work, but with NO 3D support and very low graphics. [I am currently using this] fglrx & fglrx-updates show a blank screen after login. The Intel graphics installer doesn't work either (dependency error). Output of $ sudo lshw -c video *-display description: VGA compatible controller product: Thames [Radeon HD 7500M/7600M Series] vendor: Advanced Micro Devices, Inc. [AMD/ATI] physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=radeon latency=0 resources: irq:45 memory:e0000000-efffffff memory:c0120000-c013ffff ioport:3000(size=256) memory:c0100000-c011ffff *-display description: VGA compatible controller product: 3rd Gen Core processor Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:46 memory:bfc00000-bfffffff memory:d0000000-dfffffff ioport:4000(size=64) Similarly, $ /usr/lib/nux/unity_support_test -p OpenGL vendor string: VMware, Inc. OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.3, 256 bits) OpenGL version string: 2.1 Mesa 9.2.1 Not software rendered: no Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: no shows no 3D support. Can anyone please advise how to get things working again?

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to move my code from issuing each draw call individually to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error: The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing. This makes absolutely no sense to me, as my VertexDeclaration is: public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0), new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0) ); which clearly contains that information. I am attempting to draw with the following lines: VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration, c.VertexList.Count, BufferUsage.WriteOnly); IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly); vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray()); ib.SetData<int>(c.IndexList.ToArray()); GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count/3); where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in Lines 230-250). There is not much else of relevance to this problem anywhere else. Note that the large commented sections are from an old version of the drawing code.

    Read the article

  • Colored Collision Detection

    - by tugrul büyükisik
    Several years ago, I made a fast collision detection scheme for 2D: it just checked the color of a bullet's front pixel to see whether it was about to hit something. Let's say the target RGB color is (124,200,255); then it just checks for that color. After the collision detection, it paints the target with the appropriate picture. So the collision detection is done in the background without drawing, and the result is then painted and drawn. How can I do this in 3D? A vertex doesn't simply exist the way a 2D picture's pixel does. I looked at Java3D and some other programs and understood that a 3D world is made of objects, not just pictures. Is there a program that truly fills the world with vertices? But that could need terabytes of RAM or even more. Do you know an easy way to interpolate the color of a vertex in Java3D or a similar program? Note: with an RGB color identifier, I can make 255*255*255 different 2D objects in the background.
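
    The 2D trick being described has a well-known 3D analogue, usually called color picking: render each object with a unique flat color into an off-screen pass, then read back the pixel of interest. The following is only an OpenGL-flavored sketch of that idea (not Java3D-specific), assuming such a pass has just been rendered without lighting or anti-aliasing:

        #include <GL/gl.h>

        // Read the color under a window coordinate and decode it back to an object
        // index. Assumes every object was just drawn with a unique flat RGB color.
        int PickObjectAt(int x, int y, int windowHeight)
        {
            unsigned char pixel[4] = {0, 0, 0, 0};

            // OpenGL window coordinates put the origin at the bottom-left corner.
            glReadPixels(x, windowHeight - y - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

            // Recombine R, G and B into the 24-bit identifier used while drawing,
            // giving roughly the 255*255*255 ID space mentioned above.
            return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
        }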

    Read the article

  • Unity on Ubuntu 11.10 - The Dash Home button brings up the panel, but is empty

    - by David M. Coe
    The dash home button brings up a panel that is greyed out, but it is totally empty. It seems to be the very same issue as this: Dash home button brings up blank window, which is unanswered. /usr/lib/nux/unity_support_test -p returns OpenGL vendor string: X.Org R300 Project OpenGL renderer string: Gallium 0.4 on ATI RV370 OpenGL version string: 2.1 Mesa 7.11 Not software rendered: yes Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: yes I've tried unity --reset but that doesn't seem to work. Unity seems to reset, but I get the following warning over and over: cs space validation failed unity What should I do next to try and fix this? Edit: Attempted fixes: I've reformatted; that did not work. I've done apt-get remove unity, then apt-get update, then apt-get install unity; that did not work. I've switched to Unity 2D and this seems to work. How can I get regular Unity working, or at least find the error?

    Read the article

  • Multiple vulnerabilities in Wireshark

    - by RitwikGhoshal
    Component: Wireshark. Product and Resolution: Solaris 11 11/11 SRU 13.4.
    CVE | Description | CVSSv2 Base Score
    CVE-2012-4285 | Numeric Errors vulnerability | 3.3
    CVE-2012-4286 | Numeric Errors vulnerability | 4.3
    CVE-2012-4287 | Resource Management Errors vulnerability | 5.0
    CVE-2012-4288 | Numeric Errors vulnerability | 3.3
    CVE-2012-4289 | Resource Management Errors vulnerability | 3.3
    CVE-2012-4290 | Resource Management Errors vulnerability | 3.3
    CVE-2012-4291 | Resource Management Errors vulnerability | 3.3
    CVE-2012-4292 | Improper Input Validation vulnerability | 3.3
    CVE-2012-4293 | Numeric Errors vulnerability | 3.3
    CVE-2012-4294 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 5.8
    CVE-2012-4295 | Denial of Service (DoS) vulnerability | 3.3
    CVE-2012-4296 | Resource Management Errors vulnerability | 3.3
    CVE-2012-4297 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 8.3
    CVE-2012-4298 | Numeric Errors vulnerability | 5.4
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • View space lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-Buffers, storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use a sphere-map algorithm to store the normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value. First, I want to avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-Buffer without the need for a matrix multiplication? Second, I want to store the normal in view space: all lighting in my engine is in world space, and I want to do the lighting in view space to speed up my lighting pass. I want an optimized lighting pass for my deferred engine.

    Read the article

  • NVidia control panel SSAO not working

    - by János Turánszki
    I am about to implement screen-space ambient occlusion in my game, but first I wanted to try enabling it from the NVidia control panel, only to find that it is greyed out so I cannot enable it. The control panel lets me enable SSAO for some other games, but not for every one. I know this technique requires the depth buffer and (optionally) a normal map texture to sample information from, which I already have access to, given that I have a deferred renderer working. After that I thought of rolling back to a previous version of my game which still uses forward rendering, so the depth buffer is bound to the back buffer I render to from the get-go, in the hope that the NVidia control panel would somehow make use of it. It did not work with forward rendering either. (I also tried FXAA in the control panel and that works - but it doesn't need any depth or normal texture.) So my question is: how can I enable this feature so that it works when turned on in the NVidia control panel?

    Read the article

  • Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket - (TOTD #185)

    - by arungupta
    The WebSocket API defines different send(xxx) methods that can be used to send text and binary data. This Tip Of The Day (TOTD) will show how to send and receive text and binary data using WebSocket. TOTD #183 explains how to get started with a WebSocket endpoint using GlassFish 4. A simple endpoint from that blog looks like: @WebSocketEndpoint("/endpoint") public class MyEndpoint { public void receiveTextMessage(String message) { . . . } } A method with a first parameter of type String is invoked when a text payload is received. The payload of the incoming WebSocket frame is mapped to this first parameter. An optional second parameter, Session, can be specified to map to the "other end" of this conversation. For example: public void receiveTextMessage(String message, Session session) {     . . . } The return type is void and that means no response is returned to the client that invoked this endpoint. A response may be returned to the client in two different ways. First, set the return type to the expected type, such as: public String receiveTextMessage(String message) { String response = . . . . . . return response; } In this case a text payload is returned back to the invoking endpoint. The second way to send a response back is to use the mapped session to send a response using one of the sendXXX methods in Session, when and if needed. public void receiveTextMessage(String message, Session session) {     . . .     RemoteEndpoint remote = session.getRemote();     remote.sendString(...);     . . .     remote.sendString(...);    . . .    remote.sendString(...); } This shows how duplex and asynchronous communication between the two endpoints can be achieved. This can be used to define different message exchange patterns between the client and server. The WebSocket client can send the message as: websocket.send(myTextField.value); where myTextField is a text field in the web page. A binary payload in the incoming WebSocket frame can be received if ByteBuffer is used as the first parameter of the method signature. The endpoint method signature in that case would look like: public void receiveBinaryMessage(ByteBuffer message) {     . . . } From the client side, the binary data can be sent using Blob, ArrayBuffer, and ArrayBufferView. Blob is just raw data and the actual interpretation is left to the application. ArrayBuffer and ArrayBufferView are defined in the TypedArray specification and are designed to send binary data using WebSocket. In short, ArrayBuffer is a fixed-length binary buffer with no format and no mechanism for accessing its contents. These buffers are manipulated using one of the views defined by one of the subclasses of ArrayBufferView listed below: Int8Array (signed 8-bit integer or char) Uint8Array (unsigned 8-bit integer or unsigned char) Int16Array (signed 16-bit integer or short) Uint16Array (unsigned 16-bit integer or unsigned short) Int32Array (signed 32-bit integer or int) Uint32Array (unsigned 32-bit integer or unsigned int) Float32Array (signed 32-bit float or float) Float64Array (signed 64-bit float or double) WebSocket can send binary data using ArrayBuffer with a view defined by a subclass of ArrayBufferView or a subclass of ArrayBufferView itself. The WebSocket client can send the message using Blob as: blob = new Blob([myField2.value]);websocket.send(blob); where myField2 is a text field in the web page. 
The WebSocket client can send the message using ArrayBuffer as: var buffer = new ArrayBuffer(10);var bytes = new Uint8Array(buffer);for (var i=0; i<bytes.length; i++) { bytes[i] = i;}websocket.send(buffer); A concrete implementation of receiving the binary message may look like: @WebSocketMessagepublic void echoBinary(ByteBuffer data, Session session) throws IOException {    System.out.println("echoBinary: " + data);    for (byte b : data.array()) {        System.out.print(b);    }    session.getRemote().sendBytes(data);} This method just prints the binary data for verification, but you may actually be storing it in a database or converting it to an image or something more meaningful. Be aware of TYRUS-51 if you are trying to send binary data from server to client using a method return type. Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds) TOTD #183 - Getting Started with WebSocket in GlassFish TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark Subsequent blogs will discuss the following topics (not necessarily in that order) ... Error handling Custom payloads using encoder/decoder Interface-driven WebSocket endpoint Java client API Client and Server configuration Security Subprotocols Extensions Other topics from the API

    Read the article

  • Multiple vulnerabilities in ImageMagick

    - by chandan
    Component: ImageMagick. Product and Resolution: Solaris 10 SPARC: 136882-03, X86: 136883-03.
    CVE | Description | CVSSv2 Base Score
    CVE-2004-0981 | Buffer overflow vulnerability | 10.0
    CVE-2005-0397 | Format string vulnerability | 7.5
    CVE-2005-0759 | Denial of service (DoS) vulnerability | 5.0
    CVE-2005-0760 | Denial of service (DoS) vulnerability | 5.0
    CVE-2005-0761 | Denial of service (DoS) vulnerability | 5.0
    CVE-2005-0762 | Buffer overflow vulnerability | 7.5
    CVE-2005-1739 | Denial of service (DoS) vulnerability | 5.0
    CVE-2007-4985 | Denial of service (DoS) vulnerability | 4.3
    CVE-2007-4986 | Numeric Errors vulnerability | 6.8
    CVE-2007-4987 | Numeric Errors vulnerability | 9.3
    CVE-2007-4988 | Numeric Errors vulnerability | 6.8
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • OpenGL VBOs are slower than glDrawArrays.

    - by Arelius
    So, this seems odd to me. I upload a large buffer of vertices, then every frame I call glBindBuffer and then the appropriate gl*Pointer functions with offsets into the buffer, then I use glDrawArrays to draw all of my triangles. I'm only drawing about 100K triangles, however I'm getting about 15 FPS. This is where it gets weird: if I change it to not call glBindBuffer, then change the gl*Pointer calls to be actual pointers into the array I have in system memory, and then call glDrawArrays the same way, my framerate jumps up to about 50 FPS. Any idea what weird thing I could be doing that would cause this? Did I maybe forget to call glEnable(GL_ALLOW_VBOS_TO_RUN_FAST) or something?
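
    For comparison, a typical fixed-function VBO path of the kind described in the question looks roughly like this. The stride and offset values are placeholders, and an extension loader (GLEW here) is assumed to be initialized; this is an illustrative sketch, not a diagnosis of the slowdown.

        #include <GL/glew.h>
        #include <cstddef>

        // Upload the vertex data once, at load time.
        void UploadVertices(GLuint vbo, const float* data, std::size_t bytes)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
        }

        // Every frame: bind the buffer, point the client array at an offset *inside*
        // the buffer (not at system memory), and draw.
        void DrawFromVBO(GLuint vbo, GLsizei vertexCount, GLsizei stride, std::size_t posOffset)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, stride, reinterpret_cast<const void*>(posOffset));
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            glDisableClientState(GL_VERTEX_ARRAY);
        }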

    Read the article

  • Graphics hardware warning when updating to 14.04

    - by pacomet
    As I use Ubuntu at work I only update to LTS versions, but now I'm not sure if I can/should. As my computer is now ten years old I would change it if it were mine, but as it is owned by my employer I have to work with it. It's not a bad one; it runs fine (this was not true when it still had Windows on it ;-). When updating to 14.04, it warns about possible bad/slow performance with Unity 3D, so I stopped the update, as I am at work and it is not my own computer. As I understand from http://askubuntu.com/a/438958/25305, the Nvidia GeForce FX 5500 graphics card is still supported in 14.04. Now, in 12.04, I have driver version 173 and Unity 2D runs fine for me. Output of /usr/lib/nux/unity_support_test -p OpenGL vendor string: NVIDIA Corporation OpenGL renderer string: GeForce FX 5500/AGP/SSE2 OpenGL version string: 2.1.2 NVIDIA 173.14.39 Not software rendered: yes Not blacklisted: no GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: no Should I update? Is it better to stay with 12.04? Thanks

    Read the article

  • Use ctrl+space to invoke clang_complete

    - by tsurko
    I've set up a simple Vim environment for C++ development and I use clang_complete for code completion. I'm wondering if there is a way to invoke clang_complete with ctrl+space (as in Eclipse, for example)? Currently it is invoked with C-X C-U, which is not very convenient. In the plugin code I saw this: inoremap <expr> <buffer> <C-X><C-U> <SID>LaunchCompletion() So I tried something like this in my vimrc: inoremap <expr> <buffer> <C-Space> <SID>LaunchCompletion() Of course it didn't work :) I read Vim's documentation about key mapping, but no luck. Have you got any suggestions about what I'm doing wrong?

    Read the article

  • How can I support scrolling when using batched rendering for my tiles?

    - by dardanel
    I have a tiled map of 100*75 tiles, and the tiles are 32*32 pixels. I want to use batching for performance, but I can't figure it out, because my game needs scrolling and every frame I draw 22*16 tiles (my screen is 20*16 tiles). I thought about batching the tiles every frame. Is that good, or is there a better suggestion? Edit, to clarify: I want to use occlusion culling and batching at the same time. I thought about drawing only the visible areas and batching them together, but there is something I couldn't figure out. When scrolling the screen with a translate matrix, if one row becomes invisible, I bind a new row and batch the tiles again. Every batched object needs to be buffered again, so I batch the tiles and upload them to the VBO every time a row becomes invisible. I don't know whether this approach is efficient or not. This is my question, and I am open to any suggestions.
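
    One way to read the approach described above, as a rough, API-light C++ sketch: keep one batch for the visible 22*16 tile window, rebuild it only when scrolling brings a new row or column into view, and re-upload that batch to the VBO at that point rather than every frame. The TileVertex layout and the fixed UVs are placeholders, not code from the question.

        #include <vector>

        struct TileVertex { float x, y, u, v; };

        // Append the four corners of one 32x32 tile to the batch.
        void AppendTile(std::vector<TileVertex>& batch, int tx, int ty,
                        float u0, float v0, float u1, float v1)
        {
            float x = tx * 32.0f, y = ty * 32.0f;
            batch.push_back({x,         y,         u0, v0});
            batch.push_back({x + 32.0f, y,         u1, v0});
            batch.push_back({x + 32.0f, y + 32.0f, u1, v1});
            batch.push_back({x,         y + 32.0f, u0, v1});
        }

        // Rebuild the batch for the currently visible 22x16 tile window. Called only
        // when the window shifts by at least one tile, not every frame; the result
        // is then uploaded to the VBO in one go.
        std::vector<TileVertex> BuildVisibleBatch(int firstTileX, int firstTileY)
        {
            std::vector<TileVertex> batch;
            batch.reserve(22 * 16 * 4);
            for (int ty = 0; ty < 16; ++ty)
                for (int tx = 0; tx < 22; ++tx)
                    AppendTile(batch, firstTileX + tx, firstTileY + ty,
                               0.0f, 0.0f, 1.0f, 1.0f); // placeholder UVs
            return batch;
        }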

    Read the article

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what directx is complaining about. I know this error normally means you put float3 instead of a float4 or something like that, but I've checked over and over and as far as I can tell, everything matches. This is the full error message: D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ] This is the vertex shader's input signature as seen in PIX: // Input signature: // // Name Index Mask Register SysValue Format Used // -------------------- ----- ------ -------- -------- ------ ------ // POSITION 0 xyz 0 NONE float xyz // NORMAL 0 xyz 1 NONE float // COLOR 0 xyzw 2 NONE float The HLSL structure looks like this: struct VertexShaderInput { float3 Position : POSITION0; float3 Normal : NORMAL0; float4 Color: COLOR0; }; The input layout, from PIX, is: The C# structure holding the data looks like this: [StructLayout(LayoutKind.Sequential)] public struct PositionColored { public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored)); public static InputElement[] InputElements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0), new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0) }; Vector3 position; Vector3 normal; Vector4 color; #region Properties ... #endregion public PositionColored(Vector3 position, Vector3 normal, Vector4 color) { this.position = position; this.normal = normal; this.color = color; } public override string ToString() { StringBuilder sb = new StringBuilder(base.ToString()); sb.Append(" Position="); sb.Append(position); sb.Append(" Color="); sb.Append(Color); return sb.ToString(); } } SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?

    Read the article

  • Multiple vulnerabilities in Firefox web browser

    - by chandan
    Component: Firefox web browser. Product and Resolution: Solaris 11 11/11 SRU 9.5; Solaris 10 SPARC: 145080-11, X86: 145081-10.
    CVE | Description | CVSSv2 Base Score
    CVE-2011-3062 | Numeric Errors vulnerability | 6.8
    CVE-2012-0467 | Denial of service (DoS) vulnerability | 10.0
    CVE-2012-0468 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-0469 | Resource Management Errors vulnerability | 10.0
    CVE-2012-0470 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-0471 | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability | 4.3
    CVE-2012-0473 | Numeric Errors vulnerability | 5.0
    CVE-2012-0474 | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability | 4.3
    CVE-2012-0477 | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability | 4.3
    CVE-2012-0478 | Permissions, Privileges, and Access Controls vulnerability | 9.3
    CVE-2012-0479 | Identity spoofing vulnerability | 4.3
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL. Right now I finished the first chapter of the "OpenGL SuperBible". There were two examples. The first had the complete code and showed how to draw a simple triangle. The second example is supposed to show how to move a rectangle using SpecialKeys. The only code provided for this example was the SpecialKeys method. I still tried to implement it but I had two problems. In the previous example I declared and instantiated vVerts in the SetupRC() method. Now, as it is also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the code. Is this proper C++ practice? I copied the part where vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. So now, the first time I press a key, the rectangle's upper-left vertex is moved to (-0.5, -0.5). This is because of GLfloat blockX = vVerts[0]; //Upper left X GLfloat blockY = vVerts[7]; // Upper left Y But I also think that this is the reason why my rectangle is shifted in the beginning. After a key has been pressed once, everything works just fine. Here is my complete code. I hope you can help me on those two points. GLBatch squareBatch; GLShaderManager shaderManager; //Load up a triangle GLfloat vVerts[] = {-0.5f,0.5f,0.0f, 0.5f,0.5f,0.0f, 0.5f,-0.5f,0.0f, -0.5f,-0.5f,0.0f}; //Window has changed size, or has just been created. //We need to use the window dimensions to set the viewport and the projection matrix. void ChangeSize(int w, int h) { glViewport(0,0,w,h); } //Called to draw the scene. void RenderScene(void) { //Clear the window with the current clearing color glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT); GLfloat vRed[] = {1.0f,0.0f,0.0f,1.0f}; shaderManager.UseStockShader(GLT_SHADER_IDENTITY,vRed); squareBatch.Draw(); //perform the buffer swap to display the back buffer glutSwapBuffers(); } //This function does any needed initialization on the rendering context. //This is the first opportunity to do any OpenGL related Tasks. void SetupRC() { //Blue Background glClearColor(0.0f,0.0f,1.0f,1.0f); shaderManager.InitializeStockShaders(); squareBatch.Begin(GL_QUADS,4); squareBatch.CopyVertexData3f(vVerts); squareBatch.End(); } //Respond to arrow keys by moving the camera frame of reference void SpecialKeys(int key,int x,int y) { GLfloat stepSize = 0.025f; GLfloat blockSize = 0.5f; GLfloat blockX = vVerts[0]; //Upper left X GLfloat blockY = vVerts[7]; // Upper left Y if(key == GLUT_KEY_UP) { blockY += stepSize; } if(key == GLUT_KEY_DOWN){blockY -= stepSize;} if(key == GLUT_KEY_LEFT){blockX -= stepSize;} if(key == GLUT_KEY_RIGHT){blockX += stepSize;} //Recalculate vertex positions vVerts[0] = blockX; vVerts[1] = blockY - blockSize*2; vVerts[3] = blockX + blockSize * 2; vVerts[4] = blockY - blockSize *2; vVerts[6] = blockX+blockSize*2; vVerts[7] = blockY; vVerts[9] = blockX; vVerts[10] = blockY; squareBatch.CopyVertexData3f(vVerts); glutPostRedisplay(); } //Main entry point for GLUT based programs int main(int argc, char** argv) { //Sets the working directory. Not really needed gltSetWorkingDirectory(argv[0]); //Passes along the command-line parameters and initializes the GLUT library. glutInit(&argc,argv); //Tells the GLUT library what type of display mode to use, when creating the window. 
//Double buffered window, RGBA-Color mode,depth-buffer as part of our display, stencil buffer also available glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA|GLUT_DEPTH|GLUT_STENCIL); //Window size glutInitWindowSize(800,600); glutCreateWindow("MoveRect"); glutReshapeFunc(ChangeSize); glutDisplayFunc(RenderScene); glutSpecialFunc(SpecialKeys); //initialize GLEW library GLenum err = glewInit(); //Check that nothing goes wrong with the driver initialization before we try and do any rendering. if(GLEW_OK != err) { fprintf(stderr,"Glew Error: %s\n",glewGetErrorString); return 1; } SetupRC(); glutMainLoop(); return 0; }

    Read the article

  • What is the correct way to reset and load new data into GL_ARRAY_BUFFER?

    - by Geto
    I am using an array buffer for color data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing: glBindVertexArray(vao); glBindBuffer(GL_ARRAY_BUFFER, colorBuffer); glBufferData(GL_ARRAY_BUFFER, SIZE, colorsData, GL_STATIC_DRAW); glEnableVertexAttribArray(shader->attrib("color")); glVertexAttribPointer(shader->attrib("color"), 3, GL_FLOAT, GL_TRUE, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, 0); It works, but I am not sure if this is a good and efficient way to do it. What happens to the previous data? Does it write on top of it? Do I need to call: glDeleteBuffers(1, colorBuffer); glGenBuffers(1, colorBuffer); before transferring the new data into the buffer?
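
    For context, two update idioms commonly contrasted with the snippet above are sketched below: glBufferSubData, which overwrites the existing storage, and "orphaning", where glBufferData with a null pointer asks the driver for fresh storage of the same size before the new data is written. This assumes colorBuffer already exists and SIZE does not change; it is an illustration of the patterns, not a statement of which one this engine should use.

        #include <GL/glew.h>

        // Variant 1: overwrite the existing data store in place.
        void UpdateColorsInPlace(GLuint colorBuffer, const float* colorsData, GLsizeiptr size)
        {
            glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
            glBufferSubData(GL_ARRAY_BUFFER, 0, size, colorsData);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

        // Variant 2: orphan the old data store, then fill the fresh one. Note that
        // glBufferData itself allocates a new data store and releases the previous
        // one, so no glDeleteBuffers/glGenBuffers pair is needed between uploads.
        void UpdateColorsOrphaned(GLuint colorBuffer, const float* colorsData, GLsizeiptr size)
        {
            glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
            glBufferData(GL_ARRAY_BUFFER, size, nullptr, GL_DYNAMIC_DRAW); // orphan
            glBufferSubData(GL_ARRAY_BUFFER, 0, size, colorsData);         // refill
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }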

    Read the article

  • How to display consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. In order to avoid drawing the background every frame, you could draw it once, then do nothing. However, if something is drawn over the surface and keeps moving, you will need to redraw the background in order to "erase" the color pixels left by the moving object; otherwise, you will have "traces" of the moving object. I have a moving object in my PyGame program. However, I do not want to "clear the color buffer" by redrawing the whole background image; redrawing the background image every frame is slow. My solution: I will "clear" only the required portions of the "buffer" (where the "traces" of the moving object are left) by redrawing those portions of the background. Is there any other, better way to have a consistent background?

    Read the article

  • Pixel Shader Issues

    - by Morphex
    I have issues with a pixel shader; my issue is mostly that I get nothing drawn on the screen. float4x4 MVP; // TODO: add effect parameters here. struct VertexShaderInput { float4 Position : POSITION; float4 normal : NORMAL; float2 TEXCOORD : TEXCOORD; }; struct VertexShaderOutput { float4 Position : POSITION; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { input.Position.w = 0; VertexShaderOutput output; output.Position = mul(input.Position, MVP); // TODO: add your vertex shader code here. return output; } float4 PixelShaderFunction(VertexShaderOutput input) : SV_TARGET { return float4(1, 0, 0, 1); } technique { pass { Profile = 11.0; VertexShader = VertexShaderFunction; PixelShader = PixelShaderFunction; } } My matrix is calculated like this: Matrix MVP = Matrix.Multiply(Matrix.Multiply(Matrix.Identity, Matrix.LookAtLH(new Vector3(-10, 10, -10), new Vector3(0), new Vector3(0, 1, -0))), Camera.Projection); VoxelEffect.Parameters["MVP"].SetValue(MVP); The Visual Studio graphics debugger shows me that my vertex shader is actually working, but not the pixel shader. I stripped the shader down to the bare minimum so that I was sure it was correct. But why is my screen still black?

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so: glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0); The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised: 1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-wide spaces between them. However, it appears blank. 2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar. 3) 0x55555555 = 1010101010101010101010101010101, therefore writing this value ought to create 1-wide vertical stripes with 1 pixel spacing. However, it creates a solid gray color. 4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation. I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8-bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting that I was working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion. Questions: 1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8-elements, as it suggests in the spec? 2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3, I am however forcing a 3.0 context.) 3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for but I suppose there is no such thing as a free lunch.

    Read the article
