Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 114/151

  • Label background flickering on a WinForms user control with a background image enabled

    - by slowlycooked
    I am working on a Windows Forms project and am having a problem with UserControl double buffering. I created a UserControl that has a background image, and on top of it I have a few radio buttons and labels. The radio buttons and labels all have a transparent background color. However, when I show and hide the user control, I can see flickering on the labels and radio buttons that have transparent backgrounds. I tried

        Me.SetStyle(ControlStyles.DoubleBuffer _
            Or ControlStyles.AllPaintingInWmPaint _
            Or ControlStyles.UserPaint _
            Or ControlStyles.SupportsTransparentBackColor, _
            True)

    after InitializeComponent() to enable double buffering on this user control, but it doesn't seem to work.
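
    A minimal C# sketch of the same idea (the snippet above is VB.NET). The WS_EX_COMPOSITED extended style is often what actually stops flicker on child controls with transparent backgrounds, since it makes Windows composite the control and all its children in one pass; the class name is an assumption:

        using System.Windows.Forms;

        public class BufferedHostControl : UserControl
        {
            public BufferedHostControl()
            {
                // Same styles as the VB.NET call above, plus OptimizedDoubleBuffer
                SetStyle(ControlStyles.AllPaintingInWmPaint
                       | ControlStyles.UserPaint
                       | ControlStyles.OptimizedDoubleBuffer
                       | ControlStyles.SupportsTransparentBackColor, true);
                UpdateStyles(); // apply the new styles immediately
            }

            // WS_EX_COMPOSITED: the OS paints this control and its
            // children into one off-screen buffer before showing them.
            protected override CreateParams CreateParams
            {
                get
                {
                    CreateParams cp = base.CreateParams;
                    cp.ExStyle |= 0x02000000; // WS_EX_COMPOSITED
                    return cp;
                }
            }
        }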

    Read the article

  • OpenGL ES clarifying question regarding FBOs -- sorry can't find this info anywhere else?

    - by DevDevDev
    If I instantiate an FBO without binding a renderbuffer or a texture to it, what happens when I draw to it -- nothing? Do I need to attach a render target (renderbuffer or texture) for the FBO to do anything? What I'm trying to do is precache some buffers and then merge them later, but that doesn't seem to work at all. Ideally I'd like to do something like:

        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo1);
        // Draw some stuff to fbo1
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo2);
        // Draw some stuff to fbo2
        // ...
        // glRenderFbo(fbo1); -- not a real function
        // Set blending, etc.
        // glRenderFbo(fbo2); -- not a real function
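
    An FBO is only a container: until it has at least one attachment and reports framebuffer-complete, drawing into it is effectively discarded. A minimal OpenGL ES 1.1 sketch of the usual "precache then merge" pattern -- render each pass into a texture attachment, then composite the textures with blending; size and filtering choices are assumptions:

        GLuint fbo, tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffersOES(1, &fbo);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, tex, 0);
        if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)
                != GL_FRAMEBUFFER_COMPLETE_OES) {
            /* incomplete FBO: nothing will be rendered into it */
        }
        /* draw pass 1 into fbo/tex here; repeat with a second FBO/texture,
           then bind the default framebuffer, enable blending, and draw
           textured quads with the two textures to merge the results */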

    Read the article

  • setsockopt TCP_NODELAY question on Windows Mobile

    - by weki
    Hi all, I have a problem on Windows Mobile 6.0. I would like to create a TCP connection that does not use the Nagle algorithm, so that it sends my data as soon as I call the send function and does not buffer calls with too small an amount of data. I tried the following:

        BOOL b = TRUE;
        setsockopt(socketfd, IPPROTO_TCP, TCP_NODELAY, (char*)(&b), sizeof(BOOL));

    It works fine on the desktop. But on Windows Mobile, if I set this value and then query it, the returned value is 8, and network traffic analysis shows that nothing changed. Is there any way to force a flush on my socket?
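
    A minimal sketch of the usual diagnostic steps: pass a plain int (BOOL is a 4-byte int on Win32, but some CE-based stacks are picky), check the setsockopt return value with WSAGetLastError(), and read the option back with getsockopt. Any nonzero value read back means the option is on; the exact value is stack-defined, so seeing 8 does not by itself mean the option was ignored:

        #include <winsock2.h>

        int flag = 1;
        if (setsockopt(socketfd, IPPROTO_TCP, TCP_NODELAY,
                       (const char*)&flag, sizeof(flag)) == SOCKET_ERROR) {
            int err = WSAGetLastError(); /* the real failure cause, if any */
        }

        int check = 0;
        int len = sizeof(check);
        getsockopt(socketfd, IPPROTO_TCP, TCP_NODELAY, (char*)&check, &len);
        /* check != 0 -> Nagle is disabled for this socket */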

    Read the article

  • What's the best way to read/write array contents from/to binary files in C#?

    - by Eric
    I would like to read and write the contents of large (> 2 GB) raw volume files (e.g. MRI scans). These files are just a sequence of e.g. 32 x 32 x 32 floats, so they map well to 1D arrays. I would like to be able to read the contents of the binary volume files into 1D arrays of e.g. float or ushort (depending on the data type of the binary files) and similarly export the arrays back out to the raw volume files. What's the best way to do this in C#? Read/write them one element at a time with BinaryReader/BinaryWriter? Read them piecewise into byte arrays with FileStream.Read and then do a System.Buffer.BlockCopy between arrays (keeping in mind that I want 2 GB arrays)? Write my own reader/writer?
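
    A minimal sketch of the piecewise approach, assuming 32-bit float samples and a file length that is a whole number of samples. Note that a single CLR array is capped at 2 GB regardless, so the byte offsets below always fit in an int:

        using System;
        using System.IO;

        static float[] ReadVolume(string path)
        {
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                var result = new float[fs.Length / sizeof(float)];
                var chunk = new byte[1 << 20]; // 1 MB staging buffer
                long floatsDone = 0;
                int read;
                while ((read = fs.Read(chunk, 0, chunk.Length)) > 0)
                {
                    // BlockCopy counts in bytes on both source and destination
                    Buffer.BlockCopy(chunk, 0, result,
                                     (int)(floatsDone * sizeof(float)), read);
                    floatsDone += read / sizeof(float);
                }
                return result;
            }
        }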

    Read the article

  • Sharing output streams through a JNI interface

    - by Chris Conway
    I am writing a Java application that uses a C++ library through a JNI interface. The C++ library creates objects of type Foo, which are duly passed up through JNI to Java. Suppose the library has an output function void Foo::print(std::ostream &os) and I have a Java OutputStream out. How can I invoke Foo::print from Java so that the output appears on out? Is there any way to coerce the OutputStream into a std::ostream in the JNI layer? Can I capture the output in a buffer in the JNI layer and then copy it into out?
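
    A minimal sketch of the capture-in-a-buffer idea: render Foo::print into a std::ostringstream on the C++ side and return the bytes to Java, which can then copy them into out. The wrapper class, method name, and handle scheme are assumptions:

        #include <jni.h>
        #include <sstream>
        #include "Foo.h" // hypothetical header declaring Foo

        extern "C" JNIEXPORT jbyteArray JNICALL
        Java_FooWrapper_printToBytes(JNIEnv* env, jobject, jlong handle)
        {
            Foo* foo = reinterpret_cast<Foo*>(handle);
            std::ostringstream os;
            foo->print(os); // Foo writes into the in-memory buffer

            const std::string s = os.str();
            jbyteArray bytes = env->NewByteArray(static_cast<jsize>(s.size()));
            env->SetByteArrayRegion(bytes, 0, static_cast<jsize>(s.size()),
                                    reinterpret_cast<const jbyte*>(s.data()));
            return bytes; // Java side: out.write(printToBytes(handle));
        }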

    Read the article

  • Problem marshalling char* in C#

    - by chrmue
    I have a problem calling this function from a C++ DLL in C#:

        INT32 WINAPI PM_COM_GetText(INT32 TextId, char* pBuf, INT32 BufSize);

    It writes text into a buffer for a given text ID. I try to call it with the following C# code, but I constantly get an access violation and don't understand why:

        public string GetText(Int32 TextId)
        {
            Int32 BufSize = 256;
            StringBuilder Str = new StringBuilder(BufSize);
            PM_COM_GetText(TextId, Str, BufSize);
            return Str.ToString();
        }

        [DllImport("ComDll.dll", CharSet = CharSet.Ansi)]
        private static extern Int32 PM_COM_GetText(Int32 TextId, StringBuilder Str, Int32 BufSize);

    I don't see what's wrong; it looks to me like many other code snippets I found on the web. Any ideas? Thanks in advance!
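
    The declaration above is conventionally correct, so the usual culprits lie elsewhere: the DLL actually exporting __cdecl despite WINAPI in the header, or a 64-bit process trying to load a 32-bit DLL. A sketch that only makes the hidden defaults explicit (everything else mirrors the post):

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        static class ComDll
        {
            // WINAPI means __stdcall; if the access violation persists,
            // try CallingConvention.Cdecl -- a convention mismatch
            // corrupts the stack and fails exactly this way.
            [DllImport("ComDll.dll", CharSet = CharSet.Ansi,
                       CallingConvention = CallingConvention.StdCall)]
            private static extern Int32 PM_COM_GetText(Int32 textId,
                                                       StringBuilder buf,
                                                       Int32 bufSize);

            public static string GetText(int textId)
            {
                var buf = new StringBuilder(256); // capacity reserves the buffer
                PM_COM_GetText(textId, buf, buf.Capacity);
                return buf.ToString();
            }
        }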

    Read the article

  • Why use multiple OpenGL contexts?

    - by Luca
    For rendering I have a current GL context associated with a window. When the application renders multiple scenes (for example using accumulation or different viewports), I think it is fine to reuse the same context. My question, then, is: why should I use multiple GL contexts? I read in the ARB_framebuffer_object extension spec that a MakeCurrent call can be expensive, and when the ARB_framebuffer_object extension is present I can render to a generic buffer without using MakeCurrent. Apparently the only reasons to use multiple GL contexts are to avoid re-setting context state (pixel store, transfer, point size, polygon stipple...) or to have multiple render buffer configurations available (one context with accumulation, another without). How do I determine when an alternative context is better than resetting context state? Thank you all!

    Read the article

  • [C#] OnPaint events (via Invalidate) changing execution order after a period of normal operation (runtime)

    - by Luke Mcneice
    Hi all, I have three data graphs that are painted via their Paint events. When I have data to insert into a graph, I call the control's Invalidate() method. The first control's Paint event actually creates a bitmap buffer for the other two graphs, to avoid repeating a long loop, so the Invalidate calls are made in a specific order (1, 2, 3). This works well; however, when the graphed data reaches the end of the graph window (a PictureBox), where the data would normally start scrolling, the Paint events begin firing in the wrong order (2, 3, 1). Has anyone come across this before? Why might this be happening?

    Read the article

  • mingw32-make : "Input line too long" issue

    - by hjsblogger
    We have a Makefile that runs on a Windows 2003 machine, and we use mingw32-make for it. Since the Makefile has many include paths, it exceeds the 8K buffer size that cmd can handle (ref: http://support.microsoft.com/kb/830473/EN-US/), due to which the compilation fails with an "input line too long" error. I wanted to know the following: (1) What would be the best way to optimize the Makefile, given that I have already removed unwanted compiler switches, include paths, etc.? (2) Is there any way to put all the INCLUDE paths in one .txt file and import it into the Makefile? I could not find any mingw32 documentation on this. Any other inputs/hints/reference links are most welcome. Thanks, -HJSblogger
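
    If the limit really is cmd.exe's command line rather than make itself, a response file may help: reasonably recent gcc (including MinGW builds) expands @file arguments internally, so the long -I list never passes through the shell. A minimal sketch, with includes.txt holding one -I flag per line:

        # includes.txt contains lines like:  -Isrc/foo/include
        CFLAGS += @includes.txt

        %.o: %.c
        	$(CC) $(CFLAGS) -c $< -o $@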

    Read the article

  • Serializing Chinese characters with Xerces 2.6

    - by Gianluca
    I have a Xerces (2.6) DOMNode object encoded in UTF-8. I read its TEXT element like this:

        CBuffer DomNodeExtended::getText( const DOMNode* node ) const
        {
            char* p = XMLString::transcode( node->getNodeValue( ) );
            CBuffer xNodeText( p );
            delete p;
            return xNodeText;
        }

    where CBuffer is, well, just a buffer object, which is later persisted as-is in a DB. This works as long as the TEXT contains only common ASCII characters. If there are e.g. Chinese ones, they get lost in the transcode operation. I've googled a lot seeking a solution. It looks like with Xerces 3 the DOMWriter class should solve the problem. With Xerces 2.6 I'm trying the XMLTranscoder, but with no success yet. Could anybody help?

    Read the article

  • Audio -- How much performance improvement can I expect from reducing function calls by using buffers?

    - by morgancodes
    I'm working on an audio-intensive app for the iPhone. I'm currently calling a number of different functions for each sample I need to calculate. For example, I have an envelope class. When I calculate a sample, I do something like:

        sampleValue = oscilator->tic() * envelope->tic();

    But I could also do something like:

        for (int i = 0; i < bufferLength; i++) {
            buffer[i] = oscilatorBuffer[i] * envelopeBuffer[i];
        }

    I know the second will be more efficient, but I don't know by how much. Are function calls expensive enough that I'd be crazy not to use buffers if I care even a tiny bit about performance?
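
    The win usually comes less from removing call overhead itself than from giving the compiler a tight loop it can unroll or vectorize. A C++ sketch, assuming hypothetical block-fill methods on the two classes:

        #include <cassert>

        // Oscillator::fillBlock and Envelope::fillBlock are assumed APIs:
        // each fills 'count' samples in one call instead of one tic() per sample.
        void renderBlock(Oscillator& osc, Envelope& env,
                         float* buffer, int bufferLength)
        {
            assert(bufferLength <= 512); // scratch buffers are fixed-size
            static float oscBuf[512];
            static float envBuf[512];
            osc.fillBlock(oscBuf, bufferLength);
            env.fillBlock(envBuf, bufferLength);

            // Tight loop with no calls: easy for the compiler to optimize
            for (int i = 0; i < bufferLength; ++i)
                buffer[i] = oscBuf[i] * envBuf[i];
        }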

    Read the article

  • casting void* to float* creates only zeros

    - by Paperflyer
    I am reading an audio file using Core Audio (Extended Audio File Read Services). The audio data gets converted to 4-byte floats and handed to me as a void* buffer. It can be played with Audio Queue Services, so its content is correct. Next, I want to draw a waveform and thus need access to the actual samples, so I cast void* audioData to float*:

        Float32 *floatData = (Float32 *)audioData;

    When accessing this data, however, I only get 0.0 regardless of the index:

        Float32 value = floatData[index]; // Is always zero for any index

    Am I doing something wrong with the cast?

    Read the article

  • Pipe less to emacs

    - by Steve
    When viewing piped output in less, sometimes I'd like to be able to view it in Emacs instead, in order to get syntax highlighting and use Emacs commands for searching, marking, copying, etc. I see that less has a v command that can be used to open the currently viewed file in $EDITOR. Unfortunately this doesn't work when viewing piped input. Also, I don't know how to get Emacs to display stdin as a read-only document. So, is it possible to set up less with something like v, but that pumps the current buffer into Emacs as a read-only file? Thanks.

    Read the article

  • Is writing to a socket an arbitrary limitation of the sendfile() syscall?

    - by Sufian
    Prelude: sendfile() is an extremely useful syscall, for two reasons. First, it's less code than a read()/write() (or recv()/send(), if you prefer that jive) loop. Second, it's faster (fewer syscalls, and the implementation may copy between devices without a buffer, etc.) than the aforementioned methods. Less code. More efficient. Awesome. In UNIX, everything is (mostly) a file. This is the ugly territory where platonic theory collides with real-world practice. I understand that sockets are fundamentally different from files residing on some device. I haven't dug through the sources of Linux/*BSD/Darwin/whatever OS implements sendfile() to learn why this specific syscall is restricted to writing to sockets (specifically, streaming sockets). I just want to know... Question: what is limiting sendfile() from allowing the destination file descriptor to be something besides a socket (like a disk file, or a pipe)?

    Read the article

  • Logging Mechanism using memory mapping technique

    - by Tushar
    "Just create a mapping of the file of the required size (CreateFileMapping or mmap), write the lines in the buffer and start over when the maximum number is reached." -- your answer for write-a-circular-file-in-c. I am also writing a LogWriter module. In this case I am mapping the whole file into memory using mmap(). I am maintaining read and write pointers, and I want to write the log to the file in append mode. When the logger service is started for the first time, it appends the logs. But after the system shuts down and I run the service again, it doesn't append the data at the end. I want to preserve the write and read offsets even if the system shuts down. How do I achieve this? And how do I find out how much data has been written to the log file?
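
    One way to make the offsets survive a shutdown: reserve a small header at the front of the mapping and keep the cursors inside it, so they are flushed to the file together with the log data. A minimal C sketch; field names and the wrap policy are assumptions:

        #include <stdint.h>
        #include <string.h>
        #include <sys/mman.h>

        struct log_header {
            uint32_t magic;     /* detects a never-initialized file */
            uint32_t write_off; /* next byte to write, relative to data */
            uint32_t read_off;  /* next byte to read */
        };
        #define LOG_MAGIC 0x4C4F4721u

        void log_append(void* map, size_t map_size, const char* line, size_t len)
        {
            struct log_header* h = (struct log_header*)map;
            char* data = (char*)map + sizeof(*h);
            size_t cap = map_size - sizeof(*h);

            if (h->magic != LOG_MAGIC) { /* first run: initialize header */
                h->magic = LOG_MAGIC;
                h->write_off = h->read_off = 0;
            }
            if (h->write_off + len > cap) /* wrap: overwrite oldest data */
                h->write_off = 0;
            memcpy(data + h->write_off, line, len);
            h->write_off += (uint32_t)len;
            msync(map, map_size, MS_ASYNC); /* schedule flush to the file */
        }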

    Read the article

  • Python pixel manipulation library

    - by silinter
    So I'm going through the beginning stages of producing a game in Python, and I'm looking for a library that is able to manipulate pixels and blit them relatively fast. My first thought was pygame, as it deals in pure 2D surfaces, but it only allows pixel access through pygame.get_at(), pygame.set_at() and pygame.get_buffer(), all of which lock the surface each time they're called, making them slow to use. I can also use the PixelArray and surfarray classes, but they lock the surface for the duration of their lifetimes, and the only way to blit them to a surface is to either copy the pixels to a new surface or use surfarray.blit_array, which requires creating a subsurface of the screen and blitting to that if the array is smaller than the screen (if it's bigger, I can just use a slice of the array, which is no problem). I don't have much experience with PyOpenGL or Pyglet, but I'm wondering whether there is a faster library for doing pixel manipulation, or a faster method in pygame for doing pixel manipulation. I did some work with SDL and OpenGL in C, and I do like the idea of adding vertex/fragment shaders to my program. My program will chiefly be dealing in loading images and writing/reading to/from surfaces.

    Read the article

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit a JavaStreamingAudio class from the FreeTTS package and bypass the write method, which sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert to float and try to process, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames, and each frame is a group of bytes, so in my application I have to process the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.
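
    SourceDataLine audio is normally 16-bit signed PCM, so every frame is two bytes per channel, and the byte order and signedness must match the stream's AudioFormat exactly -- getting either wrong yields precisely this kind of static. A minimal Java sketch, assuming 16-bit signed little-endian mono (check the line's AudioFormat for the actual order):

        static float[] bytesToFloats(byte[] b, int len) {
            float[] out = new float[len / 2];
            for (int i = 0; i < out.length; i++) {
                int lo = b[2 * i] & 0xff;           // low byte, unsigned
                int hi = b[2 * i + 1];              // high byte, sign-extended
                out[i] = ((hi << 8) | lo) / 32768f; // scale to [-1.0, 1.0)
            }
            return out;
        }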

    Read the article

  • python socket.recv/sendall call blocking

    - by fsm
    Hi everyone. This post is incorrectly tagged 'send' since I cannot create new tags. I have a very basic question about this simple echo server. Here are some code snippets.

    Client:

        while True:
            data = raw_input("Enter data: ")
            mySock.sendall(data)
            echoedData = mySock.recv(1024)
            if not echoedData:
                break
            print echoedData

    Server:

        while True:
            print "Waiting for connection"
            (clientSock, address) = serverSock.accept()
            print "Entering read loop"
            while True:
                print "Waiting for data"
                data = clientSock.recv(1024)
                if not data:
                    break
                clientSock.send(data)
            clientSock.close()

    Now this works alright, except when the client sends an empty string (by hitting the return key in response to "Enter data: "), in which case I see some deadlock-ish behavior. What exactly happens when the user presses return on the client side? I can only imagine that the sendall call blocks waiting for some data to be added to the send buffer, causing the recv call to block in turn. What's going on here? Thanks for reading!
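
    For the record, sendall("") transmits zero bytes and returns immediately; it is the client's recv() that then blocks forever, because the server was never given anything to echo back (so both ends sit in recv()). A minimal guard in the client loop avoids the hang:

        while True:
            data = raw_input("Enter data: ")
            if not data:  # nothing to send means nothing will come back
                break
            mySock.sendall(data)
            echoedData = mySock.recv(1024)
            if not echoedData:
                break
            print echoedData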

    Read the article

  • Python Bitstream implementations

    - by Danielb
    I am writing a Huffman implementation in Python as a learning exercise. I have got to the point of writing my variable-length Huffman codes out to a buffer (or file), only to find that there does not seem to be a bitstream class implemented in Python! I have had a look at the array and struct modules, but they do not seem to do what I need without extra work. A bit of googling turned up this bitstream implementation, which is more like what I want. Is there really no comparable bitstream class in the Python standard library?
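
    The standard library indeed has no bitstream class, but one adequate for writing Huffman codes is small. A minimal sketch (most-significant-bit-first packing, zero-padded final byte):

        class BitWriter(object):
            def __init__(self):
                self._acc = 0          # bit accumulator
                self._nbits = 0        # bits currently in the accumulator
                self._bytes = bytearray()

            def write(self, value, nbits):
                """Append the nbits low bits of value, MSB first."""
                for shift in range(nbits - 1, -1, -1):
                    self._acc = (self._acc << 1) | ((value >> shift) & 1)
                    self._nbits += 1
                    if self._nbits == 8:
                        self._bytes.append(self._acc)
                        self._acc = self._nbits = 0

            def getvalue(self):
                """Flush and return the packed bytes."""
                if self._nbits:
                    self._bytes.append(self._acc << (8 - self._nbits))
                    self._acc = self._nbits = 0
                return bytes(self._bytes)

        # e.g. writer.write(0b101, 3) appends the 3-bit Huffman code 101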

    Read the article

  • How to create a Binary Tree from a General Tree?

    - by mno4k
    I have to solve the following constructor for a BinaryTree class in Java:

        BinaryTree(GeneralTree<T> aTree)

    This method should create a BinaryTree (bt) from a GeneralTree (gt) as follows: Every vertex from gt will be represented as a leaf in bt. If gt is a leaf, then bt will be a leaf with the same value as gt. If gt is not a leaf, then bt will be constructed as an empty root, a left subtree (lt) and a right subtree (lr). lt is a strict binary tree created from the oldest subtree of gt (the left-most subtree), and lr is a strict binary tree created from gt without its left-most subtree. The first part is trivial enough, but the second one is giving me some trouble. I've gotten this far:

        public BinaryTree(GeneralTree<T> aTree) {
            if (aTree.isLeaf()) {
                root = new BinaryNode<T>(aTree.getRootData());
            } else {
                root = new BinaryNode<T>(null); // empty root
                // children of the GT are implemented as a LinkedList of subtrees
                LinkedList<GeneralTree<T>> childs = aTree.getChilds();
                childs.begin(); // start iterating through the list
                // first element = left-most child
                BinaryTree<T> lt = new BinaryTree<T>(childs.element(0));
                this.addLeftChild(lt);
                aTree.DeleteChild(childs.element(0));
                BinaryTree<T> lr = new BinaryTree<T>(aTree);
                this.addRightChild(lr);
            }
        }

    Is this the right way? If not, can you think of a better way to solve this? Thank you!

    Read the article

  • Why must fixed-size buffers (arrays) be unsafe?

    - by brickner
    Let's say I want to have a value type of 7 bytes (or 3 or 777). I can define it like this:

        public struct Buffer71
        {
            public byte b0;
            public byte b1;
            public byte b2;
            public byte b3;
            public byte b4;
            public byte b5;
            public byte b6;
        }

    A simpler way to define it is using a fixed buffer:

        public struct Buffer72
        {
            public unsafe fixed byte bs[7];
        }

    Of course the second definition is simpler. The problem lies with the unsafe keyword, which must be provided for fixed buffers. I understand that this is implemented using pointers and is hence unsafe. My question is: why does it have to be unsafe? Why can't C# provide arbitrary constant-length arrays and keep them as a value type, instead of making them a C# reference-type array or an unsafe buffer?

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are quite a few existing answers to this question. However, I found none of them that really brings it to the point. Some argue that a cycle is (almost) the same as a strongly connected component (see http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back edges (see the Boost Graph documentation on file dependencies). I would now like some suggestions on whether all cycles in a graph can be detected via DFS and checking for back edges. My opinion is that it could indeed work that way, as DFS-VISIT (see the pseudocode of DFS) freshly enters each node that has not yet been visited. In that sense, each vertex is a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should indeed be possible to detect all cycles in a graph. Note that if cycles with different numbers of participating nodes exist (e.g. triangles, rectangles, etc.), additional work has to be done to discriminate the actual "shape" of each cycle.

    Read the article

  • Thread-safe lock-free mutual ByteArray queue

    - by user313421
    A byte stream should be transferred, and there is one producer thread and one consumer thread. The speed of the producer is higher than that of the consumer most of the time, and I need enough buffered data for the QoS of my application. I have read about my problem, and there are solutions like a shared buffer, the .NET PipeStream class, etc. This class is going to be instantiated many times on the server, so I need an optimized solution. Is it a good idea to use a Queue of ByteArray? If yes, I'll use an optimization algorithm to guess the queue size and each ByteArray's capacity, and theoretically it fits my case. If not, what's the best approach? Please let me know if there's a good lock-free, thread-safe implementation of a ByteArray queue in C# or VB. Thanks in advance.
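
    A minimal C# sketch, assuming .NET 4+: ConcurrentQueue<T> is the framework's lock-free queue, and wrapping it in a BlockingCollection bounds the buffered data so a persistently faster producer cannot exhaust memory (the blocking wrapper itself is not lock-free, which is the usual trade-off for a bounded queue):

        using System.Collections.Concurrent;

        class ByteChunkQueue
        {
            private readonly BlockingCollection<byte[]> _chunks =
                new BlockingCollection<byte[]>(
                    new ConcurrentQueue<byte[]>(), boundedCapacity: 1024);

            public void Produce(byte[] chunk)
            {
                _chunks.Add(chunk); // blocks only when the bound is reached
            }

            public byte[] Consume()
            {
                return _chunks.Take(); // blocks until a chunk is available
            }

            public void Finish() { _chunks.CompleteAdding(); }
        }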

    Read the article

  • Launching Vim via Lua

    - by Keith Pimmel
    I'm writing a simple little Lua command-line app that will build a static website. I'm storing my fragments in a SQLite database. Retrieving the data from the DB is straightforward, as is saving it; my question concerns editing the data. Is there an elegant way to pipe the data from Lua to Vim? Can Vim edit a memory buffer and return it? I was planning on launching the editor via os.execute('vim'), but only after grabbing a temporary file handle and dumping the database output into that. I would like to avoid touching the filesystem that way, but that is my contingency plan.
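
    The contingency plan is essentially the only portable option: Vim edits files, not another process's memory, so the round trip goes through a temporary file. A minimal Lua sketch:

        local function edit_with_vim(text)
          local path = os.tmpname()
          local f = assert(io.open(path, "w"))
          f:write(text)
          f:close()

          os.execute("vim " .. path) -- blocks until the user quits Vim

          f = assert(io.open(path, "r"))
          local edited = f:read("*a") -- slurp the whole edited file
          f:close()
          os.remove(path)             -- clean up the temp file
          return edited
        end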

    Read the article

  • How to send large objects using boost::asio

    - by Max
    Good day. I'm receiving large objects via the network using boost::asio, and I have this code:

        for (int i = 1; i <= num_packets; i++)
            boost::asio::async_read(socket_,
                boost::asio::buffer(Obj + packet_size * (i - 1), packet_size),
                boost::bind(...));

    where Obj is a My_Class*. I'm in doubt whether that approach is even possible (because I have a pointer to an object here), or whether it would be better to receive the object using packets of a fixed size in bytes. Thanks in advance.
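
    Reading wire bytes directly on top of a C++ object is fragile (padding, vtable pointers, and any member that is itself a pointer all break it). A C++11 sketch of the safer pattern -- receive into a byte buffer, then deserialize; object_size and the deserialize helper are assumptions:

        auto buf = std::make_shared<std::vector<char>>(object_size);

        boost::asio::async_read(socket_, boost::asio::buffer(*buf),
            [buf](const boost::system::error_code& ec, std::size_t /*n*/) {
                if (!ec) {
                    My_Class obj = deserialize(*buf); // hypothetical helper
                    // ... use obj ...
                }
            });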

    Read the article
