Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • Reading a Serial Port - Ignore portion of data written to serial port for certain time

    - by farmerjoe
    I would like to read data coming from an Arduino on a serial port at intervals. So essentially something like: take a reading, wait, take a reading, wait, take... etc. The problem I am facing is that the port will buffer its information, so as soon as I call a wait function the data on the serial port will start buffering. Once the wait function finishes I try to read the data again, but I am reading from the beginning of the buffer and the data is not current anymore; instead it is the reading taken at roughly the time the wait function began. My question is whether there is a way that I am unaware of to ignore the portion of data read in during that wait period and only read what is currently being delivered on the serial port? I have something analogous to this so far: import serial s = serial.Serial(path_to_my_serial_port,9600) while True: print s.readline() time.sleep(.5) For explanation purposes I have the Arduino outputting the time since it began its loop. By the Python code, the time of each call should be half a second apart. By the serial output, the time is incrementing by less than a millisecond. These values do not change regardless of the sleep timing. Sample output: 504 504 504 504 505 505 505 ... As an idea of my end goal, I would like to measure the value of the port, wait a time delay, see what the value is then, wait again, see what the value is then, wait again. I am currently using Python for this but am open to other languages.
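    A note on the pattern rather than the asker's code: the question uses pyserial, but the idea is language-neutral. Before each timed read, discard whatever the OS buffered while you were waiting, then read the next complete line; in pyserial the discard call is flushInput() (reset_input_buffer() in newer versions). Below is a minimal sketch of that loop using .NET's SerialPort purely as an illustration; the port name and baud rate are placeholders.

    ```csharp
    using System;
    using System.IO.Ports;
    using System.Threading;

    class TimedSerialSampler
    {
        static void Main()
        {
            // "COM3" and 9600 baud are placeholders for the Arduino's actual settings.
            using (var port = new SerialPort("COM3", 9600))
            {
                port.NewLine = "\n";
                port.Open();

                while (true)
                {
                    // Throw away everything that accumulated while we were sleeping,
                    // so the next ReadLine() returns a current reading, not a stale one.
                    port.DiscardInBuffer();

                    // The first line after a discard may be a partial message; skip it
                    // and take the next complete line as the sample.
                    port.ReadLine();
                    string sample = port.ReadLine();
                    Console.WriteLine(sample);

                    Thread.Sleep(500); // wait before taking the next sample
                }
            }
        }
    }
    ```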

    Read the article

  • Having trouble with extension methods for byte arrays

    - by Dave
    I'm working with a device that sends back an image, and when I request an image, there is some undocumented information that comes before the image data. I was only able to realize this by looking through the binary data and identifying the image header information inside. I've been able to make everything work fine by writing a method that takes a byte[] and returns another byte[] with all of this preamble "stuff" removed. However, what I really want is an extension method so I can write image_buffer.RemoveUpToByteArray(new byte[] { 0x42, 0x4D }); instead of byte[] new_buffer = RemoveUpToByteArray( image_buffer, new byte[] { 0x42, 0x4D }); I first tried to write it like everywhere else I've seen online: public static class MyExtensionMethods { public static void RemoveUpToByteArray(this byte[] buffer, byte[] header) { ... } } but then I get an error complaining that there isn't an extension method where the first parameter is a System.Array. Weird, everyone else seems to do it this way, but okay: public static class MyExtensionMethods { public static void RemoveUpToByteArray(this Array buffer, byte[] header) { ... } } Great, that is accepted now, but my code still doesn't compile. It doesn't compile because Array is an abstract class and my existing code that gets called after calling RemoveUpToByteArray used to work on byte arrays. I could rewrite my subsequent code to work with Array, but I am curious -- what am I doing wrong that prevents me from just using byte[] as the first parameter in my extension method?
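    For what it's worth, this byte[] is a legal extension-method receiver in C# 3.0 and later, so an error insisting on System.Array usually points at the project targeting an older language version / .NET 2.0, or at the static class's namespace not being imported at the call site. A hedged sketch of how the method could look once that is sorted out; returning a new byte[] (instead of void) keeps the caller's array-based code working, and the search logic here is illustrative, not the asker's:

    ```csharp
    using System;

    // Extension methods must live in a top-level, non-generic static class,
    // and the calling file needs a 'using' for this namespace.
    namespace ImageHelpers
    {
        public static class ByteArrayExtensions
        {
            // 'this byte[]' is a valid extension-method receiver in C# 3.0+.
            // Returning a new array lets callers write:
            //   var trimmed = imageBuffer.RemoveUpToByteArray(new byte[] { 0x42, 0x4D });
            public static byte[] RemoveUpToByteArray(this byte[] buffer, byte[] header)
            {
                for (int i = 0; i + header.Length <= buffer.Length; i++)
                {
                    int j = 0;
                    while (j < header.Length && buffer[i + j] == header[j]) j++;

                    if (j == header.Length)
                    {
                        // Found the header: copy from its first byte to the end.
                        var result = new byte[buffer.Length - i];
                        Array.Copy(buffer, i, result, 0, result.Length);
                        return result;
                    }
                }
                return buffer; // header not found; return the input unchanged
            }
        }
    }
    ```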

    Read the article

  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not. On every map update I have to draw a portion of the map, control buttons, and markers - buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is BitBlt from the mipmap to a back buffer, and from the back buffer to the primary surface (I can't use page flipping due to the windowed mode of my application). Previously I used a premultiplied-alpha surface with 32 bit ARGB pixel format for the images mipmap, and everything was looking good, but drawing the entire "scene" was horribly slow - I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key. Drawing is much better now. My mipmap surface is generated at program load - images are loaded from files, scaled (with filtering) and drawn on the mipmap. The problem is that the image scaling process blends pixel colors, and those pixels which are on the border of a sprite region are blended with the surrounding fuchsia pixels, resulting in a semi-fuchsia color that is not treated as the color key. When I do blitting with the color key option, sprites have small fuchsia-like borders, and it looks really bad. How do I solve this problem? I can use alpha blitting, but it is too slow - even in ARGB 1555 format.

    Read the article

  • C# TCP Async EndReceive() throws InvalidOperationException ONLY on Windows XP 32-bit

    - by James Farmer
    I have a simple C# async client using a .NET socket that waits for timed messages from a local Java server used for automating commands. The messages come in asynchronously and are written to a ring buffer. This implementation seems to work fine on Windows Vista/7/8 and OS X, but will randomly throw this exception while it's receiving a message from the local Java server: Unhandled Exception: System.InvalidOperationException: EndReceive can only be called once for each asynchronous operation.     at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult, SocketError& errorCode)     at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)     at SocketTest.Controller.RecvAsyncCallback(IAsyncResult ar)     at System.Net.LazyAsyncResult.Complete(IntPtr userToken)     ... I've looked online for this error, but have found nothing really helpful. This is the code where it seems to break: /// <summary> /// Callback to receive socket data /// </summary> /// <param name="ar">AsyncResult to pass to End</param> private void RecvAsyncCallback(IAsyncResult ar) { // The exception will randomly happen on this call int bytes = _socket.EndReceive(_recvAsyncResult); // check for connection closed if (bytes == 0) { return; } _ringBuffer.Write(_buffer, 0, bytes); // Checks buffer CheckBuffer(); _recvAsyncResult = _sock.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, RecvAsyncCallback, null); } The error doesn't happen at any particular moment except in the middle of receiving a message. The message itself can be any length for this to happen, and the exception can happen right away, or sometimes even after up to a minute of perfect communication. I'm pretty new to sockets and network communication, and I feel I might be missing something here. I've tested on at least 8 different computers, and the only similarity among the computers that throw this exception is that their OS is Windows XP 32-bit.
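    One detail that stands out in the posted callback: it calls EndReceive on a stored field (_recvAsyncResult) rather than on the IAsyncResult the callback was handed, and the field is assigned from a different variable (_sock vs _socket). If a receive ever completes synchronously or two operations overlap, the field can point at an operation that has already been ended, which is exactly what the exception says; XP's scheduling may simply make the race easier to hit. A sketch of the usual shape of the loop (the ring buffer replaced with a stand-in), not a drop-in replacement for the asker's class:

    ```csharp
    using System;
    using System.Net.Sockets;

    // Sketch of an async receive loop that always ends the operation it was
    // invoked for. HandleData stands in for the asker's _ringBuffer/CheckBuffer.
    class Receiver
    {
        private readonly Socket _socket;
        private readonly byte[] _buffer = new byte[8192];

        public Receiver(Socket connectedSocket) { _socket = connectedSocket; }

        public void Start()
        {
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 RecvAsyncCallback, null);
        }

        private void RecvAsyncCallback(IAsyncResult ar)
        {
            int bytes;
            try
            {
                // End the operation 'ar' belongs to -- never a stored field, which
                // may already refer to a different (or already-ended) operation.
                bytes = _socket.EndReceive(ar);
            }
            catch (ObjectDisposedException) { return; } // socket closed while pending

            if (bytes == 0) return; // peer closed the connection

            HandleData(_buffer, bytes);

            // Queue the next receive; its IAsyncResult goes to the next callback.
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 RecvAsyncCallback, null);
        }

        private void HandleData(byte[] data, int count)
        {
            // stand-in for _ringBuffer.Write(...) followed by CheckBuffer()
        }
    }
    ```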

    Read the article

  • DWM and painting unresponsive apps

    - by Doug Kavendek
    In Vista and later, if an app becomes unresponsive, the Desktop Window Manager is able to handle redrawing it when necessary (move a window over it, drag it around, etc.) because it has kept a pixel buffer for it. Windows also tries to detect when an app has become unresponsive after some timeout, and tries to make the best of the situation -- I believe it dims out the window, adds "Not Responding" to its title bar, and perhaps some other effects. Now, we have a skinned app that uses window regions and layered windows, and it doesn't play well with these effects. We've been developing on XP, but have noticed a strange effect when testing on Vista. At some points the app may spend a few moments on some calculation or callback, and if it passes the unresponsive threshold (I've read that it's a five second timeout, but I cannot find a link), a strange graphical problem occurs: any pixels that would be 100% transparent due to the window regions turn black, which effectively makes the window rectangular again, with a black background. There seem to be other anomalies, with the original window's pixels being shifted a bit in some child dialogs. I am working on reducing such delays (ideally Windows will never need to step in like this), and trying to maintain responsiveness while it's busy, but I'd still like to figure out what is causing it to render like that, as I can't guarantee I can eliminate all delays. Basically, I just would like to know what Windows is doing when this happens, and how I can make my app behave properly with it. Skinned apps have to still work on Vista and later, so I need to figure out what I'm doing that's non-standard. I don't even know exactly how to look for information into how Windows now handles unresponsive apps, as my searches only return people having issues with apps that are unresponsive, or very rudimentary explanations of what the DWM does with such apps. Heck I'm not even 100% sure it's the DWM responsible, but it seems likely. Any potential leads? Photo of problem; screen shots won't capture the effect (note that the white dialog's buffer is shifted -- it is shifted exactly by the distance it has been offset from the main (blue) window):

    Read the article

  • How do I handle a POST request in Perl and FastCGI?

    - by Peterim
    Unfortunately, I'm not familiar with Perl, so asking here. Actually I'm using FCGI with Perl. I need to: 1. accept a POST request; 2. send it via POST to another URL; 3. get the results; 4. return the results to the first POST request (4 steps). To accept a POST request (step 1) I use the following code (found somewhere on the Internet): $ENV{'REQUEST_METHOD'} =~ tr/a-z/A-Z/; if ($ENV{'REQUEST_METHOD'} eq "POST") { read(STDIN, $buffer, $ENV{'CONTENT_LENGTH'}); } else { print ("some error"); } @pairs = split(/&/, $buffer); foreach $pair (@pairs) { ($name, $value) = split(/=/, $pair); $value =~ tr/+/ /; $value =~ s/%(..)/pack("C", hex($1))/eg; $FORM{$name} = $value; } The content of $name (it's a string) is the result of the first step. Now I need to send $name via a POST request to some_url (step 2), which returns me another result (step 3), which I have to return as the result to the very first POST request (step 4). Any help with this would be greatly appreciated. Thank you.

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also: Using the same PixelFormat as the source bitmap, to avoid format conversion Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again.) I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image.) I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy) so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?

    Read the article

  • Convert 4 bytes to int

    - by Oscar Reyes
    I'm reading a binary file like this: InputStream in = new FileInputStream( file ); byte[] buffer = new byte[1024]; while( in.read(buffer) > -1 ) { int a = // ??? } What I want to do is to read up to 4 bytes and create an int value from those, but I don't know how to do it. I kind of feel like I have to grab 4 bytes at a time, and perform one "byte" operation ( like << & FF and stuff like that ) to create the new int What's the idiom for this? EDIT Ooops this turned out to be a bit more complex ( to explain ) What I'm trying to do is read a file ( may be ascii, binary, it doesn't matter ) and extract the integers it may have. For instance suppose the binary content ( in base 2 ): 00000000 00000000 00000000 00000001 00000000 00000000 00000000 00000010 The integer representation should be 1 , 2 right? :- / 1 for the first 32 bits, and 2 for the remaining 32 bits. 11111111 11111111 11111111 11111111 Would be -1 and 01111111 11111111 11111111 11111111 Would be Integer.MAX_VALUE ( 2147483647 )
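    The question is Java, where the usual answers are either java.nio.ByteBuffer.wrap(buffer).getInt() or the shift-and-OR idiom with & 0xFF; the bit-twiddling itself is the same in most languages. A small sketch of the big-endian combine (written in C# here for illustration), fed with the asker's own sample bit patterns:

    ```csharp
    using System;

    class BytesToInt
    {
        // Combine four bytes (big-endian, as in the asker's example) into one int.
        // In Java the & 0xFF mask matters because byte is signed there; in C# the
        // promotion from byte is already unsigned, but the shift/OR pattern is the same.
        static int ToInt32BigEndian(byte[] b, int offset)
        {
            return (b[offset]     << 24)
                 | (b[offset + 1] << 16)
                 | (b[offset + 2] << 8)
                 |  b[offset + 3];
        }

        static void Main()
        {
            byte[] data =
            {
                0x00, 0x00, 0x00, 0x01,   // -> 1
                0x00, 0x00, 0x00, 0x02,   // -> 2
                0xFF, 0xFF, 0xFF, 0xFF,   // -> -1
                0x7F, 0xFF, 0xFF, 0xFF    // -> int.MaxValue (2147483647)
            };

            for (int i = 0; i + 4 <= data.Length; i += 4)
                Console.WriteLine(ToInt32BigEndian(data, i));
        }
    }
    ```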

    Read the article

  • Opengl-es draw an .obj file, but how?

    - by lacas
    I'd like to parse an .obj file. My parser is working well, but my display is not. The obj file is here; my code is: public ObjModelParser parse() { long startTime = System.currentTimeMillis(); InputStream fileIn = resources.openRawResource(resourceID); BufferedReader buffer = new BufferedReader(new InputStreamReader(fileIn)); String line=""; Log.e("model loader", "Start parsing object " + resourceID); try { while ((line = buffer.readLine()) != null) { StringTokenizer parts = new StringTokenizer(line, " "); int numTokens = parts.countTokens(); if (numTokens == 0) continue; String part = parts.nextToken(); if (part.equals(VERTEX)) { Log.e("v ", line); vertices.add(Float.parseFloat(parts.nextToken())); vertices.add(Float.parseFloat(parts.nextToken())); vertices.add(Float.parseFloat(parts.nextToken())); .... and my display code is: draw that model with TRIANGLE_STRIP and gl.glDrawArrays(rendermode, 0, coords.length/dimension); What is the mistake here? Edited: file here to show what my good coords from my program look like for a cube, and what comes from the .obj file, which never shows. Thanks, Leslie

    Read the article

  • Which coding system should I use in Emacs?

    - by Vivi
    I am a newbie in Emacs, and I am not a programmer. I have just tried to save a simple *.rtf file with some websites and tips on how to use Emacs, and I got: These default coding systems were tried to encode text in the buffer `notes.rtf': (iso-latin-1-dos (315 . 8216) (338 . 8217) (1514 . 8220) (1525 . 8221)) However, each of them encountered characters it couldn't encode: iso-latin-1-dos cannot encode these: ‘ ’ “ ” .... etc, etc, etc Now what is that? Now it is asking me to choose an encoding system Select coding system (default chinese-iso-8bit): I don't even know what an encoding system is, and I would rather not have to choose one every time I try to save a document... Is there any way I can set an encoding system that will work with all my files so I don't have to worry about this? I saw another question and answer elsewhere on this website (see it here) and it seems that if I type the following (defun set-coding-system () (setq buffer-file-coding-system 'utf-8-unix)) (add-hook 'find-file-hook 'set-coding-system) then I can have Emacs do this, but I am not sure... Can someone confirm this for me? Thanks so much :)

    Read the article

  • Java split xml file

    - by CC
    Hi all, I'm working on a piece of code to split files. I want to split a flat file (that's OK, it is working fine) and an XML file. The idea is to split based on a number of files to split into: I have a file, and I want to split it into x files (x is a parameter). I'm doing the split by taking the size of the file and dividing the size by the number of files. Then my solution was to use a BufferedReader and to use it like while ((n = reader.read(buffer, 0, buffer.length)) != -1) { The main problem is that for the XML file I cannot just split it anywhere, but I have to split it based on a block delimited by a start xml tag and end xml tag: <start tag> bla bla xml stuff </end tag> So I cannot cut a block in the middle. So if, when I'm in the middle of a block, the size of my new file is greater than my max, I will have to read until the end of the tag, and then start the next file. The problem is that I have all sorts of cases, and it is a bit difficult to find the end tag: the block may read text up to the middle of the end tag; the block may read text up to the end of the end tag, with no more characters after; etc. And at the same time I have to keep the loop going and read the next block. Sometimes the end of a block concatenated with the start of the next one gives me the end xml tag. I hope you get the idea. My question is, does anyone have an algorithm that does this more accurately and treats all the special cases? The idea is to split the file as quickly as possible. Thanks a lot.

    Read the article

  • udp can not receive any data

    - by StoneHeart
    Here is my code: Socket sck = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck.Bind(new IPEndPoint(IPAddress.Any, 0)); // Broadcast to find server string msg = "Imlookingforaserver:" + udp_listen_port; byte[] sendBytes4 = Encoding.ASCII.GetBytes(msg); IPEndPoint groupEP = new IPEndPoint(IPAddress.Parse("255.255.255.255"), server_port); sck.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); sck.SendTo(sendBytes4, groupEP); //Wait response from server Socket sck2 = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck2.Bind(new IPEndPoint(IPAddress.Any, udp_listen_port)); byte[] buffer = new byte[128]; EndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, udp_listen_port); sck2.ReceiveFrom(buffer, ref remoteEndPoint); //<<< I never pass this line I use the above code to try to find a server. First I broadcast a message and then I wait for a response from the server. In my test the server is written in C++ and runs on Windows Vista; the client is written in C# and runs on the same machine as the server. The problem is: the server can receive the message which the client broadcasts, but the client cannot receive anything from the server. I tried writing the client in C++ and it works like a charm, so I think my problem is in the C# client.
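    A likely culprit with two separate sockets is that the server replies to the endpoint the probe arrived from (the first socket's ephemeral port) rather than to the port named inside the message, so the second socket bound to udp_listen_port never sees anything; the Windows firewall dropping unsolicited inbound UDP is another common cause. A sketch of the simpler single-socket pattern with UdpClient, where the reply naturally comes back to the socket that sent the broadcast (the port number is a placeholder):

    ```csharp
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class DiscoverServer
    {
        static void Main()
        {
            int serverPort = 5000; // placeholder for the real server_port

            // One socket does both the broadcast and the receive, so the server can
            // simply reply to the endpoint the datagram came from -- no second port
            // and no port number embedded in the message.
            using (var udp = new UdpClient(0)) // bind to any free local port
            {
                udp.EnableBroadcast = true;
                byte[] probe = Encoding.ASCII.GetBytes("Imlookingforaserver");
                udp.Send(probe, probe.Length,
                         new IPEndPoint(IPAddress.Broadcast, serverPort));

                udp.Client.ReceiveTimeout = 3000; // don't block forever

                try
                {
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] reply = udp.Receive(ref remote); // blocks until a reply or timeout
                    Console.WriteLine("Server at {0} said: {1}",
                                      remote, Encoding.ASCII.GetString(reply));
                }
                catch (SocketException)
                {
                    Console.WriteLine("No reply within 3 seconds.");
                }
            }
        }
    }
    ```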

    Read the article

  • Bad crypto error in .NET 4.0

    - by Andrey
    Today I moved my web application to .net 4.0 and Forms Auth just stopped working. After several hours of digging into my SqlMembershipProvider (simplified version of built-in SqlMembershipProvider), I found that HMACSHA256 hash is not consistent. This is the encryption method: internal string EncodePassword(string pass, int passwordFormat, string salt) { if (passwordFormat == 0) // MembershipPasswordFormat.Clear return pass; byte[] bIn = Encoding.Unicode.GetBytes(pass); byte[] bSalt = Convert.FromBase64String(salt); byte[] bAll = new byte[bSalt.Length + bIn.Length]; byte[] bRet = null; Buffer.BlockCopy(bSalt, 0, bAll, 0, bSalt.Length); Buffer.BlockCopy(bIn, 0, bAll, bSalt.Length, bIn.Length); if (passwordFormat == 1) { // MembershipPasswordFormat.Hashed HashAlgorithm s = HashAlgorithm.Create( Membership.HashAlgorithmType ); bRet = s.ComputeHash(bAll); } else { bRet = EncryptPassword( bAll ); } return Convert.ToBase64String(bRet); } Passing the same password and salt twice returns different results!!! It was working perfectly in .NET 3.5 Anyone aware of any breaking changes, or is it a known bug? UPDATE: When I specify SHA512 as hashing algorithm, everything works fine, so I do believe it's a bug in .NET 4.0 crypto Thanks! Andrey
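    This is most likely not a broken hash but a keyed one: HMACSHA256 is an HMAC, and an instance created without an explicit key (which is what HashAlgorithm.Create hands back) invents a random key, so two instances hash the same input differently by design. ASP.NET 4 also changed the default machineKey validation algorithm to HMACSHA256, which is probably why the same provider code behaved differently after the upgrade, and why switching to the unkeyed SHA512 "fixes" it. A hedged sketch of the keyed variant with the salt doubling as the HMAC key, shown only to illustrate the mechanics rather than as password-storage advice:

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class PasswordHashDemo
    {
        // HMACSHA256 is a *keyed* hash: a new HMACSHA256 created without an explicit
        // key generates a random one, so two instances produce different digests for
        // the same input. Either pin the key (derived from the salt here, as an
        // illustration) or use an unkeyed algorithm such as SHA256/SHA512.
        static string EncodePassword(string pass, string salt)
        {
            byte[] bIn   = Encoding.Unicode.GetBytes(pass);
            byte[] bSalt = Convert.FromBase64String(salt);

            using (var hmac = new HMACSHA256(bSalt))   // fixed key -> repeatable output
            {
                return Convert.ToBase64String(hmac.ComputeHash(bIn));
            }
        }

        static void Main()
        {
            string salt = Convert.ToBase64String(new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 });
            // Same password + same salt now yields the same encoded value every time.
            Console.WriteLine(EncodePassword("p@ssw0rd", salt));
            Console.WriteLine(EncodePassword("p@ssw0rd", salt));
        }
    }
    ```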

    Read the article

  • 3 index buffers

    - by bobobobo
    So, in both D3D and OpenGL there's the ability to draw from an index buffer. The OBJ file format however does something weird. It specifies a bunch of vertices like: v -21.499660 6.424470 4.069845 v -25.117170 6.418100 4.068025 v -21.663851 8.282170 4.069585 v -21.651890 6.420180 4.068675 v -25.128481 8.281520 4.069585 Then it specifies a bunch of normals like.. vn 0.196004 0.558984 0.805680 vn -0.009523 0.210194 -0.977613 vn -0.147787 0.380832 -0.912757 vn 0.822108 0.567581 0.044617 vn 0.597037 0.057507 -0.800150 vn 0.809312 -0.045432 0.585619 Then it specifies a bunch of tex coords like vt 0.1225 0.5636 vt 0.6221 0.1111 vt 0.4865 0.8888 vt 0.2862 0.2586 vt 0.5865 0.2568 vt 0.1862 0.2166 THEN it specifies "faces" on the model like: f 1/2/5 2/3/7 8/2/6 f 5/9/7 6/3/8 5/2/1 So, in trying to render this with vertex buffers, in OpenGL I can use glVertexPointer, glNormalPointer and glTexCoordPointer to set pointers to each of the vertex, normal and texture coordinate arrays respectively.. but when it comes down to drawing with glDrawElements, I can only specify ONE set of indices, namely the indices it should use when visiting the vertices. OK, then what? I still have 3 sets of indices to visit. In D3D it's much the same - I can set up 3 streams: one for vertices, one for texcoords, and one for normals, but when it comes to using IDirect3DDevice9::DrawIndexedPrimitive, I can still only specify ONE index buffer, which will index into the vertices array. So, is it possible to draw from vertex buffers using different index arrays for each of the vertex, texcoord, and normal buffers (in EITHER D3D or OpenGL)?
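    The short answer for both APIs is no: glDrawElements and DrawIndexedPrimitive take exactly one index stream, so OBJ's per-attribute indices have to be flattened first. The standard trick is to create one combined vertex per distinct v/vt/vn triple and build a single index list over those combined vertices (duplicating positions wherever the OBJ reused them with different normals or texcoords). A language-agnostic sketch of that de-indexing pass, written in C# purely for illustration since the question's code would be C++:

    ```csharp
    using System;
    using System.Collections.Generic;

    // One combined vertex per unique (position, texcoord, normal) triple, plus a
    // single index list over those combined vertices. Uploading the result to
    // GL/D3D is then a normal single-index-buffer draw.
    class ObjDeindexer
    {
        public struct Vertex { public int V, Vt, Vn; }   // indices into the OBJ arrays

        public static void Deindex(IEnumerable<(int v, int vt, int vn)> faceCorners,
                                   out List<Vertex> vertices, out List<int> indices)
        {
            vertices = new List<Vertex>();
            indices  = new List<int>();
            var seen = new Dictionary<(int, int, int), int>();

            foreach (var corner in faceCorners)
            {
                if (!seen.TryGetValue(corner, out int index))
                {
                    index = vertices.Count;                 // first time: new combined vertex
                    seen[corner] = index;
                    vertices.Add(new Vertex { V = corner.v, Vt = corner.vt, Vn = corner.vn });
                }
                indices.Add(index);                         // one shared index stream
            }
        }

        static void Main()
        {
            // The two "f" lines from the question, as (v, vt, vn) corner triples.
            var corners = new List<(int v, int vt, int vn)>
            {
                (1, 2, 5), (2, 3, 7), (8, 2, 6),
                (5, 9, 7), (6, 3, 8), (5, 2, 1)
            };
            Deindex(corners, out var verts, out var idx);
            Console.WriteLine("{0} combined vertices, {1} indices", verts.Count, idx.Count);
        }
    }
    ```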

    Read the article

  • GDI+ double buffering in C++

    - by David Titarenco
    I haven't written anything with GDI for a while now (and never with GDI+), and I'm just working on a fun project, but for the life of me, I can't figure out how to double buffer GDI+ void DrawStuff(HWND hWnd) { HDC hdc; HDC hdcBuffer; PAINTSTRUCT ps; hdc = BeginPaint(hWnd, &ps); hdcBuffer = CreateCompatibleDC(hdc); Graphics graphics(hdc); graphics.Clear(Color::Black); // drawing stuff, i.e. bunnies: Image bunny(L"bunny.gif"); graphics.DrawImage(&bunny, 0, 0, bunny.GetWidth(), bunny.GetHeight()); BitBlt(hdc, 0,0, WIDTH , HEIGHT, hdcBuffer, 0,0, SRCCOPY); EndPaint(hWnd, &ps); } The above works (everything renders perfectly), but it flickers. If I change Graphics graphics(hdc); to Graphics graphics(hdcBuffer);, I see nothing (although I should be BitBlt'ing from the buffer to the hWnd's hdc at the bottom). My message pipeline is set up properly (WM_PAINT calls DrawStuff), and I'm forcing a WM_PAINT message on every program loop by calling RedrawWindow(window, NULL, NULL, RDW_ERASE | RDW_INVALIDATE | RDW_UPDATENOW); I'm probably going about this the wrong way, any ideas? The MSDN documentation is cryptic at best.

    Read the article

  • java BufferedReader specific length returns NUL characters

    - by Bastien
    I have a TCP socket client receiving messages (data) from a server. Messages are of the type length (2 bytes) + data (length bytes), delimited by STX & ETX characters. I'm using a BufferedReader to retrieve the first two bytes, decode the length, then read again from the same BufferedReader the appropriate length and put the result in a char array. Most of the time I have no problem, but SOMETIMES (1 out of thousands of messages received), when attempting to read (length) bytes from the reader, I get only part of it, the rest of my array being filled with NUL characters. I imagine it's because the buffer has not yet been filled. char[] bufLen = new char[2]; _bufferedReader.read(bufLen); int len = decodeLength(bufLen); char[] _rawMsg = new char[len]; _bufferedReader.read(_rawMsg); return _rawMsg; I solved the problem in several iterative ways: first I tested the last char of my array: if it wasn't ETX I would read chars from the BufferedReader one by one until I reached ETX, then start over my regular routine. The consequence is that I would basically DROP one message. Then, in order to still retrieve that message, I would find the first occurrence of the NUL char in my "truncated" message, read and store additional characters one at a time until I reached ETX, and append them to my "truncated" message, confirming the length is OK. It works also, but I'm really thinking there's something I could do better, like checking if the total number of characters I need are available in the buffer before reading it, but I can't find the right way to do it... any idea / pointer? Thanks!
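    What the asker is seeing is the documented behaviour of read(char[], ...): it returns whatever happens to be available, which can be less than the requested length, and the untouched tail of the array keeps its default NUL characters. The cure is a loop that keeps reading until the full length has arrived (in Java, a loop around read(cbuf, off, len), or DataInputStream.readFully when working with raw bytes). The question is Java; here is the shape of that loop as a C# stream helper, just to show the pattern:

    ```csharp
    using System;
    using System.IO;
    using System.Text;

    static class ReadFullyDemo
    {
        // Read() on a stream (like read() on a Java Reader) may return fewer bytes
        // than requested -- whatever has arrived so far. To get a fixed-length
        // message, loop until the count is satisfied or the peer closes.
        public static byte[] ReadFully(this Stream stream, int count)
        {
            var buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0)
                    throw new EndOfStreamException(
                        "Connection closed after " + offset + " of " + count + " bytes");
                offset += read;
            }
            return buffer;
        }

        static void Main()
        {
            // Simulate a length-prefixed message: 2-byte big-endian length, then body.
            var ms = new MemoryStream(new byte[] { 0, 5, (byte)'H', (byte)'e',
                                                   (byte)'l', (byte)'l', (byte)'o' });
            byte[] lenBytes = ms.ReadFully(2);
            int len = (lenBytes[0] << 8) | lenBytes[1];
            byte[] body = ms.ReadFully(len);
            Console.WriteLine(Encoding.ASCII.GetString(body)); // "Hello"
        }
    }
    ```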

    Read the article

  • Strip parity bits in C from 8 bits of data followed by 1 parity bit

    - by dubnde
    I have a buffer of bits with 8 bits of data followed by 1 parity bit. This pattern repeats itself. The buffer is currently stored as an array of octets. Example (p are parity bits): 0001 0001 p000 0100 0p00 0001 00p01 1100 ... should become 0001 0001 0000 1000 0000 0100 0111 00 ... Basically, I need to strip off every ninth bit to just obtain the data bits. How can I achieve this? This is related to another question asked here some time back. This is on a 32 bit machine so the solution to the related question may not be applicable. The maximum possible number of bits is 45, i.e. 5 data octets. This is what I have tried so far. I have created a "boolean" array and added the bits into the array based on the bitset of the octet. I then look at every ninth index of the array and throw it away. Then I move the remaining array down one index. Then I've got only the data bits left. I was thinking there may be better ways of doing this.
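    The question targets C, but the bit-stream walk is the same in any language: pull bits MSB-first, keep eight out of every nine, and pack the kept bits back into octets. A sketch of that loop in C# (the sample bytes in Main are arbitrary placeholders, not the asker's data); porting it to C mostly means swapping the List<byte> for a fixed-size output array:

    ```csharp
    using System;
    using System.Collections.Generic;

    class ParityStripper
    {
        // Treat the input as a continuous MSB-first bit stream of 9-bit groups
        // (8 data bits + 1 parity bit) and keep only the data bits.
        static byte[] StripParity(byte[] input, int totalBits)
        {
            var output = new List<byte>();
            int acc = 0, accBits = 0;

            for (int bit = 0; bit < totalBits; bit++)
            {
                int value = (input[bit / 8] >> (7 - (bit % 8))) & 1; // next input bit

                if (bit % 9 < 8)            // bits 0..7 of each group are data
                {
                    acc = (acc << 1) | value;
                    if (++accBits == 8)     // a full data byte has been collected
                    {
                        output.Add((byte)acc);
                        acc = 0;
                        accBits = 0;
                    }
                }
                // bit % 9 == 8 is the parity bit: just skip it
            }
            return output.ToArray();
        }

        static void Main()
        {
            // 45 bits = 5 data octets + 5 parity bits, the maximum the asker mentions.
            // The byte values here are arbitrary sample data.
            byte[] raw = { 0x11, 0x04, 0x01, 0x1C, 0xAA, 0x80 };
            Console.WriteLine(BitConverter.ToString(StripParity(raw, 45)));
        }
    }
    ```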

    Read the article

  • ORGetValue from Offline Registry - ERROR_MORE_DATA

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find out more information on the offreg.dll here: MSDN Currently, while attempting to read a value from an open registry hive / key I receive the following error: 234 or ERROR_MORE_DATA Here is the .h code that contains ORGetValue: DWORD ORAPI ORGetValue ( __in ORHKEY Handle, __in_opt PCWSTR lpSubKey, __in_opt PCWSTR lpValue, __out_opt PDWORD pdwType, __out_bcount_opt(*pcbData) PVOID pvData, __inout_opt PDWORD pcbData ); Here is the code that I am using to pull the data: [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue", SetLastError = true, CallingConvention = CallingConvention.StdCall)] public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue, out uint pdwType, out string pvData, out uint pcbData); IntPtr myHive; IntPtr myKey; string myValue; uint pdwtype; uint pcbdata; uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata); The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling, or a second call with an adjusted buffer, or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.
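    ERROR_MORE_DATA (234) is the registry family's way of saying the output buffer is too small, and the P/Invoke signature shown above more or less guarantees it: pvData is declared out string, so no buffer is passed in at all, and pcbData is out uint even though the native prototype marks it as in/out (it must carry the buffer size in). A hedged sketch of the usual two-call pattern, with pvData marshalled as a byte[] and pcbData passed by ref; the exact return code of the size-probing first call is glossed over here, what matters is that it fills in cbData:

    ```csharp
    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    static class OffReg
    {
        // pvData is a caller-supplied buffer and pcbData is in/out: it carries the
        // buffer size in and the required/written size out (see the C prototype in
        // the question). CharSet.Unicode matches the PCWSTR string parameters.
        [DllImport("offreg.dll", CharSet = CharSet.Unicode,
                   CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORGetValue(IntPtr handle, string lpSubKey, string lpValue,
                                             out uint pdwType, byte[] pvData, ref uint pcbData);

        public static string GetString(IntPtr key, string subKey, string valueName)
        {
            // First call: ask how many bytes are needed (pvData may be null here).
            uint type, cbData = 0;
            ORGetValue(key, subKey, valueName, out type, null, ref cbData);

            // Second call: supply a buffer of exactly that size.
            var data = new byte[cbData];
            uint rc = ORGetValue(key, subKey, valueName, out type, data, ref cbData);
            if (rc != 0)
                throw new InvalidOperationException("ORGetValue failed: " + rc);

            // REG_SZ data is UTF-16 with a trailing null terminator.
            return Encoding.Unicode.GetString(data, 0, (int)cbData).TrimEnd('\0');
        }
    }
    ```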

    Read the article

  • Problem creating socket with C++ in winsock2

    - by Ash85
    Hi, I'm having the weirdest problem causing me headaches. Consider the following code: // Create and bind socket std::map<Connection, bool> clients; unsigned short port=6222; struct sockaddr_in local_address, from_address; int result; char buffer[10000]; SOCKET receive_socket; local_address.sin_family = AF_INET; local_address.sin_addr.s_addr = INADDR_ANY; local_address.sin_port = htons(port); receive_socket = socket(AF_INET,SOCK_DGRAM,0); What's happening is receive_socket is not binding, I get SOCKET_ERROR. When I debug the program and check receive_socket, it appears to just be garbled crap. I put a breakpoint on the 'std::map' line. When I step into each line of the above code, the debug cursor jumps straight from the 'unsigned short port' line to the first 'local_address.sin' line, even though I am using step into (F11), it does not stop at struct, int, char or SOCKET lines, it jumps straight over them. At this point I hover my mouse over local_address, from_address, result, buffer and receive_socket. They are all full of garbled crap. Is this because I have not defined these variables yet? I've also noticed that when I reach the bottom of the above code, local_address.sin_port is set to 19992, but it should be 6222?

    Read the article

  • Json HTTP Module stream issue

    - by Justin
    Hey, I have an HTTP module that I use to clean up the JSON returned by my web service (see http://www.codeproject.com/KB/webservices/ASPNET_JSONP.aspx?msg=3400287#xx3400287xx for an example of this.) Basically it relates to calling cross-domain JSON web services from JavaScript. There is this JsonHttpModule which uses a JsonResponseFilter stream class to write out the JSON, and the overloaded Write method is supposed to wrap the name of the callback function around the JSON, otherwise the JSON errors out as needing a label. However, if the JSON is really long, the Write method in the stream class is called multiple times, causing the callback function to incorrectly get inserted midway through the JSON. Is there a way in the stream class to wrap the callback function around the stream at the end, or to specify that it write all of the JSON in one Write method instead of in chunks? Here's where it calls the JsonResponseFilter in the JsonHttpModule: public void OnReleaseRequestState(object sender, EventArgs e) { HttpApplication app = (HttpApplication)sender; if (!_Apply(app.Context.Request)) return; // apply response filter to conform to JSONP app.Context.Response.Filter = new JsonResponseFilter(app.Context.Response.Filter, app.Context); } Here's the Write method in the JsonResponseFilter stream class that gets called multiple times: public override void Write(byte[] buffer, int offset, int count) { var b1 = Encoding.UTF8.GetBytes(_context.Request.Params["callback"] + "("); _responseStream.Write(b1, 0, b1.Length); _responseStream.Write(buffer, offset, count); var b2 = Encoding.UTF8.GetBytes(");"); _responseStream.Write(b2, 0, b2.Length); } Thanks for any help! Justin
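    Because ASP.NET can push a long response through the filter in several chunks, the wrapping has to happen once per response rather than once per Write call: emit the callback( prefix the first time Write fires, pass every chunk through untouched, and append ); when the filter is closed at the end of the response. A sketch of the filter written that way, keeping the constructor shape used by the module above:

    ```csharp
    using System;
    using System.IO;
    using System.Text;
    using System.Web;

    // Response filter that writes the JSONP prefix once, passes every chunk through
    // as-is, and appends the closing ");" when the response is being finished.
    public class JsonResponseFilter : Stream
    {
        private readonly Stream _inner;
        private readonly HttpContext _context;
        private bool _prefixWritten;

        public JsonResponseFilter(Stream inner, HttpContext context)
        {
            _inner = inner;
            _context = context;
        }

        public override void Write(byte[] buffer, int offset, int count)
        {
            if (!_prefixWritten)
            {
                byte[] prefix = Encoding.UTF8.GetBytes(
                    _context.Request.Params["callback"] + "(");
                _inner.Write(prefix, 0, prefix.Length);
                _prefixWritten = true;
            }
            _inner.Write(buffer, offset, count); // pass the JSON chunk through untouched
        }

        public override void Close()
        {
            if (_prefixWritten)
            {
                byte[] suffix = Encoding.UTF8.GetBytes(");");
                _inner.Write(suffix, 0, suffix.Length);
            }
            _inner.Close();
            base.Close();
        }

        // Minimal plumbing required of a Stream subclass.
        public override bool CanRead  { get { return false; } }
        public override bool CanSeek  { get { return false; } }
        public override bool CanWrite { get { return true; } }
        public override long Length   { get { throw new NotSupportedException(); } }
        public override long Position
        {
            get { throw new NotSupportedException(); }
            set { throw new NotSupportedException(); }
        }
        public override void Flush() { _inner.Flush(); }
        public override int Read(byte[] buffer, int offset, int count)
        { throw new NotSupportedException(); }
        public override long Seek(long offset, SeekOrigin origin)
        { throw new NotSupportedException(); }
        public override void SetLength(long value)
        { throw new NotSupportedException(); }
    }
    ```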

    Read the article

  • mprotect - how aligning to multiple of pagesize works?

    - by user299988
    Hi, I don't understand the 'aligning allocated memory' part of the mprotect usage. I am referring to the code example given at http://linux.die.net/man/2/mprotect char *p; char c; /* Allocate a buffer; it will have the default protection of PROT_READ|PROT_WRITE. */ p = malloc(1024+PAGESIZE-1); if (!p) { perror("Couldn't malloc(1024)"); exit(errno); } /* Align to a multiple of PAGESIZE, assumed to be a power of two */ p = (char *)(((int) p + PAGESIZE-1) & ~(PAGESIZE-1)); c = p[666]; /* Read; ok */ p[666] = 42; /* Write; ok */ /* Mark the buffer read-only. */ if (mprotect(p, 1024, PROT_READ)) { perror("Couldn't mprotect"); exit(errno); } For my understanding, I tried using a PAGESIZE of 16, and 0010 as the address of p. I ended up getting 0001 as the result of (((int) p + PAGESIZE-1) & ~(PAGESIZE-1)). Could you please clarify how this whole 'alignment' works? Thanks,
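    The expression rounds the pointer up to the next multiple of PAGESIZE: adding PAGESIZE-1 pushes any unaligned address past the next boundary, and AND-ing with ~(PAGESIZE-1) clears the low bits, which rounds back down onto that boundary. With PAGESIZE 16 and an address of binary 0010 (= 2) the result is 16 (binary 10000), not 0001; getting 0001 suggests the ~ was dropped, since 17 & 15 is 1. The question is C, but the arithmetic is identical anywhere; a tiny sketch (in C# only for illustration) that walks through the numbers:

    ```csharp
    using System;

    class AlignDemo
    {
        // Round an address up to the next multiple of pageSize (pageSize must be a
        // power of two). Adding pageSize-1 pushes any unaligned address past the next
        // boundary; AND-ing with ~(pageSize-1) clears the low bits, i.e. rounds back
        // down onto that boundary.
        static uint AlignUp(uint address, uint pageSize)
        {
            return (address + pageSize - 1) & ~(pageSize - 1);
        }

        static void Main()
        {
            // The asker's miniature example: pageSize 16, address 0b0010 (= 2).
            // (2 + 15) = 17 = 0b10001; AND with ~15 = ...11110000 keeps 0b10000 = 16.
            Console.WriteLine(AlignUp(0b0010, 16));   // 16

            // An already-aligned address stays where it is.
            Console.WriteLine(AlignUp(32, 16));       // 32

            // A typical 4 KiB page: 0x12345 rounds up to 0x13000.
            Console.WriteLine("0x{0:X}", AlignUp(0x12345, 0x1000));
        }
    }
    ```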

    Read the article

  • Drawing only part of a texture OpenGL ES iPhone

    - by Ben Reeves
    ..Continued on from my previous question I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone. - (void)setupView { glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320}); glEnable(GL_TEXTURE_2D); } // Updates the OpenGL view when the timer fires - (void)drawView { // Make sure that you are drawing to the current context [EAGLContext setCurrentContext:context]; //Get the 320*480 buffer const int8_t * frameBuf = [source getNextBuffer]; //Create enough storage for a 512x512 power of 2 texture int8_t lBuf[2*512*512]; memcpy (lBuf, frameBuf, 320*480*2); //Upload the texture glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf); //Draw it glDrawTexiOES(0, 0, 1, 480, 320); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } If I produce the original texture at 512*512 the output is cropped incorrectly but otherwise looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine int8_t lBuf[512][512][2]; const char * frameDataP = frameData; for (int ii = 0; ii < 480; ++ii) { memcpy(lBuf[ii], frameDataP, 320); frameDataP += 320; } Which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.

    Read the article

  • What's wrong with this code?

    - by girinie
    Hi, in this code I first download a web page's source code, then store the code in a text file. Then I read that file back and match it against a regex to search for a specific string. There is no compiler error. Exception in thread "main" java.lang.NoClassDefFoundError: java/lang/CharSequence Can anybody tell me where I am wrong? import java.io.*; import java.net.*; import java.lang.*; import java.util.regex.Matcher; import java.util.regex.Pattern; public class WebDownload { public void getWebsite() { try{ URL url=new URL("www.gmail.com");// any URL can be given URLConnection urlc=url.openConnection(); BufferedInputStream buffer=new BufferedInputStream(urlc.getInputStream()); StringBuffer builder=new StringBuffer(); int byteRead; FileOutputStream fout; StringBuffer contentBuf = new StringBuffer(); while((byteRead=buffer.read()) !=-1) { builder.append((char)byteRead); fout = new FileOutputStream ("myfile3.txt"); new PrintStream(fout).println (builder.toString()); fout.close(); } BufferedReader in = new BufferedReader(new FileReader("myfile3.txt")); String buf = null; while ((buf = in.readLine()) != null) { contentBuf.append(buf);contentBuf.append("\n"); } in.close(); Pattern p = Pattern.compile("<div class=\"summarycount\">([^<]*)</div>"); Matcher matcher = p.matcher(contentBuf); if(matcher.find()) { System.out.println(matcher.group(1)); } else System.out.println("could not find"); } catch(MalformedURLException ex) { ex.printStackTrace(); } catch(IOException ex){ ex.printStackTrace(); } } public static void main(String [] args) { WebDownload web=new WebDownload(); web.getWebsite(); } }

    Read the article
