Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 83/151 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • Conversion from iPhone Core Surface RGB frame into ffmpeg AVFrame

    - by Sridhar
    Hello, I am trying to convert a Core Surface RGB frame buffer (iPhone) to an ffmpeg AVFrame to encode into a movie file, but I am not getting the correct video output (the video shows dazzled colors, not the correct picture). I guess there is something wrong with converting from the Core Surface frame buffer into the AVFrame. Here is my code: Surface *surface = [[Surface alloc]initWithCoreSurfaceBuffer:coreSurfaceBuffer]; [surface lock]; unsigned int height = surface.height; unsigned int width = surface.width; unsigned int alignmentedBytesPerRow = (width * 4); if (!readblePixels) { readblePixels = CGBitmapAllocateData(alignmentedBytesPerRow * height); NSLog(@"alloced readablepixels"); } unsigned int bytesPerRow = surface.bytesPerRow; void *pixels = surface.baseAddress; for (unsigned int j = 0; j < height; j++) { memcpy(readblePixels + alignmentedBytesPerRow * j, pixels + bytesPerRow * j, bytesPerRow); } pFrameRGB->data[0] = readblePixels; // I guess here is what I am doing wrong. pFrameRGB->data[1] = NULL; pFrameRGB->data[2] = NULL; pFrameRGB->data[3] = NULL; pFrameRGB->linesize[0] = pCodecCtx->width; pFrameRGB->linesize[1] = 0; pFrameRGB->linesize[2] = 0; pFrameRGB->linesize[3] = 0; sws_scale (img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize, 0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize); Please help me out. Thanks, Raghu

    Read the article

  • Is there a way to receive data as unsigned char over UDP in Qt

    - by user269037
    I need to send floating point numbers over a UDP connection to a Qt application. Now in Qt the only function available is qint64 readDatagram ( char * data, qint64 maxSize, QHostAddress * address = 0, quint16 * port = 0 ) which accepts data in the form of a signed character buffer. I can convert my float into a string and send it, but it will obviously not be very efficient to convert a 4 byte float into a much longer character buffer. I got hold of these 2 functions to convert a 4 byte float into an unsigned 32-bit integer for transfer over the network, which works fine for a simple C++ UDP program, but for Qt I need to receive the data as unsigned char. Is it possible to avoid converting the floating point data into a string and then sending it? uint32_t htonf(float f) { uint32_t p; uint32_t sign; if (f < 0) { sign = 1; f = -f; } else { sign = 0; } p = ((((uint32_t)f)&0x7fff)<<16) | (sign<<31); // whole part and sign p |= (uint32_t)(((f - (int)f) * 65536.0f))&0xffff; // fraction return p; } float ntohf(uint32_t p) { float f = ((p>>16)&0x7fff); // whole part f += (p&0xffff) / 65536.0f; // fraction if (((p>>31)&0x1) == 0x1) { f = -f; } // sign bit set return f; }
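
    A minimal sketch (in Python, for illustration only) of the usual alternative: pack the float into its 4 raw IEEE-754 bytes in network order with struct instead of a hand-rolled fixed-point encoding, and send those bytes as-is. The host and port are hypothetical; on the receiving side the datagram can be treated as raw bytes regardless of whether the buffer type is signed or unsigned char.

        import socket
        import struct

        value = 3.14159
        payload = struct.pack("!f", value)         # 4 bytes, big-endian IEEE-754, no string conversion

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(payload, ("127.0.0.1", 5000))  # hypothetical receiver address/port

        # Receiving side: read 4 raw bytes and unpack them back into a float.
        # data, addr = sock.recvfrom(4)
        # value = struct.unpack("!f", data)[0]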

    Read the article

  • dumping the source code for an anonymous function

    - by intuited
    I'm working with a lot of anonymous functions, ie functions declared as part of a dictionary, aka "methods". It's getting pretty painful to debug, because I can't tell what function the errors are happening in. Vim's backtraces look like this: Error detected while processing function NamedFunction..2111..2105: line 1: E730: using List as a String This trace shows that the error occurred in the third level down the stack, on the first line of anonymous function #2105. IE NamedFunction called anonymous function #2111, which called anonymous function #2105. NamedFunction is one declared through the normal function NamedFunction() ... endfunction syntax; the others were declared using code like function dict.func() ... endfunction. So obviously I'd like to find out which function has number 2105. Assuming that it's still in scope, it's possible to find out what Dictionary entry references it by dumping all of the dictionary variables that might contain that reference. This is sort of awkward and it's difficult to be systematic about it, though I guess I could code up a function to search through all of the loaded dictionaries for a reference to that function, watching out for circular references. Although to be really thorough, it would have to search not only script-local and global dictionaries, but buffer-local dictionaries as well; is there a way to access another buffer's local variables? Anyway I'm wondering if it's possible to dump the source code for the anonymous function instead. This would be a lot easier and probably more reliable.

    Read the article

  • Creating a shim Stream

    - by spender
    A decompression API that I am using has the following API: Decode(Stream inStream,Stream outStream) I'd like to create a wrapper around this API, such that I can create my own Stream class which offers up the decoded data. Stream decodedStream=new BlaDecodeStream(inStream); So that I can then use this stream as a parameter to the XmlReader constructor in the same way one might use the System.IO.Compression.GZipStream. As far as I can tell, the only other option is to set the outStream to a MemoryStream or to a FileStream and go in two hops. The files I am dealing with are enormous, so neither of these options is particularly attractive. Before I go reinventing the wheel, is there any prior art that I might be able to draw from, or something in the BCL I might have missed? The CircularStream implementation here would go some of the way to helping, but I'm really looking for something similar that would block (as opposed to over/underrun) when the Stream's internal buffer is 'empty' when reading from it and block when the internal buffer is full when writing to it. In this way it could serve as the outStream parameter and simultaneously (i.e. from another thread) could be read from by the XmlReader.
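
    The blocking behaviour described here (reads block while the buffer is empty, writes block while it is full) is essentially a bounded producer/consumer pipe. A rough sketch of the idea in Python, purely for illustration; in .NET the decoder would run on a worker thread writing into such a stream while the XmlReader reads from it on another thread.

        import threading

        class BlockingRingStream:
            """Bounded byte buffer: read() blocks while empty, write() blocks while full."""
            def __init__(self, capacity=64 * 1024):
                self._buf = bytearray()
                self._capacity = capacity
                self._closed = False
                self._cond = threading.Condition()

            def write(self, data):
                with self._cond:
                    while len(self._buf) >= self._capacity and not self._closed:
                        self._cond.wait()          # wait for the reader to drain some bytes
                    self._buf += data
                    self._cond.notify_all()
                    return len(data)

            def close(self):                       # called by the writer when decoding is done
                with self._cond:
                    self._closed = True
                    self._cond.notify_all()

            def read(self, n):
                with self._cond:
                    while not self._buf and not self._closed:
                        self._cond.wait()          # wait for the writer to produce some bytes
                    chunk = bytes(self._buf[:n])
                    del self._buf[:n]
                    self._cond.notify_all()
                    return chunk                   # b"" only after close() and a drained buffer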

    Read the article

  • elisp compile, add a regexp to error detection

    - by Gauthier
    I am starting with emacs, and don't know much elisp. Nearly nothing, really. I want to use ack as a replacement for grep. These are the instructions I followed to use ack from within emacs: http://www.rooijan.za.net/?q=ack_el Now I don't like the output format that is used in this el file, I would like the output to be that of ack --group. So I changed: (read-string "Ack arguments: " "-i" nil "-i" nil) to: (read-string "Ack arguments: " "-i --group" nil "-i --group" nil) So far so good. But this made me lose the ability to click-press_enter on the rows of the output buffer. In the original behaviour, compile-mode was used to be able to jump to the selected line. I figured I should add a regexp to the ack-mode. The ack-mode is defined like this: (define-compilation-mode ack-mode "Ack" "Specialization of compilation-mode for use with ack." nil) and I want to add the regexp [0-9]+: to be detected as an error too, since it is what every row of the output buffer includes (a line number). I've tried to modify the define-compilation-mode above to add the regexp, but I failed miserably. How can I make the output buffer of ack let me click on its rows? --- EDIT, I also tried: --- (defvar ack-regexp-alist '(("[0-9]+:" 2 3)) "Alist that specifies how to match rows in ack output.") (setq compilation-error-regexp-alist (append compilation-error-regexp-alist ack-regexp-alist)) I stole that somewhere and tried to adapt it to my needs. No luck.

    Read the article

  • Reading a Serial Port - Ignore portion of data written to serial port for certain time

    - by farmerjoe
    I would like to read data coming and Arduino on a serial port on intervals. So essentially something like Take a reading Wait Take a reading Wait Take ... etc. The problem I am facing is that the port will buffer its information so as soon as I call a wait function the data on the serial port will start buffering. Once the wait function finishes I try and read the data again but I am reading from the beginning of the buffer and the data is not current anymore, but instead is the reading taken at roughly the time the wait function began. My question is whether there is a way that I am unaware of to ignore the portion of data read in during that wait period and only read what is currently being delivered on the serial port? I have this something analogous to this so far: import serial s = serial.Serial(path_to_my_serial_port,9600) while True: print s.readline() time.sleep(.5) For explanation purposes I have the Arduino outputting the time since it began its loop. By the python code, the time of each call should be a half second apart. By the serial output the time is incrementing in less than a millisecond. These values do not change regardless of the sleep timing. Sample output: 504 504 504 504 505 505 505 ... As an idea of my end goal, I would like to measure the value of the port, wait a time delay, see what the value is then, wait again, see what the value is then, wait again. I am currently using Python for this but am open to other languages.
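
    A small sketch of the flush-then-read approach with pyserial (the port path is hypothetical): discard whatever accumulated in the OS buffer during the sleep, skip the partial line you probably landed in the middle of, then read a fresh line.

        import time
        import serial  # pyserial

        s = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # hypothetical port path

        while True:
            time.sleep(0.5)
            s.reset_input_buffer()   # throw away everything buffered during the sleep (flushInput() on older pyserial)
            s.readline()             # discard the (likely partial) line we landed in
            line = s.readline()      # this one is current data from the Arduino
            print(line.decode(errors="replace").strip())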

    Read the article

  • Having trouble with extension methods for byte arrays

    - by Dave
    I'm working with a device that sends back an image, and when I request an image, there is some undocumented information that comes before the image data. I was only able to realize this by looking through the binary data and identifying the image header information inside. I've been able to make everything work fine by writing a method that takes a byte[] and returns another byte[] with all of this preamble "stuff" removed. However, what I really want is an extension method so I can write image_buffer.RemoveUpToByteArray(new byte[] { 0x42, 0x4D }); instead of byte[] new_buffer = RemoveUpToByteArray( image_buffer, new byte[] { 0x42, 0x4D }); I first tried to write it like everywhere else I've seen online: public static class MyExtensionMethods { public static void RemoveUpToByteArray(this byte[] buffer, byte[] header) { ... } } but then I get an error complaining that there isn't an extension method where the first parameter is a System.Array. Weird, everyone else seems to do it this way, but okay: public static class MyExtensionMethods { public static void RemoveUpToByteArray(this Array buffer, byte[] header) { ... } } Great, that takes now, but still doesn't compile. It doesn't compile because Array is an abstract class and my existing code that gets called after calling RemoveUpToByteArray used to work on byte arrays. I could rewrite my subsequent code to work with Array, but I am curious -- what am I doing wrong that prevents me from just using byte[] as the first parameter in my extension method?
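
    Setting the extension-method syntax question aside, the underlying operation is just "find the first occurrence of the header bytes and slice everything before it off". A tiny illustration in Python (0x42 0x4D is the ASCII "BM" signature at the start of a BMP header; the sample buffer is made up):

        def remove_up_to(buffer: bytes, header: bytes) -> bytes:
            """Return buffer starting at the first occurrence of header (unchanged if absent)."""
            idx = buffer.find(header)
            return buffer[idx:] if idx >= 0 else buffer

        image_buffer = b"\x01\x02undocumented preamble\x42\x4d<rest of BMP data>"
        print(remove_up_to(image_buffer, b"\x42\x4d"))   # b'BM<rest of BMP data>'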

    Read the article

  • Problems with native Win32api RichEdit control and its IRichEditOle interface

    - by Michael
    Hi All! As part of writing a custom command (a DLL with a class that implements the Interwoven command interface) for one of the Interwoven Worksite dialog boxes, I need to extract information from a RichEdit textbox. The only connection to the existing dialog box is its HWND handle; seemingly a trivial task, but I got stuck: using standard Win32 API functions (like GetDlgItemText) returns an empty string. After using Spy++ I noticed that the dialog box gets an IRichEditOle interface and seems to encapsulate the string into an OLE object. Here is what I tried to do: IRichEditOle richEditOleObj = null; IntPtr ppv = IntPtr.Zero; Guid guid = new Guid("00020D00-0000-0000-c000-000000000046"); Marshal.QueryInterface(pRichEdit, ref guid, out ppv); richEditOleObj = (IRichEditOle)Marshal.GetTypedObjectForIUnknown(ppv,typeof(IRichEditOle)); Judging by the GetObjectCount() method of the interface there is exactly one object in the textbox - most likely the string I need to extract. I used the GetObject() method and got an IOleObject interface via QueryInterface: if (richEditOleObj.GetObject(0, reObject, GetObjectOptions.REO_GETOBJ_ALL_INTERFACES) == 0) //S_OK { IntPtr oleObjPpv = IntPtr.Zero; try { IOleObject oleObject = null; Guid objGuid = new Guid("00000112-0000-0000-C000-000000000046"); Marshal.QueryInterface(reObject.poleobj, ref objGuid, out oleObjPpv); oleObject = (IOleObject)Marshal.GetTypedObjectForIUnknown(oleObjPpv, typeof(IOleObject)); To rule out other possibilities I tried to QueryInterface IRichEditOle to ITextDocument, but this also returned an empty string; I tried to send an EM_STREAMOUT message and read the buffer returned from the callback - it returned an empty buffer. And at this point I got stuck. Googling didn't help much - I couldn't find anything relevant to my issue - it seems that the vast majority of examples on the net about IRichEditOle and RichEdit revolve around inserting a bitmap into a RichEdit control... Now since I know only basic stuff about COM and OLE, I guess I am missing something important here. I would appreciate any thoughts, suggestions or remarks.

    Read the article

  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not. On every map update I have to draw a portion of the map, control buttons, and markers - buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is BitBlt from the mipmap to a back buffer, and from the back buffer to a primary surface (I can't use page flipping due to the windowed mode of my application). Previously I used a premultiplied-alpha surface with a 32 bit ARGB pixel format for the images mipmap, everything was looking good, but drawing the entire "scene" was horribly slow - I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key. Drawing is much better now. My mipmap surface is generated on program loading - images are loaded from files, scaled (with filtering) and drawn on the mipmap. The problem is that the image scaling process blends pixel colors, and those pixels which are on the border of a sprite region are blended with surrounding fuchsia pixels, resulting in a semi-fuchsia color that is not treated as the color key. When I do blitting with the color key option, sprites have small fuchsia-like borders, and it looks really bad. How to solve this problem? I can use alpha blitting, but it is too slow - even in ARGB 1555 format.
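
    One common workaround is to turn the color key into real transparency before the filtered scale, and only flatten back onto the key color afterwards, so border pixels never blend with fuchsia. A rough sketch of that idea using Python/Pillow, purely to illustrate the approach (the real mipmap generation here is DirectDraw/Imaging code):

        from PIL import Image

        FUCHSIA = (255, 0, 255)

        def scale_with_color_key(path, size):
            img = Image.open(path).convert("RGBA")
            # Make key-colored pixels fully transparent *before* any filtering.
            img.putdata([(0, 0, 0, 0) if p[:3] == FUCHSIA else p for p in img.getdata()])
            img = img.resize(size)                     # filtered scale; alpha is filtered too
            # Flatten back onto the key color, keeping only mostly-opaque pixels.
            mask = img.split()[3].point(lambda a: 255 if a >= 128 else 0)
            out = Image.new("RGB", size, FUCHSIA)
            out.paste(img, mask=mask)
            return out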

    Read the article

  • C# TCP Async EndReceive() throws InvalidOperationException ONLY on Windows XP 32-bit

    - by James Farmer
    I have a simple C# Async Client using a .NET socket that waits for timed messages from a local Java server used for automating commands. The messages come in asynchronously and are written to a ring buffer. This implementation seems to work fine on Windows Vista/7/8 and OSX, but will randomly throw this exception while it's receiving a message from the local Java server: Unhandled Exception: System.InvalidOperationException: EndReceive can only be called once for each asynchronous operation.     at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult, SocketError& errorCode)     at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)     at SocketTest.Controller.RecvAsyncCallback(IAsyncResult ar)     at System.Net.LazyAsyncResult.Complete(IntPtr userToken)     ... I've looked online for this error, but have found nothing really helpful. This is the code where it seems to break: /// <summary> /// Callback to receive socket data /// </summary> /// <param name="ar">AsyncResult to pass to End</param> private void RecvAsyncCallback(IAsyncResult ar) { // The exception will randomly happen on this call int bytes = _socket.EndReceive(_recvAsyncResult); // check for connection closed if (bytes == 0) { return; } _ringBuffer.Write(_buffer, 0, bytes); // Checks buffer CheckBuffer(); _recvAsyncResult = _sock.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, RecvAsyncCallback, null); } The error doesn't happen at any particular moment except in the middle of receiving a message. The message itself can be any length for this to happen, and the exception can happen right away, or sometimes only after up to a minute of perfect communication. I'm pretty new to sockets and network communication, and I feel I might be missing something here. I've tested on at least 8 different computers, and the only similarity with the computers that throw this exception is that their OS is Windows XP 32-bit.

    Read the article

  • How do I handle a POST request in Perl and FastCGI?

    - by Peterim
    Unfortunately, I'm not familiar with Perl, so asking here. Actually I'm using FCGI with Perl. I need to 1. accept a POST request - 2. send it via POST to another URL - 3. get results - 4. return results to the first POST request (4 steps). To accept a POST request (step 1) I use the following code (found it somewhere on the Internet): $ENV{'REQUEST_METHOD'} =~ tr/a-z/A-Z/; if ($ENV{'REQUEST_METHOD'} eq "POST") { read(STDIN, $buffer, $ENV{'CONTENT_LENGTH'}); } else { print ("some error"); } @pairs = split(/&/, $buffer); foreach $pair (@pairs) { ($name, $value) = split(/=/, $pair); $value =~ tr/+/ /; $value =~ s/%(..)/pack("C", hex($1))/eg; $FORM{$name} = $value; } The content of $name (it's a string) is the result of the first step. Now I need to send $name via a POST request to some_url (step 2) which returns me another result (step 3), which I have to return as a result to the very first POST request (step 4). Any help with this would be greatly appreciated. Thank you.
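
    For illustration only, here is the accept-forward-return flow sketched in Python (in Perl one would typically parse the form with the CGI module and make the outgoing POST with LWP::UserAgent); the backend URL and the field name are hypothetical:

        import urllib.parse
        import urllib.request

        def handle_post(form_body: bytes) -> bytes:
            # Step 1: parse the incoming x-www-form-urlencoded body.
            fields = urllib.parse.parse_qs(form_body.decode())
            name = fields.get("name", [""])[0]

            # Step 2: forward it via POST to the second URL (hypothetical).
            data = urllib.parse.urlencode({"name": name}).encode()
            with urllib.request.urlopen("http://example.com/backend", data=data) as resp:
                result = resp.read()        # Step 3: read the backend's response.

            return result                   # Step 4: hand this back to the original request.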

    Read the article

  • DWM and painting unresponsive apps

    - by Doug Kavendek
    In Vista and later, if an app becomes unresponsive, the Desktop Window Manager is able to handle redrawing it when necessary (move a window over it, drag it around, etc.) because it has kept a pixel buffer for it. Windows also tries to detect when an app has become unresponsive after some timeout, and tries to make the best of the situation -- I believe it dims out the window, adds "Not Responding" to its title bar, and perhaps some other effects. Now, we have a skinned app that uses window regions and layered windows, and it doesn't play well with these effects. We've been developing on XP, but have noticed a strange effect when testing on Vista. At some points the app may spend a few moments on some calculation or callback, and if it passes the unresponsive threshold (I've read that it's a five second timeout, but I cannot find a link), a strange graphical problem occurs: any pixels that would be 100% transparent due to the window regions turn black, which effectively makes the window rectangular again, with a black background. There seem to be other anomalies, with the original window's pixels being shifted a bit in some child dialogs. I am working on reducing such delays (ideally Windows will never need to step in like this), and trying to maintain responsiveness while it's busy, but I'd still like to figure out what is causing it to render like that, as I can't guarantee I can eliminate all delays. Basically, I just would like to know what Windows is doing when this happens, and how I can make my app behave properly with it. Skinned apps have to still work on Vista and later, so I need to figure out what I'm doing that's non-standard. I don't even know exactly how to look for information into how Windows now handles unresponsive apps, as my searches only return people having issues with apps that are unresponsive, or very rudimentary explanations of what the DWM does with such apps. Heck I'm not even 100% sure it's the DWM responsible, but it seems likely. Any potential leads? Photo of problem; screen shots won't capture the effect (note that the white dialog's buffer is shifted -- it is shifted exactly by the distance it has been offset from the main (blue) window):

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also: Using the same PixelFormat as the source bitmap, to avoid format conversion Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again.) I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image.) I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy) so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?

    Read the article

  • Convert 4 bytes to int

    - by Oscar Reyes
    I'm reading a binary file like this: InputStream in = new FileInputStream( file ); byte[] buffer = new byte[1024]; while( in.read(buffer) > -1 ) { int a = // ??? } What I want to do is to read up to 4 bytes and create an int value from those, but I don't know how to do it. I kind of feel like I have to grab 4 bytes at a time, and perform one "byte" operation ( like << & FF and stuff like that ) to create the new int. What's the idiom for this? EDIT Oops, this turns out to be a bit more complex ( to explain ) What I'm trying to do is, read a file ( may be ascii, binary, it doesn't matter ) and extract the integers it may have. For instance suppose the binary content ( in base 2 ) : 00000000 00000000 00000000 00000001 00000000 00000000 00000000 00000010 The integer representation should be 1 , 2 right? :- / 1 for the first 32 bits, and 2 for the remaining 32 bits. 11111111 11111111 11111111 11111111 Would be -1 and 01111111 11111111 11111111 11111111 Would be Integer.MAX_VALUE ( 2147483647 )
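
    For the byte-twiddling itself, here is the idea sketched in Python (assuming big-endian byte order, as in the examples above); in Java the equivalent would be something like ByteBuffer.wrap(buffer).getInt() or DataInputStream.readInt(), both of which read four big-endian bytes.

        import struct

        buf = bytes([0x00, 0x00, 0x00, 0x01,
                     0x00, 0x00, 0x00, 0x02,
                     0xFF, 0xFF, 0xFF, 0xFF,
                     0x7F, 0xFF, 0xFF, 0xFF])

        # Four signed 32-bit big-endian integers: (1, 2, -1, 2147483647)
        print(struct.unpack(">iiii", buf))

        # Or one group of 4 bytes at a time:
        for i in range(0, len(buf), 4):
            print(int.from_bytes(buf[i:i + 4], "big", signed=True))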

    Read the article

  • OpenGL ES: draw an .obj file, but how?

    - by lacas
    I'd like to parse an .obj file. My parser is working well, but my rendering is not. The .obj file is here; my code is: public ObjModelParser parse() { long startTime = System.currentTimeMillis(); InputStream fileIn = resources.openRawResource(resourceID); BufferedReader buffer = new BufferedReader(new InputStreamReader(fileIn)); String line=""; Log.e("model loader", "Start parsing object " + resourceID); try { while ((line = buffer.readLine()) != null) { StringTokenizer parts = new StringTokenizer(line, " "); int numTokens = parts.countTokens(); if (numTokens == 0) continue; String part = parts.nextToken(); if (part.equals(VERTEX)) { Log.e("v ", line); vertices.add(Float.parseFloat(parts.nextToken())); vertices.add(Float.parseFloat(parts.nextToken())); vertices.add(Float.parseFloat(parts.nextToken())); .... and my drawing code is: draw that model with TRIANGLE_STRIP and gl.glDrawArrays(rendermode, 0, coords.length/dimension); What is the mistake here? edited: the file here shows the coords from my program that work for a cube, and the coords from the .obj file, which never show. Thanks, Leslie
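
    A minimal sketch (Python, for illustration) of the parsing side, including the part that usually bites: the f lines carry 1-based indices into the vertex list, and the mesh is meant to be drawn as indexed triangles (glDrawElements) rather than by streaming the raw v list as a TRIANGLE_STRIP.

        def load_obj(path):
            """Minimal OBJ reader: returns (vertices, faces) with 0-based triangle indices."""
            vertices, faces = [], []
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    if not parts:
                        continue
                    if parts[0] == "v":
                        vertices.append(tuple(float(x) for x in parts[1:4]))
                    elif parts[0] == "f":
                        # OBJ indices are 1-based and may look like "v", "v/vt" or "v/vt/vn".
                        idx = [int(p.split("/")[0]) - 1 for p in parts[1:]]
                        # Fan-triangulate polygons so everything can be drawn as triangles.
                        for i in range(1, len(idx) - 1):
                            faces.append((idx[0], idx[i], idx[i + 1]))
            return vertices, faces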

    Read the article

  • Which coding system should I use in Emacs?

    - by Vivi
    I am a newbie in Emacs, and I am not a programmer. I have just tried to save a simple *.rtf file with some websites and tips on how to use emacs and I got: These default coding systems were tried to encode text in the buffer `notes.rtf': (iso-latin-1-dos (315 . 8216) (338 . 8217) (1514 . 8220) (1525 . 8221)) However, each of them encountered characters it couldn't encode: iso-latin-1-dos cannot encode these: ‘ ’ “ ” .... etc, etc, etc Now what is that? Now it is asking me to choose an encoding system Select coding system (default chinese-iso-8bit): I don't even know what an encoding system is, and I would rather not have to choose one every time I try and save a document... Is there any way I can set an encoding system that will work with all my files so I don't have to worry about this? I saw another question and answer elsewhere on this website (see it here) and it seems that if I type the following (defun set-coding-system () (setq buffer-file-coding-system 'utf-8-unix)) (add-hook 'find-file-hook 'set-coding-system) then I can have Emacs do this, but I am not sure... Can someone confirm this for me? Thanks so much :)

    Read the article

  • UDP cannot receive any data

    - by StoneHeart
    Here is my code: Socket sck = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck.Bind(new IPEndPoint(IPAddress.Any, 0)); // Broadcast to find server string msg = "Imlookingforaserver:" + udp_listen_port; byte[] sendBytes4 = Encoding.ASCII.GetBytes(msg); IPEndPoint groupEP = new IPEndPoint(IPAddress.Parse("255.255.255.255"), server_port); sck.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); sck.SendTo(sendBytes4, groupEP); //Wait response from server Socket sck2 = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck2.Bind(new IPEndPoint(IPAddress.Any, udp_listen_port)); byte[] buffer = new byte[128]; EndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, udp_listen_port); sck2.ReceiveFrom(buffer, ref remoteEndPoint); //<<< I never pass this line I use the above code to try to find a server. First I broadcast a message and then I wait for a response from the server. In my test the server is written in C++ and running on Windows Vista, and the client is written in C# and runs on the same machine as the server. The problem is: the server can receive the message which the client broadcasts, but the client cannot receive anything from the server. I tried to write a client in C++ and it works like a charm, so I think my problem is in the C# client.
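
    For comparison, a minimal discovery client sketched in Python: it broadcasts the probe and waits for the reply on the same socket it sent from, so the server can simply answer to the datagram's source address and port (the port number and timeout are made up). The Windows firewall is also worth ruling out when the C++ client works and the C# one does not.

        import socket

        DISCOVERY_PORT = 5555          # hypothetical server port
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(2.0)

        # Send the probe and listen for the reply on the *same* socket, so the
        # server can reply straight to the datagram's source address and port.
        sock.sendto(b"Imlookingforaserver", ("255.255.255.255", DISCOVERY_PORT))
        try:
            data, server_addr = sock.recvfrom(128)
            print("server found at", server_addr, "reply:", data)
        except socket.timeout:
            print("no reply (check the firewall / that the server answers to the source port)")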

    Read the article

  • Java split xml file

    - by CC
    Hi all, I'm working on a piece of code to split files. I want to split a flat file (that's OK, it is working fine) and an XML file. The idea is to split based on a number of files: I have a file, and I want to split it into x files (x is a parameter). I'm doing the split by taking the size of the file and dividing the size by the number of files. Then, my solution was to use a BufferedReader and to use it like while ((n = reader.read(buffer, 0, buffer.length)) != -1) { The main problem is that for the XML file I cannot just split it, but I have to split it based on blocks delimited by a start XML tag and an end XML tag: <start tag> bla bla xml stuff </end tag> So I cannot cut a block in the middle. So if, when I'm halfway through a block, the size of my new file is greater than my max, I will have to read until the end tag, and then start the next file. The problem is that I have all sorts of cases, and it is a bit difficult to search for the end tag. - the block read ends in the middle of the end tag - the block read ends right after the end tag, with no more characters after - etc., and at the same time I have to keep the loop reading the next block. Sometimes only the end of one block concatenated with the start of the next one gives me the end XML tag. I hope you get the idea. My question is, does anyone have an algorithm that does this more accurately and treats all the special cases? The idea is to split the file as quickly as possible. Thanks a lot.
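
    One way to sidestep the byte-counting edge cases is to let a streaming parser hand you whole elements and split on element boundaries instead. A rough Python sketch of that idea (the tag names and the records-per-file split criterion are hypothetical; the same approach exists in Java with StAX/XMLStreamReader):

        import xml.etree.ElementTree as ET

        def split_xml(path, records_per_file, record_tag="record", root_tag="root"):
            """Split an XML file on whole <record> elements (tag names are hypothetical)."""
            part, out = 1, []

            def flush(part, out):
                with open(f"part{part}.xml", "w", encoding="utf-8") as f:
                    f.write(f"<{root_tag}>{''.join(out)}</{root_tag}>")

            for _, elem in ET.iterparse(path, events=("end",)):
                if elem.tag != record_tag:
                    continue
                out.append(ET.tostring(elem, encoding="unicode"))
                elem.clear()                     # keep memory flat on huge files
                if len(out) == records_per_file:
                    flush(part, out)
                    part, out = part + 1, []
            if out:
                flush(part, out)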

    Read the article

  • Bad crypto error in .NET 4.0

    - by Andrey
    Today I moved my web application to .net 4.0 and Forms Auth just stopped working. After several hours of digging into my SqlMembershipProvider (simplified version of built-in SqlMembershipProvider), I found that HMACSHA256 hash is not consistent. This is the encryption method: internal string EncodePassword(string pass, int passwordFormat, string salt) { if (passwordFormat == 0) // MembershipPasswordFormat.Clear return pass; byte[] bIn = Encoding.Unicode.GetBytes(pass); byte[] bSalt = Convert.FromBase64String(salt); byte[] bAll = new byte[bSalt.Length + bIn.Length]; byte[] bRet = null; Buffer.BlockCopy(bSalt, 0, bAll, 0, bSalt.Length); Buffer.BlockCopy(bIn, 0, bAll, bSalt.Length, bIn.Length); if (passwordFormat == 1) { // MembershipPasswordFormat.Hashed HashAlgorithm s = HashAlgorithm.Create( Membership.HashAlgorithmType ); bRet = s.ComputeHash(bAll); } else { bRet = EncryptPassword( bAll ); } return Convert.ToBase64String(bRet); } Passing the same password and salt twice returns different results!!! It was working perfectly in .NET 3.5 Anyone aware of any breaking changes, or is it a known bug? UPDATE: When I specify SHA512 as hashing algorithm, everything works fine, so I do believe it's a bug in .NET 4.0 crypto Thanks! Andrey
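
    One plausible explanation (an assumption, not a confirmed diagnosis): HMACSHA256 is a keyed algorithm, and an instance created without an explicit key gets a fresh random key, so two instances hash the same input differently - unlike a plain SHA-256/SHA-512. The effect, illustrated in Python:

        import hashlib, hmac, os

        data = b"p@ssw0rd" + b"c2FsdA=="          # password bytes + salt, as in EncodePassword

        # Unkeyed hash: deterministic, same digest every time.
        print(hashlib.sha256(data).hexdigest())
        print(hashlib.sha256(data).hexdigest())

        # Keyed HMAC with a fresh random key per instance: different every time.
        for _ in range(2):
            print(hmac.new(os.urandom(32), data, hashlib.sha256).hexdigest())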

    Read the article

  • Perl: POST request how?

    - by Peterim
    Unfortunately, I'm not familiar with Perl, so asking here. Actually I'm using FCGI with Perl. I need to 1. accept a POST request - 2. send it via POST to another URL - 3. get results - 4. return results to the first POST request (4 steps). To accept a POST request (step 1) I use the following piece of code (found it somewhere on the Internet): $ENV{'REQUEST_METHOD'} =~ tr/a-z/A-Z/; if ($ENV{'REQUEST_METHOD'} eq "POST") { read(STDIN, $buffer, $ENV{'CONTENT_LENGTH'}); } else { print ("some error"); } @pairs = split(/&/, $buffer); foreach $pair (@pairs) { ($name, $value) = split(/=/, $pair); $value =~ tr/+/ /; $value =~ s/%(..)/pack("C", hex($1))/eg; $FORM{$name} = $value; } The content of $name (it's a string) is the result of the first step. Now I need to send $name via a POST request to some_url (step 2) which returns me another result (step 3), which I have to return as a result to the very first POST request (step 4). Any help with this would be greatly appreciated. Thank you.

    Read the article

  • GDI+ double buffering in C++

    - by David Titarenco
    I haven't written anything with GDI for a while now (and never with GDI+), and I'm just working on a fun project, but for the life of me, I can't figure out how to double buffer GDI+ void DrawStuff(HWND hWnd) { HDC hdc; HDC hdcBuffer; PAINTSTRUCT ps; hdc = BeginPaint(hWnd, &ps); hdcBuffer = CreateCompatibleDC(hdc); Graphics graphics(hdc); graphics.Clear(Color::Black); // drawing stuff, i.e. bunnies: Image bunny(L"bunny.gif"); graphics.DrawImage(&bunny, 0, 0, bunny.GetWidth(), bunny.GetHeight()); BitBlt(hdc, 0,0, WIDTH , HEIGHT, hdcBuffer, 0,0, SRCCOPY); EndPaint(hWnd, &ps); } The above works (everything renders perfectly), but it flickers. If I change Graphics graphics(hdc); to Graphics graphics(hdcBuffer);, I see nothing (although I should be bitblt'ing the buffer-hWnd hdc at the bottom). My message pipeline is set up properly (WM_PAINT calls DrawStuff), and I'm forcing a WM_PAINT message every program loop by calling RedrawWindow(window, NULL, NULL, RDW_ERASE | RDW_INVALIDATE | RDW_UPDATENOW); I'm probably going about the wrong way to do this, any ideas? The MSDN documentation is cryptic at best.

    Read the article

  • java BufferedReader specific length returns NUL characters

    - by Bastien
    I have a TCP socket client receiving messages (data) from a server. Messages are of the type length (2 bytes) + data (length bytes), delimited by STX & ETX characters. I'm using a bufferedReader to retrieve the two first bytes, decode the length, then read again from the same bufferedReader the appropriate length and put the result in a char array. Most of the time, I have no problem, but SOMETIMES (1 out of thousands of messages received), when attempting to read (length) bytes from the reader, I get only part of it, the rest of my array being filled with "NUL" characters. I imagine it's because the buffer has not yet been filled. char[] bufLen = new char[2]; _bufferedReader.read(bufLen); int len = decodeLength(bufLen); char[] _rawMsg = new char[len]; _bufferedReader.read(_rawMsg); return _rawMsg; I solved the problem in several iterative ways: first I tested the last char of my array: if it wasn't ETX I would read chars from the bufferedReader one by one until I would reach ETX, then start over my regular routine. The consequence is that I would basically DROP one message. Then, in order to still retrieve that message, I would find the first occurrence of the NUL char in my "truncated" message, read & store additional characters one at a time until I reached ETX, and append them to my "truncated" message, confirming the length is OK. It also works, but I'm really thinking there's something I could do better, like checking whether the total number of characters I need is available in the buffer before reading it, but I can't find the right way to do it... Any idea / pointer? Thanks!
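
    The underlying issue is that a single read() call is allowed to return fewer characters than requested; the usual fix is to loop until the full length has arrived (in Java, DataInputStream.readFully does this for raw bytes). A sketch of the read-exactly-n loop in Python on a plain socket:

        import socket

        def recv_exact(sock: socket.socket, n: int) -> bytes:
            """Keep reading until exactly n bytes have arrived (or the peer closes)."""
            chunks = []
            remaining = n
            while remaining > 0:
                chunk = sock.recv(remaining)
                if not chunk:
                    raise ConnectionError("connection closed mid-message")
                chunks.append(chunk)
                remaining -= len(chunk)
            return b"".join(chunks)

        # Usage for the described framing: a 2-byte length header, then the payload.
        # length = int.from_bytes(recv_exact(sock, 2), "big")
        # payload = recv_exact(sock, length)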

    Read the article

  • Strip parity bits in C from 8 bits of data followed by 1 parity bit

    - by dubnde
    I have a buffer of bits with 8 bits of data followed by 1 parity bit. This pattern repeats itself. The buffer is currently stored as an array of octets. Example (p are parity bits): 0001 0001 p000 0100 0p00 0001 00p01 1100 ... should become 0001 0001 0000 1000 0000 0100 0111 00 ... Basically, I need to strip off every ninth bit to just obtain the data bits. How can I achieve this? This is related to another question asked here sometime back. This is on a 32 bit machine so the solution to the related question may not be applicable. The maximum possible number of bits is 45, i.e. 5 data octets. This is what I have tried so far. I have created a "boolean" array and added the bits into the array based on the bitset of the octet. I then look at every ninth index of the array and throw it away. Then I move the remaining array down one index. Then I've got only the data bits left. I was thinking there may be better ways of doing this.
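
    A compact way to express the stripping is to walk the bit stream MSB-first, skip every 9th bit, and repack what is left; sketched in Python below for illustration (the same indexing works in C with shifts and masks). nbits would be the 45-bit maximum case or however many bits the frame actually carries.

        def strip_parity(data: bytes, nbits: int) -> bytes:
            """Drop every 9th bit (the parity bit) from the first nbits of data."""
            out = 0
            kept = 0
            for i in range(nbits):
                bit = (data[i // 8] >> (7 - (i % 8))) & 1   # MSB-first within each octet
                if i % 9 == 8:                              # bits 8, 17, 26, ... are parity
                    continue
                out = (out << 1) | bit
                kept += 1
            out <<= (-kept) % 8                             # left-align into whole octets
            return out.to_bytes((kept + 7) // 8, "big")

        # With the question's example (33 bits in, 30 data bits out) the result is
        # 0001 0001 0000 1000 0000 0100 0111 00, left-aligned and zero-padded to 4 octets.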

    Read the article

  • ORGetValue from Offline Registry - ERROR_MORE_DATA

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the windows ddk 7 package. You can find out more information on the offreg.dll here: MSDN Currently, while attempting to read a value from an open registry hive / key I receive the following error: 234 or ERROR_MORE_DATA Here is the .h code that contains ORGetValue: DWORD ORAPI ORGetValue ( __in ORHKEY Handle, __in_opt PCWSTR lpSubKey, __in_opt PCWSTR lpValue, __out_opt PDWORD pdwType, __out_bcount_opt(*pcbData) PVOID pvData, __inout_opt PDWORD pcbData ); Here is the code that I am using to pull the data [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue", SetLastError = true, CallingConvention = CallingConvention.StdCall)] public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue, out uint pdwType, out string pvData, out uint pcbData); IntPtr myHive; IntPtr myKey; string myValue; uint pdwtype; uint pcbdata; uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata); The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling... or a second call with an adjusted buffer.. Or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.

    Read the article

  • Problem creating socket with C++ in winsock2

    - by Ash85
    Hi, I'm having the weirdest problem causing me headaches. Consider the following code: // Create and bind socket std::map<Connection, bool> clients; unsigned short port=6222; struct sockaddr_in local_address, from_address; int result; char buffer[10000]; SOCKET receive_socket; local_address.sin_family = AF_INET; local_address.sin_addr.s_addr = INADDR_ANY; local_address.sin_port = htons(port); receive_socket = socket(AF_INET,SOCK_DGRAM,0); What's happening is receive_socket is not binding, I get SOCKET_ERROR. When I debug the program and check receive_socket, it appears to just be garbled crap. I put a breakpoint on the 'std::map' line. When I step into each line of the above code, the debug cursor jumps straight from the 'unsigned short port' line to the first 'local_address.sin' line, even though I am using step into (F11), it does not stop at struct, int, char or SOCKET lines, it jumps straight over them. At this point I hover my mouse over local_address, from_address, result, buffer and receive_socket. They are all full of garbled crap. Is this because I have not defined these variables yet? I've also noticed that when I reach the bottom of the above code, local_address.sin_port is set to 19992, but it should be 6222?

    Read the article
