Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • Sending an array of arbitrary length through a socket. Endianness.

    - by Negai
    Hi everyone, I'm working on socket programming and I've run into a problem that I don't know how to solve in a portable way. The task is simple: I need to send an array of 16 bytes over the network, receive it in a client application, and parse it. I know there are functions like htonl and htons to use with uint16 and uint32 values, but what should I do with chunks of data larger than that? Thank you.
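
    (For reference: raw bytes themselves have no byte order, so a 16-byte buffer can be sent as-is; only multi-byte integer fields inside it need htons/htonl before packing and ntohs/ntohl after unpacking. A minimal C sketch follows, with a made-up 16-byte field layout that is only illustrative.)

        #include <stdint.h>
        #include <string.h>
        #include <arpa/inet.h>   /* htonl, htons, ntohl, ntohs */

        /* Illustrative 16-byte wire layout: two 32-bit fields, two 16-bit fields,
           and 4 opaque bytes. Only the integer fields need byte-order conversion. */
        void pack_message(uint8_t out[16], uint32_t a, uint32_t b,
                          uint16_t c, uint16_t d, const uint8_t opaque[4])
        {
            uint32_t a_n = htonl(a), b_n = htonl(b);
            uint16_t c_n = htons(c), d_n = htons(d);
            memcpy(out + 0,  &a_n, 4);
            memcpy(out + 4,  &b_n, 4);
            memcpy(out + 8,  &c_n, 2);
            memcpy(out + 10, &d_n, 2);
            memcpy(out + 12, opaque, 4);   /* raw bytes: no conversion needed */
        }

        void unpack_message(const uint8_t in[16], uint32_t *a, uint32_t *b,
                            uint16_t *c, uint16_t *d, uint8_t opaque[4])
        {
            uint32_t a_n, b_n;
            uint16_t c_n, d_n;
            memcpy(&a_n, in + 0, 4);  *a = ntohl(a_n);
            memcpy(&b_n, in + 4, 4);  *b = ntohl(b_n);
            memcpy(&c_n, in + 8, 2);  *c = ntohs(c_n);
            memcpy(&d_n, in + 10, 2); *d = ntohs(d_n);
            memcpy(opaque, in + 12, 4);
        }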

    Read the article

  • Getting an exception when going back while data is still loading in a controller that uses LibXml.

    - by user133611
    Hi all, in my project I use LibXml to parse data. When I select a row in the first controller, it takes me to the next controller, where the data is loaded using libxml. If I tap the back button while the page is still loading, I get an exception; if I tap it after loading has completed, everything works fine. Can anyone help me? The exception points to this method:

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            // Process the downloaded chunk of data.
            xmlParseChunk(_xmlParserContext, (const char *)[data bytes], [data length], 0);
        }

    Thank you

    Read the article

  • Convert Delphi Real48 to C# double

    - by mjmcloug
    Hey, I need to be able to convert a Delphi Real48 to a C# double. I've got the bytes I need to convert, but I'm looking for an elegant solution to the problem. Has anybody out there had to do this before? I need to do the conversion in C#. Thanks in advance
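
    (For reference, the commonly documented Real48 layout is assumed here: byte 0 is the exponent biased by 129, with 0 meaning the value is zero; bytes 1-5 hold a 39-bit fraction with an implicit leading 1; and the top bit of byte 5 is the sign. A decoding sketch under that assumption, written in C; the same arithmetic ports directly to a C# method.)

        #include <math.h>

        /* Decode a 6-byte Delphi/Turbo Pascal Real48 (assumed layout described above). */
        double real48_to_double(const unsigned char r[6])
        {
            if (r[0] == 0)
                return 0.0;                     /* exponent byte 0 means the value is 0 */

            double mantissa = 1.0
                + (r[5] & 0x7F) / 128.0         /* top 7 fraction bits  (2^-1  .. 2^-7)  */
                + r[4] / 32768.0                /* next 8 bits          (2^-8  .. 2^-15) */
                + r[3] / 8388608.0              /*                      (2^-16 .. 2^-23) */
                + r[2] / 2147483648.0           /*                      (2^-24 .. 2^-31) */
                + r[1] / 549755813888.0;        /* lowest 8 bits        (2^-32 .. 2^-39) */

            double value = ldexp(mantissa, r[0] - 129);   /* mantissa * 2^(exponent - 129) */
            return (r[5] & 0x80) ? -value : value;        /* sign bit */
        }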

    Read the article

  • Interoperability between two AES algorithms

    - by lpfavreau
    Hello, I'm new to cryptography and I'm building some test applications to try to understand the basics. I'm not trying to build the algorithms from scratch, but I am trying to make two different AES-256 implementations talk to each other. I've got a database that was populated with this JavaScript implementation, stored in Base64. Now I'm trying to get an Objective-C method to decrypt its content, but I'm a little lost as to where the differences between the implementations are. I'm able to encrypt/decrypt in JavaScript and I'm able to encrypt/decrypt in Cocoa, but I cannot get a string encrypted in JavaScript to decrypt in Cocoa, or vice versa. I'm guessing it's related to the initialization vector, nonce, counter mode of operation, or all of these, which quite frankly don't mean much to me at the moment. Here's what I'm using in Objective-C, adapted mainly from this and this:

        @implementation NSString (Crypto)

        - (NSString *)encryptAES256:(NSString *)key {
            NSData *input = [self dataUsingEncoding:NSUTF8StringEncoding];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:TRUE];
            return [Base64 encode:output];
        }

        - (NSString *)decryptAES256:(NSString *)key {
            NSData *input = [Base64 decode:self];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:FALSE];
            return [[[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding] autorelease];
        }

        + (NSData *)cryptoAES256:(NSData *)input key:(NSString *)key doEncrypt:(BOOL)doEncrypt {
            // 'key' should be 32 bytes for AES256, will be null-padded otherwise
            char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));     // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [input length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);

            size_t numBytesCrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(doEncrypt ? kCCEncrypt : kCCDecrypt,
                                                  kCCAlgorithmAES128,
                                                  kCCOptionECBMode | kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  nil,                       // initialization vector (optional)
                                                  [input bytes], dataLength, // input
                                                  buffer, bufferSize,        // output
                                                  &numBytesCrypted);

            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesCrypted];
            }

            free(buffer); // free the buffer
            return nil;
        }

        @end

    Of course, the input is Base64-decoded beforehand. I see that each encryption with the same key and the same content in JavaScript gives a different encrypted string, which is not the case with the Objective-C implementation, which always gives the same encrypted string. I've read the answers to this post and they make me believe I'm right that it's something along the lines of vector initialization, but I'd need your help to pinpoint what's going on exactly. Thank you!
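
    (One note on the symptom: a ciphertext that changes on every run usually means a random IV or nonce is generated per encryption, as in CBC or CTR mode, whereas ECB with a fixed key and plaintext is deterministic; both sides must also derive the key from the passphrase in the same way. Purely as an illustration, and not the poster's code or fix, a CommonCrypto call using the default CBC mode with an explicit IV looks roughly like this:)

        #include <CommonCrypto/CommonCryptor.h>
        #include <stddef.h>

        /* Illustrative sketch only: AES-256 in CBC mode with a per-message IV.
           Both sides must agree on the mode, the IV handling, and how the
           32-byte key is derived from the passphrase. */
        CCCryptorStatus aes256_cbc(CCOperation op,
                                   const void *key32,   /* exactly 32 bytes */
                                   const void *iv16,    /* exactly 16 bytes, random, sent with ciphertext */
                                   const void *in, size_t inLen,
                                   void *out, size_t outCap, size_t *outLen)
        {
            return CCCrypt(op,
                           kCCAlgorithmAES128,          /* AES; the key size selects AES-256 */
                           kCCOptionPKCS7Padding,       /* no ECB option: CBC is the default mode */
                           key32, kCCKeySizeAES256,
                           iv16,
                           in, inLen,
                           out, outCap, outLen);
        }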

    Read the article

  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
    Hi Codegurus, I have a problem with the function AudioConverterConvertBuffer. Basically, I want to convert from this format:

        _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0;
        _streamFormat.mBitsPerChannel = 16;
        _streamFormat.mChannelsPerFrame = 2;
        _streamFormat.mBytesPerPacket = 4;
        _streamFormat.mBytesPerFrame = 4;
        _streamFormat.mFramesPerPacket = 1;
        _streamFormat.mSampleRate = 44100;
        _streamFormat.mReserved = 0;

    to this format:

        _streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0; //| kAudioFormatFlagIsNonInterleaved | 0;
        _streamFormatOutput.mBitsPerChannel = 16;
        _streamFormatOutput.mChannelsPerFrame = 1;
        _streamFormatOutput.mBytesPerPacket = 2;
        _streamFormatOutput.mBytesPerFrame = 2;
        _streamFormatOutput.mFramesPerPacket = 1;
        _streamFormatOutput.mSampleRate = 44100;
        _streamFormatOutput.mReserved = 0;

    What I want to do is extract one audio channel (left or right) from an LPCM buffer described by the input format, making it mono in the output format. The conversion logic is as follows. This sets the channel map for the PCM output file:

        SInt32 channelMap[1] = {0};
        status = AudioConverterSetProperty(converter, kAudioConverterChannelMap,
                                           sizeof(channelMap), channelMap);

    and this converts the buffer in a while loop:

        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList,
                                                                sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            //frames = audioBuffer.mData;
            NSLog(@"the number of channel for buffer number %d is %d", y, audioBuffer.mNumberChannels);
            NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);

            numBytesIO = audioBuffer.mDataByteSize;
            convertedBuf = malloc(sizeof(char) * numBytesIO);

            status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData,
                                                 &numBytesIO, convertedBuf);
            char errchar[10];
            NSLog(@"status audio converter convert %d", status);
            if (status != 0) {
                NSLog(@"Fail conversion");
                assert(0);
            }
            NSLog(@"Bytes converted %d", numBytesIO);

            status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
            NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
            free(convertedBuf);

            if (numBytesIO != audioBuffer.mDataByteSize) {
                NSLog(@"Something wrong in writing");
                assert(0);
            }
            countByteBuf = countByteBuf + numBytesIO;
        }

    But the insz error is still there, so it can't convert. I would appreciate any input. Thanks in advance

    Read the article

  • How do I convert byte to string?

    - by HardCoder1986
    Hello! Is there any fast way to convert a given byte (say, the number 65) into its text hex representation? Basically, I want to convert an array of bytes into their code representation (I am hardcoding resources), like BYTE data[] = {0x00, 0x0A, 0x00, 0x01, ... } How do I automate this byte -> "0x0A" string conversion?
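
    (A minimal C sketch of the usual approach: format each byte with "0x%02X" and join the pieces with commas. The helper name here is made up.)

        #include <stdio.h>
        #include <stdlib.h>

        /* Turn a byte array into a "0x00, 0x0A, ..." initializer string.
           The caller frees the returned buffer. */
        char *bytes_to_initializer(const unsigned char *data, size_t len)
        {
            char *out = malloc(len * 6 + 1);   /* "0xAB, " is 6 chars per byte */
            char *p = out;
            *p = '\0';
            for (size_t i = 0; i < len; i++)
                p += sprintf(p, "0x%02X%s", data[i], (i + 1 < len) ? ", " : "");
            return out;
        }

        int main(void)
        {
            unsigned char data[] = {0x00, 0x0A, 0x00, 0x01, 0x41};
            char *s = bytes_to_initializer(data, sizeof data);
            printf("BYTE data[] = {%s};\n", s);
            free(s);
            return 0;
        }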

    Read the article

  • Combine hash values in C#

    - by Chris
    I'm creating a generic object collection class and need to implement a hash function. I can obviously (and easily!) get the hash values for each object, but I was looking for the 'correct' way to combine them to avoid any issues. Does just adding, XORing or any basic operation harm the quality of the hash, or am I going to have to do something like getting the objects as bytes, combining them, and then hashing that? Cheers in advance
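
    (A widely used pattern is Boost-style hash_combine, which folds each element's hash into a running seed with a small mixing step, so element order matters and equal values don't cancel out the way a plain XOR does. A sketch of the idea in C; the same expression translates directly into a C# GetHashCode override.)

        #include <stdint.h>

        /* Boost-style hash_combine: fold one hash value into a running seed.
           0x9e3779b9 is the 32-bit golden-ratio constant used by Boost. */
        static uint32_t hash_combine(uint32_t seed, uint32_t h)
        {
            seed ^= h + 0x9e3779b9u + (seed << 6) + (seed >> 2);
            return seed;
        }

        /* Usage: start from a fixed seed and fold in each member's hash in order:
           uint32_t hash = 17;
           hash = hash_combine(hash, hash_of_member1);
           hash = hash_combine(hash, hash_of_member2);                            */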

    Read the article

  • mem-leak freeing g_strdup

    - by Mike
    I'm trying to free the memory allocated by g_strdup but I'm not sure what I'm doing wrong. Using valgrind --tool=memcheck --leak-check=yes ./a.out I keep getting:

        ==4506== 40 bytes in 10 blocks are definitely lost in loss record 2 of 9
        ==4506==    at 0x4024C1C: malloc (vg_replace_malloc.c:195)
        ==4506==    by 0x40782E3: g_malloc (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x4090CA8: g_strdup (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x8048722: add_inv (dup.c:26)
        ==4506==    by 0x80487E6: main (dup.c:47)
        ==4506== 504 bytes in 1 blocks are possibly lost in loss record 4 of 9
        ==4506==    at 0x4023E2E: memalign (vg_replace_malloc.c:532)
        ==4506==    by 0x4023E8B: posix_memalign (vg_replace_malloc.c:660)
        ==4506==    by 0x408D61D: ??? (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x408E5AC: g_slice_alloc (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x4061628: g_hash_table_new_full (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x40616C7: g_hash_table_new (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x8048795: main (dup.c:42)

    I've tried freeing it in different ways but with no success so far. I'd appreciate any help. Thanks. BTW: it compiles and runs fine.

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include <glib.h>
        #include <stdint.h>

        struct s {
            char *data;
        };

        static GHashTable *hashtable1;
        static GHashTable *hashtable2;

        static void add_inv(GHashTable *table, const char *key)
        {
            gpointer old_value, old_key;
            gint value;

            if (g_hash_table_lookup_extended(table, key, &old_key, &old_value)) {
                value = GPOINTER_TO_INT(old_value);
                value = value + 2;
                /*g_free (old_key);*/
            } else {
                value = 5;
            }
            g_hash_table_replace(table, g_strdup(key), GINT_TO_POINTER(value));
        }

        static void print_hash_kv(gpointer key, gpointer value, gpointer user_data)
        {
            gchar *k = (gchar *) key;
            gchar *h = (gchar *) value;
            printf("%s: %d \n", k, h);
        }

        int main(int argc, char *argv[])
        {
            struct s t;
            t.data = "bar";
            int i, j;

            hashtable1 = g_hash_table_new(g_str_hash, g_str_equal);
            hashtable2 = g_hash_table_new(g_str_hash, g_str_equal);

            for (i = 0; i < 10; i++) {
                add_inv(hashtable1, t.data);
                add_inv(hashtable2, t.data);
            }
            /*free(t.data);*/
            /*free(t.data);*/

            g_hash_table_foreach(hashtable1, print_hash_kv, NULL);
            g_hash_table_foreach(hashtable2, print_hash_kv, NULL);

            g_hash_table_destroy(hashtable1);
            g_hash_table_destroy(hashtable2);

            return 0;
        }
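
    (For reference, a common way to let GLib own the g_strdup'ed keys is to create the table with g_hash_table_new_full and pass g_free as the key-destroy function; the old key is then freed automatically whenever an entry is replaced, and every remaining key is freed when the table is destroyed. A minimal sketch:)

        #include <glib.h>

        /* Sketch: the table owns its duplicated string keys. */
        static GHashTable *new_owned_key_table(void)
        {
            return g_hash_table_new_full(g_str_hash, g_str_equal,
                                         g_free,   /* key destroy: frees g_strdup'ed keys     */
                                         NULL);    /* values are packed ints, nothing to free */
        }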

    Read the article

  • Base X string encoding

    - by Paul Stone
    I'm looking for a routine that will encode a string (a stream of bytes) into an arbitrary base/alphabet (like Base64 encoding, except that I get to choose the alphabet). I've seen a few routines that do base-X encoding for a number, but not for a string.
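
    (One general approach, sketched below in C with a made-up helper name: treat the byte string as one big-endian integer and repeatedly divide it by the alphabet size, collecting remainders as digits. Unlike Base64 this is not a streaming scheme, and plain positional conversion drops leading zero bytes unless they are handled separately, as Base58 does by mapping each leading zero byte to the first alphabet character.)

        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical helper, not an existing library call: encode 'len' bytes into
           the given alphabet (base = strlen(alphabet), which must be >= 2). The input
           is treated as one big-endian integer and repeatedly divided by the base;
           remainders become output digits. The caller frees the returned string. */
        char *basex_encode(const unsigned char *src, size_t len, const char *alphabet)
        {
            size_t base = strlen(alphabet);
            if (base < 2)
                return NULL;

            char *out = malloc(len * 8 + 2);         /* enough digits even for base 2 */
            unsigned char *num = malloc(len ? len : 1);
            memcpy(num, src, len);

            size_t digits = 0;
            size_t start = 0;                        /* index of first non-zero byte */
            while (start < len) {
                unsigned int rem = 0;                /* one long division of num by base */
                for (size_t i = start; i < len; i++) {
                    unsigned int acc = rem * 256 + num[i];
                    num[i] = (unsigned char)(acc / base);
                    rem = (unsigned int)(acc % base);
                }
                out[digits++] = alphabet[rem];
                while (start < len && num[start] == 0)
                    start++;                         /* skip bytes that became zero */
            }
            if (digits == 0)                         /* empty or all-zero input */
                out[digits++] = alphabet[0];

            /* digits were produced least significant first; reverse them */
            for (size_t i = 0; i < digits / 2; i++) {
                char t = out[i];
                out[i] = out[digits - 1 - i];
                out[digits - 1 - i] = t;
            }
            out[digits] = '\0';
            free(num);
            return out;
        }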

    Read the article

  • Emulator TCP Packet Size

    - by jpspringall
    Has anyone tried to write a TCP client/server app with the emulator, using the PC as the server and the phone as the client? I've got a bit of an issue where it only ever sends one packet, i.e. 1491 bytes of data, regardless of how much there actually is to send from the client (phone) to the server (PC). Thanks, James
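
    (For reference: TCP is a byte stream with no packet boundaries, so a single send or receive call may transfer only part of the data, and both ends normally loop until everything has been written or read. The usual pattern, sketched here in C with BSD sockets; the same loop applies to any socket API.)

        #include <sys/types.h>
        #include <sys/socket.h>
        #include <stddef.h>

        /* Keep calling send() until the whole buffer has gone out, or an error occurs. */
        int send_all(int sock, const char *buf, size_t len)
        {
            size_t sent = 0;
            while (sent < len) {
                ssize_t n = send(sock, buf + sent, len - sent, 0);
                if (n <= 0)
                    return -1;              /* error (or connection closed) */
                sent += (size_t)n;
            }
            return 0;
        }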

    Read the article

  • Naive question of memory references in Operating system

    - by darkie15
    Hi all, I am learning about memory references in operating systems and can't seem to get to the crux of it. For example, I am not able to visualize this scenario properly: "A 36-bit address employs both paging and segmentation. Both PTE and STE are 4 bytes each." How are they related? I can guess that this question might be too simple for many, but any help understanding the above basic concept would be appreciated. Regards, darkie15
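
    (For illustration only, since the page size is not given in the question: if the 36-bit virtual address used a 12-bit offset, i.e. 4 KB pages, the remaining 24 bits would be split between a segment number and a page number. The 4-byte STE locates that segment's page table, the 4-byte PTE supplies a frame number, and the frame number combined with the 12-bit offset gives the physical address. With 4 KB pages, one page of a page table holds 4096 / 4 = 1024 PTEs, i.e. 10 bits of the page number.)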

    Read the article

  • C# System.Diagnostics.Process redirecting Standard Out for large amounts of data

    - by Matt
    I'm running an exe from a .NET app and trying to redirect standard out to a StreamReader. The problem is that when I do myprocess.exe > out.txt, out.txt is close to 14 MB. The command-line version is very fast, but when I run the process from my C# app it is excruciatingly slow, because I believe the default stream reader flushes every 4096 bytes. Is there a way to change the default stream reader for the Process object?

    Read the article

  • JSON specifies "any UNICODE character"?

    - by bukzor
    Maybe this is just my unfamiliarity with Unicode, so please correct me if I'm mistaken. Looking at http://json.org/, the spec says that a string can include "any UNICODE character", but this confuses me. JSON is a communication format, correct? At the core of it, everything must translate down to bytes. In contrast, Unicode is a logical format and must be encoded to be transmitted, right? So what did they mean there?
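
    (A concrete example: the string value "é", U+00E9, can appear in a JSON document either as the two UTF-8 bytes 0xC3 0xA9 or as the six ASCII characters \u00e9; both byte sequences decode to the same Unicode character, which is what the spec's wording refers to.)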

    Read the article

  • copying a short int to a char array

    - by cateof
    I have a short integer variable called s_int that holds the value 2: unsigned short s_int = 2; I want to copy this number into a char array, into the first and second positions. Let's say we have char buffer[10]; we want the two bytes of s_int to be copied into buffer[0] and buffer[1]. How can I do it?
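
    (Two common ways, sketched below in C: memcpy copies the value's in-memory, host-byte-order representation, while explicit shifts let you pick the byte order yourself.)

        #include <string.h>

        int main(void)
        {
            unsigned short s_int = 2;
            char buffer[10];

            /* Option 1: copy the value's in-memory representation (host byte order) */
            memcpy(buffer, &s_int, sizeof s_int);

            /* Option 2: choose the byte order explicitly, here big-endian */
            buffer[0] = (char)((s_int >> 8) & 0xFF);   /* high byte */
            buffer[1] = (char)(s_int & 0xFF);          /* low byte  */

            return 0;
        }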

    Read the article

  • Array split by range?

    - by acidzombie24
    I have an array; I don't know its length, but I do know it will be at least 48 bytes. The first 48 bytes are the header and I need to split the header into two parts. What's the easiest way? I am hoping something as simple as header.split(32); would work ([0] being 32 bytes and [1] being 16, assuming header is an array of 48 bytes), using .NET.

    Read the article

  • How large is a "buffer" in PostgreSQL

    - by Konrad Garus
    I am using the pg_buffercache module to find the hogs eating up my RAM cache. For example, when I run this query:

        SELECT c.relname, count(*) AS buffers
        FROM pg_buffercache b
        INNER JOIN pg_class c
            ON b.relfilenode = c.relfilenode
            AND b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
        GROUP BY c.relname
        ORDER BY 2 DESC
        LIMIT 10;

    I discover that sample_table is using 120 buffers. How much is 120 buffers in bytes?
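
    (For reference: each shared buffer holds one block, and the block size defaults to 8 KB (it can be checked with SHOW block_size), so 120 buffers would be 120 × 8192 = 983,040 bytes, or about 960 KB.)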

    Read the article

  • Access violation when running native C++ application that uses a /clr built DLL

    - by doobop
    I'm reorganizing a legacy mixed (managed and unmanaged DLLs) application so that the main application segment is unmanaged MFC, which calls a C++ DLL compiled with the /clr flag that bridges the communication between the managed (C# DLLs) and unmanaged code. Unfortunately, my changes have resulted in an access violation that occurs before the application's InitInstance() is called, which makes it very difficult to debug. The only information I get is the following stack trace:

        > 64006108()
        ntdll.dll!_ZwCreateMutant@16() + 0xc bytes
        kernel32.dll!_CreateMutexW@12() + 0x7a bytes

    Here are some scenarios I've tried:

    - Turned on Exceptions > Win32 Exceptions > c0000005 Access Violation to break when thrown. The most detail I get is still the above stack trace. I've tried stepping through the application with F10, but it fails before any breakpoints are hit, with the same stack trace.
    - Stubbed out the bridge DLL so that it has only one method, which returns a bool and is coded to just return false (no C# code called): bool DllPassthrough::IsFailed() { return false; } If the stubbed-out DLL is compiled with the /clr flag, the application fails. If it is compiled without the /clr flag, the application runs.
    - Created a stub MFC application using the Visual Studio wizard for multi-document applications and called DllPassthrough::IsFailed(). This succeeds even with the /clr flag used to compile the DLL.
    - Tried doing a manual LoadLibrary on winmm.lib as outlined in the note "Access violation when using c++/cli". The application still fails.

    So, my questions are: how do I solve the problem (any hints, strategies, or previous incidents)? And, failing that, how can I get more information on which code segment or library is causing the access violation? If I try more involved workarounds like LoadLibrary calls, I'd like to narrow it down to the failing libraries. Thanks. BTW, we are using Visual Studio 2008 and the project is built against the .NET 2.0 framework for the managed sections.

    Read the article

  • Is it worth investing time in learning low level Java?

    - by Kevin Rave
    By low-level Java I mean bits, bytes, bit masking, GC internals, JVM internals, etc., in the following contexts:

    - When you are building an enterprise app using frameworks like Spring, Hibernate, etc.
    - Interviews for a Sr. Java Developer position where you are expected to work on an existing enterprise app that was built using such frameworks (Spring, EJB, Hibernate, etc.)
    - Architects (Java)

    I understand that knowing the very low level is "good". But how often do you think about or use these things in the real world, unless you are developing something from the ground up with performance in mind?

    Read the article

  • 7zip compressing the same file gives slightly different results

    - by MicMit
    I am using the 7za command line tool. The file dfat_clist.xls is in directory 2010_05_07, and the same dfat_clist.xls is in directory 2010_05_08. The zips are created in the same directory where the xls file resides:

        string pars = "a -tzip \"" + Path.Combine(SourceDir, ZipName) + "\" \"" + Path.Combine(SourceDir, Mask) + "\"";

    The parameters given to 7za are full paths for the zip and the xls. For some reason a couple of bytes are different: A7 and A8 values for directories 2010_05_07 and 2010_05_08 respectively. How can I achieve identical results, and out of curiosity, what causes this difference?

    Read the article

  • Printing page x of y in .Net

    - by maxfridbe
    If I have a very large document to print, and each page of the document needs to say "page x of y", is there a way I could precalculate y without having to print twice, as offered as a solution here: http://bytes.com/topic/c-sharp/answers/862133-c-printing-page-count ? I'm trying to avoid printing once, getting y, setting it, and then printing again.

    Read the article

  • Serialize problem with cookie

    - by cagin
    Hi there, I want to use a cookie in my web project, so I need to serialize my classes. Although my code can serialize an int or string value, it can't serialize my classes. This is my serialize-and-set-cookie code:

        public static bool f_SetCookie(string _sCookieName, object _oCookieValue, DateTime _dtimeExpirationDate)
        {
            bool retval = true;
            try
            {
                if (HttpContext.Current.Request[_sCookieName] != null)
                {
                    HttpContext.Current.Request.Cookies.Remove(_sCookieName);
                }
                BinaryFormatter bf = new BinaryFormatter();
                MemoryStream ms = new MemoryStream();
                bf.Serialize(ms, _oCookieValue);
                byte[] bArr = ms.ToArray();
                MemoryStream objStream = new MemoryStream();
                DeflateStream objZS = new DeflateStream(objStream, CompressionMode.Compress);
                objZS.Write(bArr, 0, bArr.Length);
                objZS.Flush();
                objZS.Close();
                byte[] bytes = objStream.ToArray();
                string sCookieVal = Convert.ToBase64String(bytes);
                HttpCookie cook = new HttpCookie(_sCookieName);
                cook.Value = sCookieVal;
                cook.Expires = _dtimeExpirationDate;
                HttpContext.Current.Response.Cookies.Add(cook);
            }
            catch
            {
                retval = false;
            }
            return retval;
        }

    And here is one of my classes:

        [Serializable]
        public class Tahlil
        {
            #region Props & Fields
            public string M_KlinikKodu { get; set; }
            public DateTime M_AlinmaTarihi { get; set; }
            private List<Test> m_Tesler;
            public List<Test> M_Tesler
            {
                get { return m_Tesler; }
                set { m_Tesler = value; }
            }
            #endregion

            public Tahlil() { }
            Tahlil(DataRow _rwTahlil) { }
        }

    I am calling my set-cookie method like this:

        Tahlil t = new Tahlil();
        t.M_AlinmaTarihi = DateTime.Now;
        t.M_KlinikKodu = "2";
        t.M_Tesler = new List<Test>();
        f_SetCookie("Tahlil", t, DateTime.Now.AddDays(1));

    I can't see the cookie in the Cookies folder or in Temporary Internet Files, but if I call the method like this: f_SetCookie("TRY", 5, DateTime.Now.AddDays(1)); the cookie does appear. What is the problem? I don't understand. Thank you for your help.

    Read the article

  • youtube - video upload failure - unable to convert file - encoding the video wrong?

    - by Anthony
    I am using .NET to create a video uploading application. Although it's communicating with YouTube and uploading the file, the processing of that file fails. YouTube gives me the error message "Upload failed (unable to convert video file)", which supposedly means that "your video is in a format that our converters don't recognize...". I have made attempts with two different videos, both of which upload and process fine when I do it manually. So I suspect that my code is a) not encoding the video properly and/or b) not sending my API request properly. Below is how I am constructing my API PUT request and encoding the video. Any suggestions on what the error could be would be appreciated. Thanks. P.S. I'm not using the client library because my application will use the resumable upload feature; thus, I am manually constructing my API requests. Documentation: http://code.google.com/intl/ja/apis/youtube/2.0/developers_guide_protocol_resumable_uploads.html#Uploading_the_Video_File Code:

        // new PUT request for sending video
        WebRequest putRequest = WebRequest.Create(uploadURL);

        // set properties
        putRequest.Method = "PUT";
        putRequest.ContentType = getMIME(file); // the MIME type of the uploaded video file

        // encode video
        byte[] videoInBytes = encodeVideo(file);

        public static byte[] encodeVideo(string video)
        {
            try
            {
                byte[] fileInBytes = File.ReadAllBytes(video);
                Console.WriteLine("\nSize of byte array containing " + video + ": " + fileInBytes.Length);
                return fileInBytes;
            }
            catch (Exception e)
            {
                Console.WriteLine("\nException: " + e.Message + "\nReturning an empty byte array");
                byte[] empty = new byte[0];
                return empty;
            }
        } //encodeVideo

        // encode custom headers in a byte array
        byte[] PUTbytes = encode(putRequest.Headers.ToString());

        public static byte[] encode(string headers)
        {
            ASCIIEncoding encoding = new ASCIIEncoding();
            byte[] bytes = encoding.GetBytes(headers);
            return bytes;
        } //encode

        // entire request contains headers + binary video data
        putRequest.ContentLength = PUTbytes.Length + videoInBytes.Length;

        // send request - correct?
        sendRequest(putRequest, PUTbytes);
        sendRequest(putRequest, videoInBytes);

        public static void sendRequest(WebRequest request, byte[] encoding)
        {
            Stream stream = request.GetRequestStream(); // The GetRequestStream method returns a stream to use to send data for the HttpWebRequest.
            try
            {
                stream.Write(encoding, 0, encoding.Length);
            }
            catch (Exception e)
            {
                Console.WriteLine("\nException writing stream: " + e.Message);
            }
        } //sendRequest

    Read the article

  • TCP sequence number question

    - by Meta
    This is more of a theoretical question than an actual problem I have. If I understand correctly, the sequence number in the TCP header of a packet is the index of the packet's first byte within the whole stream, correct? If that is the case, since the sequence number is an unsigned 32-bit integer, what happens after more than FFFFFFFF = 4294967295 bytes have been transferred? Will the sequence number wrap around, or will the sender send a SYN packet to restart at 0?
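
    (For reference: TCP sequence numbers are defined modulo 2^32, so they simply wrap around rather than restarting the connection; implementations compare them with wrap-aware "serial number" arithmetic, roughly like this C sketch.)

        #include <stdint.h>
        #include <stdbool.h>

        /* Wrap-aware comparison: true if sequence number a comes after b, modulo 2^32.
           The signed difference works as long as the two numbers are within 2^31 of
           each other, which TCP's window sizes keep true in practice. */
        static bool seq_after(uint32_t a, uint32_t b)
        {
            return (int32_t)(a - b) > 0;
        }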

    Read the article

  • What's the purpose of the maxPostSize for Tomcat's HTTP Connector?

    - by Bytecode Ninja
    According to the Tomcat docs: "The maximum size in bytes of the POST which will be handled by the container FORM URL parameter parsing. The limit can be disabled by setting this attribute to a value less than or equal to 0. If not specified, this attribute is set to 2097152 (2 megabytes)." But what is "the container FORM URL parameter parsing"? Any idea what the purpose of maxPostSize is? Thanks in advance.

    Read the article
