Search Results

Search found 3136 results on 126 pages for 'buffer overrun'.

Page 38/126 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • passing pipe to threads

    - by alaamh
    I see it's easy to open pipe between two process using fork, but how we can passing open pipe to threads. Assume we need to pass out of PROGRAM A to PROGRAM B "may by more than one thread", PROGRAM B send his output to PROGRAM C #include <stdio.h> #include <stdlib.h> #include <pthread.h> struct targ_s { int fd_reader; }; void *thread1(void *arg) { struct targ_s *targ = (struct targ_s*) arg; int status, fd[2]; pid_t pid; pipe(fd); pid = fork(); if (pid == 0) { dup2(STDIN_FILENO, targ->fd_reader); close(fd[0]); dup2(fd[1], STDOUT_FILENO); close(fd[1]); execvp ("PROGRAM B", NULL); exit(1); } else { close(fd[1]); dup2(fd[0], STDIN_FILENO); close(fd[0]); execl("PROGRAM C", NULL); wait(&status); return NULL; } } int main(void) { FILE *fpipe; char *command = "PROGRAM A"; char buffer[1024]; if (!(fpipe = (FILE*) popen(command, "r"))) { perror("Problems with pipe"); exit(1); } char* outfile = "out.dat"; FILE* f = fopen (outfile, "wb"); int fd = fileno( f ); struct targ_s targ; targ.fd_reader = fd; pthread_t thid; if (pthread_create(&thid, NULL, thread1, &targ) != 0) { perror("pthread_create() error"); exit(1); } int len; while (read(fpipe, buffer, sizeof (buffer)) != 0) { len = strlen(buffer); write(fd, buffer, len); } pclose(fpipe); return (0); }

    Read the article

  • Jumping into argv?

    - by jth
    Hi, I`am experimenting with shellcode and stumbled upon the nop-slide technique. I wrote a little tool that takes buffer-size as a parameter and constructs a buffer like this: [ NOP | SC | RET ], with NOP taking half of the buffer, followed by the shellcode and the rest filled with the (guessed) return address. Its very similar to the tool aleph1 described in his famous paper. My vulnerable test-app is the same as in his paper: int main(int argc, char **argv) { char little_array[512]; if(argc>1) strcpy(little_array,argv[1]); return 0; } I tested it and well, it works: jth@insecure:~/no_nx_no_aslr$ ./victim $(./exploit 604 0) $ exit But honestly, I have no idea why. Okay, the saved eip was overwritten as intended, but instead of jumping somewhere into the buffer, it jumped into argv, I think. gdb showed up the following addresses before strcpy() was called: (gdb) i f Stack level 0, frame at 0xbffff1f0: eip = 0x80483ed in main (victim.c:7); saved eip 0x154b56 source language c. Arglist at 0xbffff1e8, args: argc=2, argv=0xbffff294 Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0 Saved registers: ebp at 0xbffff1e8, eip at 0xbffff1ec Address of little_array: (gdb) print &little_array[0] $1 = 0xbfffefe8 "\020" After strcpy(): (gdb) i f Stack level 0, frame at 0xbffff1f0: eip = 0x804840d in main (victim.c:10); saved eip 0xbffff458 source language c. Arglist at 0xbffff1e8, args: argc=-1073744808, argv=0xbffff458 Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0 Saved registers: ebp at 0xbffff1e8, eip at 0xbffff1ec So, what happened here? I used a 604 byte buffer to overflow little_array, so he certainly overwrote saved ebp, saved eip and argc and also argv with the guessed address 0xbffff458. Then, after returning, EIP pointed at 0xbffff458. But little_buffer resides at 0xbfffefe8, that`s a difference of 1136 byte, so he certainly isn't executing little_array. I followed execution with the stepi command and well, at 0xbffff458 and onwards, he executes NOPs and reaches the shellcode. I'am not quite sure why this is happening. First of all, am I correct that he executes my shellcode in argv, not little_array? And where does the loader(?) place argv onto the stack? I thought it follows immediately after argc, but between argc and 0xbffff458, there is a gap of 620 bytes. How is it possible that he successfully "lands" in the NOP-Pad at Address 0xbffff458, which is way above the saved eip at 0xbffff1ec? Can someone clarify this? I have actually no idea why this is working. My test-machine is an Ubuntu 9.10 32-Bit Machine without ASLR. victim has an executable stack, set with execstack -s. Thanks in advance.

    Read the article

  • I need help converting a C# string from one character encoding to another?

    - by Handleman
    According to Spolsky I can't call myself a developer, so there is a lot of shame behind this question... Scenario: From a C# application, I would like to take a string value from a SQL db and use it as the name of a directory. I have a secure (SSL) FTP server on which I want to set the current directory using the string value from the DB. Problem: Everything is working fine until I hit a string value with a "special" character - I seem unable to encode the directory name correctly to satisfy the FTP server. The code example below uses "special" character é as an example uses WinSCP as an external application for the ftps comms does not show all the code required to setup the Process "_winscp". sends commands to the WinSCP exe by writing to the process standardinput for simplicity, does not get the info from the DB, but instead simply declares a string (but I did do a .Equals to confirm that the value from the DB is the same as the declared string) makes three attempts to set the current directory on the FTP server using different string encodings - all of which fail makes an attempt to set the directory using a string that was created from a hand-crafted byte array - which works Process _winscp = new Process(); byte[] buffer; string nameFromString = "Sinéad O'Connor"; _winscp.StandardInput.WriteLine("cd \"" + nameFromString + "\""); buffer = Encoding.UTF8.GetBytes(nameFromString); _winscp.StandardInput.WriteLine("cd \"" + Encoding.UTF8.GetString(buffer) + "\""); buffer = Encoding.ASCII.GetBytes(nameFromString); _winscp.StandardInput.WriteLine("cd \"" + Encoding.ASCII.GetString(buffer) + "\""); byte[] nameFromBytes = new byte[] { 83, 105, 110, 130, 97, 100, 32, 79, 39, 67, 111, 110, 110, 111, 114 }; _winscp.StandardInput.WriteLine("cd \"" + Encoding.Default.GetString(nameFromBytes) + "\""); The UTF8 encoding changes é to 101 (decimal) but the FTP server doesn't like it. The ASCII encoding changes é to 63 (decimal) but the FTP server doesn't like it. When I represent é as value 130 (decimal) the FTP server is happy, except I can't find a method that will do this for me (I had to manually contruct the string from explicit bytes). Anyone know what I should do to my string to encode the é as 130 and make the FTP server happy and finally elevate me to level 1 developer by explaining the only single thing a developer should understand?

    Read the article

  • Thread feeding other MultiThreading

    - by alaamh
    I see it's easy to open pipe between two process using fork, but how we can passing open pipe to threads. Assume we need to pass out of PROGRAM A to PROGRAM B "may by more than one thread", PROGRAM B send his output to PROGRAM C #include <stdio.h> #include <stdlib.h> #include <pthread.h> struct targ_s { char* reader; }; void *thread1(void *arg) { struct targ_s *targ = (struct targ_s*) arg; int status, fd[2]; pid_t pid; pipe(fd); pid = fork(); if (pid == 0) { int fd = fileno( targ->fd_reader ); dup2(STDIN_FILENO, fd); close(fd[0]); dup2(fd[1], STDOUT_FILENO); close(fd[1]); execvp ("PROGRAM B", NULL); exit(1); } else { close(fd[1]); dup2(fd[0], STDIN_FILENO); close(fd[0]); execl("PROGRAM C", NULL); wait(&status); return NULL; } } int main(void) { FILE *fpipe; char *command = "PROGRAM A"; char buffer[1024]; if (!(fpipe = (FILE*) popen(command, "r"))) { perror("Problems with pipe"); exit(1); } char* outfile = "out.dat"; FILE* f = fopen (outfile, "wb"); int fd = fileno( f ); struct targ_s targ; targ.fd_reader = outfile; pthread_t thid; if (pthread_create(&thid, NULL, thread1, &targ) != 0) { perror("pthread_create() error"); exit(1); } int len; while (read(fpipe, buffer, sizeof (buffer)) != 0) { len = strlen(buffer); write(fd, buffer, len); } pclose(fpipe); return (0); }

    Read the article

  • Missing part of the image when taking screenshot while supporting Retina Display

    - by Spaft
    I'm currently working on enabling support for retina display for my game. In the game, we have a feature that the user can take screenshot. We are using these part of code we found online a while ago and it's working fine when we are not supporting retina display: CCDirector* director = [CCDirector sharedDirector]; CGSize size = [director winSizeInPixels]; //Create buffer for pixels GLuint bufferLength = size.width * size.height * 4; GLubyte* buffer = (GLubyte*)malloc(bufferLength); //Read Pixels from OpenGL glReadPixels(0, 100, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer); //Make data provider with data. CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL); //Configure image int bitsPerComponent = 8; int bitsPerPixel = 32; int bytesPerRow = 4 * size.width; CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB(); CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault; CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault; CGImageRef iref = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent); uint32_t* pixels = (uint32_t*)malloc(bufferLength); CGContextRef context = CGBitmapContextCreate(pixels, size.width, size.height, 8, size.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); CGContextTranslateCTM(context, 0, size.height); CGContextScaleCTM(context, 1.0f, -1.0f); switch (director.deviceOrientation) { case CCDeviceOrientationPortrait: break; case CCDeviceOrientationPortraitUpsideDown: CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(180)); CGContextTranslateCTM(context, -size.width, -size.height); break; case CCDeviceOrientationLandscapeLeft: CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(-90)); CGContextTranslateCTM(context, -size.width, 0); break; case CCDeviceOrientationLandscapeRight: CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(90)); CGContextTranslateCTM(context, size.width * 0.5f, -size.height); break; } CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, size.width, size.height), iref); UIImage *outputImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)]; //Dealloc CGDataProviderRelease(provider); CGImageRelease(iref); CGContextRelease(context); free(buffer); free(pixels); return outputImage; But when we enabled retina display in cocos 0.99.5. This functionality is a little messed up since it will miss a little left part of the image while the high is still correct. So I'm wondering if there is anything wrong with the code or am I doing anything wrong here? Thank you in advance for any reply!

    Read the article

  • Getting level values from PCM raw data using Core Audio

    - by John
    I am trying to extract level data from a PCM audio file using core audio. I have gotten as far as (I believe) getting the raw data into a byte array (UInt8) but it is 16 bit PCM data and I am having trouble reading the data out. The input is from the iPhone microphone, which I have set as: [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey]; [recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey]; [recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey]; [recordSetting setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey]; [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey]; [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey]; which is obviously 16 bits. I am then trying to just print out a few values to see if they look reasonable for debug purposes below, and they do not look reasonable (many 0's). ExtAudioFileRef inputFile = NULL; ExtAudioFileOpenURL(track.location, &inputFile); AudioStreamBasicDescription inputFileFormat; UInt32 dataSize = (UInt32)sizeof(inputFileFormat); ExtAudioFileGetProperty(inputFile, kExtAudioFileProperty_FileDataFormat, &dataSize, &inputFileFormat); UInt8 *buffer = malloc(BUFFER_SIZE); AudioBufferList bufferList; bufferList.mNumberBuffers = 1; bufferList.mBuffers[0].mNumberChannels = 1; bufferList.mBuffers[0].mData = buffer; //pointer to buffer of audio data bufferList.mBuffers[0].mDataByteSize = BUFFER_SIZE; //number of bytes in the buffer while(true) { UInt32 frameCount = (bufferList.mBuffers[0].mDataByteSize / inputFileFormat.mBytesPerFrame); // Read a chunk of input OSStatus status = ExtAudioFileRead(inputFile, &frameCount, &bufferList); // If no frames were returned, conversion is finished if(0 == frameCount) break; NSLog(@"---"); int16_t *bufferl = &buffer; for(int i=0;i<100;i++){ //const int16_t *bufferl = bufferl[i]; NSLog(@"%d",bufferl[i]); } } Not sure what I am doing wrong, I think it has to do with reading the byte array. Sorry for the long code post...

    Read the article

  • Why do my pyramids fade black and then back to colour again

    - by geminiCoder
    I have the following vertecies and norms GLfloat verts[36] = { -0.5, 0, 0.5, 0, 0, -0.5, 0.5, 0, 0.5, 0, 0, -0.5, 0.5, 0, 0.5, 0, 1, 0, -0.5, 0, 0.5, 0, 0, -0.5, 0, 1, 0, 0.5, 0, 0.5, -0.5, 0, 0.5, 0, 1, 0 }; GLfloat norms[36] = { 0, -1, 0, 0, -1, 0, 0, -1, 0, -1, 0.25, 0.5, -1, 0.25, 0.5, -1, 0.25, 0.5, 1, 0.25, -0.5, 1, 0.25, -0.5, 1, 0.25, -0.5, 0, -0.5, -1, 0, -0.5, -1, 0, -0.5, -1 }; I am writing my fists Open GL game, But I need to know for sure if my Normals are correct as the colours aren't rendering correctly. my Pyramids are coloured then fade to black every half rotation then back again. My app so far is based on the boiler plate code provided by apple. heres my modified setUp Method [EAGLContext setCurrentContext:self.context]; [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); glGenVertexArraysOES(1, &_vertexArray); //create vertex array glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &_vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(norms), NULL, GL_STATIC_DRAW); //create vertex buffer big enough for both verts and norms and pass NULL as data.. uint8_t *ptr = (uint8_t *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES); //map buffer to pass data to it memcpy(ptr, verts, sizeof(verts)); //copy verts memcpy(ptr+sizeof(verts), norms, sizeof(norms)); //copy norms to position after verts glUnmapBufferOES(GL_ARRAY_BUFFER); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); //tell GL where verts are in buffer glEnableVertexAttribArray(GLKVertexAttribNormal); glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(verts))); //tell GL where norms are in buffer glBindVertexArrayOES(0); And the update method. - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } But providing I understand this correct one pyramid is using the GLKit base effect shaders and the other the shaders which are included in the project. So for both of them to have the same error, I thought it would be the Norms?

    Read the article

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
    I am having an issue where an object with a byte[20] is being passed into a BlockingCollection on one thread and another thread returning the object with a byte[0] using BlockingCollection.Take(). I think this is a threading issue but I do not know where or why this is happening considering that BlockingCollection is a concurrent collection. Sometimes on thread2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated. MessageBuffer.cs public class MessageBuffer : BlockingCollection<Message> { } In the class that has Listener() and ReceivedMessageHandler(object messageProcessor) private MessageBuffer RecievedMessageBuffer; On Thread1 private void Listener() { while (this.IsListening) { try { Message message = Message.ReadMessage(this.Stream, this); if (message != null) { this.RecievedMessageBuffer.Add(message); } } catch (IOException ex) { if (!this.Client.Connected) { this.OnDisconnected(); } else { Logger.LogException(ex.ToString()); this.OnDisconnected(); } } catch (Exception ex) { Logger.LogException(ex.ToString()); this.OnDisconnected(); } } } Message.ReadMessage(NetworkStream stream, iTcpConnectClient client) public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client) { int ClassType = -1; Message message = null; try { ClassType = stream.ReadByte(); if (ClassType == -1) { return null; } if (!Message.IDTOCLASS.ContainsKey((byte)ClassType)) { throw new IOException("Class type not found"); } message = Message.GetNewMessage((byte)ClassType); message.Client = client; message.ReadData(stream); if (message.Buffer.Length < message.MessageSize + Message.HeaderSize) { return null; } } catch (IOException ex) { Logger.LogException(ex.ToString()); throw ex; } catch (Exception ex) { Logger.LogException(ex.ToString()); //throw ex; } return message; } On Thread2 private void ReceivedMessageHandler(object messageProcessor) { if (messageProcessor != null) { while (this.IsListening) { Message message = this.RecievedMessageBuffer.Take(); message.Reconstruct(); message.HandleMessage(messageProcessor); } } else { while (this.IsListening) { Message message = this.RecievedMessageBuffer.Take(); message.Reconstruct(); message.HandleMessage(); } } } PlayerStateMessage.cs public class PlayerStateMessage : Message { public GameObject PlayerState; public override int MessageSize { get { return 12; } } public PlayerStateMessage() : base() { this.PlayerState = new GameObject(); } public PlayerStateMessage(GameObject playerState) { this.PlayerState = playerState; } public override void Reconstruct() { this.PlayerState.Poisiton = this.GetVector2FromBuffer(0); this.PlayerState.Rotation = this.GetFloatFromBuffer(8); base.Reconstruct(); } public override void Deconstruct() { this.CreateBuffer(); this.AddToBuffer(this.PlayerState.Poisiton, 0); this.AddToBuffer(this.PlayerState.Rotation, 8); base.Deconstruct(); } public override void HandleMessage(object messageProcessor) { ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this); } } Message.GetVector2FromBuffer(int bufferlocation) This is where the exception is thrown because this.Buffer is byte[0] when it should be byte[20]. public Vector2 GetVector2FromBuffer(int bufferlocation) { return new Vector2( BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation), BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4)); }

    Read the article

  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our mySQL configuration for our large Magento website. The reason I believe that mySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted mySQL earlier after tweaking some settings, and so the results here may not be 100% accurate): >> MySQLTuner 1.3.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering [OK] Currently running supported MySQL version 5.5.37-35.0 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM [--] Data in MyISAM tables: 7G (Tables: 332) [--] Data in InnoDB tables: 213G (Tables: 8714) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [--] Data in MEMORY tables: 0B (Tables: 353) [!!] Total fragmented tables: 5492 -------- Security Recommendations ------------------------------------------- [!!] User '@host5.server1.autopartsnetwork.com' has no password set. [!!] User '@localhost' has no password set. [!!] User 'root@%' has no password set. -------- Performance Metrics ------------------------------------------------- [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B) [--] Reads / Writes: 95% / 5% [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads) [!!] Maximum possible memory usage: 220.0G (174% of installed RAM) [OK] Slow queries: 0% (6K/5M) [OK] Highest usage of available connections: 5% (61/1024) [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads) [OK] Query cache efficiency: 66.9% (3M cached / 5M selects) [!!] Query cache prunes per day: 3486361 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts) [!!] Joins performed without indexes: 1328 [OK] Temporary tables created on disk: 11% (126K on disk / 1M total) [OK] Thread cache hit rate: 99% (61 created / 42K connections) [!!] Table cache hit rate: 19% (9K open / 49K opened) [OK] Open file limit used: 2% (712/25K) [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks) [!!] InnoDB buffer pool / data size: 32.0G/213.4G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increasing the query_cache size over 128M may reduce performance Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 512M) [see warning above] join_buffer_size (> 128.0M, or always use indexes with joins) table_cache (> 12288) innodb_buffer_pool_size (>= 213G) My my.cnf configuration is as follows... 
[client] port = 3306 [mysqld_safe] nice = 0 [mysqld] tmpdir = /var/lib/mysql/tmp user = mysql port = 3306 skip-external-locking character-set-server = utf8 collation-server = utf8_general_ci event_scheduler = 0 key_buffer = 512M max_allowed_packet = 64M thread_stack = 512K thread_cache_size = 512 sort_buffer_size = 24M read_buffer_size = 8M read_rnd_buffer_size = 24M join_buffer_size = 128M # for some nightly processes client sessions set the join buffer to 8 GB auto-increment-increment = 1 auto-increment-offset = 1 myisam-recover = BACKUP max_connections = 1024 # max connect errors artificially high to support behaviors of NetScaler monitors max_connect_errors = 999999 concurrent_insert = 2 connect_timeout = 5 wait_timeout = 180 net_read_timeout = 120 net_write_timeout = 120 back_log = 128 # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384) table_open_cache = 12288 tmp_table_size = 512M max_heap_table_size = 512M bulk_insert_buffer_size = 512M open-files-limit = 8192 open-files = 1024 query_cache_type = 1 # large query limit supports SOAP and REST API integrations query_cache_limit = 4M # larger than 512 MB query cache size is problematic; this is typically ~60% full query_cache_size = 512M # set to true on read slaves read_only = false slow_query_log_file = /var/log/mysql/slow.log slow_query_log = 0 long_query_time = 0.2 expire_logs_days = 10 max_binlog_size = 1024M binlog_cache_size = 32K sync_binlog = 0 # SSD RAID10 technically has a write capacity of 10000 IOPS innodb_io_capacity = 400 innodb_file_per_table innodb_table_locks = true innodb_lock_wait_timeout = 30 # These servers have 80 CPU threads; match 1:1 innodb_thread_concurrency = 48 innodb_commit_concurrency = 2 innodb_support_xa = true innodb_buffer_pool_size = 32G innodb_file_per_table innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 2G skip-federated [mysqldump] quick quote-names single-transaction max_allowed_packet = 64M I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!

    Read the article

  • The server returned an address in response to the PASV command that is different than the address to

    - by senzacionale
    {System.Net.WebException: The server returned an address in response to the PASV command that is different than the address to which the FTP connection was made. at System.Net.FtpWebRequest.CheckError() at System.Net.FtpWebRequest.SyncRequestCallback(Object obj) at System.Net.CommandStream.Abort(Exception e) at System.Net.FtpWebRequest.FinishRequestStage(RequestStage stage) at System.Net.FtpWebRequest.GetRequestStream() at BackupDB.Program.FTPUploadFile(String serverPath, String serverFile, FileInfo LocalFile, NetworkCredential Cred) in D:\PROJEKTI\BackupDB\BackupDB\Program.cs:line 119} code: FTPMakeDir(new Uri(serverPath + "/"), Cred); FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverPath + serverFile); request.UsePassive = true; request.Method = WebRequestMethods.Ftp.UploadFile; request.Credentials = Cred; byte[] buffer = new byte[10240]; // Read/write 10kb using (FileStream sourceStream = new FileStream(LocalFile.ToString(), FileMode.Open)) { using (Stream requestStream = request.GetRequestStream()) { int bytesRead; do { bytesRead = sourceStream.Read(buffer, 0, buffer.Length); requestStream.Write(buffer, 0, bytesRead); } while (bytesRead > 0); } response = (FtpWebResponse)request.GetResponse(); response.Close(); }

    Read the article

  • Unit testing Monorail's RenderText method

    - by MikeWyatt
    I'm doing some maintenance on an older web application written in Monorail v1.0.3. I want to unit test an action that uses RenderText(). How do I extract the content in my test? Reading from controller.Response.OutputStream doesn't work, since the response stream is either not setup properly in PrepareController(), or is closed in RenderText(). Example Action public DeleteFoo( int id ) { var success= false; var foo = Service.Get<Foo>( id ); if( foo != null && CurrentUser.IsInRole( "CanDeleteFoo" ) ) { Service.Delete<Foo>( id ); success = true; } CancelView(); RenderText( "{ success: " + success + " }" ); } Example Test (using Moq) [Test] public void DeleteFoo() { var controller = new FooController (); PrepareController ( controller ); var foo = new Foo { Id = 123 }; var mockService = new Mock < Service > (); mockService.Setup ( s => s.Get<Foo> ( foo.Id ) ).Returns ( foo ); controller.Service = mockService.Object; controller.DeleteTicket ( foo.Id ); mockService.Verify ( s => s.Delete<Foo> ( foo.Id ) ); Assert.AreEqual ( "{success:true}", GetResponse ( Response ) ); } // response.OutputStream.Seek throws an "System.ObjectDisposedException: Cannot access a closed Stream." exception private static string GetResponse( IResponse response ) { response.OutputStream.Seek ( 0, SeekOrigin.Begin ); var buffer = new byte[response.OutputStream.Length]; response.OutputStream.Read ( buffer, 0, buffer.Length ); return Encoding.ASCII.GetString ( buffer ); }

    Read the article

  • Scrolling a Canvas smoothly in Android

    - by prepbgg
    I'm new to Android. I am drawing bitmaps, lines and shapes onto a Canvas inside the OnDraw(Canvas canvas) method of my view. I am looking for help on how to implement smooth scrolling in response to a drag by the user. I have searched but not found any tutorials to help me with this. The reference for Canvas seems to say that if a Canvas is constructed from a Bitmap (called bmpBuffer, say) then anything drawn on the Canvas is also drawn on bmpBuffer. Would it be possible to use bmpBuffer to implement a scroll ... perhaps copy it back to the Canvas shifted by a few pixels at a time? But if I use Canvas.drawBitmap to draw bmpBuffer back to Canvas shifted by a few pixels, won't bmpBuffer be corrupted? Perhaps, therefore, I should copy bmpBuffer to bmpBuffer2 then draw bmpBuffer2 back to the Canvas. A more straightforward approach would be to draw the lines, shapes, etc. straight into a buffer Bitmap then draw that buffer (with a shift) onto the Canvas but so far as I can see the various methods: drawLine(), drawShape() and so on are not available for drawing to a Bitmap ... only to a Canvas. Could I have 2 Canvases? One of which would be constructed from the buffer bitmap and used simply for plotting the lines, shapes, etc. and then the buffer bitmap would be drawn onto the other Canvas for display in the View? I should welcome any advice! Answers to similar questions here (and on other websites) refer to "blitting". I understand the concept but can't find anything about "blit" or "bitblt" in the Android documentation. Are Canvas.drawBitmap and Bitmap.Copy Android's equivalents?

    Read the article

  • Draw to offscreen renderbuffer in OpenGL ES (iPhone)

    - by David Ensminger
    I'm trying to create an offscreen render buffer in OpenGL ES on the iPhone. I've created the buffer like this: glGenFramebuffersOES(1, &offscreenFramebuffer); glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer); glGenRenderbuffersOES(1, &offscreenRenderbuffer); glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenRenderbuffer); glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, offscreenRenderbuffer); But I'm confused on how to render the storage. Apple's documentation says to use the EAGLContext renderBufferStorage:fromDrawable: method, but this seems to only work for one render buffer (the main one being displayed). If I use the normal OpenGL function glRenderBufferStorageOES, then I can't seem to get it to display. Here's the code: // this is in the initialization section: glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGB8_OES, backingWidth, backingHeight); // and this is when I'm trying to draw to it and display it: glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer); GLfloat vc[] = { 0.0f, 0.0f, 0.0f, 10.0f, 10.0f, 10.0f, 0.0f, 0.0f, 0.0f, -10.0f, -10.0f, -10.0f, }; glLoadIdentity(); glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, 0, vc); glDrawArrays(GL_LINES, 0, 4); glDisableClientState(GL_VERTEX_ARRAY); glBindRenderbufferOES(GL_RENDERBUFFER_OES, offscreenRenderbuffer); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; Doing it this way, nothing is displayed on the screen. However, if I switch out the references to "offscreen...Buffer" to the buffers that were created with the renderBufferStorage method, it works fine. Any suggestions?

    Read the article

  • Poco SocketReactors for a Proxy Server

    - by Genesis
    Can anyone give me some idea of the best way to implement a non-blocking proxy server using a Poco Socket Reactor? Currently I have a blocking implementation where if a readable notification arrives from the client I am writing what is read directly to the server, and if a readable notification arrives from the server I am writing what is read directly to the client. To achieve this I keep the thread that initiated the server connection alive but I would prefer to switch to non-blocking and have any threads which are used to initiate a connection removed once the server and client sockets are registered with the reactor and the SOCKS5 handshake is over. With a SocketReactor one can register event handlers for a single socket but the trouble is I would need to store whatever is read from that socket in a global buffer until the corresponding server socket is ready to be written to as from my testing I dont seem to be able to just write directly to the server when client data arrives. I am thinking of using a struct that contains the client socket, server socket, client buffer and server buffer and whenever a writable notification comes along for either the client or server, finding the corresponding buffer and writing this. Any thoughts?

    Read the article

  • Vim script to compile TeX source and launch PDF only if no errors

    - by Jeet
    Hi, I am switching to using Vim for for my LaTeX editing environment. I would like to be able to tex the source file from within Vim, and launch an external viewing if the compile was successful. I know about the Vim-Latex suite, but, if possible, would prefer to avoid using it: it is pretty heavy-weight, hijacks a lot of my keys, and clutters up my vimruntime with a lot of files. Here is what I have now: if exists('b:tex_build_mapped') finish endif " use maparg or mapcheck to see if key is free command! -buffer -nargs=* BuildTex call BuildTex(0, <f-args>) command! -buffer -nargs=* BuildAndViewTex call BuildTex(1, <f-args>) noremap <buffer> <silent> <F9> <Esc>:call BuildTex(0)<CR> noremap <buffer> <silent> <S-F9> <Esc>:call BuildTex(1)<CR> let b:tex_build_mapped = 1 if exists('g:tex_build_loaded') finish endif let g:tex_build_loaded = 1 function! BuildTex(view_results, ...) write if filereadable("Makefile") " If Makefile is available in current working directory, run 'make' with arguments echo "(using Makefile)" let l:cmd = "!make ".join(a:000, ' ') echo l:cmd execute l:cmd if a:view_results && v:shell_error == 0 call ViewTexResults() endif else let b:tex_flavor = 'pdflatex' compiler tex make % if a:view_results && v:shell_error == 0 call ViewTexResults() endif endif endfunction function! ViewTexResults(...) if a:0 == 0 let l:target = expand("%:p:r") . ".pdf" else let l:target = a:1 endif if has('mac') execute "! open -a Preview ".l:target endif endfunction The problem is that v:shell_error is not set, even if there are compile errors. Any suggestions or insight on how to detect whether a compile was successful or not would be greatly appreciated! Thanks!

    Read the article

  • Java / Groovy Socket - Detecting the socket being closed in a non-blocking way

    - by John Arrowwood
    I'm trying to create a small HTTP proxy that can re-write the request/headers as needed to suit my requirements. If one already exists, please, point me to it. Otherwise... I've written something that ALMOST works. It can do the proxy function, but not the re-write (yet). Problem is, I can't detect when the remote socket has been closed down without doing a blocking read. It is CRITICAL for the functionality of this thing that it be able to detect the socket being closed without blocking. I have SCOURED the Java API documentation, and I can't find ANY indication that it is even possible. Here's what I have: while ( this.inbound.isConnected() && this.outbound.isConnected() ) { try { while ( ( available = readFromClient.available() ) != 0 ) { if ( available > 1024 ) available = 1024 bytesRead = readFromClient.read( buffer, 0, available ) writeToServer.write( buffer, 0, bytesRead ) } while ( ( available = readFromServer.available() ) != 0 ) { if ( available > 1024 ) available = 1024 bytesRead = readFromServer.read( buffer, 0, available ) writeToClient.write( buffer, 0, bytesRead ) } } catch (e) { print e } println "Connected: " + this.inbound.isConnected() println "Bound: " + this.inbound.isBound() println "InputShutdown: " + this.inbound.isInputShutdown() println "OutputShutdown: " + this.inbound.isOutputShutdown() print "\n"; Thread.sleep( 10 ) } The tests for the socket being closed never indicate that the socket was closed. And, as I mentioned, I can't find ANY examples of how to detect the 'END OF FILE' condition on the stream without doing a blocking read. There HAS to be a way. Does anyone here know what it is?

    Read the article

  • BN_hex2bn magicaly segfaults in openSSL

    - by xunil154
    Greetings, this is my first post on stackoverflow, and i'm sorry if its a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the clients RSA's public key to a Bignum. It works in my clent code, but the server segfaults when attempting to convert the hex value of the clients public RSA to a bignum. I have already checked that there is no garbidge before or after the RSA data, and have looked online, but i'm stuck. header segment: typedef struct KEYS { RSA *serv; char* serv_pub; int pub_size; RSA *clnt; } KEYS; KEYS keys; Initializing function: // Generates and validates the servers key /* code for generating server RSA left out, it's working */ //Set client exponent keys.clnt = 0; keys.clnt = RSA_new(); BN_dec2bn(&keys.clnt->e, RSA_E_S); // RSA_E_S contains the public exponent Problem code (in Network::server_handshake): // *Recieved an encrypted message from the network and decrypt into 'buffer' (1024 byte long)* cout << "Assigning clients RSA" << endl; // I have verified that 'buffer' contains the proper key if (BN_hex2bn(&keys.clnt->n, buffer) < 0) { Error("ERROR reading server RSA"); } cout << "clients RSA has been assigned" << endl; The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the error (valgrind output) Invalid read of size 8 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) by 0x40F23E: Network::server_handshake() (Network.cpp:177) by 0x40EF42: Network::startNet() (Network.cpp:126) by 0x403C38: main (server.cpp:51) Address 0x20 is not stack'd, malloc'd or (recently) free'd Process terminating with default action of signal 11 (SIGSEGV) Access not within mapped region at address 0x20 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) And I don't know why it is, Im using the exact same code in the client program, and it works just fine. Any input is greatly appriciated!

    Read the article

  • Manipulating columns of numbers in elisp

    - by ~unutbu
    I have text files with tables like this: Investment advisory and related fees receivable (161,570 ) (71,739 ) (73,135 ) Net purchases of trading investments (93,261 ) (30,701 ) (11,018 ) Other receivables 61,216 (10,352 ) (69,313 ) Restricted cash 20,658 (20,658 ) - Other current assets (39,643 ) 14,752 64 Other non-current assets 71,896 (26,639 ) (26,330 ) Since these are accounting numbers, parenthesized numbers indicate negative numbers. Dashes represent 0 or no number. I'd like to be able to mark a rectangular region such as third column above, call a function (format-column), and automatically have (-73135-11018-69313+64-26330)/1000 sitting in my kill-ring. Even better would be -73.135-11.018-69.313+0.064-26.330 but I couldn't figure out a way to transform 64 -- 0.064. This is what I've come up with: (defun format-column () "format accounting numbers in a rectangular column. format-column puts the result in the kill-ring" (interactive) (let ((p (point)) (m (mark)) ) (copy-rectangle-to-register 0 (min m p) (max m p) nil) (with-temp-buffer (insert-register 0) (goto-char (point-min)) (while (search-forward "-" nil t) (replace-match "" nil t)) (goto-char (point-min)) (while (search-forward "," nil t) (replace-match "" nil t)) (goto-char (point-min)) (while (search-forward ")" nil t) (replace-match "" nil t)) (goto-char (point-min)) (while (search-forward "(" nil t) (replace-match "-" nil t) (just-one-space) (delete-backward-char 1) ) (goto-char (point-min)) (while (search-forward "\n" nil t) (replace-match " " nil t)) (goto-char (point-min)) (kill-new (mapconcat 'identity (split-string (buffer-substring (point-min) (point-max))) "+")) (kill-region (point-min) (point-max)) (insert "(") (yank 2) (goto-char (point-min)) (while (search-forward "+-" nil t) (replace-match "-" nil t)) (goto-char (point-max)) (insert ")/1000") (kill-region (point-min) (point-max)) ) ) ) (global-set-key "\C-c\C-f" 'format-column) Although it seems to work, I'm sure this function is poorly coded. The repetitive calls to goto-char, search-forward, and replace-match and the switching from buffer to string and back to buffer seems ugly and inelegant. My entire approach may be wrong-headed, but I don't know enough elisp to make this more beautiful. Do you see a better way to write format-column, and/or could you make suggestions on how to improve this code?

    Read the article

  • Connection aborted.

    - by Pinu
    I am getting this error when i am trying to upload a file of 3mb or more on my WCF client application. SocketException (0x2745): An established connection was aborted by the software in your host machine] System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) +73 System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +131 [IOException: Unable to read data from the transport connection: An established connection was aborted by the software in your host machine.] System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +294 System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) +26 System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead) +297 [WebException: The underlying connection was closed: An unexpected error occurred on a receive.] System.Net.HttpWebRequest.GetResponse() +5314029 System.ServiceModel.Channels.HttpChannelRequest.WaitForReply(TimeSpan timeout) +54 [CommunicationException: An error occurred while receiving the HTTP response to http://localhost:4649/Service1.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.] System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) +7596735 System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) +275 SmartConnectClient.SmartConnect.IService1.OrderCertMail(OrderCertMailResponse OrderCertMail1) +0 SmartConnectClient.SmartConnect.Service1Client.OrderCertMail(OrderCertMailResponse OrderCertMail1) in c:\documents and settings\pkale\my documents\visual studio 2008\projects\smartconnectclient\smartconnectclient\service references\smartconnect\reference.cs:1939 SmartConnectClient.Test_CertMail_Order.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\pkale\My Documents\Visual Studio 2008\Projects\SmartConnectClient\SmartConnectClient\Test_CertMail_Order.aspx.cs:40 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 enter code here

    Read the article

  • Getting a binary file from resource in C#

    - by Jesse Knott
    Hello, I am having a bit of a problem, I am trying to get a PDF as a resource in my application. At this point I have a fillable PDF that I have been able to store as a file next to the binary, but now I am trying to embed the PDF as a resource in the binary. byte[] buffer; try { s = typeof(BattleTracker).Assembly.GetManifestResourceStream("libReports.Resources.DAForm1594.pdf"); buffer = new byte[s.Length]; int read = 0; do { read = s.Read(buffer, read, 32768); } while (read > 0); } catch (Exception e) { throw new Exception("Error: could not import report:", e); } // read existing PDF document PdfReader r = new PdfReader( // optimize memory usage buffer, null ); Every time I run the code I get an error saying "Rebuild trailer not found. Original Error: PDF startxref not found" When I was just opening the file via a path to the static file in my directory it worked fine. I have tried using different encodings UTF-8, UTF-32, UTF-7, ASCII, etc etc.... As a side note I had the same problem with getting a Powerpoint file as a resource, I was finally able to fix that problem by converting the Powerpoint to xml and using that. I have considered doing the same for the PDF but I am referencing elements by field name which does not seem to work with XML PDFs. Can anyone help me out with this? Thanks in advance!

    Read the article

  • Why does my Sax Parser produce no results after using InputStream Read?

    - by Andy Barlow
    Hello, I have this piece of code which I'm hoping will be able to tell me how much data I have downloaded (and soon put it in a progress bar), and then parse the results through my Sax Parser. If I comment out basically everything above the //xr.parse(new InputSource(request.getInputStream())); line and swap the xr.parse's over, it works fine. But at the moment, my Sax parser tells me I have nothing. Is it something to do with is.read (buffer) section? Also, just as a note, request is a HttpURLConnection with various signatures. /*Input stream to read from our connection*/ InputStream is = request.getInputStream(); /*we make a 2 Kb buffer to accelerate the download, instead of reading the file a byte at once*/ byte [ ] buffer = new byte [ 2048 ] ; /*How many bytes do we have already downloaded*/ int totBytes,bytes,sumBytes = 0; totBytes = request.getContentLength () ; while ( true ) { /*How many bytes we got*/ bytes = is.read (buffer); /*If no more byte, we're done with the download*/ if ( bytes <= 0 ) break; sumBytes+= bytes; Log.v("XML", sumBytes + " of " + totBytes + " " + ( ( float ) sumBytes/ ( float ) totBytes ) *100 + "% done" ); } /* Parse the xml-data from our URL. */ // OLD, and works if comment all the above //xr.parse(new InputSource(request.getInputStream())); xr.parse(new InputSource(is)) /* Parsing has finished. */; Can anyone help me at all?? Kind regards, Andy

    Read the article

  • GZIP Java vs .NET

    - by Jim Jones
    Using the following Java code to compress/decompress bytes[] to/from GZIP. First text bytes to gzip bytes: public static byte[] fromByteToGByte(byte[] bytes) { ByteArrayOutputStream baos = null; try { ByteArrayInputStream bais = new ByteArrayInputStream(bytes); baos = new ByteArrayOutputStream(); GZIPOutputStream gzos = new GZIPOutputStream(baos); byte[] buffer = new byte[1024]; int len; while((len = bais.read(buffer)) >= 0) { gzos.write(buffer, 0, len); } gzos.close(); baos.close(); } catch (IOException e) { e.printStackTrace(); } return(baos.toByteArray()); } Then the method that goes the other way compressed bytes to uncompressed bytes: public static byte[] fromGByteToByte(byte[] gbytes) { ByteArrayOutputStream baos = null; ByteArrayInputStream bais = new ByteArrayInputStream(gbytes); try { baos = new ByteArrayOutputStream(); GZIPInputStream gzis = new GZIPInputStream(bais); byte[] bytes = new byte[1024]; int len; while((len = gzis.read(bytes)) > 0) { baos.write(bytes, 0, len); } } catch (IOException e) { e.printStackTrace(); } return(baos.toByteArray()); } Think there is any effect since I'm not writing out to a gzip file? Also I noticed that in the standard C# function that BitConverter reads the first four bytes and then the MemoryStream Write function is called with a start point of 4 and a length of input buffer length - 4. So is that effect the validity of the header? Jim

    Read the article

  • Call dll - pcshll32.dll using delphi

    - by Davis
    Hi, I need to call hllapi function of pcshll32.dll using delphi. It's works with personal communications of ibm. How can i change the code bellow to delphi ? Thanks !!! The EHLLAPI entry point (hllapi) is always called with the following four parameters: EHLLAPI Function Number (input) Data Buffer (input/output) Buffer Length (input/output) Presentation Space Position (input); Return Code (output) The prototype for IBM Standard EHLLAPI is: [long hllapi (LPWORD, LPSTR, LPWORD, LPWORD); The prototype for IBM Enhanced EHLLAPI is: [long hllapi (LPINT, LPSTR, LPINT, LPINT); Each parameter is passed by reference not by value. Thus each parameter to the function call must be a pointer to the value, not the value itself. For example, the following is a correct example of calling the EHLLAPI Query Session Status function: #include "hapi_c.h" struct HLDQuerySessionStatus QueryData; int Func, Len, Rc; long Rc; memset(QueryData, 0, sizeof(QueryData)); // Init buffer QueryData.qsst_shortname = ©A©; // Session to query Func = HA_QUERY_SESSION_STATUS; // Function number Len = sizeof(QueryData); // Len of buffer Rc = 0; // Unused on input hllapi(&Func, (char *)&QueryData, &Len, &Rc); // Call EHLLAPI if (Rc != 0) { // Check return code // ...Error handling } All the parameters in the hllapi call are pointers and the return code of the EHLLAPI function is returned in the value of the 4th parameter, not as the value of the function.

    Read the article

  • GZipStream not reading the whole file

    - by Ed
    I have some code that downloads gzipped files, and decompresses them. The problem is, I can't get it to decompress the whole file, it only reads the first 4096 bytes and then about 500 more. Byte[] buffer = new Byte[4096]; int count = 0; FileStream fileInput = new FileStream("input.gzip", FileMode.Open, FileAccess.Read, FileShare.Read); FileStream fileOutput = new FileStream("output.dat", FileMode.Create, FileAccess.Write, FileShare.None); GZipStream gzipStream = new GZipStream(fileInput, CompressionMode.Decompress, true); // Read from gzip steam while ((count = gzipStream.Read(buffer, 0, buffer.Length)) > 0) { // Write to output file fileOutput.Write(buffer, 0, count); } // Close the streams ... I've checked the downloaded file; it's 13MB when compressed, and contains one XML file. I've manually decompressed the XML file, and the content is all there. But when I do it with this code, it only outputs the very beginning of the XML file. Anyone have any ideas why this might be happening?

    Read the article

  • AudioFileWriteBytes fails with error code -40

    - by alexbw
    I'm trying to write raw audio bytes to a file using AudioFileWriteBytes(). Here's what I'm doing: void writeSingleChannelRingBufferDataToFileAsSInt16(AudioFileID audioFileID, AudioConverterRef audioConverter, ringBuffer *rb, SInt16 *holdingBuffer) { // First, figure out which bits of audio we'll be // writing to file from the ring buffer UInt32 lastFreshSample = rb->lastWrittenIndex; OSStatus status; int numSamplesToWrite; UInt32 numBytesToWrite; if (lastFreshSample < rb->lastReadIndex) { numSamplesToWrite = kNumPointsInWave + lastFreshSample - rb->lastReadIndex - 1; } else { numSamplesToWrite = lastFreshSample - rb->lastReadIndex; } numBytesToWrite = numSamplesToWrite*sizeof(SInt16); Then we copy the audio data (stored as floats) to a holding buffer (SInt16) that will be written directly to the file. The copying looks funky because it's from a ring buffer. UInt32 buffLen = rb->sizeOfBuffer - 1; for (int i=0; i < numSamplesToWrite; ++i) { holdingBuffer[i] = rb->data[(i + rb->lastReadIndex) & buffLen]; } Okay, now we actually try to write the audio from the SInt16 buffer "holdingBuffer" to the audio file. The NSLog will spit out an error -40, but also claims that it's writing bytes. No data is written to file. status = AudioFileWriteBytes(audioFileID, NO, 0, &numBytesToWrite, &holdingBuffer); rb->lastReadIndex = lastFreshSample; NSLog(@"Error = %d, wrote %d bytes", status, numBytesToWrite); return;

    Read the article
