Search Results

Search found 11231 results on 450 pages for 'bad alloc'.

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one. First, what I'm doing. I'm building a web-app interface for some particularly slow tcp devices. Opening a socket to them takes 200ms and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent tcp socket which reduces the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I haven't been able to resolve after 2 days of interneting, reading logs and modifying settings. I have somewhat narrowed it down though. Setup: Ubuntu 13.04 x64 Server (fully updated) on Linode PHP 5.5.0-6~raring+1 (fpm-fcgi) nginx/1.5.2 Relevant config: nginx worker_processes 4; php-fpm/pool.d pm = dynamic pm.max_children = 2 pm.start_servers = 2 pm.min_spare_servers = 2 Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple of seconds to the script. The first takes a while to open the socket connection and returns with the data in about 500ms, the second returns data in 300ms (yay it's re-using the socket), the third also succeeds in about 300ms, the fourth request = 502 Bad Gateway, same with the 5th. The sixth request once again returns data, except now it took 500ms again. The process repeats for several cycles, after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms data responses. If I double all the fpm pool values and have 4x php-fpm processes running, the cycle settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away but then every request is 500ms. What I suspect is happening is that the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left; as they error out, maybe they are restarted and become available on the next round-robin loop, but the socket dies with the process. I haven't yet checked the 'slowlog', but the nginx error log shows lots of this: *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:... All the suggestions on the internet regarding fixing nginx/php-fpm/502 bad gateway relate to high load or fcgi_pass misconfiguration. This is not the case here. Increasing buffers/sizes, changing timeouts, switching from unix socket to tcp socket for fcgi_pass, upping connection limits on the system... none of this stuff applies here. I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns. For the php script, I'm using stream_socket_client() with a STREAM_CLIENT_PERSISTENT flag, a while/stream_select() loop to detect socket data, and fread($sock, 4096) to grab the data. I don't call fclose(), obviously. If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it.
some useful links: Nginx + php-fpm - recv() error Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server) Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/ http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3 http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive http://php.net/manual/en/install.fpm.configuration.php https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1

    Read the article

  • How to debug anomalous C memory/stack problems

    - by EBM
    Hello, sorry I can't be specific with code, but the problems I am seeing are anomalous. Character string values seem to be getting changed depending on other, unrelated code. For example, the value of the argument that is passed around below will change merely depending on whether I comment out one or two of the fprintf() calls! By the last fprintf() the value is typically completely empty (and no, I have checked to make sure I am not modifying the argument directly... all I have to do is comment out a fprintf() or add another fprintf() and the value of the string will change at certain points!): static void process_args(char *arg) { /* debug */ fprintf(stderr, "Function arg is %s\n", arg); ...do a bunch of stuff including calling another function that uses alloc()... /* debug */ fprintf(stderr, "Function arg is now %s\n", arg); } int main(int argc, char *argv[]) { char *my_string; ... do a bunch of stuff ... /* just to show you it's nothing to do with the argv array */ my_string = strdup(argv[1]); /* debug */ fprintf(stderr, "Argument 1 is %s\n", my_string); process_args(my_string); } There's more code all around, so I can't ask for someone to debug my program -- what I want to know is HOW I can debug why character strings like this are getting their memory changed or overwritten based on unrelated code. Is my memory limited? Is my stack too small? How do I tell? What else can I do to track down the issue? My program isn't huge, it's like a thousand lines of code give or take and a couple of dynamically linked external libs, but nothing out of the ordinary. HELP! TIA!
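
    A symptom like the one above (a string changing value depending on unrelated fprintf() calls) is the classic signature of undefined behaviour from memory corruption: some other write, often a buffer overrun or a store through a stale pointer, lands on the string or on the pointer to it, and which variable gets hit depends on how the compiler happened to lay things out. Below is a minimal, self-contained sketch of that bug class (illustrative only, not the asker's code; it compiles as C or C++):

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            const char *arg = "hello world";  /* a neighbouring pointer on the stack */
            char label[8];
            /* Overrunning 'label' scribbles over whatever the compiler placed next
               to it -- possibly 'arg'. Which variable is damaged depends on stack
               layout, so adding or removing an unrelated fprintf() call moves the
               symptom around, much as described above. */
            strcpy(label, "this string is far too long");
            fprintf(stderr, "arg is now %s\n", arg);  /* may print garbage or crash */
            return 0;
        }

    Rather than guessing about stack size, tools that watch every memory access -- Valgrind's memcheck, or building with gcc/clang's -fsanitize=address -- will usually point at the exact out-of-bounds write the first time it happens.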

    Read the article

  • bad_alloc occurring when allocating small structs

    - by SalamiArmi
    A bad_alloc has started showing up in some code which looks perfectly valid to me and has worked very well in the past. The bad_alloc only occurs once every 50-3000 iterations of the code, which is also confusing. The code itself is from a singly linked list, simply adding a new element to the queue: template<typename T> struct container { inline container() : next(0) {} container *next; T data; }; void push(const T &data) { container<T> *newQueueMember = new container<T>; //... unrelated to crash } Where T is: struct test { int m[256]; }; Changing the size of the allocated array to anything but very small values (1-8 ints) still results in a bad_alloc occasionally. A few extra notes about my program: - I used Poco::ThreadPool to thread my program. I've only recently added this functionality; before, I had it running with Win32 threads. However, only the main thread ever calls push(). - I am also occasionally getting other crashes which could be related. However, when I try to debug with Visual Studio 2008, I can't navigate back to the call stack, or the crash happens deep within new(). Thanks in advance.
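
    For what it's worth on the bad_alloc itself: a failed allocation of a ~1 KB node almost never means the machine is out of memory; far more often the heap has already been corrupted by earlier code (a buffer overrun, a double delete, or a race from another thread), and operator new only trips over the damage later, which could also explain the other crashes that end up deep inside new(). A minimal C++ sketch of how to tell at the call site whether new is genuinely throwing or the allocator itself is crashing (names are illustrative, not taken from the asker's project):

        #include <cstdio>
        #include <new>

        struct test { int m[256]; };          // roughly 1 KB, same shape as above

        template<typename T>
        struct container {
            container() : next(0) {}
            container *next;
            T data;
        };

        int main() {
            try {
                // If this throws std::bad_alloc, the allocator reported failure
                // cleanly; if the process instead crashes inside new, the heap
                // was already damaged by earlier code.
                container<test> *node = new container<test>;
                std::printf("allocated node at %p\n", static_cast<void *>(node));
                delete node;
            } catch (const std::bad_alloc &e) {
                std::fprintf(stderr, "bad_alloc caught: %s\n", e.what());
            }
            return 0;
        }

    Running the program against a debug heap (the Visual Studio debug CRT, or page heap via Application Verifier/gflags) tends to move the crash to the point of corruption instead of the later, innocent-looking allocation.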

    Read the article

  • template-style matrix implementation in C

    - by monkeyking
    From time to time I use the following code for generating a matrix-style data structure: typedef double myType; typedef struct matrix_t{ myType **matrix; size_t x; size_t y; }matrix; matrix alloc_matrix(size_t x, size_t y){ if(0) fprintf(stderr,"\t-> Alloc matrix with dim (%lu,%lu) byteprline=%lu bytetotal:%lu\n",x,y,y*sizeof(myType),x*y*sizeof(myType)); myType **m = (myType **)malloc(x*sizeof(myType **)); for(size_t i=0;i<x;i++) m[i] =(myType *) malloc(y*sizeof(myType *)); matrix ret; ret.x=x; ret.y=y; ret.matrix=m; return ret; } And then I would change my typedef accordingly if I needed a different kind of type for the entries in my matrix. Now I need 2 matrices with different types; an easy solution would be to copy/paste the code, but is there some way to do a template-styled implementation? Thanks
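
    There are no templates in C, but the preprocessor can play the same role: a macro can stamp out a struct plus an alloc function for each element type you need. A minimal sketch of that approach (macro and type names are illustrative; it keeps the row-pointer layout used above and compiles as C99 or C++):

        #include <stdlib.h>

        /* DECLARE_MATRIX(double, dmatrix) generates the type 'dmatrix' and the
           function 'dmatrix_alloc'; repeat the macro for each element type. */
        #define DECLARE_MATRIX(TYPE, NAME)                              \
            typedef struct {                                            \
                TYPE **matrix;                                          \
                size_t x;                                               \
                size_t y;                                               \
            } NAME;                                                     \
                                                                        \
            static NAME NAME##_alloc(size_t x, size_t y) {              \
                NAME ret;                                               \
                ret.matrix = (TYPE **)malloc(x * sizeof(TYPE *));       \
                for (size_t i = 0; i < x; i++)                          \
                    ret.matrix[i] = (TYPE *)malloc(y * sizeof(TYPE));   \
                ret.x = x;                                              \
                ret.y = y;                                              \
                return ret;                                             \
            }

        DECLARE_MATRIX(double, dmatrix)   /* matrix of doubles */
        DECLARE_MATRIX(int, imatrix)      /* matrix of ints    */

        int main(void) {
            dmatrix d = dmatrix_alloc(3, 4);   /* freeing omitted for brevity */
            imatrix m = imatrix_alloc(2, 2);
            d.matrix[0][0] = 1.5;
            m.matrix[1][1] = 42;
            return 0;
        }

    Note the sizeof expressions: allocating x * sizeof(TYPE *) row pointers and y * sizeof(TYPE) elements per row is what the original code intends; its malloc(y*sizeof(myType *)) only happens to reserve the right amount when sizeof(myType *) equals sizeof(myType), as it does for double on a 64-bit build.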

    Read the article

  • DBAN not working because disk has bad sectors?

    - by canadiancreed
    Attempting to wipe the drive of a laptop that I have before it's sold, and normally I use DBAN to do so. However, this time it starts and then finishes instantly with the following message: "DBAN finished with non-fatal errors. This is usually caused by disks with bad sectors." I have tried multiple flags such as noverify to force it to skip this check (it doesn't show bad sectors in the OS scan in Windows), but the error always comes back. This is the only time that I've seen this message, as every other of the few drives I've used this software on usually takes 3-5 hours to do its job.

    Read the article

  • why does chkdsk always report errors on a bad shutdown

    - by rep_movsd
    Once in a while, Windows XP hangs on my laptop (usually when going into standby or hibernate and occasionally on startup) and I have to forcefully power off. Usually chkdsk never runs automatically (I thought it should know that the partitions have not been unmounted and do that). I religiously run chkdsk without /F after bad shutdowns like this, and invariably it reports that the drive has unfixed errors and must be checked with /F, and I do that, and more often than not, the chkdsk that runs on startup does not report fixing anything. I have had times in the past (and not only on this system) when not running chkdsk leads to some strange errors like files not opening even though they exist and an inability to save certain files, so I make it a point to always run chkdsk after a bad shutdown. I never understood why this is: isn't the whole point of a journalling filesystem like NTFS to avoid file system corruption and endless chkdsks? I even tried disabling write caching once to see if it made any difference, but to no avail.

    Read the article

  • Will disk cloning resolve bad stripes on RAID?

    - by user13323
    Hi. We have a logical RAID1 drive in a bad stripes state, which kept that status even after replacement and rebuilding of both drives, and gives errors in the Windows logs about failure to write to the disk. IBM support suggests erasing and re-creating the RAID, then re-installing Windows. The resulting downtime is unacceptable for us, so we want to clone the RAID (via Acronis True Image), erase and re-create the RAID, then dump the cloned data back. Following IBM's logic that erasing and re-creating the RAID resets the whole RAID metadata, this should clear the bad-stripes status and start from a blank page. The question is whether such a strategy is possible and will produce the desired effect. Any idea is appreciated - thanks in advance!

    Read the article

  • idle processes and high memory bad? uwsgi/django

    - by JimJimThe3rd
    I have a VPS with 256MB of ram. I'm running nginx, uwsgi and postgresql on Ubuntu 12.04 for a soon to be Django site. About 200MB of ram are being used despite the website not being active, the uwsgi processes seem to just be idling. Is this bad? I once heard that having a bunch of free memory isn't necessarily a good metric because it is possible that the memory in use can easily be freed up. I mean, it is possible that the server is storing commonly used "stuff" in case it is accessed but is more than happy to dump it if the ram is needed. But I'm really not sure, hence me asking this question. If it is bad I could set some of the application loading options for uwsgi like "cheap" or "idle" mode. Screenshot of my htop

    Read the article

  • What are the strategies available to minimise badblocks on an encrypted partition?

    - by David Andreoletti
    Let me explain my backup strategy and the problem I am facing. My current backup strategy: open the encrypted container and execute Carbon Copy Cloner on it at least once a week. Rotate backup disks. Problem: I have a TrueCrypt partition on my 1st external hard disk. I recently found out that some files on this encrypted partition cannot be read due to bad blocks (reported by Antonio Diaz's GNU 'ddrescue'). My backup strategy is ineffective in this scenario because bad blocks are discovered during backup. Possible strategies: Strategy #0: Have the encrypted partition over a RAID 1 with 2 disks. Is this a suitable strategy? Strategy #1: Can you think of any other one? Environment: Mac OS X 10.8 External 2.5" hard disk (SATA) No RAID

    Read the article

  • Most awesomely bad hack

    - by Zypher
    As I sit watching one of my latest dirty, dirty hacks run, I started wondering what kind of dirty hacks you have created that are so bad they are awesome. We all have a few of them in our past - and they are probably still running in production somewhere, chugging along, somehow still working. Which reminds me of the hack we had to put into place when we were moving data centers. Our IVRs had to keep running, as the data center we were moving from was the primary DC, and the new primary wasn't quite ready to take traffic. So what did we do? Well, we answered the calls in DC1, then shipped the SIP stream over the internet to DC2, 1900 miles away... that just felt oh so wrong. So the question is, what is one (or more) of your awesomely bad hacks?

    Read the article

  • Is zip's encryption really bad?

    - by Nifle
    The standard advice for many years regarding compression and encryption has been that the encryption strength of zip is bad. Is this really the case in this day and age? I read this article about WinZip (it has had the same bad reputation). According to that article the problem is removed provided you follow a few rules when choosing your password: at least 12 characters in length; be random; not contain any dictionary words, common words or names; have at least one upper-case character; have at least one lower-case character; have at least one numeric character; and have at least one special character, e.g. $, £, *, %, &, !. This would result in roughly 475,920,314,814,253,000,000,000 possible combinations to brute force. Please provide recent (say, past five years) links to back up your information.
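
    For context on the quoted figure: it is just the size of the search space for a fully random 12-character password drawn from (presumably) the 94 printable ASCII characters, since 94^12 ≈ 4.76 × 10^23, which is roughly the 475,920,314,814,253,000,000,000 combinations mentioned above. In other words, that number measures the strength of the password policy rather than anything specific to the zip format's cipher.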

    Read the article

  • SQL Server 2000 reporting bad values to ASP.NET application

    - by Ben
    I have an instance of SQL Server 2000 (8.0.2039) with a rather simple table. We recently had users complain about an application I wrote returning bad values for some of the dates in the database. When I query the table directly via Server Management Studio, it returns the correct values; however, the identical queries from my application report the wrong values, but only for a couple of dates. I have been over the code, and it is solid. If the error were in the code, all of the dates reported should be wrong. I have also run the code on an identical test database, and everything is reported properly. I believe the problem may lie in the SQL instance itself, which is why I am posting on Server Fault. My question is, has anyone heard of a database reporting bad (incorrect) date values when queried via a web application? It should be noted that this particular server was once manually rebuilt after having a cluster clean run on it.

    Read the article

  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I went and copied+adapted the existing configuration files from another client, but when I schedule a job for the new client, I get these errors: 20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04 20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage" 20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103 20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant)

    Read the article

  • Recovering from bad ownership

    - by Christian Sciberras
    I was going to change the ownership of a directory to apache:apache, but I ended up running: chown -R apache:apache / Bad! Very bad! I knew what was going on when it started saying: chown: changing ownership of `/proc/2694/fd/48': Permission denied That's when I stopped everything (Ctrl+C). The current system I have is a server running VirtualBox running CentOS 5. This problem happened inside the VM. Currently, everything seems to be working, but I have not restarted the system yet, and to be honest, I'm afraid that if I do, something will break. I do not know what order chown works in; should I be concerned and assume something will break after a reboot? Is there a way to recover from this problem without having to rely on backups? I do have a daily one, but I thought there might be a simpler way out.

    Read the article

  • What are good and bad jitter times for a LAN

    - by garyb32234234
    I've just run jperf (a frontend to iperf) on our network between 2 workstations, and it recorded jitter between 0.033ms and 0.048ms. Is this good or bad? Are there more variables that I would need to consider to make the decision? EDIT: TCP/IP Ethernet LAN, 43 PCs, 1 server, 100Mbit main switch, various small 8-port switches; the test was done using UDP; it's a Windows domain. I want to install a few VoIP softphones on the workstations and see how many I can use that reliably work. I'm testing a few different workstations around the network to see where the best quality network paths are. I will also change some equipment if I identify bad connections.

    Read the article

  • Repair BAD Sectors or Buy a new HDD?

    - by Nehal J. Wani
    I have a Seagate internal hard disk drive. I recently opened up my laptop [Dell Inspiron N5010] [Warranty has expired], cleaned it, and it worked normally after waking up from hibernation. However, when I restarted it, it got stuck on the Windows loading screen, then tried to boot from the Dell recovery partition but failed. It gave the error: Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer. Status: 0xc00000e9 Info: An unexpected I/O error has occurred. While cleaning, I had mistakenly touched the round silvery thing at the bottom of the HDD. I don't know whether this has caused the problem or not. Since I have Fedora also installed on the same HDD, I can boot from it, but it shows weird read errors when I ask it to mount Windows partitions. The disk utility also says that the hard disk has many bad sectors and needs to be replaced. I downloaded SeaTools from the Seagate website and used it. In the long test, I gave it permission to repair the first 100 errors, which it did successfully. Now I am confused about what I should do. Hard disk costs: a. Internal HDD 500GB Costs: Rs3518 b.1 External HDD 500GB Costs: Rs3472 b.2 External HDD 1TB Costs: Rs5500 c. Internal to External Converter Costs: Rs650 I have the following options: (i) Buy an External HDD, backup my data. Try to repair bad sectors of HDD. Then two cases arise: (a) My internal HDD gets repaired [almost] (b) My internal HDD doesn't get repaired. Then I need to buy another internal HDD and replace the damaged one. OR break the seal of the external one and put it inside my laptop as internal. Breaking the case involves risks. (ii) Buy an Internal HDD and an Internal to External Converter Case [Not very reliable], backup my data. Try to repair bad sectors of HDD. Then two cases arise: (a) My internal HDD gets repaired [almost] (b) My internal HDD doesn't get repaired. Then I need to just put in the new internal HDD I just bought. Experts, please guide me as to what will be the most VFM (value for money) option. Also, if a HDD is failing, should I avoid reading from it, since otherwise there is a chance of other sectors failing? What I mean is, is it wrong to read from the HDD without taking a backup first?

    Read the article

  • Why is nesting or piggybacking errors within errors bad in general?

    - by dietbuddha
    Why is nesting or piggybacking errors within errors bad in general? To me it seems bad intuitively, but I'm suspicious in that I cannot adequately articulate why it is bad. This may be because it is not bad in general and is only bad in specific instances. Why is it detrimental to design error/exception handling in such a way? The specific instance is that of a REST service. There is a desire by some to use HTTP errors (specifically the 500 response) as a way to indicate any problem with specific instances of a resource. An example of an instance resource in this case would be: http://server/ticket/80 # instance http://server/ticket # not an instance So this is the behavior that is being proposed. If ticket 80 does not exist, return an HTTP response code of 500. Within the body of the error, return the "real" error as an additional error code and description. If the ticket resource itself (not an instance) doesn't exist, return a response code of 404.
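
    The practical argument against piggybacking is that the HTTP status code and the application-level error are two separate channels: caches, proxies, monitoring, and generic clients act on the status code alone, so reporting "ticket 80 does not exist" as a 500 makes a routine client mistake indistinguishable from a genuine server fault. A minimal sketch of the distinction being argued for (C++, with made-up types; the ticket data is a placeholder):

        #include <cstdio>
        #include <map>
        #include <string>

        struct Response { int status; std::string body; };

        std::map<int, std::string> tickets = { {81, "printer on fire"} };

        // GET /ticket/<id>: 200 when found, 404 when that instance does not
        // exist; 500 stays reserved for unexpected server-side failures.
        Response get_ticket(int id) {
            auto it = tickets.find(id);
            if (it == tickets.end())
                return {404, "ticket does not exist"};
            return {200, it->second};
        }

        int main() {
            Response r = get_ticket(80);   // missing instance -> 404, not 500
            std::printf("%d %s\n", r.status, r.body.c_str());
            return 0;
        }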

    Read the article

  • I get EXC_BAD_ACCESS when MaxConcurrentOperationCount > 1

    - by kudorgyozo
    Hello, I am using NSOperationQueue to download images in the background. I have created a custom NSOperation to download the images. I put the images in table cells. The problem is if I do [operationQueue setMaxConcurrentOperationCount: 10] and I scroll down several cells the program crashes with EXC_BAD_ACCESS. Every time it crashes at the same place in the table. There are 3 cells one after the other which are for the same company and have the same logo so basically it should download the images 3 times. Every other time it works fine. - (void) main { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSURL *url = [[NSURL alloc] initWithString:self.imageURL]; debugLog(@"downloading image: %@", self.imageURL); //NSError *error = nil; NSData *data = [[NSData alloc] initWithContentsOfURL:url]; [url release]; UIImage *image = [[UIImage alloc] initWithData:data]; [data release]; if (image) { if (image.size.width != ICONWIDTH && image.size.height != ICONHEIGHT) { UIImage *resizedImage; CGSize itemSize = CGSizeMake(ICONWIDTH, ICONHEIGHT); UIGraphicsBeginImageContext(itemSize); CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height); [image drawInRect:imageRect]; resizedImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); self.theImage = resizedImage; } else { self.theImage = image; } [image release]; } [delegate didFinishDownloadingImage: self]; [pool release]; } This is how I handle downloading the images. If I comment out [delegate didFinishDownloadingImage: self]; in the function above it doesn't crash but of course it is useless. -(void) didFinishDownloadingImage:(ImageDownloadOperation *) imageDownloader { [self performSelectorOnMainThread: @selector(handleDidFinishDownloadingImage:) withObject: imageDownloader waitUntilDone: FALSE]; } -(void) handleDidFinishDownloadingImage:(ImageDownloadOperation *)imageDownloadOperation { NSArray *visiblePaths = [self.myTableView indexPathsForVisibleRows]; CompanyImgDownloaderState *stateObject = (CompanyImgDownloaderState *)[imageDownloadOperation stateObject]; if ([visiblePaths containsObject: stateObject.indexPath]) { //debugLog(@"didFinishDownloadingImage %@ %@", imageDownloader.theImage); UITableViewCell *cell = [self.myTableView cellForRowAtIndexPath: stateObject.indexPath]; UIImageView *imageView = (UIImageView *)[cell viewWithTag: 1]; if (imageDownloadOperation.theImage) { imageView.image = imageDownloadOperation.theImage; stateObject.company.icon = imageDownloadOperation.theImage; } else { imageView.image = [(TestWebServiceAppDelegate *)[[UIApplication sharedApplication] delegate] getCylexIcon]; stateObject.company.icon = [(TestWebServiceAppDelegate *)[[UIApplication sharedApplication] delegate] getCylexIcon]; } } }

    Read the article

  • AVAudioPlayer crash after playing from an AVAudioRecorder

    - by munchine
    I've got a button the user taps to start recording and taps again to stop. When it stops I want the recorded voice 'echoed' back so the user can hear what was recorded. This works fine the first time. If I hit the button for the third time, it starts a new recording and when I hit stop it crashes with EXC_BAD_ACCESS. - (IBAction) readToMeTapped { if(recording) { recording = NO; [readToMeButton setTitle:@"Stop Recording" forState: UIControlStateNormal ]; NSMutableDictionary *recordSetting = [[NSDictionary alloc] initWithObjectsAndKeys: [NSNumber numberWithFloat: 44100.0], AVSampleRateKey, [NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey, [NSNumber numberWithInt: 1], AVNumberOfChannelsKey, [NSNumber numberWithInt: AVAudioQualityMax], AVEncoderAudioQualityKey, nil]; // Create a new dated file NSDate *now = [NSDate dateWithTimeIntervalSinceNow:0]; NSString *caldate = [now description]; recordedTmpFile = [NSURL fileURLWithPath:[[NSString stringWithFormat:@"%@/%@.caf", DOCUMENTS_FOLDER, caldate] retain]]; error = nil; recorder = [[ AVAudioRecorder alloc] initWithURL:recordedTmpFile settings:recordSetting error:&error]; [recordSetting release]; if(!recorder){ NSLog(@"recorder: %@ %d %@", [error domain], [error code], [[error userInfo] description]); UIAlertView *alert = [[UIAlertView alloc] initWithTitle: @"Warning" message: [error localizedDescription] delegate: nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert show]; [alert release]; return; } NSLog(@"Using File called: %@",recordedTmpFile); //Setup the recorder to use this file and record to it. [recorder setDelegate:self]; [recorder prepareToRecord]; [recorder recordForDuration:(NSTimeInterval) 5]; //recording for a limited time } else { // it crashes the second time it gets here! recording = YES; NSLog(@"Recording YES Using File called: %@",recordedTmpFile); [readToMeButton setTitle:@"Start Recording" forState:UIControlStateNormal ]; [recorder stop]; //Stop the recorder. //playback recording AVAudioPlayer * newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:recordedTmpFile error:&error]; [recordedTmpFile release]; self.aPlayer = newPlayer; [newPlayer release]; [aPlayer setDelegate:self]; [aPlayer prepareToPlay]; [aPlayer play]; } } - (void)audioRecorderDidFinishRecording:(AVAudioRecorder *)sender successfully:(BOOL)flag { NSLog (@"audioRecorderDidFinishRecording:successfully:"); [recorder release]; recorder = nil; } Checking the debugger, it flags the error here: @synthesize aPlayer, recorder; This is the part I don't understand. I thought it might have something to do with releasing memory but I've been careful. Have I missed something?

    Read the article

  • MySQL SSL: bad other signature confirmation

    - by samJL
    I am trying to enable SSL connections for MySQL-- SSL will show as enabled in MySQL, but I can't make any connections due to this error: ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation I am running the following: Ubuntu Version: 14.04.1 LTS (GNU/Linux 3.13.0-34-generic x86_64) MySQL Version: 5.5.38-0ubuntu0.14.04.1 OpenSSL Version: OpenSSL 1.0.1f 6 Jan 2014 I used these commands to generate my certificates (all generated in /etc/mysql): openssl genrsa -out ca-key.pem 2048 openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca-cert.pem -subj "/C=US/ST=NY/O=MyCompany/CN=ca" openssl req -newkey rsa:2048 -nodes -days 3650 -keyout server-key.pem -out server-req.pem -subj "/C=US/ST=NY/O=MyCompany/CN=server" openssl rsa -in server-key.pem -out server-key.pem openssl x509 -req -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem openssl req -newkey rsa:2048 -nodes -days 3650 -keyout client-key.pem -out client-req.pem -subj "/C=US/ST=NY/O=MyCompany/CN=client" openssl rsa -in client-key.pem -out client-key.pem openssl x509 -req -in client-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem I put the following in my.cnf: [mysqld] ssl-ca=/etc/mysql/ca-cert.pem ssl-cert=/etc/mysql/server-cert.pem ssl-key=/etc/mysql/server-key.pem When I attempt to connect specifying the client certificates-- I get the following error: mysql -uroot -ppassword --ssl-ca=/etc/mysql/ca-cert.pem --ssl-cert=/etc/mysql/client-cert.pem --ssl-key=/etc/mysql/client-key.pem ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation If I connect without SSL, I can see that MySQL has correctly loaded the certificates: mysql -uroot -ppassword --ssl=false mysql> SHOW VARIABLES LIKE '%ssl%'; +---------------+----------------------------+ | Variable_name | Value | +---------------+----------------------------+ | have_openssl | YES | | have_ssl | YES | | ssl_ca | /etc/mysql/ca-cert.pem | | ssl_capath | | | ssl_cert | /etc/mysql/server-cert.pem | | ssl_cipher | | | ssl_key | /etc/mysql/server-key.pem | +---------------+----------------------------+ 7 rows in set (0.00 sec) My generated certificates pass OpenSSL verification and modulus: openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem server-cert.pem: OK client-cert.pem: OK What am I missing? I used this same process before on a different server and it worked- however the Ubuntu version was 12.04 LTS and the OpenSSL version was older (don't remember specifically). Has something changed with the latest OpenSSL? Any help would be appreciated!

    Read the article

  • another "SSH connect to host github.com port 22: Bad file number"

    - by Mariusz
    Hello. I have a problem with my first-time ssh connection. Yes, I've already followed your guides, already tried your "Dealing with firewalls and proxies" article, and the problem is still occurring. I am using Win7 32bit, Windows Firewall is disabled, I don't have any third-party firewalls, ESET Nod32 Antivirus is not blocking any ports, and I am not using any proxy (nor a local proxy). Here are the logs: Ordinary SSH connection try: C:\Users\Mariusz>ssh -vvv [email protected] OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007 debug2: ssh_connect: needpriv 0 debug1: Connecting to github.com [207.97.227.239] port 22. debug1: connect to address 207.97.227.239 port 22: Not owner ssh: connect to host github.com port 22: Bad file number NCAT connection try: C:\Users\Mariusz>ncat github.com 22 Strange connect error from 207.97.227.239 (10013): No error 10013 = WSAEACCES I think the method called "smart-http-support" won't be usable for me because I haven't created a repo yet. I have just run GIT INIT locally, and finished at the GIT PUSH step, which returns the same: ssh: connect to host github.com port 22: Bad file number fatal: The remote end hung up unexpectedly Corkscrew method (first article from your guide): while PuTTYing (with Pageant in the background), after entering my login an error occurs (MessageBox): Disconnected: No supported authentication methods available And in the terminal this message is printed: Server refused our key I have generated my key correctly, using ssh-keygen. I have not tried the method of editing ~/.ssh/config yet, because I thought that since I haven't PUSHed anything to my remote repo, I won't be able to CLONE anything. The method called ssh-forwarding is not for me, because it "requires access to an external ssh server" and I don't have one at this time. What else could I do? Thanks in advance for any help. Mariusz.

    Read the article

  • 502 Bad Gateway error after failed requests using Passenger

    - by Nicolas Buduroi
    I've got a staging server running nginx 1.0.5 and Rails 3.1 under Passenger 3.0.9. The problem is that a request sent just after one where there's an application error returns 502 Bad Gateway. To test it I've set up a simple controller with an action that just raises a dummy exception. One request will show the Rails error message and the next one will show the nginx 502 Bad Gateway error, then it goes back to the Rails application error, and so on. While investigating this problem I've found out that load testing the application (which increases the number of application processes) makes the issue disappear. That is, until the extra processes are shut down; then it reappears. I've tried setting the passenger_min_instances option, but doing so doesn't change anything, and in this case each time an application error happens one instance is killed, while after load testing all instances are kept alive. P.S.: Some people on my team told me that they've seen the 502 error even when there's no application error, but I've not been able to reproduce that. Update: I just found out how to display the response status codes using ab, and most of them are 502!

    Read the article

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone is mapping to is running Server 2008 R2, and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch. We have also replaced an old dying SonicWall with a shiny new one. Everything is running on an ESX host except for the DC, which everyone is getting data from. In my client's office, we have done the following: swapped out his computer (Win7 and XP box), swapped out the desktop switch in his office, removed the desktop switch in his office, changed out the network cable going to the wall, ran 'net config server /autodisconnect:-1' on the server, and disabled remote differential compression on his current Win7 box. When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9GB PST living on the server he is always connected to), or that the software he runs from his L drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he all of a sudden couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall or patch panel, or a bad port in the new switch I have in the server room?

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting that the current pending sector count is "45". I have used badblocks to identify the sectors and I have been trying to write zeros to them with dd. From what I understand, when I attempt writing data directly to the bad sectors, it should trigger a reallocation, reducing the current pending sector count by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector. # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152 dd: error writing ‘/dev/sdb’: Input/output error Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i: Model Family: Western Digital Caviar Green (AF) Device Model: WDC WD15EARS-00Z5B1 Serial Number: WD-WMAVU3027748 LU WWN Device Id: 5 0014ee 25998d213 Firmware Version: 80.00A80 User Capacity: 1,500,301,910,016 bytes [1.50 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: ATA8-ACS (minor revision not indicated) SATA Version is: SATA 2.6, 3.0 Gb/s Local Time is: Fri Oct 18 17:47:29 2013 CDT SMART support is: Available - device has SMART capability. SMART support is: Enabled UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions: Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to the sectors and couldn't before. Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?

    Read the article
