Search Results

Search found 4735 results on 190 pages for 'handling interruptions'.


  • Handling exceptions in Prism 4 modules

    - by marcellscarlett
    I've gone through a number of threads here on this topic with no success. It seems that in the App.xaml.cs of our WPF application, handling DispatcherUnhandledException and AppDomain.CurrentDomain.UnhandledException doesn't catch everything. In this specific instance we have 7 Prism modules. My exception handling code is below (almost identical for UnhandledException):

        private void OnDispatcherUnhandledException(object sender, DispatcherUnhandledExceptionEventArgs e)
        {
            try
            {
                var ex = e.Exception.InnerException ?? e.Exception;

                var logManager = new LogManager();
                logManager.Error(ex);

                if (!EventLog.SourceExists(ex.Source))
                    EventLog.CreateEventSource(ex.Source, "AppName");
                EventLog.WriteEntry(ex.Source, ex.InnerException.ToString());

                var emb = new ExceptionMessageBox(ex);
                emb.ShowDialog();

                e.Handled = true;
            }
            catch (Exception) { }
        }

    The problem seems to be the unhandled exceptions occurring in the modules. They aren't caught by the code above, and they cause the application to crash without logging or displaying any sort of message. Does anyone have experience with this?

    Read the article

  • Windows Server 2003 - Handling hundreds of simultaneous downloads

    - by Paul Hinett
    At the moment I have a single server with 4 × 1 TB hard disks. Each day over 150 MP3 music files are uploaded (around 80 MB each). At busy periods there are over 300 people streaming/downloading these mixes all at once, and 75% of the activity is on the most recently uploaded material, which all sits on a single hard disk. My read speeds on that disk are very low due to the 200+ reads all happening at the same time on one drive (I ran some tests with HDTach). What would be a logical solution? A couple of ideas I had are:

    - Load balance with another server.
    - Install faster hard disks (what is best these days, SCSI or SATA?).
    - Spread the most accessed files over all 4 drives so the load is shared between them, instead of all the most accessed (most recent) files sitting on the most recently installed drive.

    Obviously load balancing is the most expensive option, but would it dramatically help? Some help with this situation would be great!

    Read the article

  • Handling bounced email when using a postfix smarthost

    - by Mark Rose
    I'm running a high availability cluster, and so far, most things work great. I have two external machines that act as outgoing mail hosts (smarthosts). The internal hosts are configured to relay all email through these two external-facing hosts. My smarthosts' main.cf looks like this:

        myhostname = lb1.example.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = lb1.example.com, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 10.1.248.0/24

    My internal hosts' main.cf looks like this:

        mynetworks = 127.0.0.0/8
        myhostname = web1.example.com
        mydestination = $myhostname, localhost.$mydomain, localhost
        relayhost = [10.1.248.3]
        smtp_fallback_relay = [10.1.248.2]

    lb1's internal IP is 10.1.248.2, and lb2's internal IP is 10.1.248.3. On the external hosts, email for root and www-data is forwarded to [email protected] with /etc/aliases. One advantage of the smarthost setup is that spam filters and the like can connect back to the sending server. All email is sent fine, and headers look like this:

        Received: from lb2.example.com ([198.51.100.3]) by mx.google.com with ESMTP id y17si1571259icb.76.2011.01.13.18.20.32; Thu, 13 Jan 2011 18:20:32 -0800 (PST)
        Received-SPF: neutral (google.com: 198.51.100.3 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=198.51.100.3;
        Received: from db1.example.com (unknown [10.1.248.20]) by lb2.example.com (Postfix) with ESMTP id D364823C0BE for <[email protected]>; Thu, 13 Jan 2011 21:20:31 -0500 (EST)
        Received: by db1.example.com (Postfix) id C9FA7760D6A; Thu, 13 Jan 2011 21:20:31 -0500 (EST)
        Delivered-To: www-data@localhost
        Received: by db1.example.com (Postfix, from userid 0) id C1632760D6C; Thu, 13 Jan 2011 21:20:31 -0500 (EST)

    The problem is bounced/rejected email: the external machine tries to forward the bounce back to the internal machine, e.g. when www-data on web1 sends an email that bounces (such as a user signing up with a bad email address). An additional complication is that the main example.com domain uses Google mail. In lieu of specifying every internal host in the external hosts' mydestination, is there a better way of setting things up, keeping in mind that I can't touch the MX for example.com?

    Read the article

  • Handling email bounces

    - by Roland
    I have a website with approximately 60,000 registered users, and the server sends out emails to these users, e.g. birthday mailers. The problem is that I get a lot of bounces. Is there a way to manage these bounces and capture the email addresses that no longer exist? I'm running CentOS.
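
    A common way to capture these addresses is to have bounces delivered to a dedicated mailbox and then parse the delivery-status notifications out of it. Below is a minimal sketch using only Python's standard library; the mbox path and the idea of a dedicated bounce mailbox are assumptions for illustration, not part of the original setup:

        # Sketch: collect failed recipient addresses from DSN bounce messages
        # stored in a local mbox file (the path is hypothetical).
        import mailbox

        def failed_recipients(mbox_path="/var/mail/bounces"):
            failed = set()
            for msg in mailbox.mbox(mbox_path):
                for part in msg.walk():
                    # RFC 3464 bounces carry a message/delivery-status part
                    if part.get_content_type() != "message/delivery-status":
                        continue
                    # that part is parsed as a list of per-recipient status blocks
                    for status in part.get_payload():
                        action = (status.get("Action") or "").lower()
                        recipient = status.get("Final-Recipient", "")
                        if action == "failed" and ";" in recipient:
                            failed.add(recipient.split(";", 1)[1].strip())
            return failed

        if __name__ == "__main__":
            for address in sorted(failed_recipients()):
                print(address)

    The collected addresses could then be flagged or removed in the user database before the next mail run.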

    Read the article

  • email handling with inbox.py and nginx

    - by Matt Ball
    I have a Flask web application running behind gunicorn and Nginx. Nginx proxies any traffic to ivrhub.org to the correct flask app. I would very much like to use inbox.py to process some incoming email. Running inbox.py's example on my server and then sending an email to [email protected] does not work as I intended. The inbox.py server does not seem to receive anything but the email also does not bounce. I'm missing something conceptually -- is there a DNS setting I need to configure or something I need to adjust with Nginx?

    Read the article

  • Handling Junk Email with Apple Mail.app and Gmail

    - by Axeva
    I've just set up my Apple Mail client to work with Google Apps through IMAP. One lingering question, however, is how best to handle spam (junk mail). In their Help section, Google recommends that we disable junk filtering on the client: http://mail.google.com/support/bin/answer.py?answer=78892 This leads me to wonder what we should do when a junk message makes it past Google's filter. Do I just delete the message? If I do, the Google spam filter will never improve and "learn" that the message was junk. Do I have to log in to the web interface at Google to mark the message as spam? That seems a bit arduous for every spam email I get. What's the best way to handle this? Thanks!

    Read the article

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes. Background: the server runs a web application on Linux. Consider Apache jailed; otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails. Here's my personal PRO/CON list (regarding removal) so far:

    - PRO: I had been reading suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions.
    - CON: I can't live without Perl/Python, and a trojan could be written in a scripting language like that anyway, so why bother removing gcc et al. at all? There is also a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere for lack of another 64-bit hardware system.

    OK, so here are my questions for you: (a) Is my PRO/CON assessment correct? (b) Do you know of other PROs/CONs to removing all compiler tools? Do they weigh in more? (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with? (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? Thank you!

    Read the article

  • Handling inconsistent resource availability in Project 2007

    - by Lachlan McDonald
    Afternoon all. I have four resources: a project manager and three developers. The project manager can work anywhere from 9am to 5pm each day, but only for a total of 10 hours per week; it doesn't matter when he works, as long as he isn't allocated more than 10 hours in a week. The developers, on the other hand, can only work up to 2 hours per day, for a total of 10 hours per week; if they work more than 2 hours in a day, they are over-allocated. How do I best configure Project to handle this kind of scheduling requirement?

    Read the article

  • Handling range in CNAME

    - by Imran
    We have different sets of CNAMEs pointing to different subdomains. These subdomains (a.domain.com, b.domain.com) point to different IPs on different machines.

        # Server A
        a1.domain.com pointing to a.domain.com
        a2.domain.com pointing to a.domain.com
        ..
        aN.domain.com pointing to a.domain.com

        # Server B
        b1.domain.com pointing to b.domain.com
        b2.domain.com pointing to b.domain.com
        ..
        bN.domain.com pointing to b.domain.com

    Currently, we have to add individual CNAME entries (e.g. a1 ... aN) against a single subdomain (a.domain.com), and we repeat this process for every new server, which is really just another subdomain (e.g. c.domain.com). Is there a way we can specify a range of CNAMEs (e.g. [a1..a25].domain.com pointing to a.domain.com) instead of adding separate CNAME entries? Is there any possibility of handling this at the DNS or web server (Apache or Nginx) level?

    Read the article

  • uWSGI and Nginx python file handling

    - by user133507
    I've been trying to figure out how to properly utilize uWSGI with Nginx and have hit a bit of a design roadblock. I'm trying to figure out how my Python files should be accessed via uWSGI. I've found 3 different ways to do so:

    - Create a uWSGI process for each Python file and then create locations in nginx that pass to each uWSGI process.
    - Create one instance of uWSGI and create a master Python file that handles all the different requests.
    - Create one instance of uWSGI and set up dynamic applications.

    I'm coming from lighttpd, where I simply set up rewrites to point at the different Python files. I feel like option 3 is the closest to that, but uWSGI says it is not the recommended way of going about it.

    Read the article

  • Eclipse And Linux: Keyboard unusable after gnome-screen-saver

    - by martijn-courteaux
    Hi, I know this is not programming related, but I can't find any topics on Google or UbuntuForums. The problem is: when gnome-screensaver starts while Eclipse has focus and I then wake my laptop up again, Eclipse doesn't listen to keyboard events. To solve this I have to change the focus to another program and then back to Eclipse; then it works again. This isn't a serious problem, but it would be nice if someone could solve it. Thanks

    Read the article

  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently set up mass virtual hosting in Apache so that all we need to do is create a directory to create a new vhost. We're also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently; however, I'm now having trouble configuring it to fail over to an appropriate default/error page when the vhost directory does not exist. The problem appears to be conflated by my desire to handle two error conditions:

    - vhost not found, i.e. there was no directory found matching the host supplied in the HTTP Host header. I'd like this to display an appropriate "site not found" error page.
    - the 404 "page not found" condition within the vhost.

    Additionally I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none exhibit the behaviour I want. Here's what I'm working with right now:

        NameVirtualHost *:80

        <VirtualHost *:80>
          DocumentRoot /var/www/site-not-found
          ServerName sitenotfound.mydomain.org
          ErrorDocument 500 /500.html
          ErrorDocument 404 /500.html
        </VirtualHost>

        <VirtualHost *:80>
          ServerName api.mydomain.org
          DocumentRoot /var/www/vhosts/api.mydomain.org/current
          # other directives, e.g. setting up passenger/rails etc...
        </VirtualHost>

        <VirtualHost *:80>
          # get the server name from the Host: header
          UseCanonicalName Off
          VirtualDocumentRoot /var/www/vhosts/%0/current
          # other directives ... e.g proxy passing to api etc...
          ErrorDocument 404 /404.html
        </VirtualHost>

    My understanding is that the first vhost block is used as the default, so I have it here as my catch-all site. Next I have my API vhost, and then finally my mass vhost block. So for a domain that doesn't match the first two ServerNames and has no corresponding directory in /var/www/vhosts/, I'd expect it to fall over to the first vhost. However, with this setup all domains resolve to my default site-not-found. Why is this? By putting the mass-vhost block first, I can get the mass vhosts to resolve properly, but not my site-not-found vhost... and in this case I can't seem to find a way to distinguish between a page-level 404 in the vhost and the case where VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 as well). Any help out of this bind is much appreciated!

    Read the article

  • Recommendations for handling Directory Harvesting spam on Exchange 2003

    - by Aaron Alton
    Our Exchange server is getting slammed with anywhere between 450,000 and 700,000 spam messages per day. We receive about 1,700 legitimate messages in the same time frame. Roughly 75% of the spam is directory harvesting. We currently have GFI MailEssentials installed. To its credit, it's doing a very good job, but the sheer volume of spam we're receiving, and the number of connections our Exchange server is making, is preventing legitimate email from being delivered in a timely manner. GFI is set up to check for directory harvesting at the SMTP level, which I presume intercepts the mail before it hits the Exchange services or goes through SMSE. This "module" is ordered at the top of the list, so (hopefully) dealing with the harvesting is consuming a minimum of server resources and bandwidth. My question is: is there anything I can do to prevent our Exchange server's connection pool from being eaten up by these spam hosts? We had to limit the number of concurrent connections being made by Exchange, because it was consuming all of our bandwidth. Thanks in advance.

    Read the article

  • Turn off error hiding on IIS 7

    - by Toby Allen
    I just installed IIS 7 on Vista and got PHP running, but I can't avoid the unhelpful default 500 error page when there is a problem with my PHP script, e.g. a compilation error. How do I set up IIS (or maybe PHP) to show me the error that's causing the 500? Thanks

    Read the article

  • Downloading multiple files with wget and handling parameters

    - by coure2011
    How can I download multiple files using wget? I also want to rename the files. Here are the commands I'm running one by one (copy/paste on terminal):

        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720774/PS11.rar -O part11.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812721094/PS12.rar -O part12.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720804/PS13.rar -O part13.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720854/PS14.rar -O part14.rar
        ........ and so on..

    What can I do to download all these files one by one?
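
    If the goal is simply to run a long list of these downloads sequentially, one option is to drive wget from a small script. Here is a rough sketch in Python; the urls.txt file and its "URL output-name" line format are assumptions made for illustration:

        # Sketch: read "URL output-name" pairs from a file and fetch them one
        # by one with wget, reusing the same cookies file as the manual commands.
        import subprocess

        with open("urls.txt") as f:          # hypothetical list of downloads
            for line in f:
                line = line.strip()
                if not line:
                    continue
                url, name = line.split()     # e.g. "http://.../PS11.rar part11.rar"
                subprocess.run(
                    ["wget", "-c", "--load-cookies", "cookies.txt", url, "-O", name],
                    check=True,              # stop if a download fails
                )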

    Read the article

  • Handling UTF-8 with BOM in HTTP

    - by Alois Mahdal
    Say I have a script which at some point serves a plain text file as the content (right after "\n\n"). These files are provided by users, but I can expect they will be UTF-8, so I hard-wire Content-Type: text/plain; charset=UTF-8. But while I can teach users to save everything in UTF-8, I can't be very sure that the files will be without a BOM ("\xEF\xBB\xBF"), since, at least on Windows, this is not clearly distinguished in common plain text editors and not every one of them uses the same default. So what about these files created on Windows, which may or may not start with a BOM? Should/will the server or UA get rid of this debris for me? Or is it my task to prepare clean UTF-8, i.e. open each file and check whether the BOM needs to be removed?
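
    For the "strip it yourself" option, removing the BOM at serve time is only a few lines of work. A minimal sketch in Python, assuming the files are read in full before being served (the helper below is illustrative, not part of the asker's actual script):

        # Sketch: drop a leading UTF-8 BOM, if present, before serving a
        # user-supplied text file as UTF-8.
        import codecs

        def read_utf8_without_bom(path):
            with open(path, "rb") as f:
                data = f.read()
            if data.startswith(codecs.BOM_UTF8):   # b"\xef\xbb\xbf"
                data = data[len(codecs.BOM_UTF8):]
            return data.decode("utf-8")

    Decoding with Python's utf-8-sig codec achieves the same result in one step.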

    Read the article

  • Node.js for processing JS and Nginx for handling everything else

    - by Kevin Parker
    I have Node.js running on port 8000 and nginx on port 80 on the same server. I want nginx to handle all the requests (images, CSS, etc.) and forward only the JS requests to the Node.js server on port 8000. Is it possible to achieve this? I have configured nginx as a reverse proxy, but it's forwarding every request to Node.js, and I want nginx to process everything except JS. From nginx/sites-enabled/default:

        upstream nodejs {
            server localhost:8000; #nodejs
        }

        location / {
            proxy_pass http://192.168.2.21:8000;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    Read the article

  • Handling the Outlook 2007 AutoArchive PST file

    - by Doug Luxem
    We encourage our users to enable AutoArchive in Outlook 2007 as a way to manage their mailbox sizes. However, we frequently run into problems with the archive.pst file that is generated. The two main problems we have are:

    1. The archive.pst file is located in the user's local profile directory and is never backed up. A dead hard drive or stolen laptop could result in months or years of missing email. All other personal data is stored on network shares, but we can't do that for Outlook PST files.
    2. Without some sort of manual intervention, the archive will grow to an enormous size. Although Outlook 2007 SP2 handles large files better than before, it still results in slow response times from Outlook and an increased likelihood of a corrupt PST file.

    To mitigate these problems personally, I move the archives to a C:\Outlook folder and manually back that up to a shared drive every month or so. Additionally, I rotate archive files every year so that I have one file for each year (archive2008.pst, etc.). Obviously, asking our users to do the same wouldn't help much. We need some sort of automated solution to take care of points 1 and 2. I have to imagine this is a common problem for Exchange organizations, so what is the best method to handle this?

    Read the article

  • Proper DNS records for handling subdomains and missing subdomains

    - by Cerin
    I'm trying to craft DNS records to support:

    1. Explicitly defined subdomains, e.g. ftp.mydomain.com
    2. A missing subdomain that redirects to www
    3. Implicitly defined subdomains, e.g. <some user entered value>.mydomain.com

    For 1, I'm using CNAME records; all seems to be working well. For 2, I'm using an A record, @ -> 123.456.789.012, which worked well. For 3, I ran into some trouble: I tried adding another A record, * -> 123.456.789.012. This appeared to work initially, but it broke #2, i.e. now browsing to mydomain.com doesn't redirect to www.mydomain.com. I tried adding the CNAME record @ -> 123.456.789.012, but my DNS admin tool won't accept it because it says the @ is already in use, even though I deleted the A record using it. Am I configuring this incorrectly? What am I doing wrong?

    Read the article

  • Handling FreeBSD package upgrades using pkg_add

    - by larsks
    I'm trying to use FreeBSD's pkg_add command to install and upgrade binary packages in a build-once-install-on-multiple-machines sort of scenario. It works well when installing a new package, but upgrades are baffling me. For example, if I want to upgrade a package that is depended on by another package, I can't just install it:

        # pkg_add /path/to/somepackage-2.0.tbz
        pkg_add: package 'somepackage' or its older version already installed

    At this point, I can delete the older version of the package if I pass -f to the pkg_delete command:

        # pkg_delete -f somepackage-1.0
        pkg_delete: package 'somepackage-1.0' is required by these other packages and may not be deinstalled (but I'll delete it anyway): anotherpackage-1.0

    But... and this is the killer... now the dependency information is gone! I can install the upgrade:

        # pkg_add /path/to/somepackage-2.0.tbz

    And now attempts to delete it will succeed without any errors:

        # pkg_delete somepackage-2.0

    How do I handle this gracefully (whereby "gracefully" means "in a fashion that preserves dependency information without requiring me to rebuild/reinstall an entire dependency chain")? Thanks!

    Read the article

  • Set global handling for PHP scripts in NGINX + PHP-FPM

    - by Radio
    I have to define fastcgi_pass for every virtual host. How do I define it globally?

        server {
            listen 80;
            server_name www.domain.tld;

            location / {
                root /home/user/www.domain.tld;
                index index.html index.php;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/user/domain.tld$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    Read the article

  • Error handling pppd configuration?

    - by Sebastian
    I have a computer running Ubuntu that I want to be connected to the network at all times. It has a GSM modem connected to it via USB. Is there a program available that monitors whether the network is working (by pinging some sites) and, if not, tries to fix the error by resetting the modem, reloading the USB drivers, and perhaps dropping the USB port's current (if possible) to force the modem to reset?

    Read the article

  • tool for advanced ID3 tags handling and audio files ordering

    - by Juhele
    I have the following problem: some of my files do not have complete ID3 tags and some have typos or small differences in spelling, so my portable player sees "Mr. President" as a different artist from "Mr President", and so on. I need some tool which can find similar tags and then allow me to correct the typos or, for example, override the "artist" field in all selected files with manually entered text. The same goes for empty tag fields: sometimes the track name, album, etc. are OK, but the artist is missing. This should not touch the audio quality, of course (but that should be no problem, I think). I have already tried the tools in Winamp, Songbird and other players, and currently the most advanced free tool I have tried is Tagscanner. However, it is not able to solve the problem with similar tags. Do you know of such a tool? Preferably free and for Windows, if possible. However, if you know some commercial app able to do this, please let me know.
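
    The asker is after a GUI tool, but this kind of tag normalisation can also be scripted. A rough sketch using the third-party mutagen library; the music folder path and the artist spellings to merge are assumptions for illustration:

        # Sketch: force a single "artist" spelling on every MP3 under a folder,
        # using mutagen's EasyID3 interface (pip install mutagen).
        from pathlib import Path
        from mutagen.easyid3 import EasyID3
        from mutagen.id3 import ID3NoHeaderError

        CANONICAL = {"mr president": "Mr. President"}   # variants -> preferred form

        for mp3 in Path("Music").rglob("*.mp3"):        # hypothetical music folder
            try:
                tags = EasyID3(str(mp3))
            except ID3NoHeaderError:
                continue                                # file has no ID3 tag yet
            artist = (tags.get("artist") or [""])[0]
            key = artist.lower().replace(".", "").strip()
            if key in CANONICAL and artist != CANONICAL[key]:
                tags["artist"] = [CANONICAL[key]]
                tags.save()                             # audio stream is untouched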

    Read the article

  • Advanced ID3 tags handling and audio files ordering

    - by Juhele
    Some of my files do not have complete ID3 tags and some have typos or small differences in spelling, so my portable player sees "Mr. President" as a different artist from "Mr President", and so on. I need some tool which can find similar tags and then allow me to correct the typos or, for example, override the artist in all selected files with manually entered text. The same goes for empty tag fields: sometimes the track name, album, etc. are OK, but the artist is missing. I'd like to do this without touching the audio quality, of course (but that should be no problem, I think). I have already tried tools like:

    - Winamp
    - Songbird
    - other players
    - Tagscanner – the most advanced free tool I tried. However, it is not able to solve the problem with similar tags.

    Do you know of such a tool? Preferably free and for Windows, if possible. However, if you know some commercial app able to do this, please let me know.

    Read the article

  • ext3: maximum recommended partition size / handling large partitions

    - by Hansi
    Hi! I would like to do an encrypted install of Ubuntu on a 2 TB drive (i.e., using LUKS/dm-crypt). In order not to have to type in passwords too often, the partitioning scheme will be 50 GB for / and about 1 TB for /home (and the rest for Windows 7), just for clarity. Even though LVM is by now regarded as stable, I don't want to leave more room for error by introducing unnecessary layers of complexity. For both Ubuntu partitions I want encrypted ext3 with ext3's default block size (4k?). Thoughts: when I look at most partitioning schemes here on this site or elsewhere, I usually see partitions of at most about 400 or 500 GB (maybe I haven't seen enough). There may be different reasons for this, but is reliability an issue here? Are larger ext3 partitions, around 1 TB, harder to handle for the OS or filesystem driver or at some other level? If I make the partition too large, will it be harder to repair in case of corruption? Are there any default ext3 settings that I should change for 1 TB partitions? Question: what maximum partition size for ext3 do you recommend, and why? Thanks!

    Read the article
