Search Results

Search found 55766 results on 2231 pages for 'error handling'.

Page 91/2231 | < Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >

  • SQL 2000: Intermittent Error 7399 with OLE DB Provider for Microsoft Jet

    - by Tim Lara
    I am using SQL Server 2000 on Windows Server 2003 SP2 and have set up a linked server to point at an Access 97 database using the OLE DB Provider 4.0 for Microsoft Jet. The problem I am having sounds almost exactly like the one described in this Microsoft KB article, except that the error I am getting is intermittent: http://support.microsoft.com/kb/814398
    The SQL Server is running under the Local System account (which I don't have authority to change), and the Access 97 .mdb file that the linked server points to is on a Win XP Pro machine on the same LAN as the SQL Server machine, inside a shared folder with permissions set to "Everyone" and "Full Control". Now, if the linked server connection never worked, it would make more sense that the problem is merely a permissions issue with the Local System account, as the KB article above suggests, but the maddening thing is that sometimes the connection works just fine. When it fails, the error message is always the same:
        Error 7399: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error.
        [OLE/DB provider returned message: Unspecified error]
        OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80004005: ].
    Also, not only does the linked server setup occasionally work just fine on this one particular SQL Server, what is supposed to be exactly the same setup on 25 other servers works just fine EVERY TIME! Obviously, something in the non-working setup must not be exactly the same, but I'm having trouble figuring out where to look for the differences since the error message SQL Server returns is so vague. I know our sysadmins have had numerous issues with Active Directory replication across our domain, so my best guess is that there is some sort of odd group policy corruption going on, but I thought I'd ask here to see if I might be overlooking something more straightforward. Any ideas on how to further isolate the error would be greatly appreciated! For the record, here is a list of things I've already tried:
    - Rebooting the SQL Server machine. Fixes the issue temporarily, then the error returns within a minute or two of startup. (This is why I suspect a rogue group policy that is slow to apply fouling things up.)
    - Importing all database objects from the Access 97 mdb into a new, clean mdb file. Makes no difference.
    - Moving the Access 97 mdb file to a local directory on the SQL Server machine instead of accessing it via a share on the Win XP Pro LAN machine. This works, but does not solve the problem because the mdb needs to be on the client machine for performance reasons and the ability to work "stand alone". Plus, the same shared folder access works fine on all other servers / clients on my network.
    - Compared all the SQL Server, Windows Server, etc. versions to a known working setup, and everything appears to be the same.
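
    For readers reproducing the setup: a Jet linked server of this kind is typically created and smoke-tested with the statements below. The linked server name, share path, and table name are placeholders, not the poster's actual values.

        -- Sketch: create a Jet 4.0 linked server (names and paths are hypothetical)
        EXEC sp_addlinkedserver
            @server = 'ACCESS97',
            @srvproduct = 'Access',
            @provider = 'Microsoft.Jet.OLEDB.4.0',
            @datasrc = '\\xpclient\share\data.mdb';

        -- A four-part-name query exercises IDBInitialize::Initialize,
        -- which is exactly where the intermittent 7399 surfaces:
        SELECT TOP 1 * FROM ACCESS97...SomeTable;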

    Read the article

  • Handling email bounces

    - by Roland
    I have a website with approximately 60,000 registered users, and the server sends out emails to these users (e.g. birthday mailers). The problem is that I get a lot of bounces. Is there a way to manage these bounces and capture the email addresses that no longer exist? I'm running CentOS.
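
    A low-tech starting point, assuming a stock Postfix install logging to /var/log/maillog (both the MTA and the log path are assumptions, not details from the post), is to harvest bounced recipients from the mail log:

        # Sketch: list bounced addresses, most frequent first (assumes Postfix log format)
        grep 'status=bounced' /var/log/maillog \
          | grep -oP 'to=<\K[^>]+' \
          | sort | uniq -c | sort -rn > bounced_addresses.txt

    Addresses that bounce repeatedly can then be flagged or purged from the mailing list table.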

    Read the article

  • Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)

    - by rahul286
    After debugging for six hours, I am giving this up :| We have nginx + php-fpm + MySQL on a LAN with almost 100 WordPress installs (created and used by different designers/developers, all working on test WordPress setups). We have been using nginx without any issues for a long time. Today, all of a sudden, nginx started returning "504 Gateway Time-out" out of the blue... I checked the nginx error log for a virtual host:
        2010/09/06 21:24:24 [error] 12909#0: *349 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:11 [error] 12909#0: *349 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:11 [error] 12909#0: *443 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:12 [error] 12909#0: *443 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:08:32 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:33 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:40 [error] 12909#0: *1064 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:40 [error] 12909#0: *1064 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:24:44 [error] 12909#0: *1313 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:24:53 [error] 12909#0: *1313 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
    As I run php-fpm on port 9000 in TCP mode, I ran "netstat | grep 9000" and noticed something unusual (pasting partial output here for ease of reading):
        tcp 9 0 localhost:9000 localhost:36094 CLOSE_WAIT 14269/php5-fpm
        tcp 0 0 localhost:46664 localhost:9000 FIN_WAIT2 -
        tcp 1257 0 localhost:9000 localhost:36135 CLOSE_WAIT -
        tcp 1257 0 localhost:9000 localhost:36125 CLOSE_WAIT -
        tcp 9 0 localhost:9000 localhost:36102 CLOSE_WAIT 14268/php5-fpm
        tcp 0 0 localhost:46662 localhost:9000 FIN_WAIT2 -
        tcp 745 0 localhost:9000 localhost:46644 CLOSE_WAIT -
        tcp 0 0 localhost:46658 localhost:9000 FIN_WAIT2 -
        tcp 1265 0 localhost:9000 localhost:46607 CLOSE_WAIT -
        tcp 0 0 localhost:46672 localhost:9000 ESTABLISHED 12909/nginx: worker
        tcp 1257 0 localhost:9000 localhost:36119 CLOSE_WAIT -
        tcp 1265 0 localhost:9000 localhost:46613 CLOSE_WAIT -
        tcp 0 0 localhost:46646 localhost:9000 FIN_WAIT2 -
        tcp 1257 0 localhost:9000 localhost:36137 CLOSE_WAIT -
        tcp 0 0 localhost:46670 localhost:9000 ESTABLISHED 12909/nginx: worker
        tcp 1265 0 localhost:9000 localhost:46619 CLOSE_WAIT -
        tcp 1336 0 localhost:9000 localhost:46668 ESTABLISHED -
        tcp 0 0 localhost:46648 localhost:9000 FIN_WAIT2 -
        tcp 1336 0 localhost:9000 localhost:46670 ESTABLISHED -
        tcp 9 0 localhost:9000 localhost:36108 CLOSE_WAIT 14274/php5-fpm
        tcp 1336 0 localhost:9000 localhost:46684 ESTABLISHED -
        tcp 0 0 localhost:46674 localhost:9000 ESTABLISHED 12909/nginx: worker
        tcp 1336 0 localhost:9000 localhost:46666 ESTABLISHED -
        tcp 1257 0 localhost:9000 localhost:46648 CLOSE_WAIT -
        tcp 1336 0 localhost:9000 localhost:46678 ESTABLISHED -
        tcp 0 0 localhost:46668 localhost:9000 ESTABLISHED 12909/nginx: wo
    There are plenty of "CLOSE_WAIT" & "FIN_WAIT2" pairs in the output above, for example:
        tcp 1337 0 localhost:9000 localhost:46680 CLOSE_WAIT -
        tcp 0 0 localhost:46680 localhost:9000 FIN_WAIT2 -
    Please note port 46680 above. I enabled the MySQL slow query log, but that didn't help. As of now, restarting php5-fpm every minute via a cronjob (see command below) keeps everything running "smoothly", but I hate patchwork and want to solve this...
        1 * * * * service php5-fpm restart > /dev/null
    I searched extensively on Google and got no help. As mentioned, this is a test server on a LAN; CPU load never crosses 0.10 and memory usage is also below 25% (the system has 2 GB RAM and runs ubuntu-server). So if you find it time-consuming to help me out, please at least drop a hint. Thanks in advance for the help. -Rahul (Note: this is a repost of http://forum.nginx.org/read.php?11,127694) Update: I found the answer, which is posted below.
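
    The poster's eventual answer is not included in this excerpt. For readers chasing the same symptom, the knobs that usually govern it are nginx's FastCGI timeouts and php-fpm's worker pool limits; a minimal sketch with illustrative values (not the poster's actual fix):

        # nginx: inside the server/location that does fastcgi_pass
        fastcgi_connect_timeout 10;
        fastcgi_read_timeout 300;

        ; php-fpm pool configuration (php-fpm.conf or an included pool file)
        pm.max_children = 25
        request_terminate_timeout = 300

    Exhausted or hung php-fpm workers produce exactly this mix of upstream timeouts and connection resets, which would also explain why a restart-every-minute cron masks the problem.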

    Read the article

  • HT Link Sync Error after Ubuntu 10.04 LTS Installation

    - by marklab
    Update 1: I just assembled an exact replica of this server and successfully installed Ubuntu 10.04 LTS in a RAID10 configuration. The success was confirmed by a login to the initial account. There must be a hardware component that is faulty. Since the error mentions an HT Link, which on these Opterons refers to HyperTransport (the CPU interconnect), I will start with the CPUs. Please indicate if this error is more strongly associated with any other piece of hardware, or recommend another approach that would be good for this issue.
    Issue: I was attempting to install Ubuntu 10.04 LTS on this system with the board RAID10 configured. However, the installation failed at the partitioning stage by rebooting the system. Upon reboot, there is an error report after POST listing the following:
        Node0: NB WatchDog Timer Error
        Node1: HT Link Sync Error
        Node2: HT Link Sync Error
        ...
        Node7: HT Link Sync Error
        Press F1 to continue/resume.
    After pressing F1, the system will boot from the Ubuntu 10.04 LTS installation disc. However, it will fail at the same stage and go through the same process from there.
    Hardware: CPU: AMD Opteron X12 6172 G34 2.1G 18MB. Motherboard: Supermicro H8QG6-F. HDD: WD Caviar Green 2TB 5.4K RPM.
    Troubleshooting: I disabled RAID10 on the system and installed Ubuntu on a single drive. It installed successfully. I then went back to a RAID10 setup, attempted to install on the system again, and was able to make it through the partitioning stage. However, upon reboot, the system reported "Error: file not found" and then booted me into the GRUB rescue console. I feel I have aggravated the problem at this point, because when I attempt to install from the boot disc again, the system reboots upon hitting Enter to even start the installation process. It does the same thing when trying to boot from an Ubuntu 11 disc. I have not been able to find any information on this HT Link Sync Error, which I feel may have started the problems I am now experiencing with the installation of the OS. I am also aware that Ubuntu is said not to be supported by the motherboard according to Supermicro's site. However, since I was able to install it successfully on a single drive, I do not believe it is incompatible. I would like to know why it is failing to install in the RAID10 configuration.

    Read the article

  • email handling with inbox.py and nginx

    - by Matt Ball
    I have a Flask web application running behind gunicorn and Nginx. Nginx proxies any traffic to ivrhub.org to the correct flask app. I would very much like to use inbox.py to process some incoming email. Running inbox.py's example on my server and then sending an email to [email protected] does not work as I intended. The inbox.py server does not seem to receive anything but the email also does not bounce. I'm missing something conceptually -- is there a DNS setting I need to configure or something I need to adjust with Nginx?
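
    For reference, inbound mail never passes through Nginx: it is routed by the domain's MX record straight to whatever daemon listens on port 25. A hypothetical zone snippet for this kind of setup (the mail host name and IP are placeholders) might look like:

        ; BIND-style zone sketch - SMTP bypasses the HTTP proxy entirely
        ivrhub.org.       IN MX 10  mail.ivrhub.org.
        mail.ivrhub.org.  IN A      203.0.113.10

    The inbox.py process then needs to be bound to port 25 on that host, with no Nginx involvement at all.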

    Read the article

  • SMTP authentication error using PHPMailer

    - by Javier
    I am using PHPMailer to send a basic form to an email address, but I get the following error:
        SMTP Error: Could not authenticate. Message could not be sent. Mailer Error: SMTP Error: Could not authenticate. SMTP server error: VXNlcm5hbWU6
    The weird thing is that if I try to send it again, IT WORKS! Every time I submit the form after that first error it works. But if I leave it for a few minutes and then try again, I get the same error again. The username and password have to be right, as sometimes it works fine. I even created the following (very basic) script just to test it and got the same result:
        <?php
        require("phpmailer/class.phpmailer.php");

        $mail = new PHPMailer();
        $mail->IsSMTP();
        $mail->Host = "smtp.host.com";
        $mail->SMTPAuth = true;
        $mail->Username = "[email protected]";
        $mail->Password = "password";
        $mail->From = "[email protected]";
        $mail->FromName = "From Name";
        $mail->AddAddress("[email protected]");
        $mail->AddReplyTo("[email protected]");
        $mail->IsHTML(true);
        $mail->Subject = "Here is the subject";
        $mail->Body = "This is the HTML message body <b>in bold!</b>";
        $mail->AltBody = "This is the body in plain text for non-HTML mail clients";

        if(!$mail->Send()) {
            echo "Message could not be sent. <p>";
            echo "Mailer Error: " . $mail->ErrorInfo;
            exit;
        }
        echo "Message has been sent";
        ?>
    I don't think this is relevant, but I just changed my hosting to a Linux shared server. Any idea why this is happening? Thanks!
    UPDATE 02/06/2012: I've been doing some tests. The results: I tested the script on an IIS server and it worked fine. The error seems to happen only on the Linux server. Also, if I use the Gmail mail server it works fine on both IIS and Linux. Could it be a problem with the configuration of my Linux server?
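
    One detail worth decoding: "VXNlcm5hbWU6" is not an opaque code; it is the Base64-encoded AUTH LOGIN prompt sent by the SMTP server. A two-line check (plain PHP, nothing assumed beyond the string in the error):

        <?php
        // The SMTP AUTH LOGIN challenge is Base64; decoding shows the session
        // died exactly where the client should have supplied the username.
        echo base64_decode("VXNlcm5hbWU6"); // prints: Username:

    In other words, the server asked for a username and the exchange stopped there, which points at the credential handoff rather than the message content.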

    Read the article

  • Need help troubleshooting HTTPS webserver error - SSL Handshake failed

    - by DerNalia
    I followed this guide: http://hints.macworld.com/article.php?story=20041129143420344
    Here is my virtual host definition:
        <VirtualHost *:443>
            SSLEngine on
            SSLProxyEngine On
            RequestHeader set Front-End-Https "On"
            CacheDisable *
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            DocumentRoot "/Users/me/projects/myproject/public"
            ServerName ssl.mydomain.com
            ServerAlias *.ssl.mydomain.com
            SSLCertificateKeyFile "/private/etc/apache2/certs/webserver.nopass.key"
            SSLCertificateFile "/private/etc/apache2/certs/newcert.pem"
            SSLCACertificateFile "/private/etc/apache2/certs/demoCA/cacert.pem"
            SSLCARevocationPath "/private/etc/apache2/certs/demoCA/crl"
            ErrorLog "/Users/me/Desktop/ssl.log"
            ProxyPass / https://localhost:3002/
            ProxyPassReverse / https://localhost:3002
            ProxyPreserveHost on
        </VirtualHost>
    And when I try connecting to the server via the web browser, I get this error:
        [Thu Feb 02 16:50:40 2012] [error] (502)Unknown error: 502: proxy: pass request body failed to 127.0.0.1:3002 (localhost)
        [Thu Feb 02 16:50:40 2012] [error] [client 96.11.81.39] proxy: Error during SSL Handshake with remote server returned by /session/new
        [Thu Feb 02 16:50:40 2012] [error] proxy: pass request body failed to 127.0.0.1:3002 (localhost) from 96.11.81.39 ()
    How do I debug / fix this?
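
    Since the question is how to debug: a useful first step is confirming that the backend on port 3002 actually speaks SSL, because ProxyPass points at https://localhost:3002/ and a plain-HTTP backend produces exactly this kind of handshake failure (that explanation is an assumption, not a diagnosis from the log):

        # If this prints a certificate chain, the backend really is TLS;
        # an immediate error or disconnect suggests ProxyPass should use http://
        openssl s_client -connect 127.0.0.1:3002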

    Read the article

  • Handling Junk Email with Apple Mail.app and Gmail

    - by Axeva
    I've just set up my Apple Mail client to work with Google Apps through IMAP. One lingering question, however, is how best to handle spam (junk mail). In their Help section, Google recommends disabling junk filtering on the client: http://mail.google.com/support/bin/answer.py?answer=78892
    This leads me to wonder what we should do when a junk message makes it past Google's filter. Do I just delete the message? If I do, the Google spam filter will never improve and "learn" that the message was junk. Do I have to log in to the web interface at Google to mark the message as spam? That seems a bit arduous for every spam email I get. What's the best way to handle this? Thanks!

    Read the article

  • Handling inconcistent resource availability in Project 2007

    - by Lachlan McDonald
    Afternoon all, I have four resources: a project manager and three developers. The project manager can work anywhere from 9 to 5 each day, but only for a total of 10 hours per week. It doesn't matter when he works, as long as he isn't allocated more than 10 hours per week. The developers, on the other hand, can only work up to 2 hours per day, for a total of 10 hours per week. If they work more than 2 hours in a day, they are over-allocated. How do I best configure Project to handle this kind of scheduling requirement?

    Read the article

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes.
    Background: The server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails. Here's my personal PRO/CON list (regarding removal) so far:
    PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions.
    CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother about removing gcc et al. at all. There is a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system.
    OK, so here are my questions for you:
    (a) Is my PRO/CON assessment correct?
    (b) Do you know of other PROs/CONs to removing all compiler tools? Do they weigh in more?
    (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with?
    (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? Thank you!
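
    On question (d), as a worked example: making the toolchain root-only is usually done with permissions rather than by moving files around; a minimal sketch (the binary paths are typical defaults, not taken from the post):

        # Restrict the compiler toolchain to root; no relocation or re-scp needed
        chmod 700 /usr/bin/gcc /usr/bin/cc /usr/bin/make /usr/bin/ld

    This blocks direct invocation by an unprivileged attacker while keeping kernel builds possible from a root shell; as the CON above notes, it does nothing about Perl or Python.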

    Read the article

  • Handling range in CNAME

    - by Imran
    We have different sets of CNAMEs pointing to different subdomains. These subdomains (a.domain.com, b.domain.com) point to different IPs on different machines.
        # Server A
        a1.domain.com pointing to a.domain.com
        a2.domain.com pointing to a.domain.com
        ..
        aN.domain.com pointing to a.domain.com
        # Server B
        b1.domain.com pointing to b.domain.com
        b2.domain.com pointing to b.domain.com
        ..
        bN.domain.com pointing to b.domain.com
    Currently, we have to add individual CNAME entries (e.g. a1 .. aN) against a single subdomain (a.domain.com). We repeat the above process for every new server, which is actually another subdomain (e.g. c.domain.com). Is there a way we can specify a range of CNAMEs (e.g. [a1..a25].domain.com pointing to a.domain.com) instead of adding separate CNAME entries? Is there any possibility of handling this at the DNS or webserver (Apache or Nginx) level?
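
    Standard DNS zone syntax has no range notation, so the usual workaround is to generate the records with a script; a minimal sketch (the zone file name is hypothetical):

        # Emit a1..a25 CNAME records ready to append to a BIND zone file
        for i in $(seq 1 25); do
            printf 'a%d.domain.com.\tIN\tCNAME\ta.domain.com.\n' "$i"
        done >> db.domain.com

    The only native catch-all is a wildcard, and *.domain.com cannot distinguish a-prefixed from b-prefixed names, so wildcards only help if each server gets its own subzone (e.g. *.a.domain.com).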

    Read the article

  • uWSGI and Nginx python file handling

    - by user133507
    I've been trying to figure out how to properly utilize uWSGI with Nginx and have hit a bit of a design roadblock. I'm trying to figure out how my Python files should be accessed via uWSGI. I've been able to find 3 different ways to do so:
    1. Create a uWSGI process for each Python file, and then create locations in nginx that pass to each uWSGI process (see the sketch below).
    2. Create one instance of uWSGI and create a master Python file that handles all the different requests.
    3. Create one instance of uWSGI and set up dynamic applications.
    I'm coming from LightTPD, where I simply set up rewrites to point at the different Python files. I feel like option 3 is the closest to that, but uWSGI says that it is not the recommended way of going about it.
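
    A sketch of option 1 as it would look in nginx (the application paths and ports are hypothetical):

        # One location per uWSGI instance, each running a separate Python file
        location /app1/ {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:3031;
        }
        location /app2/ {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:3032;
        }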

    Read the article

  • Eclipse and Linux: Keyboard unusable after gnome-screensaver

    - by martijn-courteaux
    Hi, I know this is not programming related, but I can't find any topics on Google or the Ubuntu Forums. The problem is: when gnome-screensaver starts while Eclipse has the focus and I then wake my laptop up again, Eclipse doesn't listen to keyboard events. To solve this, I have to change the focus to another program and then back to Eclipse. Then it works again. This isn't a serious problem, but it would be nice if someone could solve it. Thanks

    Read the article

  • Pgpool-regclass gives an error when installing

    - by user119720
    I have a problem when installing pgpool-regclass. When I run 'make', it shows me this kind of error:
        p,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I. -I. -I/usr/pgsql-9.2/include/server -I/usr/pgsql-9.2/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include -c -o pgpool-regclass.o pgpool-regclass.c
        pgpool-regclass.c:99:37: error: macro "RangeVarGetRelid" requires 3 arguments, but only 2 given
        pgpool-regclass.c: In function 'pgpool_regclass':
        pgpool-regclass.c:99: error: 'RangeVarGetRelid' undeclared (first use in this function)
        pgpool-regclass.c:99: error: (Each undeclared identifier is reported only once
        pgpool-regclass.c:99: error: for each function it appears in.)
        make: *** [pgpool-regclass.o] Error 1
    Can anyone help me sort this out? I'd really appreciate it. Thanks.
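
    For context, the 3-versus-2 argument mismatch points at a version skew: the -I/usr/pgsql-9.2 include paths show PostgreSQL 9.2 headers, and 9.2 changed RangeVarGetRelid(). A comparison sketch (reconstructed from memory of the PostgreSQL sources, so treat the exact qualifiers as approximate):

        /* PostgreSQL 9.1 and earlier: two arguments */
        Oid RangeVarGetRelid(const RangeVar *relation, bool missing_ok);

        /* PostgreSQL 9.2: a lock-mode parameter was added */
        Oid RangeVarGetRelid(const RangeVar *relation, LOCKMODE lockmode, bool missing_ok);

    If that is the cause, a pgpool-regclass release built for the 9.2 headers (or building against the 9.1 headers the code expects) should resolve it.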

    Read the article

  • Server error 500 when adding .htaccess file in the root folder

    - by welou
    I have read a lot of "server error 500" questions related to the famous .htaccess file, but I have still not found an answer to this error. I have a folder which I use to test the .htaccess file: http://localhost/xampp/example/
    I copied the .htaccess into the example folder and I get server error 500. My .htaccess file contains this:
        <IfModule mod_expires.c>
            ExpiresActive on
            ExpiresByType image/png "access plus 1 days"
        </IfModule>
    I have changed my httpd.conf file: all AllowOverride None to AllowOverride All. I have also uncommented this line:
        LoadModule expires_module modules/mod_expires.so
    error.log says: ExpiresActive not allowed here
    I still get the error 500. What is actually happening? How can I resolve this error?
    PS: I am running on localhost with XAMPP.
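
    One step worth confirming (an assumption, since the post does not mention it): httpd.conf edits only take effect after Apache is restarted, and "ExpiresActive not allowed here" is exactly what Apache logs while AllowOverride for that directory still excludes the directive, i.e. as if the new setting were never loaded.

        rem Typical XAMPP-on-Windows restart (default install path assumed)
        C:\xampp\apache\bin\httpd.exe -k restart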

    Read the article

  • Recommendations for handling Directory Harvesting spam on Exchange 2003

    - by Aaron Alton
    Our Exchange server is getting slammed with anywhere between 450,000 and 700,000 spam messages per day. We receive about 1,700 legitimate messages in the same time frame. Roughly 75% of the spam is directory harvesting. We currently have GFI MailEssentials installed. To its credit, it's doing a very good job, but the sheer volume of spam that we're receiving, and the number of connections that our Exchange server is making, is preventing legitimate email from being delivered in a timely manner. GFI is set up to check for directory harvesting at the SMTP level, which I presume intercepts the mail before it hits the Exchange services or goes through SMSE. This "module" is ordered at the top of the list, so (hopefully) dealing with the harvesting is consuming a minimum amount of server resources and bandwidth. My question is: is there anything I can do to prevent our Exchange server's connection pool from being eaten up by these spam hosts? We had to limit the number of concurrent connections being made by Exchange, because it was consuming all of our bandwidth. Thanks in advance.
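
    One Exchange 2003-era mitigation aimed squarely at directory harvesting is SMTP tarpitting (added in Windows Server 2003 SP1; see Microsoft KB 842851), which delays the recipient-verification responses harvesters depend on, so each probing connection costs the sender time. A sketch of the registry value involved (the 5-second delay is illustrative; verify against the KB before applying):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SMTPSVC\Parameters]
        ; Seconds to wait before answering certain SMTP responses
        "TarpitTime"=dword:00000005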

    Read the article

  • Downloading multiple files with wget and handling parameters

    - by coure2011
    How can I download multiple files using wget? I also want to rename the files. Here are the commands I'm running one by one (copy/paste on terminal):
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720774/PS11.rar -O part11.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812721094/PS12.rar -O part12.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720804/PS13.rar -O part13.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720854/PS14.rar -O part14.rar
        ........ and so on.
    What can I do to download all these files one by one?
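
    A minimal way to script this, assuming a helper file (files.txt, a name invented here) that lists one "URL output-name" pair per line:

        # files.txt lines look like:
        #   http://www.filesonic.com/file/812720774/PS11.rar part11.rar
        while read -r url name; do
            wget -c --load-cookies cookies.txt "$url" -O "$name"
        done < files.txt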

    Read the article

  • Handling the Outlook 2007 AutoArchive PST file

    - by Doug Luxem
    We encourage our users to enable AutoArchive in Outlook 2007 as a way to manage their mailbox sizes. However, we frequently end up running into problems with the archive.pst file that is generated. The two main problems we have are:
    1. The archive.pst file is located in the user's local profile directory and is never backed up. A dead hard drive or stolen laptop could result in months or years of missing email. All other personal data is stored on network shares, but we can't do that for Outlook PST files.
    2. Without some sort of manual intervention, the archive will grow to an enormous size. Although Outlook 2007 SP2 handles large files better than before, it still results in slow response times from Outlook and an increased likelihood of a corrupt PST file.
    To mitigate these problems personally, I move the archives to a c:\Outlook folder and manually back that up to a shared drive every month or so. Additionally, I rotate archive files every year so that I have one file for each year (archive2008.pst, etc.). Obviously, asking our users to do the same wouldn't help much. We need some sort of automated solution to take care of points 1 and 2. I have to imagine this is a common problem for Exchange organizations, so what is the best method to handle this?

    Read the article

  • Handling UTF-8 with BOM in HTTP

    - by Alois Mahdal
    Say I have a script which at some point serves a plain text file as content (right after "\n\n"). These files are provided by users, but I can expect they will be UTF-8, so I hard-wire Content-Type: text/plain; charset=UTF-8. But while I can teach users to save everything in UTF-8, I can't be very sure that the files will be without a BOM ("\xEF\xBB\xBF"), as at least on Windows this is not very clearly distinguished in common plain text editors, and not every one of them uses the same default. So what about these files created on Windows, where they may or may not start with a BOM? Should/will the server or UA get rid of this debris for me? Or is it my task to prepare clean UTF-8, i.e. open each file and check whether a BOM needs to be removed?
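
    The server passes the bytes through untouched and UA behavior varies, so stripping the BOM in the script is the safe route. A minimal sketch in Python (the post does not say what language the script is in, so this is purely illustrative):

        # Strip a leading UTF-8 BOM (EF BB BF) before serving the file body
        path = "userfile.txt"  # hypothetical user-supplied file
        with open(path, "rb") as f:
            data = f.read()
        if data.startswith(b"\xef\xbb\xbf"):
            data = data[3:]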

    Read the article

  • Unable to locate Windows Server Error log files

    - by Sam007
    I am getting an error in my application: 500 Internal Server Error. Firebug gives me:
        NetworkError: 500 Internal Server Error - http://webgis.arizona.edu/ArcGIS/rest/services/webGIS/Shock_Models/GPServer/Income_Log/jobs/jc09c501156564f71abc5d98393581267/results/final_shp?dpi=96&transparent=true&format=png8&imageSR=102100&f=image&bbox=%7B%22xmin%22%3A-14519891.438356264%2C%22ymin%22%3A637618.0139790997%2C%22xmax%22%3A-6692739.741956295%2C%22ymax%22%3A6507981.786279075%2C%22spatialReference%22%3A%7B%22wkid%22%3A102100%7D%7D&bboxSR=102100&size=800%2C600
    And when I go to that particular link, I get this error: Server Error - Object reference not set to an instance of an object. Any idea how I can correct it?
    UPDATE: I was told to use Fiddler and look at the error details occurring over the network, and this is the output that I got:
        SESSION STATE: Done.
        Response Entity Size: 849 bytes.
        == FLAGS ==================
        BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
        X-CLIENTPORT: 2010
        X-RESPONSEBODYTRANSFERLENGTH: 849
        X-EGRESSPORT: 2023
        X-HOSTIP: 128.196.53.161
        X-PROCESSINFO: firefox:2248
        X-CLIENTIP: 127.0.0.1
        X-SERVERSOCKET: REUSE ServerPipe#2
        == TIMING INFO ============
        ClientConnected: 15:53:51.383
        ClientBeginRequest: 15:53:51.494
        GotRequestHeaders: 15:53:51.494
        ClientDoneRequest: 15:53:51.494
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 0ms
        HTTPS Handshake: 0ms
        ServerConnected: 15:52:45.077
        FiddlerBeginRequest: 15:53:51.495
        ServerGotRequest: 15:53:51.495
        ServerBeginResponse: 15:53:51.679
        GotResponseHeaders: 15:53:51.679
        ServerDoneResponse: 15:53:51.679
        ClientBeginResponse: 15:53:51.679
        ClientDoneResponse: 15:53:51.679
        Overall Elapsed: 00:00:00.1850106
        The response was buffered before delivery to the client.
        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]
        * Note: Data above shows WinINET's current cache state, not the state at the time of the request.
        * Note: Data above shows WinINET's Medium Integrity (non-Protected Mode) cache only.
    But I am still confused as to what the error is. This is the application. I am not sure if the error is due to ArcGIS Server or Windows Server 2008. I am new to working with Windows Server and wanted to know where I can look for the error log files. This is the link which gives the details and the log info of the job executed. This is the output.
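
    Since the 500 page carries a .NET-style message ("Object reference not set to an instance of an object"), the standard places to look on Windows Server 2008 are listed below; the ArcGIS Server log path varies by version and install location, so that entry is an assumption:

        C:\inetpub\logs\LogFiles\W3SVC<site-id>\     IIS per-request logs (status and substatus codes)
        Event Viewer > Windows Logs > Application    unhandled ASP.NET exceptions, with stack traces
        <ArcGIS install>\server\user\log\            ArcGIS Server's own logs (path assumed)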

    Read the article

  • Proper DNS records for handling subdomains and missing subdomains

    - by Cerin
    I'm trying to craft DNS records to support:
    1. Explicitly defined subdomains, e.g. ftp.mydomain.com
    2. A missing subdomain that redirects to www
    3. Implicitly defined subdomains, e.g. <some user entered value>.mydomain.com
    For 1, I'm using CNAME records. All seems to be working well. For 2, I'm using an A record, @ -> 123.456.789.012. Worked well. For 3, I ran into some trouble. I tried adding another A record, * -> 123.456.789.012. This appeared to work initially, but it broke #2; i.e., now browsing to mydomain.com doesn't redirect to www.mydomain.com. I tried adding the CNAME record @ -> 123.456.789.012, but my DNS admin tool won't accept it because it says the @ is already in use, even though I deleted the A record using it. Am I configuring this incorrectly? What am I doing wrong?
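
    For reference, a zone covering all three cases looks roughly like the sketch below (203.0.113.10 is a placeholder, as the 123.456.789.012 in the question is not a valid IPv4 address). One hard constraint explains the admin tool's refusal: the zone apex (@) cannot be a CNAME, so it must stay an A record:

        @    IN A      203.0.113.10   ; case 2: the bare domain
        www  IN A      203.0.113.10
        ftp  IN CNAME  www            ; case 1: explicit subdomains
        *    IN A      203.0.113.10   ; case 3: any other label

    With both @ and * present the apex keeps resolving; the actual redirect from mydomain.com to www.mydomain.com is an HTTP-level job for the webserver, not for DNS.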

    Read the article

  • Handling FreeBSD package upgrades using pkg_add

    - by larsks
    I'm trying to use FreeBSD's pkg_add command to install and upgrade binary packages in a build-once-install-on-multiple-machines sort of scenario. It works well when installing a new package, but upgrades are baffling me. For example, if I want to upgrade a package that is depended on by another package, I can't just install it:
        # pkg_add /path/to/somepackage-2.0.tbz
        pkg_add: package 'somepackage' or its older version already installed
    At this point, I can delete the older version of the package if I pass -f to the pkg_delete command:
        # pkg_delete -f somepackage-1.0
        pkg_delete: package 'somepackage-1.0' is required by these other packages and may not be deinstalled (but I'll delete it anyway): anotherpackage-1.0
    But... and this is the killer... now the dependency information is gone! I can install the upgrade:
        # pkg_add /path/to/somepackage-2.0.tbz
    And now attempts to delete it will succeed without any errors:
        # pkg_delete somepackage-2.0
    How do I handle this gracefully (whereby "gracefully" means "in a fashion that preserves dependency information without requiring me to rebuild/reinstall an entire dependency chain")? Thanks!
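
    One commonly suggested route (assuming ports-mgmt/portupgrade can be installed on the target machines, which the post does not say) is to let portupgrade perform the replacement from binary packages, since it rewrites the dependency records instead of discarding them:

        # -P prefers packages; -PP uses packages only and never builds from ports
        portupgrade -PP somepackage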

    Read the article

  • Show PHP error messages on IIS 7

    - by Max
    I am using IIS as a webserver on my development machine for PHP web development. Or at least, I am trying to. When there is a syntax error in a PHP script and I open that file in my web browser, I just get a 503 "internal server error" and the default IIS error page for this error. Some browsers don't open that file at all, possibly because of the 503 HTTP response header. I would like IIS to act in this case just like the Apache webserver: display the PHP file with the error anyway, so that the error message gets printed out. How can this be done?
    EDIT: PHP settings: display_errors is on and error_reporting is set to E_ALL.
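
    On IIS 7, the usual culprit is the error-page module discarding the body PHP already produced and substituting its own page. Letting the existing response pass through is a web.config setting (a sketch; it assumes display_errors stays on, as noted in the edit above):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- Serve PHP's own error output instead of the IIS error page -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>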

    Read the article

  • Set global handling for PHP scripts in NGINX + PHP-FPM

    - by Radio
    I have to define fastcgi_pass for every virtual host. How do I define it globally?
        server {
            listen 80;
            server_name www.domain.tld;
            location / {
                root /home/user/www.domain.tld;
                index index.html index.php;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/user/domain.tld$fastcgi_script_name;
                include fastcgi_params;
            }
        }
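
    fastcgi_pass cannot be declared once at the http level for arbitrary PHP locations, but the whole location block can be factored into a snippet that every server block includes; a sketch (the snippet's file name and path are invented here):

        # /etc/nginx/php_fastcgi.conf (hypothetical shared snippet)
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            # $document_root follows each vhost's own root, avoiding hardcoded paths
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        # inside each server { } block:
        include /etc/nginx/php_fastcgi.conf;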

    Read the article
