Search Results

Search found 14236 results on 570 pages for 'times square'.


  • gpfs: adding a new nsd server to a cluster

    - by alessandra
    I have a GPFS cluster composed of 10 Linux nodes, managed by a primary server A, which also acts as the NSD server for a first stack of disks. I attached a new JBOD to one of the nodes (call it node B), which I would like to become an NSD server for this new stack of disks, while still being included in the cluster so that the disks are available to all the nodes. Node B is connected to the cluster via Ethernet. How can I make the new NSDs visible to all the nodes of the cluster? I can create the new NSDs, but when I try to create the filesystem on node B, the command mmcrfs times out. It looks like the nodes of the cluster cannot locate the filesystem even if I specify the disks as attached to server B in the descriptor file. Would it be better to remove node B from the cluster, create a cluster of its own with its attached filesystem, and connect it remotely to the previous cluster? Or would a clustered NFS solution be a better fit? Can you please give me any suggestions?
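
    A rough sketch of the usual sequence for adding disks served by a specific node; the descriptor format is an assumption (it varies between GPFS releases, so check the mmcrnsd man page for yours), and "yourfs"/"nsd_b1" are placeholder names:

      # name node B as the NSD server for the new disk; other nodes then
      # reach the disk over TCP/IP through node B
      echo "/dev/sdb:nodeB::dataAndMetadata:1:nsd_b1" > /tmp/newdisks
      mmcrnsd -F /tmp/newdisks
      mmadddisk yourfs -F /tmp/newdisks   # or mmcrfs ... -F /tmp/newdisks for a new filesystem

    If mmcrfs still times out, it may be worth verifying that every node can reach node B on the GPFS daemon port (1191/tcp).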

    Read the article

  • Networking "chokes" on Windows 7 64 bit

    - by Rohit Nair
    I've been having this problem for some months now, and I have been unable to figure out a solution, or even the cause. At random points throughout the day, my internet connectivity "freezes". I don't get disconnected from my local wireless network. My router doesn't get disconnected from the world. However, for some reason, my computer stops receiving packets. If I'm playing an MMO (World of Warcraft, in this case, but it has happened with EVE Online as well), all activity just freezes. If I try to browse, Opera, Firefox and IE all stall at "Waiting for google.com..." or whatever the hostname may be. Inspection with a packet sniffer seems to reveal that there are no incoming packets. Here's the interesting part: disconnecting from my wireless network and reconnecting fixes the issue. Obviously this led me to conclude that it was a problem with my router or wireless card. However, I have tweaked all the settings on my router that I could think of, including things like QoS, AP isolation, etc., with no change. My wireless card doesn't really have that many options, and I have uninstalled and reinstalled drivers a few times without any change. Windows Firewall on/off doesn't make a difference. Anyone have any suggestions for debugging this? It's becoming an annoyance.
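
    One commonly suggested set of stack-reset steps worth trying once (standard Windows commands; it's only an assumption that the client-side stack is at fault, and a reboot is needed afterwards):

      netsh winsock reset
      netsh int ip reset
      ipconfig /flushdns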

    Read the article

  • RDP problem with Vista and Windows 7 destination

    - by MadBison
    I use a server at home to host a bunch of concurrently running Hyper-V VMs with different OSes and software for testing. I have Vista on the laptop, with all the latest SPs and patches. The server is Server 2008 R2, fully patched. The guests are a mix of XP, Vista, Server 2008 and Windows 7. If I connect to a Win XP or Server 2008 guest using RDP, it is always good: very quick, no speed issues. If I connect to the Vista or Win 7 guests, the response time is so slow it is unusable, usually 6 or 8 seconds, and at times too long to measure! This happens from both the laptop running Vista and the server running Server 2008 R2. Does anyone know what the issue is with RDP to Vista and Windows 7 destinations? I did read http://blog.tmcnet.com/blog/tom-keating/microsoft/remote-desktop-slow-problem-solved.asp, but that is not the problem; I have applied that change to all PCs.
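
    One other frequently cited tweak for slow RDP specifically to Vista/Windows 7 destinations is disabling TCP auto-tuning on the connecting machine; it's an assumption that this is the cause here, and the change is reversible (set it back to normal):

      netsh interface tcp set global autotuninglevel=disabled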

    Read the article

  • How To Remove Bottleneck with Squid Caching Proxy

    - by Volomike
    I'm more of a LAMP web developer trying to help the sysop. When I joined the project, I inherited some old PHP spaghetti code. Part of that code goes out to a third-party website (let's call it thirdparty.com) and pulls down content with an HTTP GET request. Unfortunately, the way the code is designed, it needs to do this several times a minute. When we looked at the bottlenecks on the server with 'netstat -a', we saw that connections to thirdparty.com were constantly running, when this content would be perfectly fine gathered once a day. What I need to know is whether the Squid caching proxy server is the solution we need. I'm guessing it might let us have something pretend to be thirdparty.com on the network: if the web server needs to query thirdparty.com, it hits Squid instead, and Squid can then determine whether to serve the content from cache or go to thirdparty.com for fresh content. Is this the solution we need? And second, is it easy to configure it to cache only thirdparty.com requests?
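
    For what it's worth, a minimal squid.conf fragment along those lines; this is a sketch only, assuming Squid 2.7/3.x and that thirdparty.com's responses are cacheable at all:

      http_port 3128
      # cache nothing except the third-party site
      acl thirdparty dstdomain .thirdparty.com
      cache deny !thirdparty
      # treat its objects as fresh for 24 hours (refresh_pattern values are in minutes)
      refresh_pattern -i thirdparty\.com 1440 100% 1440 override-expire override-lastmod ignore-reload
      refresh_pattern . 0 20% 4320

    The PHP code would then need to send its GET requests through the proxy (e.g. via the curl CURLOPT_PROXY option) rather than directly to thirdparty.com.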

    Read the article

  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not). I like Delicious just fine; I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and, I'm guessing, brought to the top) in my search results, so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I've searched for a site only to find I'd been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it makes it easier to pick which search result to go to. My question is about bookmarking tools: is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time, in this case Google and Delicious? Or maybe a service to sync my Delicious bookmarks to Google Bookmarks on a regular basis? I have used the Delicious addon since the beginning; it would just be nice to add a bookmark to multiple services with one addon. For that matter, it would be nice to add Evernote into the mix: click one button to save the page to Evernote, and bookmark the page in Google and Delicious.
    EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the two services in sync. I was not able to get the two addons to keep everything in sync, so it was also suggested to use the Google Toolbar with the Delicious addon instead. Although I personally have reservations about letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there was a solution that would let you post a bookmark/page to multiple services at the same time (Delicious, Google, Evernote, Digg, Diigo, etc.). Thanks!

    Read the article

  • Setting a time limit for a transaction in MySQL/InnoDB

    - by Trevor Burnham
    This sprang from a related question, where I wanted to know how to force two transactions to occur sequentially in a trivial case (where both are operating on only a single row). I got an answer (use SELECT ... FOR UPDATE as the first line of both transactions), but this leads to a problem: if the first transaction is never committed or rolled back, then the second transaction will be blocked indefinitely. The innodb_lock_wait_timeout variable sets the number of seconds after which the client trying to make the second transaction would be told "Sorry, try again"... but as far as I can tell, they'd be trying again until the next server reboot. So: surely there must be a way to force a ROLLBACK if a transaction is taking forever? Must I resort to using a daemon to kill such transactions, and if so, what would such a daemon look like? And if a connection is killed by wait_timeout or interactive_timeout mid-transaction, is the transaction rolled back? Is there a way to test this from the console?
    Clarification: innodb_lock_wait_timeout sets the number of seconds that a transaction will wait for a lock to be released before giving up; what I want is a way of forcing a lock to be released.
    Update: Here's a simple example that demonstrates why innodb_lock_wait_timeout is not sufficient to ensure that the second transaction is not blocked by the first:

      START TRANSACTION;
      SELECT SLEEP(55);
      COMMIT;

    With the default setting of innodb_lock_wait_timeout = 50, this transaction completes without errors after 55 seconds. And if you add an UPDATE before the SLEEP line, then initiate a second transaction from another client that tries to SELECT ... FOR UPDATE the same row, it's the second transaction that times out, not the one that fell asleep. What I'm looking for is a way to force an end to this transaction's restful slumber.
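
    On the daemon question, a minimal sketch of the kind of watchdog in question, assuming MySQL 5.1 with the InnoDB plugin or newer (which exposes information_schema.INNODB_TRX); the 60-second threshold and 10-second poll interval are arbitrary:

      #!/bin/sh
      # kill any InnoDB transaction that has been open for more than 60 seconds;
      # KILL rolls the victim's transaction back
      while true; do
        mysql -N -B -e "SELECT trx_mysql_thread_id FROM information_schema.INNODB_TRX
                        WHERE trx_started < NOW() - INTERVAL 60 SECOND" |
        while read id; do
          mysql -e "KILL $id"
        done
        sleep 10
      done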

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to export data to a file, which is then FTPed to a local machine for processing. The issue coincided roughly with the time of the export, but when we didn't run the export one day, the server still ran into the same problem. The export has since been run at other times of day while monitoring memory usage, to see if it spikes; the conclusion is that the export is fine and barely touches the sides memory-wise, with no noticeable change in memory usage. When the issue happens, its effect is to kill mysql and require us to restart the process. We think it might be a mysql memory issue, but it might just be that mysql is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, memory usage spikes from a near-constant 25% to 98% and very quickly kills mysql to save itself; it usually takes about 3-4 minutes to die. There are no cron jobs running at that time of day, and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
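
    One low-effort way to catch the culprit in the act is a cron entry that snapshots the biggest processes every minute around the window; the path and the 9 o'clock hour here are assumptions to adjust:

      # /etc/cron.d/mem-snapshot: every minute from 9:00 to 9:59
      * 9 * * * root (date; ps aux --sort=-rss | head -15) >> /var/log/mem-snapshot.log

    Comparing the snapshots just before and during the spike should show whether mysql itself grows or merely gets picked off by the OOM killer when something else balloons.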

    Read the article

  • How to rip AVI from VOB using ffmpeg

    - by Linux Jedi
    I am trying to convert a VOB to an AVI. I have ripped an AVI from this VOB before using ffmpeg, but for some reason it's not working this time. This is what I tried:

      ffmpeg -sameq -acodec copy -i VTS_01_2.VOB output.avi

    This is the output I get:

      FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
        built on Dec 29 2010 18:02:10 with gcc 4.2.1 (Apple Inc. build 5664)
        configuration:
        libavutil     50.15. 1 / 50.15. 1
        libavcodec    52.72. 2 / 52.72. 2
        libavformat   52.64. 2 / 52.64. 2
        libavdevice   52. 2. 0 / 52. 2. 0
        libswscale     0.11. 0 /  0.11. 0
      [mpeg2video @ 0x101014200]mpeg_decode_postinit() failure
          Last message repeated 6 times
      Input #0, mpeg, from 'VTS_01_2.VOB':
        Duration: 26:30:29.20, start: 140.171311, bitrate: 90 kb/s
          Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x480 [PAR 32:27 DAR 16:9], 9800 kb/s, 31.44 fps, 29.97 tbr, 90k tbn, 59.94 tbc
          Stream #0.1[0xa0]: Audio: pcm_s16be, 48000 Hz, 2 channels, s16, 1536 kb/s
      Output #0, avi, to 'output.avi':
        Metadata:
          ISFT : Lavf52.64.2
          Stream #0.0: Video: mpeg4, yuv420p, 720x480 [PAR 32:27 DAR 16:9], q=2-31, 200 kb/s, 29.97 tbn, 29.97 tbc
          Stream #0.1: Audio: pcm_s16be, 48000 Hz, 2 channels, 1536 kb/s
      Stream mapping:
        Stream #0.0 -> #0.0
        Stream #0.1 -> #0.1
      Could not write header for output file #0 (incorrect codec parameters ?)
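
    If the failure is the raw big-endian PCM audio track being copied into the AVI container (an assumption based on the last line of the output), re-encoding the audio instead of copying it may get past it, assuming this ffmpeg build includes libmp3lame:

      # re-encode audio to MP3 rather than stream-copying the PCM track
      ffmpeg -i VTS_01_2.VOB -sameq -acodec libmp3lame -ab 192k output.avi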

    Read the article

  • Windows Server 2003 DHCP not handing out IPs

    - by SnOrfus
    I'm trying to set up a home server (to tinker with) as a domain controller. I've set up the domain and I've installed DHCP and set up a scope without any exclusions (with the default range of 192.168.0.1-254). My client machine is a Windows 7 (RC) machine; it has a connection but can't get an IP address. Even if I set the IP to a static 192.168.0.2, there is still no connectivity. I can ping it from the server, but pinging the server from the client just times out. The only thing between the server and the client is a 24-port switch (D-Link DES-1024D).
    Edit: OK, it turned out that the interfaces were set up backwards in the NAT settings (the internal NIC connection was set to public and the external NIC connection was set to private). I changed this and all was OK... sort of. The problem now is: if I set a static IP on the client (where I am typing this from), all is fine. BUT when I set it to get an IP from DHCP, I get a correct IP from the server (192.168.0.2) but there is no internet on the client; I can still ping the server fine from the client (which makes sense, because I was able to get an IP from it).
    Edit: I ended up just removing the Routing and DHCP server roles and going with ICS for the time being, until I get my hands on some better learning tools.

    Read the article

  • Windows 7 - "A disk read error occured. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: When I switch on my PC, after the BIOS POST, a cursor blinks for about 5 seconds and then I get this error message: "A disk read error occurred. Press Ctrl + Alt + Del to restart." I am able to go into the BIOS, but the Windows loader doesn't even start. The message is shown after my motherboard logo comes and goes.
    Symptoms: I DID notice my system freezing for minutes at a time over the past two days. Also, in the past two days, it stopped halfway through the Windows boot process, and I had to do a hard reset a couple of times to get it working. But since this morning, I only get this error message.
    Configuration: Windows 7 Ultimate 32-bit only; one physical disk, 80GB SATA; two partitions (C: and D:); NTFS; no drive encryption or compression is turned on.
    After searching the net, I have found people mentioning these possible causes: the hard disk is physically failing, a corrupt MBR, or a bad sector. I am planning to buy a new hard disk, install Windows on it and continue, but I need data from the old hard disk. The data I want is on the D: drive, outside any Windows user folder, and is not encrypted, compressed or protected in any way. I think if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk?
    Update: I bought a new disk, installed Windows on it and added the defective one as a slave. I was then able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk any more, though it seems to be working at the moment.

    Read the article

  • redirect from mysite.com to www.mysite.com

    - by jml
    Hi there. I know that this has been answered many, many times, so if someone wants to point me to another thread that answers my question specifically, that is fine; for right now, my searches aren't yielding many results. I have a website like mysite.com that has a Flash SWF embedded in it, and when I go to www.mysite.com, all of a sudden things don't work properly. I would like to get to the bottom of this, because it's not like the page just "doesn't load" at all: it loads, but I can only do certain things, as if certain functionality is disabled (it might be URL requests for specific URLs, etc.). Do I need to manage this in my control panel? I wouldn't assume so, because the site loads; it just has crippled functionality from within the SWF. I was thinking it might have more to do with my crossdomain.xml file; could this be the case? Thanks for any tips or suggestions.
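
    Two things usually checked in this situation: that crossdomain.xml at the web root permits both hostnames, and that only one hostname is ever served, via a 301 redirect. A sketch of the redirect, assuming Apache with mod_rewrite and .htaccess enabled (the domain is the placeholder from the question):

      RewriteEngine On
      # send bare-domain requests to the www hostname
      RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
      RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

    With the redirect in place, the SWF only ever runs under one origin, which sidesteps the crossdomain mismatch entirely.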

    Read the article

  • Configuring vsftpd with nginx on Ubuntu 12.04 LTS

    - by arby
    I've attempted to configure an nginx / vsftpd server on Ubuntu 12.04 LTS (via Amazon EC2) a couple of times now, but I seem to keep making a mistake along the way. Currently, when I try to connect to my FTP server it takes a minute or so to connect, and then every command I issue times out with an "operation failed" error. Aside from these issues, I'm not completely confident about the file ownership & permissions or the configuration / settings, so I think it's best if I just re-install and re-configure correctly. I believe the nginx installation comes with a default user of www-data:www-data and web root directory ownership by root:root. Vsftpd, however, needs to have a user created with the same group as the nginx user (www-data) and the same home directory as the nginx web root (/usr/share/nginx/www), with g+w chmod permissions granted on that directory. The vsftpd.conf file should disable anonymous logins and enable local logins, file writing, and chrooting of local users. In my previous config, I had /bin/false set as the FTP user's shell and pam_shells.so disabled. I also had local_umask set to 0027. So, starting with a fresh EC2 instance, I've got:

      sudo apt-get install vsftpd
      sudo apt-get install nginx

    For the firewall I issued the command (not sure if necessary):

      sudo ufw allow ftp

    Which commands / config are recommended from here? I only need one FTP user that I can use to log in with my FTP client to modify the single nginx web domain, which will need PHP & SQL for WordPress.
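
    A minimal sketch of the user setup and vsftpd.conf described above ("ftpuser" is a placeholder). One caveat worth knowing: vsftpd 2.3.5, the version Ubuntu 12.04 ships, refuses to chroot a user into a directory that user can write to ("500 OOPS"), so the g+w permission may need to live on a subdirectory rather than the chroot root itself:

      sudo useradd -d /usr/share/nginx/www -g www-data -s /bin/false ftpuser
      sudo passwd ftpuser
      sudo chmod g+w /usr/share/nginx/www

      # /etc/vsftpd.conf essentials
      anonymous_enable=NO
      local_enable=YES
      write_enable=YES
      chroot_local_user=YES
      local_umask=0027

      sudo service vsftpd restart

    The long connect delay followed by timed-out commands is also the classic symptom of passive-mode data ports being blocked, so on EC2 the security group may need a pasv_min_port/pasv_max_port range opened in addition to port 21.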

    Read the article

  • Munin 2 data not showing up on graph

    - by letronje
    I have a fresh installation of Munin 2.0.1 on Ubuntu 12.04, and the first time I tried to view graphs, it showed them properly (after installation, I had to follow http://munin-monitoring.org/wiki/CgiHowto2 to set it up). Since then, the graphs show up, but with just one data point (a single vertical line), as if no data has been collected after the first time I tried it. In Munin 1.4, munin-cron ran every 5 minutes and I saw new data being plotted on the graph at least every 5 minutes. But if there is no cron job in v2, how does data collection work with Munin 2? Is the data collected when the graphs are requested? The file timestamps in /var/lib/munin have not changed since the first time I tried the graphs, but I do see the munin-node process running (I've restarted it several times). I also see no errors in the munin-node log files or the apache2 log files. Any idea what could be wrong? Screenshot: http://i.imgur.com/uzuAK.png Also, is there a way to pre-create graphs instead of generating them dynamically, on the fly?
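
    For what it's worth, data collection in Munin 2.x is still cron-driven (munin-update, invoked by munin-cron); only the graph rendering moved to CGI in the howto linked above. A quick way to check, assuming the stock Debian/Ubuntu layout:

      cat /etc/cron.d/munin                 # should invoke munin-cron every 5 minutes
      sudo -u munin /usr/bin/munin-cron     # run one collection pass by hand
      tail /var/log/munin/munin-update.log  # look for errors from that pass

    Unchanged timestamps in /var/lib/munin point at the cron entry being missing or failing, which the manual run above should expose.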

    Read the article

  • Troubleshooting Mid 2007 iMac RAM upgrade

    - by MDT
    I am trying to install new RAM in my friend's iMac, something I have done several times before. We unplugged the computer before performing the upgrade, used anti-static wrist bands, and yes, the memory is compatible and inserted correctly. The stock RAM was Hynix 1GB PC2-5300S-555-12 and the memory we are replacing it with is 2x2GB Centon CMP800SO2048.01. Now, I know this model number suggests that the RAM is 800MHz and the iMac is only 667MHz, but it clearly states on the box that this RAM is PC2-5300 667MHz compatible. The problem is that when I install the new RAM I get little response from the computer: I hear the hard drive and disk drive start to initialize, but then they just stop and the screen remains black. I have tried every variation of the new RAM and the old RAM in both slots, and even tried the same RAM from my old iMac, and I just can't get it to boot. Has anyone ever had a problem like this? Thank you in advance for any and all input on this issue!

    Read the article

  • How can I most efficiently batch resize images on a Mac?

    - by Nick Douglas
    I've been batch-resizing images through Preview (OS X) through the menu bar, but I want a simpler workflow, since I do this a dozen times a day. What I want:
    1. Select a group of image files in Finder.
    2. Hit a button or two (menu item or keyboard shortcut) to do the following:
       a. Scale all the pictures to 600 pixels wide
       b. Save as JPG files at 75% quality
    What I also want: all of the above, plus a step a(1): crop images to 200 pixels in height.
    I can do all that manually, to a batch of files, through Preview. I can do it one at a time with some keyboard shortcuts in Photoshop or Pixelmator. Automator (using Preview) can scale to 600 pixels on the longest dimension, but it doesn't let me specify width (it can scale specifically to width before cropping height), and it can change to JPG but can't specify image quality. I can assign a keyboard shortcut to the whole process. Is that my best option on a Mac? Can I accomplish this more efficiently through another app like Quicksilver?
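
    For reference, the bundled sips tool can do the whole batch from a shell script, which Automator can wrap in a "Run Shell Script" action bound to a Finder service; a sketch, with the crop step included and the input filenames assumed:

      # scale to 600px wide, crop to 200px tall, save as 75%-quality JPEG
      for f in *.png; do
        sips -s format jpeg -s formatOptions 75 \
             --resampleWidth 600 --cropToHeightWidth 200 600 \
             "$f" --out "${f%.*}.jpg"
      done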

    Read the article

  • stunnel client uses improper SNI when talking to Apache

    - by Huckle
    I have stunnel listening on port 80 and acting as a client connecting to Apache listening on port 443. Configuration is below. What I'm finding is that if I attempt to connect to localhost:80 the connection is fine, but if I connect to 127.0.0.1:80 the request fails. When I check Apache's logs, they indicate that stunnel is using localhost as the SNI both times, while the HTTP request lists localhost in one case and 127.0.0.1 in the other. Is it possible to tell stunnel to either use whatever is in the HTTP request, or to somehow configure two clients, each with a different SNI value?

    stunnel.conf:

      debug = 7
      options = NO_SSLv2

      [xmlrpc-httpd]
      client = yes
      accept = 80
      connect = 443

    Apache error.log:

      [error] Hostname localhost provided via SNI and hostname 127.0.0.1 provided via HTTP are different

    Apache access.log:

      "GET / HTTP/1.1" 200 2138 "-" "Wget/1.13.4 (linux-gnu)"
      "GET / HTTP/1.1" 400 743 "-" "Wget/1.13.4 (linux-gnu)"

    wget:

      $ wget -d localhost
      ---request begin---
      GET / HTTP/1.1
      User-Agent: Wget/1.13.4 (linux-gnu)
      Accept: */*
      Host: localhost
      Connection: Keep-Alive
      ---request end---

      $ wget -d 127.0.0.1
      ---request begin---
      GET / HTTP/1.1
      User-Agent: Wget/1.13.4 (linux-gnu)
      Accept: */*
      Host: 127.0.0.1
      Connection: Keep-Alive
      ---request end---

    Edit: the Apache config is nothing out of the ordinary, just a virtual host listening on 443: <VirtualHost *:443>
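
    If the stunnel build is recent enough to support a client-side sni override (an assumption; check stunnel -version and its manual), two services along these lines might do it. Whether an IP literal is accepted as an SNI value is a further assumption to verify; the port numbers are placeholders:

      [xmlrpc-localhost]
      client = yes
      accept = 127.0.0.1:8080
      connect = 443
      sni = localhost

      [xmlrpc-loopback]
      client = yes
      accept = 127.0.0.1:8081
      connect = 443
      sni = 127.0.0.1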

    Read the article

  • Makecert.exe hangs

    - by Robert
    I was following the steps in Scott Hanselman's blog post describing how to create a certificate authority and a code-signing certificate for PowerShell scripts. Initially, I created the certificate authority and a personal certificate and used it to sign a PowerShell script successfully; all went as described in the blog post. The problem starts (as most do) when I did something that was (probably) stupid, although it seemed reasonable at the time. I wanted to start over and repeat the process with a clean slate, so from the MMC certificates snap-in console I deleted the personal certificate and the certificate authority I had created previously. After that, any time I try to use makecert (just as I did the first time around), makecert either hangs or faults (which prompts to end or debug). Did I hose something up by deleting via the certificates snap-in? It didn't complain or warn me that it could be potentially hazardous. Or is this just coincidence and something else entirely is hosed? I have Event Log entries from the times when makecert crashed, which all look very similar; here is one:

      Log Name: Application
      Source: Application Error
      Date: 8/5/2009 3:55:04 PM
      Event ID: 1000
      Task Category: (100)
      Level: Error
      Description: Faulting application makecert.exe, version 6.0.6000.16384, time stamp 0x4545910b, faulting module ntdll.dll, version 6.0.6002.18005, time stamp 0x49e03821, exception code 0xc0000005, fault offset 0x00067409, process id 0xe58, application start time 0x01ca160efdf30625.

    Anyone have any ideas as to what exactly caused this and/or what I can do to fix it? I'm on 32-bit Vista Enterprise w/SP2.

    Read the article

  • nginx+mysql5 loadtesting configuration strangeness

    - by genseric
    I am trying to set up a new server running on Debian 6 and make it work smoothly under load. I've used a WordPress site as a test object and tried the configurations on http://blitz.io. When I increase the MySQL max_connections from 50 to 200, lots of timeouts start to occur; at 50, there are no timeouts and response times are pretty good. The nginx configuration is fine (I tuned the config so I don't see errors), so I presume the problem is related to the other configuration options in my.cnf. I've read up on the options but still can't find what the max_connections problem is all about. BTW, the server has 16GB of RAM and a fine i7 CPU. Here is the current my.cnf:

      [client]
      port = 3306
      socket = /var/run/mysqld/mysqld.sock

      [mysqld_safe]
      socket = /var/run/mysqld/mysqld.sock
      nice = 0

      [mysqld]
      wait_timeout = 60
      connect_timeout = 10
      interactive_timeout = 120
      user = mysql
      pid-file = /var/run/mysqld/mysqld.pid
      socket = /var/run/mysqld/mysqld.sock
      port = 3306
      basedir = /usr
      datadir = /var/lib/mysql
      tmpdir = /tmp
      language = /usr/share/mysql/english
      skip-external-locking
      bind-address = 127.0.0.1
      key_buffer = 384M
      max_allowed_packet = 16M
      thread_stack = 192K
      thread_cache_size = 20
      myisam-recover = BACKUP
      max_connections = 50
      table_cache = 1024
      thread_concurrency = 8
      query_cache_limit = 2M
      query_cache_size = 128M
      expire_logs_days = 10
      max_binlog_size = 100M

      [mysqldump]
      quick
      quote-names
      max_allowed_packet = 16M

      [mysql]
      #no-auto-rehash # faster start of mysql but no tab completition

      [isamchk]
      key_buffer = 16M

    Thanks in advance. I asked this question on SO, but it was closed as off-topic, so I believe this is a SF question.
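
    One observation: nothing in that file sizes InnoDB at all, so if the WordPress tables are InnoDB they run on the tiny defaults, which could plausibly explain timeouts once 200 clients pile up. A hedged starting point for a 16GB box, with values illustrative rather than tuned:

      # [mysqld] additions - sketch only, adjust to measured working set
      innodb_buffer_pool_size = 4G
      innodb_log_file_size = 256M   # note: changing this needs a clean shutdown
                                    # and removal of the old ib_logfile* files
      max_connections = 200
      open_files_limit = 8192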

    Read the article

  • Project and Business Document Organization

    - by dassouki
    How do you organize, maintain edits and revisions of, and track the relationships between proposals, contracts, change orders, deliverables, and projects? How do you organize your projects for re-usability? For example, is there a way to add tags to projects to make them more accessible? What's a good data structure for dumping all my files on an internet server for easy access? Presently, my work folder is set up as follows:

      /work/
        /projects
          /project_a
            /final (includes all final documents)
              /contracts
              /rfp_rfq
              /change_orders
              /communications (logs all emails, faxes, and meeting notes and minutes)
              /financial
                /paid
                /unpaid
              /reports
            /old (includes all documents that didn't make it into project_a/final/)
          /project_b
            ... same as above ...
        /references
          /technical_references
          /gov_regulations
          /data_sources
          /books
          /topic_based (each area of my expertise has a folder with references in it)
        /business_contacts
          /contacts.xls (contains all my contacts)
        /banking
          /banking.xls (contains a list of all paid and unpaid invoices, as well as some cool stats)
          /quicken (to do my taxes and yada yada)
            /year
        /education (courses I've taken)
          /webinars
          /seminars
          /online_courses
        /publications (includes the publications I've made)
          /publication_id

    We're mostly 5 people working together part-time on this thing. Since this is a very structured approach, I find it really difficult to remember what I've done on previous projects and to go back and forth easily. What are your suggestions for improving my processes? I'm open to closed- and open-source software (as long as the price isn't too high). I also want to implement a system where I can save most of the projects online to increase collaboration and efficiency and reduce bandwidth, especially on document editing. Imagine emailing a document back and forth 5-10 times a day.

    Read the article

  • Browsers hang when loading web pages

    - by avalanchis
    HP laptop with Windows Vista Home Premium, Service Pack 2. The problem affects IE8 and Firefox. When attempting to browse most web sites, the browser loads the web site title in its title bar, then displays "Waiting for <site url>" and never finishes loading the page. I am able to connect to www.google.com with no problems, but unable to connect to www.carbonwy.com, www.wsj.com, www.cnn.com, www.microsoft.com, etc. Other PCs on the same subnet are all able to connect to these sites without any problem. nslookup resolves all of these sites without any problem, so DNS does not appear to be the problem. Windows Firewall is disabled. Norton 360 was previously installed but has been uninstalled; no other firewall software is installed. IE8 Protected Mode is off. IE8 advanced settings have been reset. IE8 user settings have been reset. After the reset, IE8 tries to connect to http://hp-laptop.aol.com and go.microsoft.com, but the same symptoms persist: the title bar loads, then nothing. The system has been restarted numerous times. Running out of ideas. Please help!
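
    Given that small responses (Google) arrive while heavier pages stall after the first bytes, one cheap thing to rule out is an MTU black hole on this machine's path; this is only a guess, but the probe is quick (1472 = 1500-byte Ethernet MTU minus 28 bytes of IP/ICMP headers):

      ping -f -l 1472 www.cnn.com

    If that fails with "Packet needs to be fragmented but DF set" while a smaller size succeeds, the interface MTU is worth lowering to test.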

    Read the article

  • 554 - Sending MTA’s poor reputation

    - by Phil Wilks
    I am running an email server on 77.245.64.44 and have recently started to have problems with remote delivery of emails sent using this server. Only about 5% of recipients are rejecting the emails, but they all share the following common message:

      Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation.

    As far as I can tell my server is not on any blacklists, and it is set up correctly (the reverse DNS checks out and so on). I'm not even sure what the "sending MTA" is, but I assume it's my server. If anyone could shed any light on this I'd really appreciate it! Here's the full bounce message:

      Could not deliver message to the following recipient(s):
      Failed Recipient: [email protected]
      Reason: Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation. If you believe that this failure is in error, please contact the intended recipient via alternate means.

      -- The header and top 20 lines of the message follows --

      Received: from 79-79-156-160.dynamic.dsl.as9105.com [79.79.156.160] by mail.fruityemail.com with SMTP; Thu, 3 Sep 2009 18:15:44 +0100
      From: "Phil Wilks"
      To:
      Subject: Test
      Date: Thu, 3 Sep 2009 18:16:10 +0100
      Organization: Fruity Solutions
      Message-ID:
      MIME-Version: 1.0
      Content-Type: multipart/alternative; boundary="----=_NextPart_000_01C2_01CA2CC2.9D9585A0"
      X-Mailer: Microsoft Office Outlook 12.0
      Thread-Index: Acosujo9LId787jBSpS3xifcdmCF5Q==
      Content-Language: en-gb
      x-cr-hashedpuzzle: ADYN AzTI BO8c BsNW Cqg/ D10y E0H4 GYjP HZkV Hc9t ICru JPj7 Jd7O Jo7Q JtF2 KVjt;1;YwBoAGEAcgBsAG8AdAB0AGUALgBoAHUAbgB0AC0AZwByAHUAYgBiAGUAQABzAHUAbgBkAGEAeQAtAHQAaQBtAGUAcwAuAGMAbwAuAHUAawA=;Sosha1_v1;7;{F78BB28B-407A-4F86-A12E-7858EB212295};cABoAGkAbABAAGYAcgB1AGkAdAB5AHMAbwBsAHUAdABpAG8AbgBzAC4AYwBvAG0A;Thu, 03 Sep 2009 17:16:08 GMT;VABlAHMAdAA=
      x-cr-puzzleid: {F78BB28B-407A-4F86-A12E-7858EB212295}

      This is a multipart message in MIME format.

      ------=_NextPart_000_01C2_01CA2CC2.9D9585A0
      Content-Type: text/plain; charset="us-ascii"
      Content-Transfer-Encoding: 7bit
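
    A first diagnostic step is usually to query a few of the common DNSBLs for the sending IP directly (lookups use the octets reversed, so 77.245.64.44 becomes 44.64.245.77; NXDOMAIN means not listed):

      host 44.64.245.77.zen.spamhaus.org
      host 44.64.245.77.bl.spamcop.net
      host 44.64.245.77.b.barracudacentral.org

    Note these are just three well-known lists; reputation-based rejections can also come from the receiving vendor's private scoring rather than any public blacklist.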

    Read the article

  • Is my dns server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems with my site. In trying to figure this out, I've been looking at my logs to see if there are any errors I should know about, and I found thousands of the following messages, from different IPs, but all requesting similar DNS records:

      May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied
      May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied

    I've read of Dan Kaminsky's DNS cache poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my DNS server. There are thousands of records in my logs, all requesting "burningpianos", some for .com and some for .org, most looking for an MX record. There are requests from multiple IPs, but each IP makes hundreds of requests per day. So this smells to me like an attack. What is the defense against this?
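
    Since named is already refusing these queries, the main remaining cost is the bandwidth and log noise they burn. One hedged mitigation is rate-limiting DNS per source at the firewall; the numbers here are illustrative rather than tuned, and bear in mind that in reflection-style floods the logged source IPs may be spoofed victims rather than the attackers themselves:

      iptables -A INPUT -p udp --dport 53 -m hashlimit \
        --hashlimit-name dnsflood --hashlimit-mode srcip \
        --hashlimit-above 10/second -j DROP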

    Read the article

  • Dual Monitor support rdp 7 to win 7 on esxi

    - by rphilli5
    I am trying to RDP from a Windows 7 Professional dual-monitor physical machine to a Windows 7 Professional VM hosted on ESXi 4.0. I can get the spanning option to work across both monitors, but I have tried 3 different methods of connecting and have not been able to use true multiple monitors. At different times, I tried checking the "use all monitors" option, the command line mstsc /multimon, and adding the line use multimon:i:1 to the .rdp file. None of these worked. Any ideas? The physical machine can connect to other Windows 7 physical machines with true multi-monitor access. I also have the same issue when going from a 32-bit RC1 machine to a Windows 7 Professional x64, but not when going in the reverse direction. Here's the .rdp:

      screen mode id:i:2
      use multimon:i:1
      desktopwidth:i:1440
      desktopheight:i:900
      session bpp:i:16
      winposstr:s:0,1,341,118,1139,568
      compression:i:1
      keyboardhook:i:2
      audiocapturemode:i:0
      videoplaybackmode:i:1
      connection type:i:1
      displayconnectionbar:i:1
      disable wallpaper:i:1
      allow font smoothing:i:0
      allow desktop composition:i:0
      disable full window drag:i:1
      disable menu anims:i:1
      disable themes:i:1
      disable cursor setting:i:0
      bitmapcachepersistenable:i:1
      full address:s:192.168.1.5
      audiomode:i:0
      redirectprinters:i:1
      redirectcomports:i:0
      redirectsmartcards:i:1
      redirectclipboard:i:1
      redirectposdevices:i:0
      redirectdirectx:i:1
      autoreconnection enabled:i:1
      authentication level:i:2
      prompt for credentials:i:0
      negotiate security layer:i:1
      remoteapplicationmode:i:0
      alternate shell:s:
      shell working directory:s:
      gatewayhostname:s:
      gatewayusagemethod:i:4
      gatewaycredentialssource:i:4
      gatewayprofileusagemethod:i:0
      promptcredentialonce:i:1
      use redirection server name:i:0
      drivestoredirect:s:

    Read the article

  • WS2008 NTP - Using time.windows.com,0x9 - Time always skewed forwards

    - by David
    I have a domain controller configured to use time.windows.com (with the 0x09 flags set). I've noticed that the system's clock is frequently fast, varying from 10 minutes to even 45 minutes, and I always have to keep resetting the system date/time back to what it should be. When I run "w32tm /query /source" it tells me it's using time.windows.com, and obviously I trust Microsoft not to serve incorrect times, so why is my server's clock fast?
    EDIT: There are a few Time-Service events in the System log:

      Event ID: 142
      Message: The time service has stopped advertising as a time source because the local clock is not synchronized.

      Event ID: 139
      Message: The time service has started advertising as a time source.

    These two messages appear in pairs every hour or so; event 142 appears 14 to 16 minutes after 139. Going back a few months, these events appear:

      Event ID: 35
      Message: The time service is now synchronizing the system time with the time source time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123).

      Event ID: 37
      Message: The time provider NtpClient is currently receiving valid time data from time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123).

      Event ID: 47
      Message: Time Provider NtpClient: No valid response has been received from manually configured peer time.windows.com,0x9 after 8 attempts to contact it. This peer will be discarded as a time source and NtpClient will attempt to discover a new peer with this DNS name. The error was: The time sample was rejected because: The peer is not synchronized, or it has been too long since the peer's last synchronization.

    These three events only appear once in the log, back in October.
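
    Event 47 shows the peer being discarded months ago, so it may be that the clock has been free-running ever since. A standard w32tm re-pointing sequence to try (this assumes the discarded peer is the cause):

      w32tm /config /manualpeerlist:"time.windows.com,0x9" /syncfromflags:manual /update
      net stop w32time && net start w32time
      w32tm /resync /rediscover
      w32tm /query /status    # check "Last Successful Sync Time" and the reported source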

    Read the article

  • AMIs in Amazon EC2

    - by Jack of Trades
    I really like the Amazon EC2 environment, and thought I'd spend a bit of time playing around with various types of public (Windows!) AMI servers. But testing has been a bit, well, questionable. Some of my findings:
    - It's very difficult to know what exactly a specific public EC2 image is supposed to be doing. Many images come with little to no information.
    - I can't seem to find the passwords to log onto various Windows images. Why are they public if they can't be used!?
    - Lots of images are S3-based rather than EBS-backed. This is very annoying, as S3 takes a lot longer to do pretty much anything (stop, image, etc.). I am only testing images here, so of course I don't question the value of S3 for other uses.
    - The descriptions of what an image does are almost useless and many times confusing.
    Have others come across these EC2 issues? Again, my interest was just to play around with public images for testing/experimentation/etc., and therefore these issues may not be too relevant for more normal EC2 deployment uses.

    Read the article
