Search Results

Search found 5501 results on 221 pages for 'receive'.


  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace cloud hosts and RHEL 6.2 dedicated hosts using Rackconnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss. Also there are no errors on any interfaces on our cloud or dedicated servers or on the managed firewall from Rackspace. When Redis timesout, there is nothing logged within redis even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout. Network topology: RHEL 6.1 cloud hosts <--> Alert logic IDS <--> Cisco ASA 5510 <--> RHEL 6.2 dedicated hosts (web nodes) (two way NAT) (db hosts running redis) Ping from db host to web host: 64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms 64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms 64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms --- web1.xxxxxx.com ping statistics --- 1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms Ping from web host to db host: 64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms 64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms 64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms --- data1.xxxxxx.com ping statistics --- 1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms Redis config: daemonize yes pidfile /var/run/redis/6379/redis_6379.pid port 6379 timeout 0 loglevel debug logfile /var/lib/redis/log syslog-enabled yes syslog-ident redis-6379 syslog-facility local0 databases 16 save 900 1 save 300 10 save 60 10000 rdbcompression yes dbfilename dump-6379.rdb dir /var/lib/redis maxclients 10000 maxmemory-policy volatile-lru maxmemory-samples 3 appendfilename appendonly-6379.aof appendfsync everysec no-appendfsync-on-rewrite no auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb slowlog-log-slower-than 10000 slowlog-max-len 1024 vm-enabled no vm-swap-file /tmp/redis.swap vm-max-memory 0 vm-page-size 32 vm-pages 134217728 vm-max-threads 4 hash-max-zipmap-entries 512 hash-max-zipmap-value 64 list-max-ziplist-entries 512 list-max-ziplist-value 64 set-max-intset-entries 512 zset-max-ziplist-entries 128 zset-max-ziplist-value 64 activerehashing yes Redis-cli info: redis-cli info redis_version:2.4.16 redis_git_sha1:00000000 redis_git_dirty:0 arch_bits:64 multiplexing_api:epoll gcc_version:4.4.6 process_id:4174 uptime_in_seconds:79346 uptime_in_days:0 lru_clock:1064644 used_cpu_sys:13.08 used_cpu_user:19.81 used_cpu_sys_children:1.56 used_cpu_user_children:7.69 connected_clients:167 connected_slaves:0 client_longest_output_list:0 client_biggest_input_buf:0 blocked_clients:6 used_memory:15060312 used_memory_human:14.36M used_memory_rss:22061056 used_memory_peak:15265928 used_memory_peak_human:14.56M mem_fragmentation_ratio:1.46 mem_allocator:jemalloc-3.0.0 loading:0 aof_enabled:0 changes_since_last_save:166 bgsave_in_progress:0 last_save_time:1352823542 bgrewriteaof_in_progress:0 total_connections_received:286 total_commands_processed:507254 expired_keys:0 evicted_keys:0 keyspace_hits:1509 keyspace_misses:65167 pubsub_channels:0 pubsub_patterns:0 latest_fork_usec:690 vm_enabled:0 role:master db0:keys=6,expires=0 edit 1: add redis-cli info output
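
    Since nothing shows up in the Redis log when the timeouts hit, one cheap way to narrow the problem down is to probe Redis from a web node and log any slow round trips, which tells you whether the stall happens on the network path (for example an idle-timeout on the ASA for long-lived connections) or inside the Ruby client. A minimal sketch, assuming redis-cli is installed on the web host and the db host answers on 192.168.100.26:6379 as in the ping output above:

      #!/bin/bash
      # Probe Redis once per second and log round trips slower than 500 ms.
      HOST=192.168.100.26
      PORT=6379
      while true; do
          start=$(date +%s%N)
          redis-cli -h "$HOST" -p "$PORT" ping > /dev/null
          end=$(date +%s%N)
          ms=$(( (end - start) / 1000000 ))
          if [ "$ms" -gt 500 ]; then
              echo "$(date '+%F %T') slow PING: ${ms} ms" >> /var/log/redis-probe.log
          fi
          sleep 1
      done

    If the probe stalls at the same moments the Rails app reports timeouts, the network path is the suspect; if it never stalls, look instead at the timeout the Ruby redis client is configured with.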

    Read the article

  • Force Finder to log in as Guest to an SMB share

    - by slhck
    I have a QNAP NAS that offers a few SMB shares. As I'm in a trusted environment, my shares are accessible as guest rather than with a combination of username and password. Problem Now, when I click the name of the device in Finder's sidebar, I get the black "Connection failed" bar, with the option "Connect as...". When I click that, I receive: I can however press ? + K and enter the server's name manually, which gets me to this window: Here, I have to select "guest". Now, I can select one of the shares to connect to, and I'm finally connected to the server. If I select it in the sidebar, I get a list of all shares available, because I'm connected as "guest", obviously: What I need Well, as soon as I unmount all shares, I have to go through the same procedure of manually logging in as "guest" again, which I find quite annoying. Is there any way I could get Finder (or the underlying SMB client) to know which credentials to use? Or should I look for the solution rather on the server side? (I know that other SMB shares seem to work fine in my network) Diagnostics The only thing I can get out of Console.app is: 5/15/11 7:36:40 PM /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder[200] SharePointBrowser::handleOpenCallBack returned 64 This message occurs when I click the name of the SMB server in the Finder sidebar. Here's the output of `smbclient -L meredith -U guest -d=2 charon:~ werner$ smbclient -L meredith -U guest -d=2 added interface ip=192.168.100.11 bcast=192.168.100.255 nmask=255.255.255.0 tdb(unnamed): tdb_open_ex: could not open file /private/var/samba/gencache.tdb: Permission denied Got a positive name query response from 192.168.100.100 ( 192.168.100.100 ) Password: Domain=[MEREDITH] OS=[Unix] Server=[Samba 3.5.2] Sharename Type Comment --------- ---- ------- music Disk movies Disk photos Disk software Disk archive Disk backups Disk IPC$ IPC IPC Service (NAS Server) Got a positive name query response from 192.168.100.100 ( 192.168.100.100 ) Domain=[MEREDITH] OS=[Unix] Server=[Samba 3.5.2] Server Comment --------- ------- Workgroup Master --------- ------- WORKGROUP MEREDITH Also, things I've tried: There is no relevant entry in the Keychain (but why would it, I'm only connecting as guest) Connecting with user name "Guest" and empty password logs me in but still after ejecting the last share, I get the same "Connection failed" error as before. The appropriate entry is made in the Keychain but obviously has no effect.
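
    While investigating the server side, one workaround is to script the guest mount instead of clicking through Finder's dialogs. A rough sketch, using the share names from the smbclient output above (mount_smbfs ships with OS X; -N tells it not to prompt for a password, which is what a guest share wants, though a server that insists on a password will still fail):

      #!/bin/bash
      # Mount the "music" share from the NAS as guest, bypassing Finder's login dialog.
      mkdir -p /Volumes/music
      mount_smbfs -N //guest@meredith/music /Volumes/music

    Once at least one share is mounted this way, clicking the server in the sidebar should again list the remaining shares, as described above.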

    Read the article

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 vm's(and soon to grow to up to 20). Currently I am running a two node proxmox cluster(which is a debian base using kvm for virtualization with a custom web front end to administer). I have two nearly identical boxes with amd phenom II x4's and asus motherboards. Each has 4 500 GB sata2 hdd's, 1 for the os and other data for the proxmox install, and 3 using mdadm+drbd+lvm to share the 1.5 TB's of storage between the two machines. I mount lvm images to kvm for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds(it takes about 2 minutes on the largest vm running win2008 with m$ sql server). I am using proxmox's built-in vzdump utility to take snapshots of the vm's and store those on an external harddrive on the network. I then have jungledisk service (using rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With jungledisk's block level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least a half an hour. The much better solution would of course be something that allows me to instantly take the difference of two time points (say what was written from 6am to 7am), zip it, then send that difference file to the backup server which would instantly transfer to the remote storage on rackspace. I have looked a little into zfs and it's ability to do send/receive. That coupled with a pipe of the data in bzip or something would seem perfect. However, it seems that implementing a nexenta server with zfs would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvol's???) to the proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So, zfs, zumastor or other?
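
    For reference, the zfs send/receive plus compression pipeline described above really is only a few lines once the VM images live on a ZFS dataset. A rough sketch, where the dataset, snapshot names and the ssh target are placeholders:

      #!/bin/bash
      # Incremental ZFS replication: send only the blocks written between two snapshots,
      # compress them in transit, and apply them on the remote backup host.
      zfs snapshot tank/vms@0700
      zfs send -i tank/vms@0600 tank/vms@0700 \
          | bzip2 -c \
          | ssh backup.example.com "bzcat | zfs receive -F tank/vms-backup"

    The open question in the post still stands: getting the images onto ZFS in the first place is what would require either ZFS on the Proxmox nodes themselves or separate storage servers.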

    Read the article

  • Trouble with Debian Lenny and Sphinx

    - by Ando
    I've very basic understanding of linux systems, but I've a server which was setup a while ago to host some web apps. Recently I decided to test out and implement Sphinx but unfortunately I cant get the install to work. I'm running a Debian Lenny distro and when I try to install sphinx it says - checking MySQL include files... configure: error: missing include files. ****************************************************************************** ERROR: cannot find MySQL include files. Check that you do have MySQL include files installed. The package name is typically 'mysql-devel'. If include files are installed on your system, but you are still getting this message, you should do one of the following: 1) either specify includes location explicitly, using --with-mysql-includes; 2) or specify MySQL installation root location explicitly, using --with-mysql; 3) or make sure that the path to 'mysql_config' program is listed in your PATH environment variable. To disable MySQL support, use --without-mysql option. ****************************************************************************** I do have mysql 5.1 installed but I can't find the include files, AND one more thing.. I read around the net that I probably need libmysqlclient15-dev but when I try to install that using apt-get i receive the following error. The following packages were automatically installed and are no longer required: libxcb-aux0 libts-0.0-0 libxcb-atom1 ttf-dejavu-extra hunspell-en-us g++-4.3 libmysql++3 libnspr4-0d libdirectfb-1.0-0 libxcb-event1 libasound2 libstdc++6-4.3-dev libhunspell-1.2-0 ttf-dejavu libmozjs2d conkeror-spawn-process-helper libnss3-1d Use 'apt-get autoremove' to remove them. The following NEW packages will be installed: libmysqlclient15-dev 0 upgraded, 1 newly installed, 0 to remove and 276 not upgraded. Need to get 7590 kB of archives. After this operation, 26.3 MB of additional disk space will be used. WARNING: The following packages cannot be authenticated! libmysqlclient15-dev Install these packages without verification [y/N]? Y Err http://ftp.us.debian.org/debian/ lenny/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5 404 Not Found [IP: 35.9.37.225 80] Err http://security.debian.org/ lenny/updates/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5 404 Not Found [IP: 149.20.20.6 80] Failed to fetch http://security.debian.org/pool/updates/main/m/mysql-dfsg-5.0/libmysqlclient15-dev_5.0.51a-24+lenny5_amd64.deb 404 Not Found [IP: 149.20.20.6 80] E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? Can you help me out by suggesting how to install the required packages and run the Sphinx.
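
    The 404s happen because Lenny's packages have moved to archive.debian.org now that the release is no longer supported, so the regular and security mirrors no longer carry libmysqlclient15-dev. A hedged sketch of pointing apt at the archive and then building Sphinx against the MySQL headers (paths may differ on your system, and apt may again warn that the archived packages cannot be authenticated):

      # Add the Lenny archive as a package source and install the MySQL headers.
      echo "deb http://archive.debian.org/debian lenny main" > /etc/apt/sources.list.d/lenny-archive.list
      apt-get update
      apt-get install libmysqlclient15-dev

      # Re-run the Sphinx configure step; point it at the include directory explicitly
      # if it still cannot find mysql.h (dpkg -L libmysqlclient15-dev shows the path).
      ./configure --with-mysql --with-mysql-includes=/usr/include/mysql
      make && make install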

    Read the article

  • Exchange 2007 and migrating only some users under a shared domain name

    - by DomoDomo
    I'm in the process of moving two law firms to hosted Exchange 2007, a service that the consulting company I work for offers. Let's call these two firms Crane Law and Poole Law. These two firms were ONE firm just six months ago, but split. So they have three email domains: Old Firm: craneandpoole.com New Firm 1: cranelaw.com New Firm 2: poolelaw.com Both Firm 1 & Firm 2 use craneandpoole.com email addresses, as for the other two domains, only people who work at the respective firm use that firm's domain name, natch. Currently these two firms are still using the same pre-split internal Exchange 2007 server, where MX records for all three domains point. Here's the problem. I'm not moving both companies at the same time. I'm moving Crane Law two weeks before Poole Law. During this two weeks, both companies need to be able to: Continue to receive emails addressed to craneandpoole.com Send emails between firms, using cranelaw.com and poolelaw.com accounts I also have a third problem: I'd like to setup all three domains in my hosting infrastructure way ahead of time, to make my own life easier What would solve all my problems would be, if there is some way I can tell Exchange 2007, even though this domain exists locally forward on the message to the outside world using public MX record as a basis for where to send it (or if I could somehow create a route for it statically that would work too). If this doesn't work, to address points #1 when I migrate Crane Law, I will delete all references locally to cranelaw.com on their current Exchange server, and setup individual forwards for each of their craneandpool.com mailboxes to forward to our hosted exchange server. This will also take care of point #2, since the cranelaw.com won't be there locally, when poolelaw.com tries to send to cranelaw.com, public MX records will be used for mail routing decisions and go to my hosted exchange. The bummer of that though is, I won't be able to setup poolelaw.com ahead of time in hosted Exchange, will have to wait to do it day of :( Sorry for the long and confusing post. Just wondering if there is a better or simpler way to do what I want? Three tier forests and that kind of thing are out, this is just a two week window where they won't be in the same place.

    Read the article

  • Can't connect to shared folders anymore?

    - by HuskyHuskie
    My home server is running Windows Server 2008 R2. I've had it running for almost a year now without any issues with shared folders. This past week I had an issue with my modem which required it to be power cycled and with that I power cycled my router. After that I haven't been able to connect to my shared network folders. I have no idea why that would even cause an issue as I've power cycled my networking equipment in the past without issues and none of my settings appear to have been lost. I am mapping these drives on my Windows 7 Ultimate machine using "Map Network Drive", from there I enter \\SERVER\Storage as I'm trying to connect to my shared folder named Storage. I receive the following error every time I try mapping the drive: Windows cannot access \\Server\Storage Check the spelling of the name. Otherwise there might be a problem with your network. To try to identify and resolve network problems, click Diagnose. Details: Error code: 0x80070035 The network path was not found. When I click Diagnose I get the following: Problems found file and print sharing resource (SERVER) is online but isn't responding to connection attempts. The remote computer isn't responding to connection on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn't find any problems with the firewall on your computer. I've tried this from multiple computers with the same issue too. To resolve the problems so far I've tried: Disabling the firewall on SERVER Reinstalling File Services Modifying NetBT\Parameters registry values Adding a custom inbound rule for port 445 Adding port forwarding on my router for port 445 Recreating the shared folders Checking and rechecking the shared folder permissions. Resetting my user account password on the server used to access the shared folder. I'm pulling my hair out with this problem mainly because it came out of nowhere. It was working fine the night before and the next day it just stopped working. Any ideas of what I could try next are much appreciated. It should also be noted that this server is used as a web server too and that functionality still works correctly.
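
    The diagnostic output points at TCP 445 specifically, so before changing more settings it is worth confirming from a second machine whether that port answers at all. From any Linux or OS X box on the LAN a quick, hedged check looks like this (SERVER stands for the 2008 R2 host name or address):

      # Does anything answer on TCP 445, and does SMB itself respond?
      nc -zv SERVER 445
      smbclient -L //SERVER -U yourusername

    If the connection is refused or times out, the problem is the Server service, a firewall, or something else in front of it rather than the share permissions; the port-forwarding rule on the router should not matter for traffic between two machines on the same LAN.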

    Read the article

  • HP-UX cannot boot from Ignite tape

    - by Spirit
    We have hp rp2470 server running hp-ux 11.00, with LVM mirroring. As for redundancy we have second rp2470 same hw (same two processors, same ram, same two hdd’s, same number of lan cards). I want to clone first one to the second. For that purpose I am making ignite tape with the following command: make_tape_recovery -x inc_entire=vg00 Ignite tape finishes without problems. When I boot second server from this ignate tape, server is starting to boot, and ignite restore finishes without any errors, only few notes, which are normal. However vmunix is not booting and when restore finishes, it boot to ISL prompt. From this I cannot boot /stand/vmunix. I tried to run recovery shell but no success. When recovery shell ask to do frecover to restore critical files, then I receive error: frecover(5405): unable to open /dev/rmt/0m At first I thought that the problem might be in the difference of the firmware version of the servers: fw version of production server is: Firmware Version 43.50 and fw version of backup server is: Firmware Version 42.19 So i did a fw upgrade of my backup server so that both servers are v43.50, and tried a recovery but again cant boot the system. Next I did another archive tape with -I (Interactive) flag: make_tape_recovery -I -x inc_entire=vg00 and tried recovery with it, again no good. I cannot find any error or warnings on ignite log, and I cannot boot hpux. I am only on ISL prompt. This is what i've noticed on the gsp logs: ************* SYSTEM ALERT ************** SYSTEM NAME: mcnfwim1 DATE: 07/27/2003 TIME: 10:18:49 ALERT LEVEL: 6 = Boot possible, pending failure - action required REASON FOR ALERT SOURCE: 8 = I/O SOURCE DETAIL: 6 = disk SOURCE ID: 0 PROBLEM DETAIL: 0 = no problem detail LEDs: RUN ATTENTION FAULT REMOTE POWER FLASH OFF ON ON ON LED State: Boot Failed. Running non-OS code. Check Chassis and Console Logs for error messages. 0x00000060860010B0 00000000 00000000 - type 0 = Data Field Unused 0x58000860860010B0 00006706 1B0A1231 - type 11 = Timestamp 07/27/2003 10:18:49 And another gsp log: Log Entry # 3 : SYSTEM NAME: mcnfwim1 DATE: 07/27/2003 TIME: 10:12:20 ALERT LEVEL: 6 = Boot possible, pending failure - action required SOURCE: 8 = I/O SOURCE DETAIL: 6 = disk SOURCE ID: 0 PROBLEM DETAIL: 0 = no problem detail CALLER ACTIVITY: 1 = test STATUS: 0 CALLER SUBACTIVITY: 0B = implementation dependent REPORTING ENTITY TYPE: 0 = system firmware REPORTING ENTITY ID: 00 0x00000060860010B0 00000000 00000000 type 0 = Data Field Unused 0x58000860860010B0 00006706 1B0A0C14 type 11 = Timestamp 07/27/2003 10:12:20 Type CR for next entry, - CR for previous entry, Q CR to quit. Please note that I can not change anything on the production server. I can only make changes to the backup server. Any help is appreciated.
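
    The frecover error suggests the recovery environment cannot open the tape drive at /dev/rmt/0m at all, so a first check, either from the recovery shell or on the source box before cutting the next tape, is whether the kernel has claimed a tape drive and which device file it was given. A minimal sketch using standard HP-UX commands:

      # List tape drives the kernel has claimed and the device files created for them.
      ioscan -fnC tape
      ls -l /dev/rmt/

    If the drive on the backup server shows up under a different device file than /dev/rmt/0m, that could explain why the restore of the critical files fails even though the archive itself reads back cleanly.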

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this? There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem. If we can't find the cause, we're thinking of a few workarounds we could try: Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server... (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck. What do you think? Thanks for your help, Richard
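
    If you go with the first workaround (moving the admin section to its own site and dropping Windows authentication from the shared sites), the module removal is a small change to the shared web.config for the integrated pipeline. A sketch, assuming the module is registered under its default name:

      <system.webServer>
        <modules>
          <remove name="WindowsAuthentication" />
        </modules>
      </system.webServer>

    That keeps anonymous requests from ever entering the WindowsAuthenticationModule, which is where the hung requests are sitting.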

    Read the article

  • WinPE, Startnet.CMD and passing variables to a second batch file not working

    - by user140892
    I don't know scripting or PowerShell (yes I need to learn something). I'm not an expert batch file maker either. I have a WinPE flash drive which I used to deploy OS images. I have the WIM, drivers and anything needed else outside the WinPE environment to ensure that Updates, changes are easier for me to make. I use the "STARTNET.CMD" batch file which is part of the WinPE. The reason to go through the letter drives is that the WinPE always gets the X letter drive assigned. The flash drive itself can receive a random letter which always changes. My deployment menu is located on the flash drive it self and not inside the WinPE. This is so that if I need to make a change I don't have to re-do the WinPE. I am able to locate the "menu.bat" batch file and launch it. I use a variable to capture the letter drive. I call the second batch file named "menu.bat" and pass the variable to it. When the second batch file loads, I believe that I am calling the variable correctly. If I break out of the batch file I can echo the variable and see the expected reply. The issue is that I can't use the variable to work with anything on the second batch file. In my test, I can get this to work over and over. When it runs from the real USB flash drive it does not work. I removed comments from the second batch file to make it smaller. My issue is that files below all get a message stating that the system cannot find the path specified. Diskpart Imagex.exe bcdboot.exe Why can't I get the varible to properly function when I try to using example "ImageX.exe"? Contents of the Startnet.cmd @echo off for %%p in (a b c d e f g h i j k l m n o p q r s t u v w x y z) do if exist %%p:\Tools\ set w=%%p Set execpatch=%w%\Tools\ call %w%:\Menu.bat \Tools\ Contents of the Menu.BAT @echo off set SecondPath=%1 cls :Start cls Echo. Echo.============================================================== Echo. Windows 7 64 Bit Ent Basic Desktops Echo.============================================================== Echo. Echo A. 790 Windows 7 - Basic Echo. Echo. Echo I. Exit Echo. Echo. set /p choice=Choose your option = if not '%choice%'=='' set choice=%choice:~0,1% if '%choice%'=='a' goto 790_Windows_7_Basic echo "%choice%" is not a valid (answer/command) echo. goto start :790_Windows_7_Basic REM DISKPART /s %SecondPath%BatchFiles\Make-Partition.txt %SecondPath%imagex.exe /apply %SecondPath%Images\Win7-64b-Ent-Basic-SysPreped.wim 1 o:\ /verify %SecondPath%bcdboot.exe o:\Windows /s S: Copy %SecondPath%Unattended\unattend.XML o:\Windows\System32\sysprep\unattend.XML /y xcopy %SecondPath%Drivers\790\*.* o:\Windows\INF\790\ /E /Q /Y MD o:\Windows\Setup\Scripts\ Copy %SecondPath%BatchFiles\SetupComplete.cmd o:\Windows\Setup\Scripts\ /y Goto Done :Done Exit

    Read the article

  • PCI-DSS compliance for a business with only swipe terminals [migrated]

    - by rowatt
    I support the IT infrastructure for a small retail business which is now required to undergo a PCI-DSS assessment. The payment service and terminal provider (Streamline) has asked that we use Trustwave to do the PCI-DSS certification. The problem I face is that if I answer all questions and follow Trustwave's requirements to the letter, we will have to invest significantly in networking equipment to segment LANs and /or do internal vulnerability scanning, while at the same time Streamline assures me that the terminals we have (Verifone VX670-B and MagIC3 X-8) are secure, don't store any credit card information and are PCI-DSS compliant so by implication we don't need to take any action to ensure their network security. I'm looking for any suggestions as to how we can most easily meet the networking requirements for PCI-DSS. Some background on our current network setup: single wired LAN, also with WiFi turned on (though if this creates any PCI-DSS complexities we can turn it off). single Netgear ADSL router. This is the only firewall we have in place, and the firewall is out the box configuration (i.e. no DMZ, SNMP etc). Passwords have been changed though :-) a few windows PCs and 2 windows based tills, none of which ever see any credit card information at all. two swipe terminals. Until a few months ago (before we were told we had to be PCI-DSS certified) these terminals did auth/capture over the phone. Streamline suggested we moved to their IP Broadband service, which instead uses an SSL encrypted channel over the internet to do auth/capture, so we now use that service. We don't do any ecommerce or receive payments over the internet. All transactions are either cardholder present, or MOTO with details given over phone and typed direct into terminal. We're based in the UK. As I currently understand it we have three options in order to get PCI-DSS certification. segment our network so the POS terminals are isolated from all PCs, and set up internal vulnerability scanning on that network. don't segment the network, and have to do more internal scanning and have more onerous management of PCs than I think we need (for example, though the tills are Windows based, they are fully managed so I have no control over software update policies, anti virus etc). All PCs have anti virus (MSE) and windows updates automatically applied, but we don't have any centralised go back to auth/capture over phone lines. I can't imagine we are the first merchant to be in this situation. I'm looking for any recommendations a simple, cost effective way to be PCI-DSS compliant - either by doing 1 or 2 above with (hopefully) simple and inexpensive equipment/software, or any other ways if there's a better way to do this. Or... should we just go back to the digital stone age and do auth/capture over the phone, which means we don't need to do anything on our network to be PCI-DSS certified?

    Read the article

  • Netcat file transfer problem

    - by thepurplepixel
    I have two custom scripts I just wrote to facilitate transferring files between my VPS and my home server. They are both written in bash (short & sweet): To send: #!/bin/bash SENDFILE=$1 PORT=$2 HOST='<my house>' HOSTIP=`host $HOST | grep "has address" | cut --delimiter=" " -f 4` echo Transferring file \"$SENDFILE\" to $HOST \($HOSTIP\). tar -c "$SENDFILE" | pv -c -N tar -i 0.5 | lzma -z -c -6 | pv -c -N lzma -i 0.5 | nc -q 1 $HOSTIP $PORT echo Done. To receive: #!/bin/bash SERVER='<myserver>' SERVERIP=`host $SERVER | grep "has address" | cut --delimiter=" " -f 4` PORT=$1 echo Receiving file from $SERVER \($SERVERIP\) on port $PORT. nc -l $PORT | pv -c -N netcat -i 0.5 | lzma -d -c | pv -c -N lzma -i 0.5 | tar -xf - echo Done. The problem is that, for a very quick second, I see something flash along the lines of "Connection Refused" (before pv overwrites it), and no file is ever transferred. The port is forwarded through my router, and nmap confirms it: ~$ sudo nmap -sU -PN -p55515 -v <my house> Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-21 18:10 EDT NSE: Loaded 0 scripts for scanning. Initiating Parallel DNS resolution of 1 host. at 18:10 Completed Parallel DNS resolution of 1 host. at 18:10, 0.00s elapsed Initiating UDP Scan at 18:10 Scanning 74.13.25.94 [1 port] Completed UDP Scan at 18:10, 2.02s elapsed (1 total ports) Host 74.13.25.94 is up. Interesting ports on 74.13.25.94: PORT STATE SERVICE 55515/udp open|filtered unknown Read data files from: /usr/share/nmap Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds Raw packets sent: 2 (56B) | Rcvd: 5 (260B) Also, running netcat normally doesn't work either: squircle@summit:~$ netcat <my house> 55515 <my house> [<my IP>] 55515 (?) : Connection refused Both boxes are Ubuntu Karmic (9.10). The receiver has no firewall, and outbound traffic on that port is allowed on the sender. I have no idea what to troubleshoot next. Any ideas? P.S.: Feel free to move this to SO/SF if you feel it would fit better there.
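
    Two details stand out. The nmap run above only tested UDP (-sU), while nc without -u speaks TCP, so the router may simply not be forwarding TCP 55515; and on Ubuntu 9.10 the default nc is the OpenBSD variant, whose listen syntax differs from netcat-traditional. A hedged sketch of the minimal test, with the tar/lzma/pv pipeline stripped away:

      # Receiver (OpenBSD netcat: no -p together with -l; netcat-traditional wants "nc -l -p 55515"):
      nc -l 55515

      # Sender: first confirm the port is reachable over TCP, then connect:
      nmap -sT -PN -p55515 <my house>
      echo hello | nc <my house> 55515

    If the plain TCP test works end to end, put the pipeline back one stage at a time to find which part breaks.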

    Read the article

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. Its running 32bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7. Occasionally the server will completely hang when transferring large files. When the hang happens the console becomes unresponsive with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hanging can last around 20 minutes. During this time other servers also become unresponsive with blank wallpaper at the console. If you do manage to get onto another server the taskbar and run commands are unresponsive. This also transcends to the client computers sometimes with explorer crashing. I'm guessing this is due to the HOME folder mapping. Eventually the NAS server with free up and everything will be back to normal. The server is configured as follows: PERC 4/DC DATA 2 - 12 SCSI HDD - RAID5 SHADOWCOPY 2 SCSI HDD - RAID1 CERC SATA DATA 11 4 SATA HDD - RAID5 OS 4 SATA HDD - RAID5 All the drivers and firmware is up to date. I've been through all the diagnostics with Dell and the hardware has come up clean including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging happens when it is uninstalled. There are no errors in the event log when this happens and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain and AD from what I can tell is running happily. There are no DFS or NFS shares setup either. All the shares are standard Windows. I've unchecked the allow the computer to turn off this device to save power box under Power Management on the NIC. "Set Link Speed and Duplex to Auto-negotiate 1000 " Increased Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data) I've run network traces using network monitor and have found the following: 417 8.078125 {SMB:192, NbtSS:25, TCP:24, IPv4:23} 192.168.2.244 192.168.5.35 SMB SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND I've tried different cabling; NICs and switch ports all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run netsh interface tcp set global autotuning=disabled with no result. Could it be that the server has a faulty drive or that the I/O is too much for it to handle? Any ideas why would the hang cause issues with the other servers on the LAN? Many Thanks.

    Read the article

  • =?UTF-8?B??= subject in emails sent via PHP mail()

    - by Camran
    I have a website, and in the "Contact" section I have a form which users may fill in to contact me. The form is a simple form which action is a php page. The php code: $to = "[email protected]"; $name=$_POST['name']; // sender name $email=$_POST['email']; // sender email $tel= $_POST['tel']; // sender tel $subject=$_POST['subject']; // subject CHOSEN FROM DROPLIST, ALL TESTED $text=$_POST['text']; // Message from sender $text.="\n\nTel:".$tel; // Added to message to show me the telephone nr to the sender at bottom of message $headers="MIME-Version: 1.0"."\n"; $headers.="Content-type: text/plain; charset=UTF-8"."\n"; $headers.="From: $name <$email>"."\n"; mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]'); Could somebody please tell me why this works most of the time, but sometimes I receive email whith no text and the subject line showing =?UTF-8?B??= I use outlook express, and I have read this http://stackoverflow.com/questions/454833/system-net-mail-and-utf-8bxxxxx-headers but it didn't help. The problem is not in Outlook, because when I log in to the actual mailprogram where I fetch the POP3 emails from, the email looks the same. When I right click in Outlook and chose "message source" then there is no "From" information. Ex, a good message should look like this: Subject: =?UTF-8?B?w5Z2cmlndA==?= MIME-Version: 1.0 Content-type: text/plain; charset=UTF-8 From: John Doe However, the ones with problem looks like this: Subject: =?UTF-8?B??= MIME-Version: 1.0 Content-type: text/plain; charset=UTF-8 From: As if the information has been lost somewhere. You should know also that I have a VPS, which I manage myself. I use postfix as an emailserver, if thats got anything to do with it. But then again, why does it work sometimes? Also another thing that I have noticed is that sometimes special characters are not shown correctly (by both Outlook and the webmail). For instance, the name "Björkman" in swedish is shown like Björkman, but again, only sometimes. I hope anybody knows something about this problem, because it is very hard to track down for me atleast. If you need more input let me know.
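
    For what it's worth, the =?UTF-8?B?...?= token is just the Base64 of the subject wrapped in a MIME encoded-word, so =?UTF-8?B??= means $subject was an empty string by the time mail() ran, i.e. $_POST['subject'] arrived empty, rather than the encoding itself failing. The mechanism is easy to see from the shell:

      # A non-empty subject yields a payload between "B?" and "?=":
      echo -n "Övrigt" | base64     # w5Z2cmlndA==  (matches the good message above)
      # An empty subject yields nothing, producing the bare =?UTF-8?B??= header:
      echo -n "" | base64

    The missing From information in the same messages points the same way: the form was submitted with empty fields (or the script was hit directly without the form), which is worth guarding against before calling mail().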

    Read the article

  • Unextending a SharePoint 2007 Web Application from a zone

    - by dunxd
    When our Sharepoint was migrated from Sharepoint 2003 to Sharepoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones (e.g. the original Web App was http://intranet, then http://newintranet and http://intranet would be created for Sharepoint 2007 - each with its own IIS site). The idea was that during the migration period we would set up DNS to point the old url to SP2003 servers and the new one to SP2007, then once the migration was complete, do a DNS change so the SP2007 would recieve the requests to the http://intranet type URLs. Unfortunately the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet type URLs. This means I need to maintain these URLs. Due to a rejig of organisation structure we now need to relocate some Sharepoint sites, and I'd like to use the RDA Collaboration Sharepoint URL Redirector feature. However a limitation of this is that it doesn't work for Web Applications which have been extended into multiple zones. So I have a need to tidy up the situation that our consultants left behind. I think the right thing to do is use the "Remove Sharepoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet type URLs. Is this the right procedure? Once I have done that I need to set up Sharepoint to receive requests sent to the http://newintranet type URLs so they will continue to work. I am not sure if I should do this: using Alternative Access Mappings or, by adding a host header to the IIS site or, creating a non Sharepoint IIS site for each http://newintranet type URL, and use IIS redirection to forward the requests to the new URL using variables to pass the path to the Sharepoint site. Does anyone have any thoughts on these options, or any other way of achieving this? Sharepoint 2007 is running on Windows 2003 with IIS6. We don't currently have plans/budget to upgrade to Sharepoint 2010.

    Read the article

  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
    I know that this question may sound like an invalid serverfault question, but I believe that it's quite valid: the amount of time and effort that a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another and requires time & money. Deciding whether or not to spend that time & money depends on whether such an attack is likely or not, and this in turn depends on how cheap and simple such an attack is for the attacker. So here's the full story: our company has been victim to a massive DDoS attack (over 50 Gbps of UDP traffic, full-time during 2 weeks). We are pretty sure that it's one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know that they are not technical at all, so we believe that they simply paid for some botnet DDoS service. I would like to know how much these services typically cost, for such a large scale attack. Please do not give any link to such services, I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service? It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak off: who knows, they might just hire a hit-man one day?). Of course we filed a complaint, but the police says that they cannot do much about it (DDoS attacks are virtually untraceable, so they say), and our suspicions are not enough to justify them raiding our competitor's offices to search for proofs. For your information, we now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low level attacks (UDP flood, or Syn floods, for example) we only receive legitimate trafic, so we're fine. If they decide to launch higher level attacks (HTTP flood or slowloris attacks for example), most of the load should be handled by the CDN... at least I hope so! Thank you very much for your help.

    Read the article

  • Long access times and errors in an IIS application

    - by Jens Olsson
    Hi, I am having an issue with an IIS application (details of environment at the end of the message). The web site works great most of the time and I cannot reproduce any error in our test system. On the live system however with on averare of 5-15 requests per second I have a problem with that some requests (about 0.05%) will take over 300 seconds to complete. The other requests complete withing 5-10 seconds. It seem like if all the errornous requests end up with a Timer_EntityBody error in the error log. I have never seen this as an end user but I guess that they will receive some kind of error message. I am trying to find out what can be causing this errornous behaviour. Any ideas are welcome. I have read something about that there can be an MTU issue if ICMP and MTU protocols are blocked in the firewall. Does that sound reasonable? I have also read about updating to IIS 7 should do the trick. Does it sound reasonable? I think that the problem has another cause but I have no idea of what. I have tried running hte perormance monitor, monitoring for database locks and active transaction counts. I can see some of these in the perfmon log for the MSSQL server (another machine) for example: Active transactions is sometimes peaking and sometimes for long periods Lock waits per seconds is sometimes peaking Transactions per second is sometimes peaking Page IO Latch wait is sometimes peaking Lock wait time (ms) is sometimes peaking But I cannot see that any of these correlate to the errors in the IIS error log. On the IIS server machine I can also see with perfmon that some values peak a few times during a day: Request execution time Avg disk queue length I can neither see that any of these correlate to the errors in the IIS error log. In the below code I have anonymized by replacing some parts with HIDDEN The following can be seen in the access log 2010-10-01 08:35:05 W3SVC1301873091 **HIDDEN** POST /**HIDDEN**/Modules/BalanceModule.aspx - 80 - **HIDDEN** Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) ASP.NET_SessionId=**HIDDEN** 400 0 64 0 2241 127799 At the same time the following can be seen in the error log: 2010-10-01 08:35:05 **HIDDEN** 1999 **HIDDEN** 80 HTTP/1.0 POST /**HIDDEN**/Modules/BalanceModule.aspx - 1301873091 Timer_EntityBody Test+Pool I can tell the following about the environment: Server: Windows Server 2003 x64 SP2 running on VMWare HTTP Server: IIS v6.0 with ASP.NET 2.0.50727 Antivirus: Trend Micro OfficeScan (Is it a good idea to have this on a server?)

    Read the article

  • How to Set Up an Ubuntu Mail Server with Google Apps?

    - by Apreche
    I have a domain, let's call it foobar.com. All of the MX records for foobar.com point to Google's mail servers because I am using Google Apps for your domain to manage it. It's great because everyone gets all the advantages of GMail, but our e-mail addresses aren't @gmail.com. I also have a server. Primarily, it's a web server, but it also serves other things. One of the things it serves is the web site for foobar.com and also sites for various virtual hosts such as shop.foobar.com and forum.foobar.com. The server is running Ubuntu 8.04, because I like using LTS releases in production. The thing is, there are various applications running on the server that need the ability to send out emails. Various applications, like the cron jobs, send me e-mails in case of errors. Some of the web applications need to send e-mail to users when they forget their passwords, to confirm new registered users, etc. Lastly, it's nice to be able to send e-mail from the command line using the mail command, or mutt. How can I setup the mail on the web server to go through the Google apps mail servers? I don't need the web server to receive mail, though that would be cool. I do need it to be able to send mail as any legitimate address @foobar.com. That way the forum application can send mails with [email protected] in the from field, and the ecommerce application will have [email protected] in the from field. Also, by sending the mail through the Google servers, we can avoid a lot of the problems with the e-mails being blocked by various spam filters on the web. Google's SMTP servers are trusted a lot more than mine would be. I'm pretty good with administering Linux systems, but I am absolutely brain dead when it comes to e-mail. I need step by step directions from beginning to end on how to set this up. I need to know every thing to install, and every single change to the configuration files that is necessary. I have tried following various howtos and guides in the past, but none of them were quite right. Either they didn't work at all, or they offered a configuration that is not what I wanted. Please help. Thanks.
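
    The usual Postfix approach here is to relay all outbound mail through Google's SMTP servers with SASL authentication, so every application on the box can keep using sendmail/mail unchanged. A hedged sketch of the relevant pieces (relay-account@foobar.com below is a placeholder for a real user in your Google Apps domain, the libsasl2-modules package is needed, and parameter names can vary slightly between Postfix versions):

      # Append to /etc/postfix/main.cf:
      #   relayhost = [smtp.gmail.com]:587
      #   smtp_sasl_auth_enable = yes
      #   smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      #   smtp_sasl_security_options = noanonymous
      #   smtp_use_tls = yes

      # Store the relay credentials and build the lookup table:
      echo "[smtp.gmail.com]:587 relay-account@foobar.com:that-accounts-password" > /etc/postfix/sasl_passwd
      postmap /etc/postfix/sasl_passwd
      chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
      /etc/init.d/postfix reload

    Setting myorigin to foobar.com (or using sender_canonical maps) then makes the locally generated cron and application mail come from addresses in the domain rather than from the bare hostname.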

    Read the article

  • Managing multiple independent domains with Google Apps

    - by Saif Bechan
    I am currently running a server where I have multiple domains with all of them running there own mail server. My plan is to outsource this whole email service and have Google, or competitor, do this for me. Let me start by telling you the setup I have now and want to migrate to Google. Initial setup I have a main domain where I run my server, and my nameserver. This is an important domain because this holds the connection with all my internal applications. For example log messages, cronjob messages, and virus-scan messages are sent to this domain. This email is also registered at my registrar and I use it to communicate with my ISP. Next I run a few independent websites that all need their independent email addresses. This can be on shared space, I don't mind. 1 Gig will be enough for everything I am going to do. Summary: superdomain.com (which only has a catchall for internal use and communication with my ISP) cars.com (independent) flowers.com (independent) foods.com (independent) I am going to be the admin for all of this. The independent domains don't need there own admin panel, they just need email addresses like info@ support@, etc. I do all the managing and they just send and receive emails using the accounts i give them. All of the websites have there different staff that use the accounts. Tried so far I have registered my superdomain, but I can only add aliases to the main domain. If I make all the other domains aliases the emails from [email protected] and [email protected] will have the same inbox. I want them to be separate. is the only way to achieve this by creating an account for each domain? And if so, is there no way of creating a superdomain account where I can edit all these accounts easily without having to log in 4 different places to get my work done. I have searched the Google help forums, and posted questions but without any results so far. Questions Can anyone please give me some advice on what to do. I currently use the free program Google has.

    Read the article

  • Windows 2008 R2 IPsec encryption in tunnel mode, hosts in same subnet

    - by fission
    In Windows there appear to be two ways to set up IPsec: The IP Security Policy Management MMC snap-in (part of secpol.msc, introduced in Windows 2000). The Windows Firewall with Advanced Security MMC snap-in (wf.msc, introduced in Windows 2008/Vista). My question concerns #2 – I already figured out what I need to know for #1. (But I want to use the ‘new’ snap-in for its improved encryption capabilities.) I have two Windows Server 2008 R2 computers in the same domain (domain members), on the same subnet: server2 172.16.11.20 server3 172.16.11.30 My goal is to encrypt all communication between these two machines using IPsec in tunnel mode, so that the protocol stack is: IP ESP IP …etc. First, on each computer, I created a Connection Security Rule: Endpoint 1: (local IP address), eg 172.16.11.20 for server2 Endpoint 2: (remote IP address), eg 172.16.11.30 Protocol: Any Authentication: Require inbound and outbound, Computer (Kerberos V5) IPsec tunnel: Exempt IPsec protected connections Local tunnel endpoint: Any Remote tunnel endpoint: (remote IP address), eg 172.16.11.30 At this point, I can ping each machine, and Wireshark shows me the protocol stack; however, nothing is encrypted (which is expected at this point). I know that it's unencrypted because Wireshark can decode it (using the setting Attempt to detect/decode NULL encrypted ESP payloads) and the Monitor Security Associations Quick Mode display shows ESP Encryption: None. Then on each server, I created Inbound and Outbound Rules: Protocol: Any Local IP addresses: (local IP address), eg 172.16.11.20 Remote IP addresses: (remote IP address), eg 172.16.11.30 Action: Allow the connection if it is secure Require the connections to be encrypted The problem: Though I create the Inbound and Outbound Rules on each server to enable encryption, the data is still going over the wire (wrapped in ESP) with NULL encryption. (You can see this in Wireshark.) When the arrives at the receiving end, it's rejected (presumably because it's unencrypted). [And, disabling the Inbound rule on the receiving end causes it to lock up and/or bluescreen – fun!] The Windows Firewall log says, eg: 2014-05-30 22:26:28 DROP ICMP 172.16.11.20 172.16.11.30 - - 60 - - - - 8 0 - RECEIVE I've tried varying a few things: In the Rules, setting the local IP address to Any Toggling the Exempt IPsec protected connections setting Disabling rules (eg disabling one or both sets of Inbound or Outbound rules) Changing the protocol (eg to just TCP) But realistically there aren't that many knobs to turn. Does anyone have any ideas? Has anyone tried to set up tunnel mode between two hosts using Windows Firewall? I've successfully got it set up in transport mode (ie no tunnel) using exactly the same set of rules, so I'm a bit surprised that it didn't Just Work™ with the tunnel added.

    Read the article

  • inews failed: "No colon-space in "X-MS-TNEF-Correlator:"

    - by wolfgangsz
    We run a news server for our engineering teams, which is also linked to the code repositories (so that all engineers can subscribe to any changes in the repos or just the projects they are interested in). On quite a regular basis (several times a day) I (as the sysadmin for that server) receive bounces from innd with the above as the first line. The news server simply rejects these messages and the articles don't get posted. Here is an example: inews failed: inews: cannot send article to server: 441 437 No colon-space in "X-MS-TNEF-Correlator:" header inews: article not posted -------- Article Contents Path: aminocom.com!ctaylor From: [email protected] (Cameron Taylor) Newsgroups: amino.qa.reports Content-Language: en-US Content-Type: multipart/alternative; boundary="_000_A2AB95742ADD524795C13EDE8F8CCD201A798C0Eukswaex01_" MIME-Version: 1.0 Subject: [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE** Message-ID: Date: Thu, 9 Sep 2010 16:15:16 +0000 X-Received: from uk-swa-ex02.aminocom.com (uk-swa-ex02.aminocom.com [10.171.3.10]) by theoline.aminocom.com (8.14.3/8.13.8) with ESMTP id o89GF8tx019494 for ; Thu, 9 Sep 2010 17:15:08 +0100 X-Received: from uk-swa-ex01.aminocom.com ([10.171.3.9]) by uk-swa-ex02 ([10.171.3.10]) with mapi; Thu, 9 Sep 2010 17:15:18 +0100 X-To: QA Reports X-Thread-Topic: [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE** X-Thread-Index: ActQOjBdms0CSJsORNSxRIMSZ4H3Ow== X-Accept-Language: en-US, en-GB X-MS-Has-Attach: X-MS-TNEF-Correlator: X-Auto-Response-Suppress: DR, OOF, AutoReply --_000_A2AB95742ADD524795C13EDE8F8CCD201A798C0Eukswaex01_ Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable SQA Test Report [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE** Status .... (rest of the message is not important) And yes, quite clearly this header doesn't have anything after the colon. The man page for innd doesn't specify why it rejects these messages, it just says it rejects them. So far I have found out these headers are linked to messages in RTF format (coming from Outlook clients), where normally the formatting information would be stored in a winmail.dat attachment. The clients all use MS Exchange 2010 servers to send their mail (identified above as uk-swa-ex02.aminocom.com) which forwards the message to the news server. Does anybody know what advice I need to give these users to avoid their articles getting bounced? Or can I change the behaviour of innd? Or do I need to filter these headers out before innd processes the articles?
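
    Since the offending header always arrives empty, one low-effort fix on the gateway side is to strip empty X-MS-* headers from the article before it is handed to inews. A sketch of the idea only; where exactly to hook it in depends on how the repository mails reach inews on your setup:

      # Delete header lines that have a name but no value (e.g. "X-MS-TNEF-Correlator:").
      # The sed range limits the edit to the header block, i.e. everything before the first blank line.
      sed '1,/^$/{/^X-MS-[A-Za-z-]*:[[:space:]]*$/d}' article.txt | inews -h

    Telling the Outlook/Exchange users to send plain text instead of RTF would stop the TNEF headers at the source, but filtering is the only part you control as the news server admin.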

    Read the article

  • Postfix + SASLAUTHD + MySQL authentication problems

    - by Or W
    I've been trying to sort this out for the past 6 hours or so, this is the error message I'm facing (Running CentOS x64): /var/log/maillog: Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: SASL authentication failure: Password verification failed Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL PLAIN authentication failed: authentication failure Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL LOGIN authentication failed: authentication failure /var/log/messages: Jun 22 20:15:38 ptroa saslauthd[9401]: do_auth : auth failure: [user=myuser] [service=smtp] [realm=domain.com] [mech=pam] [reason=PAM auth error] I have dovecot installed as well and I'm able to receive emails via the MySQL authentication. The problem is when I'm trying to use SMTP to send out emails. Some config files: /etc/postfix/main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname smtpd_banner = Server Message biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. 
myhostname = domain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 virtual_alias_domains = virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf virtual_mailbox_base = /home/vmail virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 smtpd_sasl_auth_enable = yes broken_sasl_auth_clients = yes smtpd_sasl_authenticated_header = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination virtual_create_maildirsize = yes virtual_maildir_extended = yes proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_cano$ virtual_transport = dovecot dovecot_destination_recipient_limit = 1 /etc/default/saslauthd: START=yes DESC="SASL Authentication Daemon" NAME="saslauthd" MECHANISMS="pam" MECH_OPTIONS="" THREADS=5 OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r" /etc/pam.d/smtp: #%PAM-1.0 #auth include password-auth #account include password-auth auth required pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1 account sufficient pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
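
    The "PAM auth error" in saslauthd's log means the failure is in the saslauthd -> PAM -> pam_mysql chain rather than in Postfix itself, so it helps to test that layer in isolation. A hedged sketch (testsaslauthd ships with the cyrus-sasl packages, and the -f path must match the -m option saslauthd was started with):

      # Test the saslauthd/PAM/MySQL chain directly, bypassing Postfix:
      testsaslauthd -u someuser@domain.com -p 'thepassword' -s smtp \
          -f /var/spool/postfix/var/run/saslauthd/mux

      # The chrooted smtpd also needs its SASL config, typically
      # /usr/lib64/sasl2/smtpd.conf (or /usr/lib/sasl2/smtpd.conf) containing:
      #   pwcheck_method: saslauthd
      #   mech_list: plain login

    It is also worth checking that crypt=1 in /etc/pam.d/smtp matches how the passwords are actually stored in the users table; dovecot authenticating fine while pam_mysql fails is a common symptom of the two using different password schemes.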

    Read the article

  • Switch flooding when bonding interfaces in Linux

    - by John Philips
    +--------+ | Host A | +----+---+ | eth0 (AA:AA:AA:AA:AA:AA) | | +----+-----+ | Switch 1 | (layer2/3) +----+-----+ | +----+-----+ | Switch 2 | +----+-----+ | +----------+----------+ +-------------------------+ Switch 3 +-------------------------+ | +----+-----------+----+ | | | | | | | | | | eth0 (B0:B0:B0:B0:B0:B0) | | eth4 (B4:B4:B4:B4:B4:B4) | | +----+-----------+----+ | | | Host B | | | +----+-----------+----+ | | eth1 (B1:B1:B1:B1:B1:B1) | | eth5 (B5:B5:B5:B5:B5:B5) | | | | | | | | | +------------------------------+ +------------------------------+ Topology overview Host A has a single NIC. Host B has four NICs which are bonded using the balance-alb mode. Both hosts run RHEL 6.0, and both are on the same IPv4 subnet. Traffic analysis Host A is sending data to Host B using some SQL database application. Traffic from Host A to Host B: The source int/MAC is eth0/AA:AA:AA:AA:AA:AA, the destination int/MAC is eth5/B5:B5:B5:B5:B5:B5. Traffic from Host B to Host A: The source int/MAC is eth0/B0:B0:B0:B0:B0:B0, the destination int/MAC is eth0/AA:AA:AA:AA:AA:AA. Once the TCP connection has been established, Host B sends no further frames out eth5. The MAC address of eth5 expires from the bridge tables of both Switch 1 & Switch 2. Switch 1 continues to receive frames from Host A which are destined for B5:B5:B5:B5:B5:B5. Because Switch 1 and Switch 2 no longer have bridge table entries for B5:B5:B5:B5:B5:B5, they flood the frames out all ports on the same VLAN (except for the one it came in on, of course). Reproduce If you ping Host B from a workstation which is connected to either Switch 1 or 2, B5:B5:B5:B5:B5:B5 re-enters the bridge tables and the flooding stops. After five minutes (the default bridge table timeout), flooding resumes. Question It is clear that on Host B, frames arrive on eth5 and exit out eth0. This seems ok as that's what the Linux bonding algorithm is designed to do - balance incoming and outgoing traffic. But since the switch stops receiving frames with the source MAC of eth5, it gets timed out of the bridge table, resulting in flooding. Is this normal? Why aren't any more frames originating from eth5? Is it because there is simply no other traffic going on (the only connection is a single large data transfer from Host A)? I've researched this for a long time and haven't found an answer. Documentation states that no switch changes are necessary when using mode 6 of the Linux interface bonding (balance-alb). Is this behavior occurring because Host B doesn't send any further packets out of eth5, whereas in normal circumstances it's expected that it would? One solution is to setup a cron job which pings Host B to keep the bridge table entries from timing out, but that seems like a dirty hack.
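
    If the flooding has to be stopped before a proper fix is found, the stop-gap mentioned at the end (keep Host B's bonded MACs from ageing out of the bridge tables) is a one-line cron entry on a machine behind Switch 1 or 2, for example Host A. A sketch, assuming the default five-minute ageing time:

      # /etc/cron.d/keep-hostb-alive: one ping per minute, which should keep the
      # switches' bridge-table entries for Host B's bond slaves refreshed.
      * * * * * root ping -c 1 hostB.example.com > /dev/null 2>&1

    A longer-term alternative is usually either to raise the MAC ageing time on the switches or to move to a bonding mode the switches participate in (for example 802.3ad/LACP), so that traffic distribution does not depend on per-slave source MACs staying current.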


  • New Exchange 2010 CAS cannot find domain controllers

    - by NorbyTheGeek
    I am experiencing problems migrating from Exchange 2003 to Exchange 2010. I am on the first step: installing a new 2010 Client Access Server role. The Active Directory domain functional level is 2003, and all domain controllers are 2003 R2. The only existing Exchange 2003 server happens to be housed on one of the domain controllers; it is running Exchange 2003 Standard with SP2. IPv6 is enabled and working on all domain controllers, servers, and routers, including this new Exchange server.

    After installing the CAS role on a new 2008 R2 server (a Hyper-V VM), I am receiving 2114 events:

        Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1600). Topology discovery failed, error 0x80040a02 (DSC_E_NO_SUITABLE_CDC). Look up the Lightweight Directory Access Protocol (LDAP) error code specified in the event description. To do this, use Microsoft Knowledge Base article 218185, "Microsoft LDAP Error Codes." Use the information in that article to learn more about the cause and resolution to this error. Use the Ping or PathPing command-line tools to test network connectivity to local domain controllers.

    Prior to each, I receive the following 2080 event:

        Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1600). Exchange Active Directory Provider has discovered the following servers with the following characteristics:
        (Server name | Roles | Enabled | Reachability | Synchronized | GC capable | PDC | SACL right | Critical Data | Netlogon | OS Version)
        In-site:
        b.company.intranet  CDG 1 0 0 1 0 0 0 0 0
        s.company.intranet  CDG 1 0 0 1 0 0 0 0 0
        Out-of-site:
        a.company.intranet  CD- 1 0 0 0 0 0 0 0 0
        o.company.intranet  CD- 1 0 0 0 0 0 0 0 0
        g.company.intranet  CD- 1 0 0 0 0 0 0 0 0

    Connectivity between the new Exchange server and all domain controllers over both IPv4 and IPv6 is working. I have verified that the new Exchange server is a member of the following groups: Exchange Servers, Exchange Domain Servers, Exchange Install Domain Servers, and Exchange Trusted Subsystem. Heck, I even put the new Exchange server into Domain Admins just to see if it would help. It didn't.

    I can't find any evidence of Active Directory replication problems, and all pre-Setup tasks (/PrepareLegacyExchangePermissions, /PrepareSchema, /PrepareAD, /PrepareDomain) completed successfully. The only Active Directory problem I haven't been able to resolve is that I am unable to get my IPv6 subnets into Sites and Services. Where should I proceed from here?
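    For reference, these are the kinds of checks I have been running from the new CAS to confirm it can locate a domain controller (company.intranet stands in for the real forest name):

        rem ask the Netlogon locator for a DC, then list all DCs in the domain
        nltest /dsgetdc:company.intranet
        nltest /dclist:company.intranet
        rem verify the DC locator SRV records resolve
        nslookup -type=SRV _ldap._tcp.dc._msdcs.company.intranet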


  • How to create NTFS partition in Linux to install Windows 7 from USB?

    - by Michal Stefanow
    I messed up my computer and need help. The goal: install Windows 7 from USB. The problem: "setup was unable to create a new system partition".

    When the first attempt to install Windows 7 failed, I booted a Linux live USB, installed the distro to the HDD, and erased all the existing partitions. Current state (fdisk -l, transcribed from another computer so no copy and paste):

        /dev/sda1   305GB   Linux
        /dev/sda2   7GB     Extended
        /dev/sda5   7GB     Linux swap / Solaris

    To create a new NTFS partition:

        fdisk /dev/sda
        n  (for new)
        p  (for primary)
        3  (for partition number)
        "No free sectors available"

    The whole HDD was formatted a couple of minutes earlier, so there is plenty of free space, but how do I resize a partition? I cannot find an option for resizing in man fdisk. Some people say I should use gparted, but my distro doesn't contain that package, and it doesn't support my wireless drivers either, so downloading anything is a serious problem. I also tried cfdisk, but any command results in: "cfdisk bad primary partition 1 partition ends in the final partial cylinder".

    I also tried removing partition 1 and then creating a new one (so there is no "no free sectors"). I receive a warning: "Re-reading the partition table failed with error 16: Device or resource busy. The kernel still uses the old table. The new table will be used at the next reboot." After restarting: "grub rescue, no known filesystem". That suggests some changes were made, BUT when running the Windows 7 installer I get another error: "Windows cannot be installed to Disk 0 Partition 1". More detailed: "Windows cannot be installed to this hard disk space. Windows must be installed to a partition formatted as NTFS." So I formatted the drive using the Windows 7 installer, BUT this time yet another error: "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information". Apparently I cannot access the logs (how?), and I am back to the drawing board with my live USB (this time showing the partition as HPFS/NTFS).

    Any suggestions for how to install Windows 7? Should I reinstall Linux to the HDD, erase the existing partitions once again, and use parted rather than gparted (parted is included in the distro)? Or maybe I should create another bootable USB such as Parted Magic to painlessly create partitions? I just want to install Windows 7 from USB; my laptop is semi-operational and I am ready to receive some help regarding fdisk and creating NTFS partitions.

    UPDATE: I did as suggested (removed all the partitions) and tried to install into unallocated space, then tried to create a new partition and format it. Same error: "setup was unable to create a new system partition". I have come to the conclusion it may have something to do with TrueCrypt, which I recently installed. Right now I am trying to fix the MBR (as I have no way to create a rescue disc without an optical drive).
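    In case it helps to see what I am planning to try next, here is the parted route sketched out; it assumes /dev/sda is the right disk and that everything on it can be wiped:

        # recreate an empty MBR partition table, then a single NTFS-typed
        # primary partition spanning the whole disk
        parted /dev/sda mklabel msdos
        parted /dev/sda mkpart primary ntfs 1MiB 100%

        # put an actual NTFS filesystem on it (mkfs.ntfs is part of ntfs-3g/ntfsprogs)
        mkfs.ntfs -f -L windows /dev/sda1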


  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to as many as 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4 CPUs and Asus motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+drbd+lvm to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines, and I currently have the ability to do live migration from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, which runs Windows 2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store them on an external hard drive on the network. I then use the JungleDisk service (backed by Rackspace) to sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only moves a small portion of the data offsite, but that still takes at least half an hour.

    A much better solution would of course be something that lets me instantly take the difference between two points in time (say, what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would immediately transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive; that, coupled with piping the data through bzip2 or similar, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor; it looks like it could also do what I want, but it appears to have halted development in 2008.

    So: ZFS, Zumastor, or something else?
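    For reference, the incremental send/receive flow I have in mind would look roughly like this; the pool/dataset names and the backup host are placeholders, and it assumes the VM images live on a ZFS dataset in the first place:

        # snapshot the dataset at two points in time
        zfs snapshot tank/vms@0600
        zfs snapshot tank/vms@0700

        # stream only the blocks that changed between the two snapshots,
        # compress the stream, and push it to the offsite box
        zfs send -i tank/vms@0600 tank/vms@0700 | bzip2 | \
            ssh backup.example.com 'cat > /backups/vms-0600-0700.zfs.bz2'

        # or receive it straight into a pool on the far side
        # zfs send -i tank/vms@0600 tank/vms@0700 | ssh backup.example.com zfs receive -F tank/vms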

