Search Results

Search found 89681 results on 3588 pages for 'cross server'.


  • Using Find, Grep, Awk, or Sed To Rename Server After Cloning

    - by ServerChecker
    My client tells me they have cloned a VM in VMware of an Ubuntu Linux server. Now it's my job to go through all the files, find out what still has the old server name of "bishop", and change it to something else. The IP address has also changed, and I need to search for that too. How would you typically use find, grep, awk, or sed to find these files and then change them quickly? In the end, I want to make a Bash script. Of course, I'm not expecting you to tell me every file; I just want to know the technique for finding files with "x" in them and then swapping that for "y" quickly.
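    A rough sketch of the usual find-and-replace pass (the old/new names and addresses below are placeholders, not taken from the question):

        #!/usr/bin/env bash
        OLD_NAME="bishop"  NEW_NAME="newname"
        OLD_IP="10.0.0.5"  NEW_IP="10.0.0.6"

        # List the text files under /etc that still mention either value (-I skips binaries);
        # review this list before letting sed loose on it.
        grep -rIl -e "$OLD_NAME" -e "$OLD_IP" /etc

        # Replace both values in place in every matching file. The dots in the IP act as
        # regex wildcards here, which is normally harmless for this purpose.
        grep -rIl -e "$OLD_NAME" -e "$OLD_IP" /etc | while read -r f; do
            sed -i "s/$OLD_NAME/$NEW_NAME/g; s/$OLD_IP/$NEW_IP/g" "$f"
        done
        # Typical hits on Ubuntu: /etc/hostname, /etc/hosts, /etc/network/interfaces.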

    Read the article

  • DNS servers and load balancing

    - by RadiantHex
    Hi there! I'm wondering if a simple DNS server could offer even a limited amount of load-balancing capability. I have a couple of servers, and I've been told that multiple IP addresses can be associated with one domain. Help would be very much appreciated!
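    DNS can provide a limited form of load balancing on its own: publish several A records for the same name (round-robin DNS). A minimal zone-file sketch with example addresses:

        ; most resolvers rotate the order of these answers, spreading clients
        ; roughly evenly across the servers; there is no health checking, though
        www    IN    A    203.0.113.10
        www    IN    A    203.0.113.11
        www    IN    A    203.0.113.12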

    Read the article

  • os x 10.4 server enable mail for account via terminal

    - by Chris
    Hello - I have an account on an OS X 10.4 server that I don't have physical access to (I must use SSH). For argument's sake, let's call the account 'Bob'. Bob's account exists and appears to be fully functional, but he does not have email. How do I enable email for Bob's account via the terminal, so that he can receive email at [email protected]? I already have the mail server set up with several working accounts on it; I just need to add Bob. I have searched all over Google for over six hours now, but can't seem to find an answer that fits my situation. Any help is appreciated. P.S. - I am not averse to just deleting the account and starting over, if that would make things easier...

    Read the article

  • Email sent from server with rDNS & SPF being blocked by Hotmail

    - by Canadaka
    I have been unable to send email to users on Hotmail or other Microsoft email servers for some time. It's been a major headache trying to find out why and how to fix the issue. The emails being blocked are sent from my domain canadaka.net. I use Google Apps to host my regular email service for my @canadaka.net addresses. I can send email from my desktop or Gmail to a Hotmail address without any problem, but any email sent from my server on behalf of canadaka.net is blocked, not even arriving in the junk folder.

    The IP the emails are sent from is the same IP that my site is hosted on: 66.199.162.177. This IP is new to me since August 2010; I had a different IP for the previous 3-4 years. This IP is not on any credible spam lists (http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177). The one list my IP is on, spamcannibal.org, seems to be out of my control; it says "no reverse DNS, MX host should have rDNS - RFC1912 2.1". But since I use Google for my email hosting, I don't have control over setting up rDNS for all the MX records. I do have reverse DNS set up for my IP, though; it resolves to "mail.canadaka.net". I have signed up for SNDS and was approved; my IP says "All of the specified IPs have normal status." Sender Score: 100 (https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14). My McAfee threat level seems fine.

    I have an SPF TXT record set up. I am currently using xname.org as my DNS, and they don't have a field for SPF, but their FAQ says to add the SPF info as a TXT entry: v=spf1 a include:_spf.google.com ~all. Some SPF-checking tools I've used detect that my domain has a valid SPF record, but others don't, like Microsoft's SPF wizard ("No SPF Record Found. A and MX Records Available"); I think this is because it looks specifically for an SPF record type rather than the TXT record. From my home I can run "nslookup -type=TXT canadaka.net" and it returns:

        Server: google-public-dns-a.google.com
        Address: 8.8.8.8
        Non-authoritative answer:
        canadaka.net text = "v=spf1 a include:_spf.google.com ~all"

    One strange thing I found is that I'm unable to ping hotmail.com or msn.com, or do a "telnet mail.hotmail.com 25". I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and did an ipconfig /flushdns, but that had no effect. I am, however, able to connect with telnet to mx1.hotmail.com.

    This is what the email headers look like when I send to a Google email server and receive the email with no trouble. You can see that SPF is passing:

        Delivered-To: [email protected]
        Received: by 10.146.168.12 with SMTP id q12cs91243yae; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
        Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
        Return-Path:
        Received: from canadaka.net ([66.199.162.177]) by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45; Sun, 27 Feb 2011 18:01:48 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) [email protected]
        Message-Id: <[email protected]
        Received: from coruscant ([127.0.0.1]:12907) by canadaka.net with [XMail 1.27 ESMTP Server] id for from ; Sun, 27 Feb 2011 18:01:29 -0800
        Date: Sun, 27 Feb 2011 18:01:29 -0800
        Subject: Test
        To: [email protected]
        From: XXXX
        Reply-To: [email protected]
        X-Mailer: PHP/5.2.13

    I can send to Gmail and other email services fine. I don't know what I'm doing wrong!

    UPDATE 1: I have been removed from Hotmail's IP block and am now able to send emails to Hotmail, but they are all going directly to the junk folder.

    UPDATE 2: I used telnet to send a test message to port25.com, and it seems my SPF is not being detected:

        Result: neutral (SPF-Result: None)
        canadaka.net. SPF (no records)
        canadaka.net. TXT (no records)

    I do have a TXT record; it has been there for years, though I did change it a week ago. Other sites that allow you to check your SPF detect it, but some others, like Microsoft's wizard, don't. This is what my SPF record in my xname.org DNS file looks like:

        canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all"

    I did have a nameserver as my 4th option that doesn't have the TXT record, since it doesn't support it. So I removed it from the list and instead added wtfdns.com, which does support TXT, as my 4th and 5th nameservers.
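    A quick way to check whether every authoritative nameserver is actually serving the TXT record; one stale or TXT-incapable server in the NS set would explain why some SPF checkers see the record and others report none (a sketch using the domain from the question):

        for ns in $(dig +short NS canadaka.net); do
            echo "== $ns =="
            dig +short TXT canadaka.net @"$ns"
        done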

    Read the article

  • FileZilla Server Configuration Problems

    - by LiamB
    I've set up FileZilla Server on a Windows 2008 machine, then created the user and password and added a shared folder, which I set as the home directory. I then connect to the server from the client computer:

        Status: Connecting to {IP}
        Status: Connection established, waiting for welcome message...
        Response: 220-Welcome To {NAME} FTP
        Response: 220 {DOMAIN}
        Command: USER {USER}
        Response: 331 Password required for {USER}
        Command: PASS *********
        Response: 230 Logged on
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode ({}DATA)
        Command: MLSD

    The connection works fine; however, no remote directory listing ever appears (it just shows "/"), and uploading any file fails. Any suggestions on how to debug this further?

    Read the article

  • Setting up a chroot sftp on debian server

    - by Kevin Duke
    I'm trying to allow a user, "user", to access my server by either SFTP or SSH, and I want to jail them into a directory with chroot. I read the instructions here; however, it does not work. I did the following:

    1. useradd user
    2. Modified /etc/ssh/sshd_config and added the following to the bottom of the file:
           Match User user
           ForceCommand internal-sftp
           ChrootDirectory /home/duke/aa/smart
    3. Changed the subsystem line to: Subsystem sftp internal-sftp
    4. Restarted sshd with /etc/init.d/ssh restart
    5. Logged in over SSH as user "user" with PuTTY

    PuTTY says "Server unexpectedly closed the connection". Why is this, and how can it be fixed?

    EDIT: Following the suggestions below, I've made the bottom of sshd_config look like:

        Match User user
        ChrootDirectory /tmp

    yet there is no change. The password is accepted, but I cannot connect via SSH or SFTP. What gives?
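    Worth noting as a hedged aside: sshd drops the connection exactly as PuTTY reports when the ChrootDirectory, or any parent of it, is not owned by root or is group/world-writable. A sketch using the path from the question:

        # every component of the jail path must be root-owned and not group/world-writable
        chown root:root /home/duke /home/duke/aa /home/duke/aa/smart
        chmod 755 /home/duke/aa/smart

        # sftp-only jail for the user (appended to /etc/ssh/sshd_config):
        cat >> /etc/ssh/sshd_config <<'EOF'
        Match User user
            ChrootDirectory /home/duke/aa/smart
            ForceCommand internal-sftp
            AllowTcpForwarding no
        EOF
        /etc/init.d/ssh restart
        # ForceCommand internal-sftp permits SFTP only; an interactive SSH shell inside
        # the chroot would also need a shell binary and its libraries copied into the jail.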

    Read the article

  • Gradually migrate from one SMTP server to another

    - by Bart van Wissen
    I maintain an application that sends out a ton of e-mail on a daily basis. Soon we will have to migrate to another SMTP server for this, one whose IP address has no reputation with respect to email delivery. So instead of just flipping the switch, I would like to start by sending a small percentage of all mail through the new server and then gradually increase that percentage until we reach 100%. It wouldn't be very hard to implement something in the application itself, but I would like to know if there is an easier, more reliable, out-of-the-box solution for this. My first thought was to use round-robin DNS, but the servers require different credentials, use different protocols (one uses SASL, the other doesn't) and even different port numbers, so I think that rules out a DNS-based solution. Is there any way, for example, to configure Postfix to send 1 out of every x e-mails to relay host A and the rest to relay host B? Or perhaps a different MTA?
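    Postfix 3.0 and later include a randmap: lookup table that returns one of its entries at random on every query, and pointing transport_maps at it is one way to split deliveries across two relays; repeating an entry weights the split. A hedged sketch for main.cf (hostnames, ports and the 4:1 ratio are placeholders, and the exact inline syntax is worth checking against the randmap documentation):

        transport_maps = randmap:{relay:[old-relay.example.com]:587,
            relay:[old-relay.example.com]:587,
            relay:[old-relay.example.com]:587,
            relay:[old-relay.example.com]:587,
            relay:[new-relay.example.com]:587}
        # Each relay can keep its own credentials and TLS policy, keyed by relay host:
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_tls_policy_maps = hash:/etc/postfix/tls_policy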

    Read the article

  • Active Sync Login Time Restriction

    - by Hovaness Bartamian
    Is there any other setting that could restrict a user from accessing their email from their phone via ActiveSync, besides the logon hours on a domain controller (Active Directory Users and Computers, user properties, Account tab, Logon Hours)? The user's logon hours are allowed at all times. To add to the mystery, the user is an admin on the domain. (He seems unable to get email from 7:00 pm to 8:00 pm.) It's a Windows Server 2003 environment. Thanks for any help ahead of time.

    Read the article

  • configure squid3 to set up a web proxy in ubuntu12.04

    - by Gnijuohz
    I am on a LAN and have to use a given proxy to access the web in a very limited way; I can't even use Google, github.com or SE sites. However, I can use SSH to log into a server on which I have root access, so basically I can do anything I want with it. So I was thinking that maybe I could use that server as a proxy so I can visit sites through it. I tested it using ssh -vT [email protected], which gave a proper response, while on my own computer I can't do this. I also tried downloading something from gnu.org using wget, which can't be done on my computer either, and it succeeded on that server. I don't know if that's enough to say that this server has full access to the Internet, but I assumed so and installed squid3 on it. After trying for a while, I failed to get it working. I got this after I ran squid3 -k parse:

        2012/07/06 21:45:18| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        2012/07/06 21:45:18| Processing: acl manager proto cache_object
        2012/07/06 21:45:18| Processing: acl localhost src 127.0.0.1/32 ::1
        2012/07/06 21:45:18| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        2012/07/06 21:45:18| Processing: acl localnet src 10.1.0.0/16 # RFC1918 possible internal network
        2012/07/06 21:45:18| Processing: acl SSL_ports port 443
        2012/07/06 21:45:18| Processing: acl Safe_ports port 80 # http
        2012/07/06 21:45:18| Processing: acl Safe_ports port 21 # ftp
        2012/07/06 21:45:18| Processing: acl Safe_ports port 443 # https
        2012/07/06 21:45:18| Processing: acl Safe_ports port 70 # gopher
        2012/07/06 21:45:18| Processing: acl Safe_ports port 210 # wais
        2012/07/06 21:45:18| Processing: acl Safe_ports port 1025-65535 # unregistered ports
        2012/07/06 21:45:18| Processing: acl Safe_ports port 280 # http-mgmt
        2012/07/06 21:45:18| Processing: acl Safe_ports port 488 # gss-http
        2012/07/06 21:45:18| Processing: acl Safe_ports port 591 # filemaker
        2012/07/06 21:45:18| Processing: acl Safe_ports port 777 # multiling http
        2012/07/06 21:45:18| Processing: acl CONNECT method CONNECT
        2012/07/06 21:45:18| Processing: http_port 3128 transparent vhost vport
        2012/07/06 21:45:18| Starting Authentication on port [::]:3128
        2012/07/06 21:45:18| Disabling Authentication on port [::]:3128 (interception enabled)
        2012/07/06 21:45:18| Disabling IPv6 on port [::]:3128 (interception enabled)
        2012/07/06 21:45:18| Processing: cache_mem 1000 MB
        2012/07/06 21:45:18| Processing: cache_swap_low 90
        2012/07/06 21:45:18| Processing: coredump_dir /var/spool/squid3
        2012/07/06 21:45:18| Processing: refresh_pattern ^ftp: 1440 20% 10080
        2012/07/06 21:45:18| Processing: refresh_pattern ^gopher: 1440 0% 1440
        2012/07/06 21:45:18| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        2012/07/06 21:45:18| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
        2012/07/06 21:45:18| Processing: refresh_pattern . 0 20% 4320
        2012/07/06 21:45:18| Processing: ipcache_high 95
        2012/07/06 21:45:18| Processing: http_access allow all

    I deleted some allow and deny rules and added http_access allow all so that all requests would be allowed. After configuring my computer, I got this error:

        Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect.

    And the log on the server showed that my TCP requests had all been denied. So, first of all, is what I am trying to do achievable? If so, how do I configure Squid on the server so that I can use it as a proxy to surf the Internet? My computer and the server both run Ubuntu 11.04.
Thanks for any help~
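    One hedged possibility to check: in the pasted output, http_port 3128 is marked "transparent vhost vport", which puts Squid into interception mode (the log even says "interception enabled"), and an interception port will not serve ordinary forward-proxy requests from a browser. A minimal forward-proxy sketch, reached over the existing SSH session (hostnames are placeholders):

        # /etc/squid3/squid.conf  (plain forward proxy, no interception)
        acl localhost src 127.0.0.1/32 ::1
        http_port 3128                  # note: no "transparent"/"vhost"/"vport"
        http_access allow localhost
        http_access deny all

        # check the config and restart:
        #   squid3 -k parse && /etc/init.d/squid3 restart
        # then tunnel the proxy port from the restricted machine and point the browser
        # at a proxy of localhost:3128:
        #   ssh -L 3128:localhost:3128 user@server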

    Read the article

  • Directly reading a LTO tape drive

    - by John
    On our server (M$ 2003), is it possible to directly read our LTO-4 tape drive and copy the entire ntbackup-created BKF file on it to an external hard disk? (Is the backup even stored on the tape as a BKF file? I'm going from the days when we only used external USB hard drives.)

    Read the article

  • SFTP is not connecting to remote server

    - by Crono15
    $ sftp -vvv Remote_IP
        Connecting to Remote_IP...
        OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to Remote_IP [Remote_IP] port 22.
        debug1: connect to address Remote_IP port 22: Operation timed out
        ssh: connect to host Remote_IP port 22: Operation timed out
        Connection closed

    I set up an account for SFTP-only access with a chroot. I tested it on the server and it works fine. The problem is that I cannot get remote SFTP access to the server to work; the output above is what I keep running into. I have been trying to figure out how to solve this problem for two days now. I am not sure if it has to do with /etc/ssh/sshd_config. Is it something I am not aware of? I am hoping you could help point me to the right place for this issue.
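    A timeout (rather than "connection refused") usually means the packets never reach sshd at all, so the firewall and listening address are worth checking before sshd_config. A quick sketch:

        nc -vz Remote_IP 22              # from the client: is port 22 reachable at all?
        # on the server:
        netstat -tlnp | grep ':22 '      # is sshd listening, and on which address?
        iptables -L -n | grep -w 22      # is an INPUT rule dropping or rejecting the traffic?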

    Read the article

  • Server vendor that allows 3rd party disks

    - by Alvin S
    As noted here, Dell is no longer allowing 3rd-party disks to be used with their latest servers. As in, they don't work, period. Which means that if you buy one of these boxes and want to upgrade the storage later, you have to buy disks from Dell at significant premiums. Dell has just given me a very strong reason to take my server business elsewhere. My company buys (instead of leasing) our servers and typically uses them for 5 years. I need to be able to upgrade/repurpose storage periodically, and do not want to be locked in to whatever Dell might have in stock, at inflated prices to boot. As you will see in the comments of the above link, it seems HP is doing the same thing. I am looking for a server vendor that offers a 3-5 year warranty with same-day/next-day onsite service and allows me to use 3rd-party disks. Suggestions?

    Read the article

  • Run a MongoDB configuration server without 3GB of journal files

    - by Thilo
    For a production sharded MongoDB installation we need 3 configuration servers. According to the documentation "the config server mongod process is fairly lightweight and can be ran on machines performing other work". However, in the default configuration, they all have journalling enabled, and with preallocation this takes up 3 GB of disk space. I assume that the actual data and transaction volume of a config server is quite small, so that this seems a bit too much. Is there a way to (safely!) run these config servers with much less disk use for the journal? Do I need journalling at all on config servers? Can I set the journal size to be smaller?
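    One commonly used knob from that era of mongod (a hedged sketch; paths and port are examples, and the ini-style options assume a 2.x config file): smallfiles caps each journal file at 128 MB instead of 1 GB, shrinking the preallocation from roughly 3 GB to a few hundred MB while keeping journalling on.

        # /etc/mongod-configsvr.conf
        configsvr = true
        dbpath = /var/lib/mongodb-configsvr
        port = 27019
        journal = true        # keep the cluster metadata crash-safe
        smallfiles = true     # 128 MB journal files instead of 1 GB preallocations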

    Read the article

  • Suggestions on mail servers

    - by Dejan.S
    Hi. I've got an app that sends out massive newsletters, and so far I've been using the regular SMTP server within IIS 7. I've been looking at mail servers, and there are plenty out there, so I wonder what your tips and experiences are on this. What mail servers are good and easy to use with Windows Server 2008 and IIS 7? Thanks

    Read the article

  • AppUpdater and wamp server

    - by Gerbrand
    I've got a WinForms application and I want to add an auto-updater to it. I followed the instructions on the site and got everything set up correctly (the AppUpdater application). Now I tried a test and I'm getting the following error back from the AppUpdater: "the remote server returned an error 405". I googled this, and it is because my server isn't set up with the right access. I'm using a WAMP server, and in the Apache httpd.conf I added the following lines so my directory is allowed for access:

        <Directory "c:/wamp/www/updater/V11/">
            Options Indexes FollowSymLinks
            AllowOverride all
            # onlineoffline tag - don't remove
            Order Deny,Allow
            Deny from all
            Allow from 127.0.0.1
        </Directory>

    But I'm still getting the same error back. I can find information for the IIS configuration, but not for Apache.

    Edit: I'm still getting the error. I opened Apache's error log and I see the following: "PROPFIND /updater/V11/ HTTP/1.1" 405 238. The updater component is using HTTP-DAV.

    Edit 2: It seems that nobody has run into this kind of situation.
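    For what it's worth, a 405 on PROPFIND usually means Apache has no handler for the WebDAV verbs at that path, regardless of the access rules. A hedged sketch of the httpd.conf additions (module file names and the lock-file path follow a typical WAMP layout and are assumptions):

        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_fs_module modules/mod_dav_fs.so
        DavLockDB "c:/wamp/tmp/DavLock"

        <Directory "c:/wamp/www/updater/V11/">
            Dav On
            Options Indexes FollowSymLinks
            AllowOverride all
            Order Deny,Allow
            Deny from all
            Allow from 127.0.0.1
        </Directory>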

    Read the article

  • Setting up Ubuntu Server on Amazon EC2 for hosting multiple domains with wildcard subdomains

    - by Ashish Kumar
    I'm trying to set up multiple domains on my Amazon EC2 micro instance running Ubuntu Server 12.04. I installed Apache correctly and set up virtual hosts, but I'm having problems with wildcard subdomains. This is what my httpd.conf file looks like:

        NameVirtualHost *:80
        <VirtualHost *:80>
            UseCanonicalName Off
            VirtualDocumentRoot /home/username/domains/%0/html/
        </VirtualHost>

    My DNS records (on Amazon Route 53) are:

        domain.tld      A   1.2.3.4
        *.domain.tld    A   1.2.3.4

    If I create a test.domain.tld directory with the html subdirectory, it works fine. But what I want is to redirect *.domain.tld to domain.tld whenever there is no directory for the subdomain being accessed. I would also like www.domain.tld to redirect to domain.tld. The setup should still work if I decide to host another website, example.com, on the server. I tried Googling a lot, but without any luck. Suggestions?
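    One way to get the fallback, sketched with mod_rewrite inside the same vhost (assumes a2enmod rewrite; the two-label pattern fits a name like domain.tld and would need adjusting for domains such as example.co.uk):

        <VirtualHost *:80>
            UseCanonicalName Off
            VirtualDocumentRoot /home/username/domains/%0/html/

            RewriteEngine On
            # If no directory exists for the requested host (covers www. and any
            # unknown subdomain), redirect to the bare domain instead.
            RewriteCond /home/username/domains/%{HTTP_HOST}/html !-d
            RewriteCond %{HTTP_HOST} ^.+\.([^.]+\.[^.]+)$
            RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]
        </VirtualHost>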

    Read the article

  • Ubuntu server 10.04 doesn't boot into installed Gnome desktop automatically

    - by Tong Wang
    I've installed Ubuntu Server 10.04 and then installed the Gnome desktop on top of it; because I am new to Linux and its command line, I need the GUI desktop to help me get around. However, the problem I have is that the server doesn't boot into the GUI desktop when powered on. It boots into a shell like this:

        Gave up waiting for root device. Common problems:
        - Boot args (cat /proc/cmdline)
          - Check rootdelay= (did the system wait long enough?)
          - Check root= (did the system wait for the right device?)
        - Missing modules (cat /proc/modules; ls /dev)
        ALERT! /dev/mapper/cecdata-root does not exist. Dropping to a shell!

        BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        (initramfs)

    The result of cat /proc/cmdline:

        BOOT_IMAGE=/vmlinuz-2.6.32-28-server root=/dev/mapper/cecdata-root ro quiet

    Then I have to type "exit" to leave the shell, and it boots into Gnome. Any idea what's wrong?

    Edit: adding output for the following commands:

        wt@cecdata:~$ ls /dev/mapper/
        cecdata-root  cecdata-swap_1  control
        wt@cecdata:~$ fdisk -l
        wt@cecdata:~$
        wt@cecdata:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        /dev/mapper/cecdata-root / ext4 errors=remount-ro 0 1
        # /boot was on /dev/sda1 during installation
        UUID=1635be41-d025-405e-b4a3-6f0abedb7aab /boot ext2 defaults 0 2
        /dev/mapper/cecdata-swap_1 none swap sw 0 0
        wt@cecdata:~$

    Adding output for lsmod:

        wt@cecdata:~$ lsmod
        Module                  Size  Used by
        fbcon                  39270  71
        tileblit                2487  1 fbcon
        font                    8053  1 fbcon
        bitblit                 5811  1 fbcon
        softcursor              1565  1 bitblit
        dell_wmi                2177  0
        dcdbas                  6918  0
        vga16fb                12757  1
        vgastate                9857  1 vga16fb
        psmouse                64576  0
        serio_raw               4950  0
        power_meter             9473  0
        bnx2                   72874  0
        lp                      9336  0
        parport                37160  1 lp
        mptsas                 50592  2
        usbhid                 41116  0
        mptscsih               37167  1 mptsas
        hid                    83568  1 usbhid
        mptbase                91674  2 mptsas,mptscsih
        scsi_transport_sas     33021  1 mptsas
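    Since typing exit lets the machine finish booting, the LVM root volume is most likely just not active yet when the initramfs times out. A hedged sketch of the usual workaround (the rootdelay value is an example):

        # At the (initramfs) prompt, activate the volume group by hand, then continue:
        lvm vgscan
        lvm vgchange -ay        # should make /dev/mapper/cecdata-root appear
        exit

        # Persistent fix: give the root device more time, e.g. append rootdelay=60 to
        # GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run update-grub.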

    Read the article

  • Is there a Kerberos testing tool?

    - by ixe013
    I often use openssl s_client to test and debug SSL connections (to LDAPS or HTTPS services). It allows me to isolate the problem down to SSL, without anything else getting in the way. I know about klist, which allows me to purge the ticket cache. Is there a tool that would let me request a Kerberos ticket for a given server without even sending it? Just enough to see the whole Kerberos exchange in Wireshark, for example?
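    MIT Kerberos ships a pair of small client tools that fill much the same role as openssl s_client does for TLS; the principal and service names below are examples:

        kinit user@EXAMPLE.COM          # AS exchange only: obtain a TGT
        kvno HTTP/web01.example.com     # TGS exchange: request a service ticket without using it
        klist                           # list what landed in the ticket cache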

    Read the article

  • Can't change to Korean-named directory on my debian server

    - by DaLynX
    I made an rsync backup of some directories from a MacBook laptop to a Debian server. Some of them have Korean characters (Hangeul) in their names. After fixing my server's locale, the names display correctly when I run ls, for instance. But I can't cd into them. Example:

        $ ls -1 | head
        ???
        dirA
        dirB
        …

    But if I try to browse into that directory:

        $ cd ? ? ?
        cd: 3: can't cd to ???

    Any idea what's wrong and how to fix it?
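    The question marks usually mean the interactive session (terminal emulator or SSH client) is not using a UTF-8 locale even though the server side is fixed. A few checks, plus a workaround for entering the directory without typing its name, as a sketch:

        echo "$LANG" "$LC_ALL"                 # should be a UTF-8 locale on both ends
        ls | head -n 3 | hexdump -C            # raw bytes: valid UTF-8 Hangeul or mojibake?
        cd "$(find . -mindepth 1 -maxdepth 1 -type d | head -n 1)"   # enter the first subdirectory by path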

    Read the article

  • Windows 2003 Dynamic Disk error

    - by ChrisH
    Hi, I was trying to ghost a partition on a Windows 2003 server, using Ghost 2003. Unfortunately things went horribly wrong, and now I can't boot back into my system. As you can see, Ghost creates a wee little partition to do its dirty work, and has dislodged my other partitions. Partition 2 in the image below is my C drive. Any suggestions as to how I might get this active again so that it boots? Cheers, Chris

    Read the article

  • MSDTC failed in windows cluster any idea???

    - by Cute
    Hi. I have created a Windows 2003 cluster and then tried to configure MSDTC by following this link: http://support.microsoft.com/kb/301600/#appliesto. After following that, I enabled network DTC access in Windows Server 2003 by following http://support.microsoft.com/kb/817064/. But in the MSDTC group I created (as described in the first link, from the 7th point onwards), the Distributed Transaction Coordinator resource failed, while everything else in the group is online. I don't know why it is failing. I don't know how to post a screenshot here...

    Read the article

  • Getting Squid to authenticate with kerberos and Windows 2008/2003/7/XP

    - by Harley
    This is something I set up recently, and it was quite a big pain. My goal was to get Squid to authenticate a Windows 7 client against a Windows 2008 server invisibly. NTLM is not really an option, as using it requires a registry change on each client. MS has been recommending Kerberos since Windows 2000, so it's finally time to get with the program. Many, many thanks to Markus Moeller of the Squid mailing lists for helping to get this working.

    Read the article

  • Setting up DNS server on VPS on the internet

    - by Nick Duffell
    I have followed multiple online tutorials on setting this up; it is BIND9 on a Debian server. It is the only server I have, so it is acting as both ns1 and ns2, as well as the server the domain name should point to. It all appears to be working, and when I dig the domain name from the server itself I get (what seems to me) the correct output:

        ; <<>> DiG 9.7.3 <<>> theonetekkit.com.au
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18593
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

        ;; QUESTION SECTION:
        ;theonetekkit.com.au.             IN      A

        ;; ANSWER SECTION:
        theonetekkit.com.au.      3000    IN      A       103.4.17.189

        ;; AUTHORITY SECTION:
        theonetekkit.com.au.      3000    IN      NS      ns1.theonetekkit.com.au.
        theonetekkit.com.au.      3000    IN      NS      ns2.theonetekkit.com.au.

        ;; ADDITIONAL SECTION:
        ns1.theonetekkit.com.au.  3000    IN      A       103.4.17.189
        ns2.theonetekkit.com.au.  3000    IN      A       103.4.17.189

        ;; Query time: 15 msec
        ;; SERVER: 103.4.17.189#53(103.4.17.189)
        ;; WHEN: Wed Nov 7 02:12:58 2012
        ;; MSG SIZE rcvd: 121

    When I dig it from another server or computer, however, I am getting a problem:

        ; <<>> DiG 9.7.3 <<>> theonetekkit.com.au
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 56637
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;theonetekkit.com.au.             IN      A

        ;; Query time: 22 msec
        ;; SERVER: 103.4.16.166#53(103.4.16.166)
        ;; WHEN: Wed Nov 7 02:12:40 2012
        ;; MSG SIZE rcvd: 37

    I have given it more than enough time for the records to be refreshed since setting up the DNS server, so I don't know what would be causing this. Any ideas? Thanks
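    A SERVFAIL from outside resolvers while the authoritative server answers its own queries usually points at the parent-zone delegation (the NS and glue records set at the registrar) or at port 53 being filtered, rather than at the zone file itself. Two quick checks to run from an outside host, as a sketch:

        dig +trace theonetekkit.com.au           # does the .com.au delegation lead to ns1/ns2?
        dig @103.4.17.189 theonetekkit.com.au    # does the server answer direct queries from outside?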

    Read the article
