Search Results

Search found 9658 results on 387 pages for 'authentication provider'.

  • Strange Apache Webdav situation (OSX Will connect, Ubuntu will not)

    - by mewrei
    So basically my situation is that I have an Apache 2.2 webserver running on Linux on another box, and I have it configured to serve up WebDAV. Now here's the weird part: I can access the server just fine on my Mac using the "Connect to Server" dialog (even moved like 5GB of files over the connection). On my Ubuntu desktop cadaver will connect as well and allow me to browse. However, when I try to use Xmarks (BYOS Edition) or the GNOME "Connect to Server" dialog, it gives me a 403 Forbidden error. My server does digest authentication if that makes any difference. Here's part of my apache2.conf file:

        <VirtualHost *:80>
            DocumentRoot "/path"
            <Directory "/path">
                Dav on
                AuthType Digest
                AuthName iTools
                AuthDigestDomain "/"
                AuthUserFile /path/to/WebDavUsers
                Options None
                AllowOverride None
                <LimitExcept GET HEAD OPTIONS>
                    require valid-user
                </LimitExcept>
                Order allow,deny
                Allow from All
            </Directory>
            <Directory "/path/*/Public">
                Options +Indexes
            </Directory>
            <Directory "/path/user">
                <LimitExcept GET HEAD OPTIONS>
                    require user user
                </LimitExcept>
            </Directory>
        </VirtualHost>
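
    A quick way to narrow this down (a suggested diagnostic, not from the original post) is to watch the Apache logs while each client connects, since the failing clients may be sending a method or path that the configuration handles differently:

        # On the server: watch which method and path each client sends, and what status it gets back.
        # Log paths are assumptions; adjust for your distribution.
        tail -f /var/log/apache2/access.log /var/log/apache2/error.log

        # On the Ubuntu desktop: cadaver works, so compare its requests against the GNOME client.
        # gvfs-mount ships with GNOME on that era of Ubuntu; the URI scheme is dav:// (or davs:// over SSL).
        gvfs-mount dav://yourserver/

    If the log shows the GNOME client being denied on a method such as PROPFIND or LOCK, that points at the auth/LimitExcept configuration rather than at the client itself.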

  • OS X Keeps prompting me for SSH private key passphrase (OS X 10.6.8)

    - by Danny Englander
    I have a private key to ssh into my server and the connection works. In my ssh config file I have:

        Host myhost
            HostName xxx.xxx.xxx.xx
            GlobalKnownHostsFile ~/.ssh/known_hosts
            port 22
            User myuser
            IdentityFile ~/.ssh/mykey_dsa
            IdentitiesOnly yes

    ...and then I type ssh myhost. Every time I connect, I get the Mac OS X keychain prompt and I tell OS X to remember the passphrase, but when I disconnect from ssh and re-connect, I am prompted to add the passphrase to the keychain again. This is only a recent problem, so I suspect an issue with Keychain? To be clear, I can 're-add' it to the keychain every time and connect, but this defeats the purpose. The permissions on my dsa key are set at 600 (-rw-------@). I tried repairing disk permissions but that did no good. My Google-fu is also failing me; nothing of use came up. So I am not sure if this is an OS X / keychain issue or an SSH issue.

    Update: When I try ssh -vvv myhost, I think it reveals the issue:

        debug1: Trying private key: /Users/danny/.ssh/mykey_dsa
        debug1: PEM_read_PrivateKey failed
        debug1: read PEM private key done: type <unknown>
        debug3: Not a RSA1 key file /Users/danny/.ssh/mykey_dsa.
        debug1: read PEM private key done: type DSA
        Identity added: /Users/danny/.ssh/mykey_dsa (/Users/danny/.ssh/mykey_dsa)
        debug1: read PEM private key done: type DSA
        debug3: sign_and_send_pubkey
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentication succeeded (publickey).

    ...and after that I get connected. I think the crux of the matter is: PEM_read_PrivateKey failed
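
    A couple of things worth trying from the command line (suggestions, not from the original post). Apple's patched ssh-add can store the passphrase in the keychain explicitly, and re-encrypting the key rewrites its PEM encoding, which is what the PEM_read_PrivateKey error is complaining about:

        # Store the passphrase for this key in the OS X keychain (Apple's ssh-add supports -K)
        ssh-add -K ~/.ssh/mykey_dsa

        # Rewrite the key file with a fresh passphrase/PEM encoding; keep a backup first
        cp ~/.ssh/mykey_dsa ~/.ssh/mykey_dsa.bak
        ssh-keygen -p -f ~/.ssh/mykey_dsa

    If the key was originally generated elsewhere (e.g. PuTTY or an old OpenSSL), re-exporting it in standard OpenSSH PEM format often makes the keychain entry stick.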

  • Referencing groups/classes from Puppet dashboard in my site manifest

    - by Banjer
    I'm using Puppet Dashboard as my ENC and I'm not sure how to reference or use class and group classifications from /etc/puppet/manifests/site.pp. I have two groups defined in the dashboard: CentOS6 and SLES11. What should my site.pp look like if I want to include a certain list of modules in the CentOS6 group and a certain list of modules in the SLES11 group? I'm trying to do something like this:

        # /etc/puppet/manifests/site.pp
        node basenode {
            include hosts
            include ssh::server
            include ssh::client
            include authentication
            include sudo
            include syslog
            include mail
        }

        node 'CentOS6' inherits basenode {
            include profile
        }

        node 'SLES11' inherits basenode {
            include usrmounts
        }

    I have OS-specific case statements within my modules, but there are some modules that will only be applied to a certain distro. So I suppose I have two questions: Is this the best way to apply modules/resources in an OS-specific manner? Or does the above make you want to vomit? Regardless of #1, I'm still curious as to how to reference classes, groups, and nodes from Dashboard within my manifests. I've read the External Nodes doc, but I'm not seeing how they correspond to manifests. Thanks all.
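
    One way to see exactly what Dashboard is handing the master for a given node (a suggested check, not from the original post) is to call the external-node script yourself; the YAML it returns is what has to line up with the classes your manifests define. The script path below is the Dashboard default and is an assumption:

        # Confirm the master is actually wired to Dashboard as an ENC
        grep -A2 '\[master\]' /etc/puppet/puppet.conf
        # expecting something like:
        #   node_terminus  = exec
        #   external_nodes = /usr/share/puppet-dashboard/bin/external_node

        # Ask the ENC what it would return for a node; Dashboard groups show up as 'classes:' entries in the YAML
        sudo /usr/share/puppet-dashboard/bin/external_node agent01.example.com

    If the classes you expect don't appear in that YAML, the group-to-class mapping has to be made in Dashboard itself; the group names there are not matched against node blocks in site.pp.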

  • subdomain .htaccess redirection via ssh remote port forwarding

    - by Achim
    I ask you to help me URL-redirect a subdomain to an SSH remote-forwarded port. The current setup is the following: server A has a local webserver running on port 80. This server is connected to a DSL line or a GPRS connection where the IP address changes often. To avoid a DynDNS setup we established SSH remote port forwarding to a server B with a static IP address. This is done on server A by the following statement:

        ssh -N -p 80 -g -R 10000:localhost:80 tunneling@<Server B IP>

    So by accessing port 10000 on server B's IP address, all traffic is forwarded to port 80 on server A - this works fine! But to offer a more comfortable URL to the user I want to hide server B's IP address and offer a subdomain. My domain provider allows adding subdomains and redirections to other servers. In general this works; I've tested it with different servers. But it doesn't work if the destination is the forwarded port on server B. The initial redirection is done, the request is sent to server A and the response comes back through server B and is shown in the browser - fine. But then the URL within the browser switches away from the subdomain to the IP:port of server B, so the user no longer sees the subdomain in the URL string of the browser. I've tried this with my provider's subdomain redirection, as well as a .htaccess redirect, as well as a META refresh; the problem always persists. Is there a parameter in the ssh reverse-forwarding setup (I guess this is the place where the fix has to be) to keep the typed-in subdomain URL and not show the IP?

    Thanks, Achim
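
    The ssh reverse forwarding itself has no way to influence what the browser shows; the URL changes because the provider's "redirect" sends the browser to B's IP:port. One way to keep the subdomain (a sketch under the assumption that you control server B and it runs Apache with mod_proxy) is to point the subdomain's DNS directly at server B and reverse-proxy instead of redirecting:

        # /etc/apache2/sites-available/sub.example.com  (hostname is a placeholder)
        <VirtualHost *:80>
            ServerName sub.example.com
            ProxyPreserveHost On
            ProxyPass        / http://localhost:10000/
            ProxyPassReverse / http://localhost:10000/
        </VirtualHost>

        # enable and load it
        sudo a2enmod proxy proxy_http
        sudo a2ensite sub.example.com
        sudo /etc/init.d/apache2 reload

    Because the proxy answers on the subdomain itself, the browser address bar never sees server B's IP or the forwarded port.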

  • How to get a new-pssession in PowerShell to talk to my ICS-connected laptop for Remoting

    - by Scott Bilas
    If I have my laptop on the LAN, then PowerShell remoting works fine from my workstation to the laptop. However, the LAN is wireless, so sometimes I will connect on a wire to my workstation. It has two ethernet ports, so I have the secondary one wired up to share to the laptop using Win7's Internet Connection Sharing. (Btw, I know that avoiding ICS would solve the problem, but that's not an option right now.)

    So my question is: what magic registry bits or command line options do I need to flip to get remoting to work to my laptop through ICS? Here's what happens when I try it:

        new-pssession -computername 192.168.137.161

        [192.168.137.161] Connecting to remote server failed with the following error message : The WinRM client cannot
        process the request. Default authentication may be used with an IP address under the following conditions: the
        transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided.
        Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated.
        For more information on how to set TrustedHosts run the following command: winrm help config.
        For more information, see the about_Remote_Troubleshooting Help topic.
            + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed

    I'm having a hard time understanding the documentation for PowerShell and WinRM. I've tried messing with allowing ports in the firewall and setting TrustedHosts to * on my workstation (I don't think this is a good idea on the laptop). I have no idea where to go from here and would appreciate any help.
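
    The error message itself points at the usual workaround: when the target is addressed by IP over HTTP, the client side has to either trust that host explicitly or use HTTPS, and explicit credentials must be supplied. A minimal sketch, run on the workstation (the machine initiating the session):

        rem Limit TrustedHosts to just the laptop's ICS address instead of *
        winrm set winrm/config/client @{TrustedHosts="192.168.137.161"}

        rem Then test the connection from PowerShell with explicit credentials
        powershell -command "New-PSSession -ComputerName 192.168.137.161 -Credential (Get-Credential)"

    Scoping TrustedHosts to the single ICS address keeps the exposure smaller than the * wildcard already tried on the workstation.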

  • Is Subversion (SVN) supported on Ubuntu 10.04 LTS 32bit?

    - by Chad
    I've set up Subversion on Ubuntu 10.04 but can't get authentication to work. I believe all my config files are set up correctly; however, I keep getting prompted for credentials on an svn checkout, as if there is an issue with apache2 talking to svnserve. If I allow anonymous access, checkout works fine. Does anybody know if there is a known issue with Subversion and 10.04, or see an error in my configuration? Below is my configuration:

        # fresh install of Ubuntu 10.04 LTS 32bit
        sudo apt-get install apache2 apache2-utils -y
        sudo apt-get install subversion libapache2-svn subversion-tools -y
        sudo mkdir /svn
        sudo svnadmin create /svn/DataTeam
        sudo svnadmin create /svn/ReportingTeam

        # Set up the svn config file
        sudo vi /etc/apache2/mods-available/dav_svn.conf
        # replace file with the following.
        <Location /svn>
            DAV svn
            SVNParentPath /svn/
            AuthType Basic
            AuthName "Subversion Server"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
            AuthzSVNAccessFile /etc/apache2/svn_acl
        </Location>

        sudo touch /etc/apache2/svn_acl
        # replace file with the following.
        [groups]
        dba_group = tom, jerry
        report_group = tom

        [DataTeam:/]
        @dba_group = rw

        [ReportingTeam:/]
        @report_group = rw

        # Start/stop subversion automatically
        sudo /etc/init.d/apache2 restart
        cd /etc/init.d/
        sudo touch subversion
        sudo cat 'svnserve -d -r /svn' > svnserve
        sudo cat '/etc/init.d/apache2 restart' >> svnserve
        sudo chmod +x svnserve
        sudo update-rc.d svnserve defaults

        # Add svn users
        sudo htpasswd -cpb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -pb /etc/apache2/dav_svn.passwd jerry jerry

        # Test by performing a checkout
        sudo svnserve -d -r /svn
        sudo /etc/init.d/apache2 restart
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam
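
    One hedged observation, not a confirmed diagnosis: a checkout over http:// is handled entirely by Apache/mod_dav_svn, so svnserve isn't involved at all, and an endless credential prompt usually means Apache is rejecting the password (HTTP 401). The users above were created with htpasswd -p, i.e. plaintext passwords, which Apache's basic authentication does not accept on Linux; recreating them with hashed passwords is worth a try:

        # Recreate the password file with MD5-hashed entries (-c only on the first user)
        sudo htpasswd -cmb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -mb /etc/apache2/dav_svn.passwd jerry jerry
        sudo /etc/init.d/apache2 restart

        # Then watch the auth outcome while repeating the checkout
        tail -f /var/log/apache2/error.log
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam

    As a side note, `sudo cat 'svnserve -d -r /svn' > svnserve` tries to cat a file named "svnserve -d -r /svn"; `echo` piped through `sudo tee` is probably what was intended when building that init script.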

  • Is it better to get a combined ADSL Modem+Wi-Fi Router or a separate ADSL Modem and Wi-Fi Router?

    - by vikas devde
    I have an ADSL2 modem which I got from my service provider, and now I want to set up wireless (Wi-Fi) at home. I went to a shop, where I learned that there are routers with a built-in modem, but they are priced a little higher than the Wi-Fi-only routers. It seems obvious that I should go for a Wi-Fi-only one, as I already have a modem. My question is: is there any real difference between an ADSL modem+router combo and a plain router? My thinking is that with a separate modem and router, the modem modulates/demodulates the signal and then the router generates the wireless signal, so the conversion time is doubled, whereas a combined ADSL+router converts the signal to wireless directly and saves that step, so the overall speed would increase a little. (This reasoning might be wrong.) What would you suggest? Should I replace my current modem with an ADSL+router combo, or keep my modem and buy a Wi-Fi-only router? Please explain the difference and suggest which option to go with, and also which brand of router to consider.

  • SQL 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. The instances are running on different nodes of the same cluster, although I have tried having them both on the same node with identical results. SQL Mail operates perfectly on both instances. Database Mail operates perfectly on the newly installed instance. On the upgraded instance, Database Mail does not send any mail. Of course, I am not positive that the fact this instance was upgraded has anything to do with the issue, but it might. The configuration of my Database Mail profile and account looks identical to my functioning instance. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both Database Mail and SQL Mail to no avail. Both instances use the same SMTP server with the same authentication (a domain account for the database engine). All messages sent via sp_send_dbmail and those sent via the 'test email' option are visible in the sysmail_allitems queue and remain there as 'unsent'; the send_status eventually changes to 'failed'. The only messages in sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain\myuser' and 'activation successful'. Selecting from the ExternalMailQueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, bouncing the entire instance, and moving the other functioning instance to the other node in the cluster. Any thoughts? Thanks.
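
    A couple of checks that are often suggested for a Database Mail queue stuck in 'unsent' (offered as a sketch, not a confirmed fix): restart the Database Mail queue and confirm that the external process can actually reach the SMTP host from the node that owns the instance. From a command prompt on the server, substituting your instance name:

        sqlcmd -S SERVER\UPGRADEDINSTANCE -Q "EXEC msdb.dbo.sysmail_stop_sp; EXEC msdb.dbo.sysmail_start_sp;"
        sqlcmd -S SERVER\UPGRADEDINSTANCE -Q "EXEC msdb.dbo.sysmail_help_status_sp; EXEC msdb.dbo.sysmail_help_queue_sp @queue_type = 'mail';"

        rem Verify plain SMTP reachability from the node that currently owns the instance
        telnet yoursmtpserver 25

    On an instance upgraded from SQL 2000 it is also worth confirming that Service Broker is enabled in msdb (SELECT is_broker_enabled FROM sys.databases WHERE name = 'msdb'), since Database Mail depends on it.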

  • Hyper-V Virtual Machine won't respond over network

    - by Brad Gignac
    Recently, one of our Hyper-V virtual machines has periodically stopped responding over the network. It seems to be happening every few days, and it occasionally happens up to several times a day. I am by no means a sysadmin, so any direction you guys could provide would be very welcome. I've included everything I know to include below; if you need any additional information, I'll be glad to add it.

        - I can connect through the Hyper-V console.
        - I can't connect to network shares, IIS web apps, RDP, or ping.
        - Memory usage seems to be normal (3 of 4 GB).
        - Processor usage seems low.

    We don't know the exact time the server goes down, but the following error appears consistently around the time it goes down:

        Error 5719, NETLOGON
        This computer was not able to set up a secure session with a domain controller in domain *** due to the following:
        There are currently no logon servers available to service the logon request.
        This may lead to authentication problems. Make sure that this computer is connected to the network.
        If this problem persists, please contact your domain administrator.
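
    When the VM is in that state, it can help to test the secure channel and basic name resolution from inside the guest via the Hyper-V console (a suggested diagnostic, not from the original post):

        rem Does the guest still think it has a valid secure channel to a DC?
        nltest /sc_query:YOURDOMAIN

        rem Can it find a DC at all right now?
        nltest /dsgetdc:YOURDOMAIN
        ipconfig /all

        rem If the channel is broken but the network is up, resetting it sometimes restores access
        nltest /sc_reset:YOURDOMAIN

    If those fail while the console still works, the problem is more likely in the virtual NIC/switch layer (legacy vs. synthetic adapter, VLAN or offload settings on the host) than in the guest's own services.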

  • Automounting Active Directory home drives on a Linux server on login

    - by Ethan
    I've got a CentOS 5.7 box authenticating against Active Directory through PBIS Open (the new LikeWise Open), which works well. Now I'm trying to get the server to automount the user's AD home directory, located at //ad.server.dom/shares/home directories (yes, there's a space in the path; I didn't set this up). Each user has a directory in there with the same name as the user. I've tried to get pam_mount working, but it has a series of issues on RedHat and friends, and I can't seem to get it going. The directory does need to be automounted for the server to perform its role. My reading on automount seems to suggest that there's no way to get it to do its thing with authentication, though I'm happy to be proved wrong. I've looked at this resource, but it requires RedHat (thus CentOS) version 6 or higher, and newer packages than I have. I can manually (as root) mount the AD directory using the command

        mount.cifs "//ad.server.dom/Shares/home directories/testuser" /home/local/AD/testuser/nfs_mount/ -o username=testuser

    and when I log in as testuser, I can see all of the sample files in the nfs_share directory. Any tips towards the right direction would be highly appreciated. This is going to be on a server at a college, so it needs to be fairly stable, and would lead towards more Linux adoption there.
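
    Since the mount.cifs command already works, one direction worth sketching (an assumption-laden sketch, not a tested recipe) is a wildcard autofs map that mounts whichever user's directory is requested. The catch is the credential: autofs knows nothing about the logging-in user's password, so this only works with a service account that can read everyone's home share, and the space in the share path may need extra escaping on your autofs version:

        # /etc/auto.master - mount user dirs on demand under /home/local/AD
        /home/local/AD   /etc/auto.ad   --timeout=300

        # /etc/auto.ad - '&' is replaced with the requested directory name (the username);
        # the credentials file and the backslash-escaped space are assumptions to verify
        *   -fstype=cifs,credentials=/etc/ad-homes.cred   "://ad.server.dom/Shares/home\ directories/&"

        # /etc/ad-homes.cred (root-readable only): a service account allowed to read the home shares
        # username=svc_homes
        # password=********

    If per-user credentials are a hard requirement, pam_mount (despite its rough edges on EL5) or an upgrade to EL6 for the krb5/multiuser cifs options are the usual alternatives.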

  • Adding new SPNs to existing service ids

    - by jmh
    We have a Tomcat server using spring-security Kerberos to authenticate users to the webpage against Active Directory. There are around 25 domain controllers. The site has two CNAME-based DNS aliases. The site currently has one service ID with SPNs registered for the DNS A record as well as each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache Kerberos tickets (http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html):

        The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old tickets
        must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the endpoint still
        has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device). During testing, you
        should purge tickets before each authentication request.

    Description of the "klist" program used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx

    So if each of the clients (users running Windows) who connect to my web server has Kerberos tickets that become invalid as soon as I update the SPNs or passwords, how do I ensure changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.
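
    For what it's worth, adding an SPN is much less disruptive than moving or deleting one, because tickets already issued for the existing SPNs stay usable until they expire on their own. A hedged sketch of the low-risk path, with placeholder names:

        rem Add the new alias SPNs to the existing service account (-S refuses duplicates; use -A on older setspn versions)
        setspn -S HTTP/newalias.example.com DOMAIN\svc-tomcat
        setspn -S HTTP/newalias DOMAIN\svc-tomcat

        rem Review everything registered on that account before and after
        setspn -L DOMAIN\svc-tomcat

        rem On a test client, drop cached tickets instead of waiting for expiry (klist ships with Vista/2008 and later)
        klist purge

    The operations to avoid outside a maintenance window are deleting SPNs that are still in use and resetting the service account password, since tickets issued against the old key may stop being accepted immediately.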

  • IIS 6 getting "Page Not Found" after applying SSL

    - by Dominic Zukiewicz
    I am setting up SSL certificates on a development environment using IIS 6 on Windows Server 2003. I have a directory called login with a single page, login.asp, which I would like viewable only over SSL. Before installing or applying SSL permissions, the page is viewable through a browser: I can browse the page, it redirects, and all is good. However, Basic Authentication is Base64 encoded, so I want to secure the traffic from this page only. I have created a dummy certificate with makecert, installed it, and added it to IIS. IIS is happy that it is trusted. I have selected the login directory and its child files and set "Require SSL channel". When I refresh my browser on login/login.asp I get a "404: Page Not Found" in IE 8. So there are two issues here:

        1. The page is now unviewable when using HTTPS. Users must manually type the HTTPS URL (a minor inconvenience for now).
        2. If I turn off "Require SSL Channel" in IIS, it works again.

    What part of the process am I missing? I have followed several tutorials on installing SSL certificates but still come across this barrier.
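
    Two hedged things to check, since a plain "Require secure channel" violation on IIS 6 normally produces a 403.4 rather than a 404. First, confirm the site actually has an SSL binding on port 443 for the certificate you installed; without one, https:// requests have nowhere to land. Second, confirm the page is reachable over https before the restriction is applied:

        rem List the site's SSL binding (site ID 1 is an assumption; adjust for your site)
        cd /d %SystemDrive%\Inetpub\AdminScripts
        cscript adsutil.vbs get W3SVC/1/SecureBindings

        rem A quick functional test from the server itself
        start https://localhost/login/login.asp

    If SecureBindings comes back empty, assign the certificate and a 443 binding in the site's Directory Security / Web Site tabs, retest over https, and only then re-enable "Require secure channel".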

  • Not able to register SIP user on Red5 server using red5phone

    - by sunil221
    I start Red5, and then I start red5phone and try to register a SIP user. The details I provide are:

        username = 999999
        password = ****
        ip = asteriskserverip

    and I got:

        Registering contact -- sip:[email protected]:5072

    where the right contact should be sip:99999@asteriskserverip. This is the log:

        SipUserAgent - listen -> Init...
        Red5SIP register
        [SIPUser] register
        RegisterAgent: Registering contact <sip:[email protected]:5072> (it expires in 3600 secs)
        RegisterAgent: Registration failure: No response from server.
        [SIPUser] SIP Registration failure Timeout
        RegisterAgent: Failed Registration stop try.
        Red5SIP Client leaving app 1
        Red5SIP Client closing client 35C1B495-E084-1651-0C40-559437CAC7E1
        Release ports: sip port 5072 audio port 3002
        Release port number:5072
        Release port number:3002
        [SIPUser] close1
        [SIPUser] hangup
        [SIPUser] closeStreams
        RTMPUser stopStream
        [SIPUser] unregister
        RegisterAgent: Unregistering contact <sip:[email protected]:5072>
        SipUserAgent - hangup -> Init...
        SipUserAgent - closeMediaApplication -> Init...
        [SIPUser] provider.halt
        RegisterAgent: Registration failure: No response from server.
        [SIPUser] SIP Registration failure Timeout

    Please let me know if I am doing anything wrong.

    Regards, Sunil
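
    A "Registration failure: No response from server" from the register agent usually means the REGISTER never reached Asterisk or the reply never came back, rather than a wrong Contact header (the Contact is normally the client's own address and port, here UDP 5072). A suggested way to confirm, run on the Asterisk box (interface and Asterisk version are assumptions):

        # Watch SIP traffic on the standard port while red5phone retries
        ngrep -d any -W byline port 5060

        # Or, from the Asterisk console on 1.6+, turn on SIP debugging
        asterisk -rx "sip set debug on"

    If no REGISTER shows up at all, look at firewalls/NAT between the Red5 host and Asterisk and at which local IP red5phone binds; if it does show up but Asterisk replies to the wrong address, that points at NAT handling (externip/localnet in sip.conf).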

  • Windows Explorer slow to open networked computer, fast to navigate once opened

    - by Scott Noyes
    I open Windows Explorer and enter an IP for a computer on my home network (\\192.168.1.101). It takes 30 seconds or more to present the list of shared folders. It does not appear to be an initial handshaking/authentication thing; even if I allow the view to load and then immediately load the same again, it is always slow. Once the folders appear, navigating through them and opening files is fast. Also, navigating directly to a folder (\\192.168.1.101\My Music) is fast, even if it's the first connection since a restart. Using \\computerName instead of the IP address gives exactly the same results. Pings return in 1ms. net view \\computerName (or \\ipAddress) returns the list of shared folders quickly. This makes me suspect an Explorer issue rather than a network issue. Suspecting that the remote computer was being automatically indexed or something, I went into Tools > Folder Options > View and unchecked "Automatically search for network folders and printers," but that made no difference. De-selecting the "Folders" icon near the address bar makes no difference. Adding the IP address and computer name to the hosts file makes no difference. Both computers involved are laptops running Windows XP. Both have WiFi and cable adapters; mine is not connected via cable. The result is the same whether the target is plugged in to the cable or not (although the IP address changes - 192.168.1.101 over cable, 192.168.1.103 over WiFi). We are using DHCP assigned by the router.
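
    A frequently cited culprit for exactly this symptom on XP is Explorer probing the remote machine for Scheduled Tasks (and printers) before it will show the share list. A hedged sketch of the usual workaround; the GUID below is the commonly cited "Scheduled Tasks" remote namespace entry, so export the key first in case you want it back:

        reg export "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\RemoteComputer\NameSpace" namespace-backup.reg

        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\RemoteComputer\NameSpace\{D6277990-4C6A-11CF-8D87-00AA0060F5BF}" /f

    Log off and back on (or restart Explorer) on the machine doing the browsing, then try \\192.168.1.101 again; if the listing is still slow, restore the backup with reg import namespace-backup.reg.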

  • What command should be used to connect via SSH through squid proxy?

    - by Raul Cardoso
    I have set up a Squid HTTP proxy (on CentOS) and intended to use it also for SSH connections. I managed to configure PuTTY (on a Windows client) to use this proxy when connecting over SSH, and confirmed on the target host that the SSH connection was coming from the proxy server's IP instead of the Windows client's IP. I used:

        targethost:22    for ssh
        proxyserv:3128   for the proxy (along with proxy credentials)

    I'm now having problems connecting to the target host using Ubuntu and the same proxy server. I have tried the following:

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22
        Enter proxy authentication password for test@proxyserv:
        SSH-2.0-OpenSSH_6.2p2 Ubuntu-6

    It hangs on the last line, expecting some input. All attempts resulted in a "Protocol mismatch." error. PuTTY successfully connects to the HTTP proxy, sends credentials, and shows me the SSH login right away.

        - How do I do (with commands on Ubuntu) the same thing PuTTY does?
        - Is there any other way than the connect-proxy command to do this?

    Edit: Also tried the following with the same result ("Protocol mismatch"):

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22 ssh -l myshel_login

    Thanks in advance.

    Edit: Solution details (thanks to NickW for pointing the right way). I installed corkscrew and added this to ssh_config:

        Host targethost
            ProxyCommand corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass

    created the proxypass file:

        login:password

    restarted ssh and used a simple ssh command:

        ssh mylogin@targethost

    (the ssh password was asked for as usual).
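
    The root of the "Protocol mismatch" error is that connect-proxy was being run as a standalone command, so the raw SSH banner landed on the terminal instead of being fed to an ssh process. Both connect-proxy and corkscrew are meant to be used as a ProxyCommand; a one-off equivalent of the config above, without touching ssh_config, would be (a sketch, with the same assumed file layout):

        ssh -o ProxyCommand='corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass' mylogin@targethost

        # or with connect-proxy, prompting for the proxy password
        ssh -o ProxyCommand='connect-proxy -H test@proxyserv:3128 %h %p' mylogin@targethost

    Keeping the Host block in ~/.ssh/config (per user) rather than /etc/ssh/ssh_config is also an option if only one account needs the proxy.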

  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in. At the moment I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives, so that SSH will only try one key file, which works. Unfortunately, this stops working as soon as the original key files aren't available anymore. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means the original files have to be available so that SSH can extract the public key. There are two problems with this:

        - it stops working as soon as I unplug the flash drive holding the keys
        - it renders agent forwarding useless, as the key files aren't available on the remote host

    Of course, I could extract the public keys from my identity files and store them on my computer, and on every remote computer I usually log into. This doesn't look like a desirable solution, though. What I need is a way to select an identity from ssh-agent by file name, so that I can easily select the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I've SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
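
    For what it's worth, the "extract the public keys" idea the post dismisses is how OpenSSH generally expects this to work: IdentityFile may point at a .pub file, and with IdentitiesOnly yes the agent is then asked for only that one key, with no private key file needed on the machine you're sitting at. A small sketch with placeholder names:

        # One-time, wherever the private keys are available: produce the public halves
        ssh-keygen -y -f /media/usbkey/id_host_a > ~/.ssh/id_host_a.pub

        # ~/.ssh/config on the laptop (and on remote hosts you forward the agent to)
        Host hosta
            HostName a.example.com
            User me
            IdentitiesOnly yes
            IdentityFile ~/.ssh/id_host_a.pub

    Since only public keys are stored, copying them to remote hosts for agent forwarding carries no secret material, and the Host alias effectively gives each key a nickname.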

  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
    Last week we decided to add some SunOS machines (uname -a: SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) to our running munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without supervision of the service provider. Adding them to munin was fairly easy by writing a small socket service (if anyone is interested, I put it up on github: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn). Yesterday I implemented/adapted the required plugins for our machines, and here the questions start.

    First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a munin plugin from that gives me a pretty uninformative graph. Compare this to the default plugin for Linux nodes, which has a lot more detail; most importantly, it shows how much memory is actually used by applications. So, first question: Is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)?

    Onto the next puzzle: looking at the graphs, I noticed activity in the "paging in/out" graphs, even though the memory graph still shows unused memory. Upon further investigation, I found out that df reports /tmp as mounted on swap. Reading around on the web, I understood that df displays it as swap but it is in fact mounted as tmpfs. Now I don't know if this explains the swap activity. The default munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values, and I already find it strange that it uses the cpu_stat module. So maybe I simply misinterpret the "paging" graphs? Second question: Do the paging graphs indicate that parts of memory are paged to disk? Or is the activity caused by file operations in /tmp?
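
    On Solaris 10 the usual sources for this kind of breakdown (suggestions, not from the original post) are the kernel memory statistics rather than vmstat's single "free" column:

        # Kernel vs. anonymous (application) vs. page-cache split; needs root
        echo "::memstat" | mdb -k

        # Raw page counters that a munin plugin can sample cheaply
        kstat -p -n system_pages

        # Paging broken down by type: executable, anonymous and file-system pages.
        # File-page activity (fpi/fpo) with little or no anonymous paging generally means
        # file I/O such as tmpfs /tmp churn, not memory pressure.
        vmstat -p 5 3

    ::memstat is relatively expensive to run, so for a plugin that fires every few minutes the kstat counters are the friendlier option, with mdb kept for spot checks.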

  • PDU management interface has low availability - product flaw or isolated issue

    - by DeanB
    Our colocation provider has supplied us with APC AP7932 switched 0U PDUs as part of several cabinets they provide us. We have had a lot of trouble with the network management aspect of these PDUs, which I'll describe below. We are moving to cage space in the same datacenter and plan to provide our own PDUs, so I'd like to determine which enterprise-grade PDUs have been reliable performers from a remote management perspective.

    Our colo-provided PDUs are configured to support management via an SSL web UI and via telnet. We updated the firmware on all of them to the current version as of November 2011. They respond to pings reliably, and we have no reason to suspect a network-layer issue. However, we experience frequent hangs, timeouts, disconnects, and general unavailability of the embedded management host on all of the PDUs. We occasionally have to restart the microcontroller on the PDU to recover from what appears to be an occasional hard fault. The outlets stay powered (thankfully), but the management aspect is so unreliable that it has become an ops liability - we can't be confident that we could get into the PDU to power-cycle a host if we needed to. We have 3 PDUs that all exhibit identical behavior.

    There are many manufacturers of enterprise-grade 0U switched PDUs, all with comparable features. If I looked at the datasheet for our current PDUs, they would appear to be a good fit - only with the benefit of suffering through using them do we know to avoid them. I'd like to avoid picking a PDU that looks fine on paper but has similar reliability issues. What has been others' experience with switched PDUs? Is this level of flakiness normal?

  • how to authenticate once for multiple servers, using only apache configs?

    - by Wang
    My problem is that I have a number of prepackaged web apps (a print system, a wiki, a bug tracker, an email archive, etc.) running on different Mac OS X Leopard (soon to be Snow Leopard) servers that each need to authenticate users from the internet at large. Right now every server presents an Apache basic authentication prompt, which takes a shared login, but it's apparently enough of an inconvenience to log in repeatedly that people are sending email without checking the wiki or bug tracker or archive. In the case of the bug tracker, a user might need to log in twice - once for Apache if he hasn't used any other protected service on that server, and once for the bug tracker itself so it can distinguish different people.

    Since the only common component to all these apps is Apache 2 itself, does it have any way of authenticating a user once, in some way that will be respected by other servers and various web apps? I looked at http://serverfault.com/questions/32421/how-is-session-stickiness-achieved-across-multiple-web-servers but it sounds like the answer assumes that I get to write my own web app. I also looked at Ian Bicking's blog, but it's four years old and recommends something available only for Apache 1.3, not Apache 2. (Sorry not to hyperlink the second site - apparently I need 10 reputation points.)

    Edit: Shibboleth does what I need, but I should have specified that I'm looking for a really dumb, really simple solution for in-house services that need to handle all of a dozen users, probably not more than three at a time.

  • In spite of correct DNS, Exchange sending to wrong destination server for single outbound domain

    - by beporter
    My company uses an SBS 2003 server and makes use of Exchange to host our own email. We also have a Linux server hosting domains for some of our clients. In order for us to send to those clients, we had internal DNS set up to shadow the client domains and provide "correct" MX records inside our network. For example, public DNS for a domain abc.com might point to 1.2.3.4, but internally we have MX records set up to route mail for abc.com to 172.16.0.4, which is the Linux email server. This setup was entirely functional; this is just back story.

    We've recently moved one of our client domains from our internal Linux server to an external email provider. When we did that, we naturally deleted our internal shadow DNS records so our Exchange server would fetch correct (public) DNS records and route mail out to the new external host. This has had no effect on Exchange, though. Even after rebooting the Exchange server and completely flushing the DNS cache (nslookups on the Exchange machine itself correctly resolve to the new external address), Exchange still attempts to deliver messages for the domain to our internal server! Exchange correctly routes to all other internal and external domains when sending email. Somehow Exchange is trying to deliver to a machine that by all accounts it has no business using for just this one domain. Is there a DNS cache that Exchange uses internally? Is there a way to flush that internal cache? What else could I be missing?

  • SFTP chroot results in broken pipe

    - by Patrick Pruneau
    I have a website that I want to add some restricted access to in a sub-folder. For this, I've decided to use a chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/). For now, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this:

        -rw-r--r--  1 root root   27 2012-02-01 14:23 index.html
        -rw-r--r--  1 root root   21 2012-02-01 14:24 info.php
        drwx------ 15 root root 4096 2012-02-25 00:31 magento

    As you can see, I've chowned the magento folder to root:root because I wanted to jail the user in there - and everything else, by the way. Also, inside the magento folder, I chowned everything to sio2104:magento so they can access what they want. Finally, I've added this to the sshd_config file:

        #Subsystem sftp /usr/lib/openssh/sftp-server
        Subsystem sftp internal-sftp

        Match Group magento
            ChrootDirectory /usr/share/nginx/www/magento
            ForceCommand internal-sftp
            AllowTCPForwarding no
            X11Forwarding no
            PasswordAuthentication yes
        #UsePAM yes

    And the result is... well, I can enter my login and password and it all ends with a "broken pipe" error:

        $ sftp [email protected]
        [....some debug....]
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to 10.20.0.50 ([10.20.0.50]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Write failed: Broken pipe
        Connection closed

    Verbose mode gives nothing to help. Anyone have an idea of what I've done wrong? If I try to log in with ssh or sftp as my personal user, everything works fine.
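
    sshd is strict about the chroot target: every component of the ChrootDirectory path must be owned by root and not writable by group or others, and the user then needs a readable directory inside it. A couple of hedged checks, plus the server-side log that usually states the exact complaint:

        # Show owner/permissions of every path component up to the chroot
        namei -l /usr/share/nginx/www/magento

        # Watch the server log while reproducing the failure; a line such as
        # "fatal: bad ownership or modes for chroot directory ..." names the offending component
        sudo tail -f /var/log/auth.log

    Also note the chroot itself is drwx------ root:root, so even once the chroot succeeds the sio2104 user cannot read anything in it; the usual layout is chmod 755 on the chroot directory (keeping root ownership) with the user-writable content in a subdirectory owned by sio2104:magento.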

  • IIS6: How to troubleshoot a 404 error in an ASP.NET application?

    - by Tomalak
    I have an ASP.NET application on a Windows Server 2003 / IIS 6 box that refuses to run for some reason (it's the Xerox Centre, if that info helps). It has been working flawlessly before on this server. Now, all I get if I try to open the app homepage (http://some.intranet.server/XeroxCentreWareWeb/) is a "404 - File or directory not found" error.

        - The app is configured to run in its own app pool, which runs as Network Service.
        - The Network Service account has read access to the configured directory.
        - If I stop the app pool, I get the expected "Service Unavailable" message, meaning the app and its pool are wired correctly.
        - I tried to track down any file permission issues with procmon - nothing to be seen. There isn't even an access to the web app directory happening when the page loads.

    Interestingly, according to procmon, the web server accesses the 401-2 custom error file (Logon failed due to server configuration) first, but then decides to send the 404 down to the client.

    EDIT: The app runs with Windows-integrated authentication. Regular users have access to the app directory as well (I would have noticed file system "ACCESS DENIED" messages in procmon if there had been any). This makes me think that there is some kind of weird permission problem that occurs even before the application files are being accessed. I just have no idea where to look. I've tried running the app pool as Local System for a test, but to no avail. What else could I check in this case?

  • How to stop my VPS from picking up ARP reqs it is not supposed to?

    - by Charles Stewart
    Machine: Xen 3.0 image running stable Debian Linux 2.6.18, pretty vanilla. My VPS provider asks me to deal with some trouble my image is causing, namely handling IP addresses it is not supposed to:

        The problem is that your server seems to be configured to use IPs that have not been appointed to you.
        Your server responds to ARP requests for the IPs 81.171.111.219 and 81.171.111.218. But you are not
        allowed to use those.

    Not explicitly, as far as I can tell! At least, nothing under /etc or /var/tmp mentions these IP addresses. But arp -v says something I can't make sense of:

        Address       HWtype  HWaddress          Flags Mask  Iface
        81.171.111.1  ether   00:0C:DB:E3:80:00  C            eth0
        Entries: 1    Skipped: 0    Found: 1

    What is it listening to? The possibilities seem to be:

        1. It's not my fault: my VPS provider has overlooked something. What might that be?
        2. 81.171.111.1 means I'm happily answering ARP requests that I shouldn't be: how do I change this?

    In any case, what does this mean? Am I simply looking in completely the wrong place for information on what my image is doing? Where should I be looking?
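
    Worth noting that arp -v only shows the local ARP cache (here, the gateway 81.171.111.1), which says nothing about which requests the box answers. A few hedged checks for why a Linux/Xen guest would answer ARP for addresses it doesn't own:

        # Any extra addresses or aliases actually configured? (covers eth0:1-style aliases too)
        ip addr show

        # Is the kernel proxy-ARPing on anyone's behalf?
        sysctl net.ipv4.conf.all.proxy_arp net.ipv4.conf.eth0.proxy_arp

        # Watch who really answers: run on the guest while another host ARPs for .218/.219
        tcpdump -n -e -i eth0 arp

    If ip addr shows only your own address and proxy_arp is 0 everywhere, the replies may be coming from the dom0/bridge rather than your image, and the tcpdump capture (showing whether your MAC answers or not) is good evidence to hand back to the provider.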

  • Filezilla client unable to get directory listing from Filezilla Server (Windows)

    - by sestocker
    I've set up a self-signed certificate in FileZilla Server and enabled FTP over SSL/TLS. When I connect from the FileZilla client, I am able to authenticate but cannot get a directory listing:

        Status:    Connecting to MY_SERVER_IP:21...
        Status:    Connection established, waiting for welcome message...
        Response:  220-FileZilla Server version 0.9.39 beta
        Response:  220-written by Tim Kosse ([email protected])
        Response:  220 Please visit http://sourceforge.net/projects/filezilla/
        Command:   AUTH TLS
        Response:  234 Using authentication type TLS
        Status:    Initializing TLS...
        Status:    Verifying certificate...
        Command:   USER MYUSER
        Status:    TLS/SSL connection established.
        Response:  331 Password required for MYUSER
        Command:   PASS ********
        Response:  230 Logged on
        Command:   PBSZ 0
        Response:  200 PBSZ=0
        Command:   PROT P
        Response:  200 Protection level set to P
        Status:    Connected
        Status:    Retrieving directory listing...
        Command:   PWD
        Response:  257 "/" is current directory.
        Command:   TYPE I
        Response:  200 Type set to I
        Command:   PORT 10,10,25,85,219,172
        Response:  200 Port command successful
        Command:   MLSD
        Response:  150 Opening data channel for directory list.
        Response:  425 Can't open data connection.
        Error:     Failed to retrieve directory listing

    I have ports 21 and 50001 through 50005 open on the firewall. We are migrating servers - the 50001-50005 range is one of the things that helped get FTPS working on the old server. I'm not sure this installation uses the same ports? What else could be the problem?

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sys admin. With that out of the way: my client has a CentOS server (6.2) that is only serving a single Magento site (and the associated MySQL server), and it is frequently running out of memory, despite the site currently being open to only 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. It seems that there are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this:

        Apr  7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7)

    Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. Four days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question, which describes the same issue, but in my case a fresh DHCP lease does not ever seem to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
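
    The dhclient chatter and the memory exhaustion are probably separate questions, and the second one is easier to pin down directly than by log-reading (suggested checks, not from the original post):

        # Who is actually holding the memory right now?
        free -m
        ps aux --sort=-rss | head -n 15

        # Was something killed by the kernel when it locked up?
        grep -i 'out of memory\|oom-killer' /var/log/messages

        # How often dhclient renews is driven by the lease the provider hands out
        grep -i 'renew\|lease' /var/lib/dhclient/dhclient*.leases | tail -n 20

    On a small VPS the usual suspects are Apache/PHP workers (MaxClients set too high for the available RAM) and MySQL buffer settings rather than dhclient, whose resident size the ps listing will show is tiny; the 15-second DHCPREQUESTs are worth raising with the provider, but more as a DHCP-server oddity than as the memory leak.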
