Search Results

Search found 21644 results on 866 pages for 'connection speed'.

  • NSClient++ NRPE issues

    - by Kyle
    I have had NSClient++ working with Nagios for a while now. Recently I started testing Nagwin, just to see how it would work, out of pure curiosity. I stopped checking a test server with my main Nagios config, set NSClient++ to NRPE mode, and pointed Nagwin at it. It worked great for a few hours, then suddenly I started seeing "UNKNOWN: No Handler for that command."

    I figured it had to be Nagwin's fault since it's so new, so I unloaded NRPEListener.dll and returned the server to being monitored by check_nt. However, check_nt no longer works either: my main Nagios server returns timeout errors and is unable to connect at all. My Nagwin server can still connect; the server just doesn't know how to handle the check_nrpe commands, even though it did, with no changes, a few hours earlier.

    I have been working on this for a day now and am fairly certain NSClient++ is to blame. My Nagwin box has successfully stayed connected to a similar server throughout the night without any issues, and my main Nagios config is not having any problems at all. I have been able to successfully switch another server between being monitored by Nagios and Nagwin, without any problems, by simply loading and unloading NRPEListener.dll. I have tried uninstalling NSClient++ and reinstalling with a fresh configuration, but I still receive the errors.

    As of now the firewall is off on the server, NSClient++ is set up to accept connections from any server, there is no password, SSL is turned off, and the NRPE module is loaded. Any ideas would be appreciated; I am not an advanced Nagios user, but I do know my way around it and can easily break it down and set it up again. I should also add that in test mode, NSClient++ is unable to handle check_nrpe commands either.
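    A quick way to take Nagios and Nagwin out of the loop is to exercise the agent directly with the check_nrpe plugin from either monitoring box. This is only a sketch; the plugin path, target IP and command name are placeholders, and -n matches the SSL-off setting on the agent:

        # ask the agent for its NRPE version first, then try the failing command
        /usr/lib/nagios/plugins/check_nrpe -H 192.168.1.50 -n
        /usr/lib/nagios/plugins/check_nrpe -H 192.168.1.50 -n -c check_cpu

    If the version query succeeds but the command fails, the listener is fine and the problem is the command definitions; if both fail, the NRPE listener itself isn't answering.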

  • Cannot connect to my EC2 instance because of "Permission denied (publickey)"

    - by Burak
    In the AWS console, I saw that my key pair had been deleted. I created a new one with the same name. Then I tried to connect with:

        ssh -v -i sohoKey.pem ec2-user@******.compute-1.amazonaws.com

    Here's the output:

        macs-MacBook-Air:~ mac$ ssh -v -i sohoKey.pem ec2-user@******.compute-1.amazonaws.com
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to ********.compute-1.amazonaws.com [*****] port 22.
        debug1: Connection established.
        debug1: identity file sohoKey.pem type -1
        debug1: identity file sohoKey.pem-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '*******.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /Users/mac/.ssh/known_hosts:3
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: sohoKey.pem
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: sohoKey.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Update: I detached my old EBS volume and attached it to the new instance. Now, how can I mount it?
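    Worth noting: creating a new key pair with the same name in the console only registers a new key with AWS; it does not replace the authorized key on an instance that is already running, which would explain the old instance rejecting the new .pem. For the update, a minimal mount sketch (the device name /dev/xvdf1 and the mount point are guesses; check dmesg or the console's attachment point for the real device):

        sudo mkdir -p /mnt/olddisk
        sudo mount /dev/xvdf1 /mnt/olddisk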

  • run script as another user from a root script with no tty stdin

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root and has no tty on standard in. Below are my four different attempts, all of which fail:

    1. setuidgid:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    The last line is not good enough, since training_command needs the environment for the training user to be set.

    2. su:

        su - training -c 'training_command'

    This looks like the right approach (http://serverfault.com/questions/44400/run-a-shell-script-as-a-different-user) but gives 'standard in must be a tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers (a la http://superuser.com/questions/119376/bash-su-script-giving-an-error-standard-in-must-be-a-tty) but I am reluctant and unsure of the consequences.

    3. runuser:

        runuser - training -c 'training_command'

    This one gives runuser: cannot set groups: Connection refused. I found no explanation of, or resolution to, this error.

    4. ssh:

        ssh -p100 training@localhost 'source $HOME/.bashrc; training_command'

    This one is more of a joke, to show desperation. Even this one fails with Host key verification failed (the host key IS in known_hosts, etc).

    Note: all of 2, 3 and 4 work as they should if I run the wrapper script from a root shell. The problems only occur when the system service monitor (daemontools) launches it (no tty terminal, I guess). I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice. (This has also been posted on superuser: http://superuser.com/questions/434235/script-calling-script-as-other-user)
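    Since attempt 1 already works without a tty and only lacks the environment, one hedged variation is to keep setuidgid but build the environment explicitly with env. A minimal sketch; the paths and variables are assumptions that would need to match the real training account:

        #!/bin/bash
        # daemontools-friendly: no tty needed, no PAM involved
        exec >> /var/log/training_service.log 2>&1
        # env - starts from a clean environment; set what training_command needs
        exec env - HOME=/home/training USER=training LOGNAME=training \
            PATH=/usr/local/bin:/usr/bin:/bin \
            setuidgid training training_command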

  • Wireless AAA for a small, bandwidth-limited hotel.

    - by Anthony Hiscox
    We (the tech I work with and myself) live in a remote northern town where Internet access is somewhat of a luxury and bandwidth is quite limited. Here, overage charges ranging from a few hundred to a few thousand dollars a month are not uncommon. I myself incur regular monthly charges just through my regular Internet usage at home (I am allowed 10G for $60CAD!). As part of my work, I have found myself involved with several hotels that are feeling this. I know that I can come up with something to solve this problem, but I am relatively new to system administration and I don't want my dreams to overcome reality. So, I pass these ideas on to you, those with much more experience than I, in hopes you will share some of your thoughts and concerns. Requirements:

    - This system must be cost effective; yes, the charges are high here, but the trust in technology is the lowest I've ever seen.
    - Must be capable of helping clients reduce their usage (squid).
    - Allow a limited (throughput and total usage) amount of free Internet, as this is often franchise policy.
    - Allow a user to track their bandwidth usage.
    - Allow (optional) higher speed and/or usage for an additional charge. This fee can be collected at the front desk on checkout and should not require the use of PayPal or a credit card.
    - Unfortunately some franchises have ridiculous policies that require the use of a third-party remote service to authenticate guests to your network. This means WPA is out, and it also means that I do not auth before Internet usage; that will be their job. However, I do require the ABILITY to perform authentication for Internet access if a hotel does not have this policy. I will still have to track bandwidth (under a guest account by default) and provide the same limiting; however, the guest will often require completely 'unlimited' access, in terms of existence, not throughput.
    - Provide firewalling capabilities for hotels that have nothing, with Office and Guest network segregation (some of these guys are running their office on the guest network, with no encryption and a simple TOS to get on!).
    - Prevent guests from connecting to other guests, but provide a means to allow this deliberately. E.g. each guest connects to a page and allows the other guest; this writes an iptables rule (with python-netfilter) and allows two rooms to play a game, for instance (see the sketch after this post).

    My thoughts on how to implement this: one decent box (we'll call it a router now) with a lot of RAM and three NICs:

    1. Internet
    2. Office
    3. Guests (APs + in-room Ethernet)

    Router firewall rules: guests can talk to the router only, through which they are routed to where they need to go, including Internet services. Office can be used to bridge Office to Internet if an existing solution is not in place; otherwise, it simply serves a network-accessible web (webmin + python-webmin?) interface.

    Router software: OpenVZ provides virtualization for a few services I don't really trust: Squid, FreeRADIUS and Apache. The only service directly accessible to guests is Apache. Apache has mod_wsgi and Django, because I can write quickly using Django and my needs are low. It also potentially has the FreeRADIUS mod, but there seem to be some caveats with that. Firewall rules are handled on the router with iptables. Webmin (or maybe a custom Django app) provides abstracted control over any features the staff may need to access. Python, if you haven't guessed, is the language I feel most comfortable in, and I use it for almost everything.

    And finally: has this been done, is it an overly massive project not worth taking on for one guy, and/or are there some tools I'm missing that could make my life easier? For the record, I am fairly good with Python, but not very familiar with many other languages (I can struggle through PHP; it's a cosmetic issue there). I am also an avid Linux user, comfortable with config files and the command line. Thank you for your time; I look forward to reading your responses.

    Edit: My apologies if this is not a Q&A in the sense that some were expecting; I'm just looking for ideas and to make sure I'm not trying to do something that's already been done. I'm looking at pfSense now as a possible start for what I need.
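    For the guest-to-guest pairing idea, a minimal sketch of the rule pair such a page might write (addresses are hypothetical, and it assumes the FORWARD chain otherwise drops guest-to-guest traffic):

        # allow room A (10.0.0.12) and room B (10.0.0.47) to reach each other
        iptables -I FORWARD -s 10.0.0.12 -d 10.0.0.47 -j ACCEPT
        iptables -I FORWARD -s 10.0.0.47 -d 10.0.0.12 -j ACCEPT

    Removing the pairing would be the same two commands with -D instead of -I, which maps cleanly onto what python-netfilter exposes.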

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by jao
    On a Windows 2003 server I can nslookup www.google.com, which returns:

        Server:  localhost
        Address:  127.0.0.1

        Non-authoritative answer:
        Name:    www.l.google.com
        Addresses:  74.125.79.104, 74.125.79.147, 74.125.79.99
        Aliases:  www.google.com

    I can then ping 74.125.79.104:

        Pinging 74.125.79.104 with 32 bytes of data:
        Reply from 74.125.79.104: bytes=32 time=16ms TTL=54
        Reply from 74.125.79.104: bytes=32 time=32ms TTL=54
        Reply from 74.125.79.104: bytes=32 time=15ms TTL=54
        Reply from 74.125.79.104: bytes=32 time=15ms TTL=54

        Ping statistics for 74.125.79.104:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 15ms, Maximum = 32ms, Average = 19ms

    But I cannot ping www.google.com:

        Ping request could not find host www.google.com. Please check the name and try again.

    (This one is different from the other question in that this one has a TLD; it is not a local domain.)

    Update: I am running a DNS server at localhost (127.0.0.1). Even when I change it to use, for example, OpenDNS, it can still nslookup the hostname and ping the IP address, but not ping the hostname. So what is wrong?

    Update 2: here is the ipconfig /all result:

        Windows IP Configuration
        Host Name . . . . . . . . . . . . : SERVER
        Primary Dns Suffix  . . . . . . . : NETWORK.local
        Node Type . . . . . . . . . . . . : Unknown
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : NETWORK.local

        Ethernet adapter Local Area Connection:
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2
        Physical Address. . . . . . . . . : 00-0F-1F-56-3B-AA
        DHCP Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : 192.168.7.2
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.7.1
        DNS Servers . . . . . . . . . . . : 127.0.0.1

    Update 3: Thanks, everyone, for your help and suggestions; I appreciate it. ipconfig /flushdns returns:

        Successfully flushed the DNS Resolver Cache.

    ipconfig /displaydns returns:

        2.7.168.192.in-addr.arpa
        ----------------------------------------
        Record Name . . . . . : 2.7.168.192.in-addr.arpa.
        Record Type . . . . . : 12
        Time To Live  . . . . : 0
        Data Length . . . . . : 4
        Section . . . . . . . : Answer
        PTR Record  . . . . . : webserver.mydomainname.com

        1.0.0.127.in-addr.arpa
        ----------------------------------------
        Record Name . . . . . : 1.0.0.127.in-addr.arpa.
        Record Type . . . . . : 12
        Time To Live  . . . . : 0
        Data Length . . . . . : 4
        Section . . . . . . . : Answer
        PTR Record  . . . . . : localhost

    Update 4: Wireshark shows the following:

        3  11.540542  208.67.220.220  192.168.7.2    DNS   Standard query response A 74.125.79.99 A 74.125.79.104 A 74.125.79.147
        6  42.056794  192.168.7.2     192.168.7.255  NBNS  Name query NB WWW.GOOGLE.COM<00>

    which is weird: when I ping, it sends an NBNS broadcast to 192.168.7.255 instead of asking the DNS server for an address.
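    The capture in Update 4 shows ping falling back to NetBIOS name resolution instead of using DNS at all, which points at the Windows resolver rather than the DNS server. A couple of low-risk checks worth trying from a command prompt (hedged; dnscache is the service name of the DNS Client):

        REM bounce the DNS Client service, which handles resolution and caching for ping
        net stop dnscache && net start dnscache
        REM rule out a mangled hosts file overriding normal resolution
        type %SystemRoot%\system32\drivers\etc\hosts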

  • Windows or Linux for VPN-VPN Bridge

    - by James
    I have the following network layout:

        Network1 ----VPN1---- Network2 ----VPN2---- Network3

    I can administer everything in Network1 only, and my goal is to get to a box on Network3. I've been told by the admins of Network2 that it's not possible for them to route traffic from Network1 to Network3. I've finally been authorised to host a box in Network2, and I'm hoping that with this I can set something up to resolve the issue.

    My question is whether I should set this up as a Windows or a Linux box. My initial thought was to use iptables to reroute requests, but with my lack of experience with Windows Server (used for something or other in Network2) I'm not sure if this will work. My head's full of questions like:

    - can I get an IP without logging in to a Windows domain?
    - if I do get an IP, do Windows Servers manage routing through the VPN?
    - can I make a Linux box authenticate with Windows Server to log on to the domain?
    - would it just be easier to set up a Windows box?
    - is it possible to configure a Windows box to do routing from Network1 to Network3?

    Has anyone done anything like this before? Had experience managing Windows Server? Authenticated (or not, as the case may be) to a Windows domain? I'd really appreciate your advice. It might be worth mentioning that the overall objective is to establish a telnet connection from a box on Network1 to a box on Network3.
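    If the Network2 box ends up being Linux, a minimal relay sketch for the telnet objective might look like this (the Network3 address is a placeholder, and it assumes Network2 lets the box reach Network3 at all):

        # let the kernel forward, then relay inbound telnet on to the Network3 host
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -p tcp --dport 23 -j DNAT --to-destination 10.3.0.5:23
        # masquerade so replies come back through this box
        iptables -t nat -A POSTROUTING -p tcp -d 10.3.0.5 --dport 23 -j MASQUERADE

    With that in place, the Network1 box would telnet to the Network2 relay's address and be forwarded transparently.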

  • Remove Kernel Lock from Unmounted Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will unmount the drive? For example, in Nautilus I can right click and select "Safely Remove Device". After doing that, the /dev/sg7 and /dev/sdg devices are removed. The output of gphoto2 is then:

        # gphoto2 --list-config
        /Camera Configuration/Picture Settings/resolution
        /Camera Configuration/Picture Settings/shutter
        /Camera Configuration/Picture Settings/aperture
        /Camera Configuration/Picture Settings/color
        /Camera Configuration/Picture Settings/flash
        /Camera Configuration/Picture Settings/whitebalance
        /Camera Configuration/Picture Settings/focus-mode
        /Camera Configuration/Picture Settings/focus-pos
        /Camera Configuration/Picture Settings/exp
        /Camera Configuration/Picture Settings/exp-meter
        /Camera Configuration/Picture Settings/zoom
        /Camera Configuration/Picture Settings/dzoom
        /Camera Configuration/Picture Settings/iso
        /Camera Configuration/Camera Settings/date-time
        /Camera Configuration/Camera Settings/lcd-mode
        /Camera Configuration/Camera Settings/lcd-brightness
        /Camera Configuration/Camera Settings/lcd-auto-shutoff
        /Camera Configuration/Camera Settings/camera-power-save
        /Camera Configuration/Camera Settings/host-power-save
        /Camera Configuration/Camera Settings/timefmt

    Some things I've tried already are sdparm and sg3_utils; however, I am unfamiliar with them, so it's possible I just didn't find the right command.

    Update 1:

        # mount | grep sdg
        # mount | grep sg7
        # umount /dev/sg7
        umount: /dev/sg7: not mounted
        # umount /dev/sdg
        umount: /dev/sdg: not mounted
        # gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***
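    What "Safely Remove Device" appears to do here is detach the device from the usb-storage driver, which is what holds the lock gphoto2 complains about. A hedged command-line equivalent via sysfs (the interface ID below is purely illustrative; list the driver directory to find the real one for the camera):

        # see which USB interfaces usb-storage has claimed
        ls /sys/bus/usb/drivers/usb-storage/
        # unbind the camera's interface so the sdg/sg7 nodes go away
        echo '1-1:1.0' > /sys/bus/usb/drivers/usb-storage/unbind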

  • SCVMM 2008 R2 problems migrating VM from VS2005 to Hyper-V host

    - by Scott Ivey
    I have System Center Virtual Machine Manager 2008 R2 installed, with a Hyper-V R2 host and a Virtual Server 2005 host. I'm trying to migrate my machines from the VS2005 host to the Hyper-V host, and keep getting the following error:

        VMM is unable to complete the requested file transfer. The connection to the HTTP server myserver.mydomain.local could not be established. (Unknown error (0x80072efd))
        Recommended Action: Ensure that the HTTP service and/or the agent on the machine myserver.mydomain.local are installed and running and that a firewall is not blocking HTTPS traffic.

    (Note: migrations between Hyper-V hosts managed by the VMM server work fine; my problem is only going from VS2005 to Hyper-V hosts.)

    I have no firewalls turned on on either of the servers, and no firewalls in the middle. I've looked all over for answers to this problem and am getting nowhere. All the articles I find when searching talk about either V2V or P2V, and I'm just trying to do a straight Migrate VM. I've tried rebooting the boxes, changing the BITS SSL port number, restarting services, triple-checking firewalls, etc. Does anyone have any good suggestions as to how I can resolve this problem?
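    Since the error is specifically an HTTPS connection failure, one quick sanity check is whether the VMM agent's BITS port is reachable at all from the VMM server (443 here is an assumption; substitute whatever port the agent was reconfigured to use):

        telnet myserver.mydomain.local 443

    A refused or hanging connection would confirm the problem is at the listener on the VS2005 host rather than in the migration job itself.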

  • SSL certificates work fine from command line but fails in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration. Any ideas?
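    The two libssl/libcrypto warnings hint that the CI server's environment (likely an LD_LIBRARY_PATH pointing at /opt/bitnami/common/lib) makes nail load a different OpenSSL than it gets on the command line, one that can't see the CA bundle. Two hedged things to try; the -S ssl-verify option is from heirloom nail/mailx, so verify it against your nail version:

        # run the script with the Bitnami libraries out of the way
        env -u LD_LIBRARY_PATH ./build-worked

        # or, less safely, tell nail to skip certificate verification
        echo "Build Worked!" | nail -A myisp -S ssl-verify=ignore -s 'Build Success' you@example.com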

  • Windows Server 2008R2 - can't change or remove the default gateway

    - by disserman
    We've installed VMware Server 2.0 on Windows 2008 R2. After some time playing with it (actually only removing the host-only and NAT networks, and binding adapters to the specified vmnets) we've noticed a strange problem: if you change or remove the default gateway on the network card, the server completely loses network connectivity. You can't ping it from the subnet, and it can't connect to anything. When the gateway is removed and the server tries to connect to other machines, I can see some incoming packets using a sniffer, but I believe they are damaged in some way (I'm not a mega-guru in TCP/IP and can't find a mistake in a binary translation of the packet) because the other side doesn't respond.

    What we tried:

    - removed VMware Server using Add/Remove Programs
    - deleted everything related to VMware Server and all installed network adapters in the Windows registry
    - double-checked for the VMware bridged protocol driver file; it's physically absent, with no links to it in the registry
    - performed a TCP/IP reset with netsh, and disabled/enabled all network adapters in Device Manager to recreate their registry keys
    - tried another network adapter

    And the situation is the same: as soon as you remove or change the default gateway, Windows stops working. The total absurdity of the situation is that the default gateway points to a non-existent IP. But when it's set, you can ping the server from the subnet; when you remove it, you can't. Any help? I'm starting to think the new build of VMware Server is some kind of malware... :)
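    One way to take the network GUI out of the equation is to set the address and gateway from an elevated prompt and then look at what actually lands in the routing table (adapter name and addresses are placeholders):

        netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1
        route print

    Comparing route print output with and without the gateway set might show whether a leftover VMware route is what breaks on-subnet traffic.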

  • Getting 400 Bad Request when requesting by server name on nginx/uwsgi

    - by Marc Hughes
    I'm trying to run 2 different sites on nginx via different ports (they each have a load balancer that points to the appropriate port). The first site works perfectly. The second site:

    - If I access http://localhost:81/ it works correctly
    - If I access http://127.0.0.1:81/ it works correctly
    - If I access the hostname http://THEHOSTNAME:81/ it fails with a 400 error
    - If I access the public IP http://x.x.x.x:81/ it fails with a 400 error

    I've set the error_log to info, but the only lines I get in the log when this happens are:

        ==> /var/log/nginx/access.log <==
        10.183.38.141 - - [24/Aug/2014:21:03:28 +0000] "GET / HTTP/1.1" 400 37 "-" "curl/7.36.0" "-"

        ==> /var/log/nginx/error.log <==
        2014/08/24 21:03:28 [info] 7029#0: *5 client 10.183.38.141 closed keepalive connection

    In my uwsgi log, I only see this:

        [pid: 6870|app: 0|req: 87/92] 10.28.23.224 () {32 vars in 380 bytes} [Sun Aug 24 21:05:21 2014] GET / => generated 26 bytes in 1 msecs (HTTP/1.1 400) 2 headers in 82 bytes (1 switches on core 2)

    What should be my next step in debugging this?
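    The uwsgi line shows the 400 being generated by the application, not by nginx. If the app behind uwsgi is Django (a guess; adjust for whatever framework is actually running), this is exactly the symptom of the requested Host header not being listed in ALLOWED_HOSTS, which passes for localhost names but rejects the real hostname and public IP:

        # settings.py -- hypothetical, only applicable if the app is Django
        ALLOWED_HOSTS = ['THEHOSTNAME', 'x.x.x.x', 'localhost', '127.0.0.1']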

  • Can't setup 3 nodes MongoDB recplica set

    - by Victor Lin
    I just followed the instructions in the MongoDB document Replica Sets - Basics to set up a 3-node replica set. Everything goes fine when I do the initiate and add the first node on the primary:

        [foo@host-a mongodb]$ bin/mongo localhost
        MongoDB shell version: 1.8.2
        connecting to: localhost
        > rs.initiate()
        {
            "info2" : "no configuration explicitly specified -- making one",
            "info" : "Config now saved locally.  Should come online in about a minute.",
            "ok" : 1
        }
        > rs.add("host-b")
        { "ok" : 1 }

    So far so good, but when I try to add the third node:

        myset:PRIMARY> rs.addArb("host-c")
        Sun Aug  7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017
        Sun Aug  7 22:57:09 SocketException: remote: error: 9001 socket exception [1]
        Sun Aug  7 22:57:09 DBClientCursor::init call() failed
        Sun Aug  7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1
        Sun Aug  7 22:57:09 Error: error doing query: failed shell/collection.js:150
        Sun Aug  7 22:57:09 trying reconnect to 127.0.0.1
        Sun Aug  7 22:57:09 reconnect 127.0.0.1 ok

    As a result, the current primary became a secondary, and host-b was marked as dead, although it is actually still alive:

        myset:SECONDARY> rs.status()
        {
            "set" : "myset",
            "date" : ISODate("2011-08-08T04:03:23Z"),
            "myState" : 2,
            "members" : [
                {
                    "_id" : 0,
                    "name" : "host-a:27017",
                    "health" : 1,
                    "state" : 2,
                    "stateStr" : "SECONDARY",
                    "optime" : { "t" : 1312775799000, "i" : 1 },
                    "optimeDate" : ISODate("2011-08-08T03:56:39Z"),
                    "self" : true
                },
                {
                    "_id" : 1,
                    "name" : "host-b",
                    "health" : 0,
                    "state" : 6,
                    "stateStr" : "(not reachable/healthy)",
                    "uptime" : 0,
                    "optime" : { "t" : 0, "i" : 0 },
                    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                    "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"),
                    "errmsg" : "still initializing"
                }
            ],
            "ok" : 1
        }

    How could this happen? I just followed the guide in the document; did I do something wrong? Moreover, I can't do anything on the current secondary server. It doesn't allow me to reconfig on the secondary node, but the problem is that there is no primary node:

        myset:SECONDARY> rs.reconfig({})
        {
            "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
            "ok" : 0
        }

    Any ideas?
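    One detail that stands out: the set was initiated over a connection to localhost, so the auto-generated config names the first member with whatever hostname mongod picks for itself, while the other members are added as host-b/host-c; mismatched or unresolvable member names are a common source of "still initializing" states. A hedged sketch that avoids the ambiguity by being explicit from the start:

        > rs.initiate({ _id: "myset", members: [ { _id: 0, host: "host-a:27017" } ] })
        > rs.add("host-b:27017")
        > rs.addArb("host-c:27017")

    This assumes each host can resolve and reach the others by exactly those names on port 27017.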

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server has 1TB of mostly smaller files (in the 10-100kb range), and when we're transferring this much data we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off; you need to go through the file comparison process, which ends up being very lengthy with the number of files we have.

    The solution that's recommended to get around this is to split your large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by the first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or whether there's a simpler way to accomplish the goal.

    To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running:

        rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/

    (--exclude "*.mp3" is just an example; we have a lengthier exclude list for removing things like temporary files.)

    The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following:

        rsync \
          --filter 'S /$prefix*' \
          --filter 'R /$prefix*' \
          --filter 'H /*' \
          --filter 'P /*' \
          -av --delete --delete-excluded --exclude "*.mp3" src/ dest/

    I'm using show and hide over include and exclude, because otherwise --delete-excluded will delete anything that doesn't match $prefix.

    Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler?
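    One shell detail worth flagging: in the command as written, the single quotes stop $prefix from ever being expanded, so rsync sees a literal "$prefix*" pattern. A minimal wrapper sketch with the variable in double quotes (untested, same filter logic as above):

        #!/bin/bash
        # one rsync pass per leading character of the top-level names
        for prefix in {A..Z} {a..z} {0..9}; do
            rsync \
              --filter "S /${prefix}*" \
              --filter "R /${prefix}*" \
              --filter 'H /*' \
              --filter 'P /*' \
              -av --delete --delete-excluded --exclude "*.mp3" src/ dest/
        done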

  • How to inspect remote SMTP server's TLS certificate?

    - by Miles Erickson
    We have an Exchange 2007 server running on Windows Server 2008. Our client uses another vendor's mail server. Their security policies require us to use enforced TLS. This was working fine until recently. Now, when Exchange tries to deliver mail to the client's server, it logs the following:

        A secure connection to domain-secured domain 'ourclient.com' on connector 'Default external mail' could not be established because the validation of the Transport Layer Security (TLS) certificate for ourclient.com failed with status 'UntrustedRoot'. Contact the administrator of ourclient.com to resolve the problem, or remove the domain from the domain-secured list.

    Removing ourclient.com from the TLSSendDomainSecureList causes messages to be delivered successfully using opportunistic TLS, but this is a temporary workaround at best.

    The client is an extremely large, security-sensitive international corporation. Our IT contact there claims to be unaware of any changes to their TLS certificate. I have asked him repeatedly to please identify the authority that generated the certificate so that I can troubleshoot the validation error, but so far he has been unable to provide an answer. For all I know, our client could have replaced their valid TLS certificate with one from an in-house certificate authority.

    Does anyone know a way to manually inspect a remote SMTP server's TLS certificate, as one can do for a remote HTTPS server's certificate in a web browser? It could be very helpful to determine who issued the certificate and compare that information against the list of trusted root certificates on our Exchange server.
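    Assuming OpenSSL is available somewhere on the network (a Linux box or a Windows build), its s_client supports STARTTLS for SMTP and can pull the certificate chain straight off the remote MX; the hostname below is a placeholder for the client's actual MX record:

        # show the full chain the remote SMTP server presents
        openssl s_client -starttls smtp -connect mail.ourclient.com:25 -showcerts </dev/null

        # or decode just the issuer, subject and validity dates of the leaf certificate
        openssl s_client -starttls smtp -connect mail.ourclient.com:25 </dev/null 2>/dev/null \
          | openssl x509 -noout -issuer -subject -dates

    The issuer line should make it obvious whether the certificate now chains to an in-house CA that the Exchange server doesn't trust.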

  • How to run some commands after booting from ArchLinux disk? Or how to change some settings in .iso before booting?

    - by Alexander Ovchinnikov
    How do I install Arch Linux with the traditional installer when I have only ssh access to the server? There is a nice guide: https://wiki.archlinux.org/index.php/Install_from_SSH. I'm testing this on my home VPS:

    1. Start the VPS with any Linux bootable CD and log in to the remote server (VPS)
    2. wget http://mirrors.kernel.org/archlinux/iso/latest/archlinux-2010.05-netinstall-x86_64.iso
    3. dd if=archlinux-2010.05-netinstall-x86_64.iso of=/dev/sda
    4. reboot

    I can see that it works, but without an ssh connection... I need to make a script which will run these commands after reboot:

    1. aif -p partial-configure-network (and enter some information about my server, IP, etc.)
    2. /etc/rc.d/sshd start (to start sshd)
    3. echo "sshd: ALL" >> /etc/hosts.allow (to allow me to log in to the server; by default all are denied)
    4. passwd (by default the password is empty, and I can't log in via ssh with an empty password)

    Can I edit the .iso, or maybe /dev/sda? Maybe I need to write a script which will start after the system boots and do these things, or maybe I can set these settings by default, so the system starts with the correct configuration (I think that's possible, at least for 2 and 3). Thank you!

  • ASP.NET Website Administration Tool: Unable to connect to SQL Server database

    - by MedicineMan
    I am trying to get authentication and authorization working with my ASP MVC project. I've run the aspnet_regsql.exe tool without any problem and can see the aspnetdb database on my server (using Management Studio). The connection string in my web.config is:

        <connectionStrings>
          <add name="ApplicationServices"
               connectionString="data source=MYSERVERNAME;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
               providerName="System.Data.SqlClient" />

    The error I get is:

        There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store.
        The following message may help in diagnosing the problem: Unable to connect to SQL Server database.

    In the past, I have had trouble connecting to my database because I've needed to add users. Do I have to do something similar here?
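    One thing that stands out (as a guess, not a certainty): the connection string mixes a server name with AttachDBFilename/User Instance, which targets a local SQL Express user instance and an .mdf in App_Data rather than the aspnetdb that aspnet_regsql.exe created on the server. A sketch of a connection string aimed at the server-hosted database instead:

        <add name="ApplicationServices"
             connectionString="Data Source=MYSERVERNAME;Initial Catalog=aspnetdb;Integrated Security=SSPI"
             providerName="System.Data.SqlClient" />

    With integrated security, the Windows account the site runs as would still need a login and appropriate role membership in that database, which ties back to the user-permissions issue mentioned at the end.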

  • DNAT to 127.0.0.1 with iptables / Destination access control for transparent SOCKS proxy

    - by cdauth
    I have a server running on my local network that acts as a router for the computers on my network. I want outgoing TCP requests to certain IP addresses to be tunnelled through an SSH connection, without giving the people on my network the ability to use that SSH tunnel to connect to arbitrary hosts.

    The approach I had in mind until now was to have an instance of redsocks listening on localhost and to redirect all outgoing requests to the IP addresses I want to divert to that redsocks instance. I added the following iptables rule:

        iptables -t nat -A PREROUTING -p tcp -d 1.2.3.4 -j DNAT --to-destination 127.0.0.1:12345

    Apparently, the Linux kernel considers packets coming from a non-127.0.0.0/8 address to a 127.0.0.0/8 address to be "Martian packets" and drops them. What worked, though, was to have redsocks listen on eth0 instead of lo and then have iptables DNAT the packets to the eth0 address instead (or use a REDIRECT rule). The problem with this is that every computer on my network can then use the redsocks instance to connect to any host on the Internet, but I want to limit its usage to a certain set of IP addresses only.

    Is there any way to make iptables DNAT packets to 127.0.0.1? Otherwise, does anyone have an idea how I could achieve my goal without opening up the tunnel to everyone?

    Update: I have also tried to change the source of the packets, without any success:

        iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 1.2.3.4 -j SNAT --to-source 127.0.0.1
        iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 127.0.0.1 -j SNAT --to-source 127.0.0.1
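    On newer kernels (3.6 and later, if memory serves) there is a per-interface sysctl that relaxes exactly this martian check, which would let the original DNAT rule stand; a sketch:

        # permit routing to 127.0.0.0/8 for packets arriving on eth0
        sysctl -w net.ipv4.conf.eth0.route_localnet=1
        iptables -t nat -A PREROUTING -i eth0 -p tcp -d 1.2.3.4 -j DNAT --to-destination 127.0.0.1:12345

    Because redsocks then only ever receives packets that matched the -d 1.2.3.4 rule, the tunnel stays limited to that destination set instead of being open to arbitrary hosts.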

  • Sharepoint db issue after DB move to SQL 08

    - by JohnyV
    Recently we moved our SharePoint 2007 DB from a SQL 2000 server to a 2008 x64 SQL Server. All seems well; however, there is a problem where SQL Server stops running and the service has to be restarted. The errors mention insufficient internal memory, etc. I have tried starting the DB with -g384, which was the default in SQL 2000 (256 is the default for 2008, I believe). This has not rectified the issue. I was advised that the issue might be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install this I got another error after the SP2 update and had to revert to a VM snapshot. The error after the service pack was:

        Server error: http://go.microsoft.com/fwlink?LinkID=96177

    So I guess I have a few questions: how can I fix the first issue, and how can I fix the second? I have checked out many forums and posts and have tried a few things, and still get no joy. Any assistance would be great.

    UPDATE: I have fixed the Server error (http://go.microsoft.com/fwlink?LinkID=96177): I needed to run the WSS SP2 as well as the Office Servers SP2, then the config wizard; then the MOSS configuration worked. The errors I am getting in SQL are:

        SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.

        A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITTED isolation level. The connection will be terminated.

        There is insufficient system memory in resource pool 'internal' to run this query.

    These errors are raised by a user that was created as a service account for SharePoint.
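    Per the error text itself, the relevant knobs can be inspected from a query window; a hedged starting point (the memory value is an example, not a recommendation for this server):

        -- 'user connections' is an advanced option, so expose it first
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'user connections';
        SELECT COUNT(*) AS current_sessions FROM sys.dm_exec_sessions;
        -- cap buffer-pool memory so internal pools and the OS keep headroom
        EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;

    On x64 with an unbounded max server memory, the buffer pool can squeeze the very pools the 'internal' resource pool error complains about, so capping it is a common first step.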

  • HP ProCurve & Cisco switches interoperability

    - by Kamil Z
    I have a couple of questions regarding Cisco and HP ProCurve interoperability. Here's a link to a PDF with my network topology. Can someone help me with basic VLAN configuration in such a topology? Below are some details of my configuration:

        # m_management_2
        interface FastEthernet0/43
         switchport access vlan 250
         switchport mode access
         spanning-tree port-priority 32
         spanning-tree cost 100

        # MTA2-swmgmt1
        vlan 1
           name "DEFAULT_VLAN"
           untagged 1-48
           ip address 10.10.249.190 255.255.255.128
           exit

        # MTA2-swtr1
        vlan 1
           name "DEFAULT_VLAN"
           untagged 1-14,16-48
           no ip address
           no untagged 15
           exit
        vlan 100
           name "MTA Mgmt"
           untagged 15
           ip address 10.10.249.188 255.255.255.128
           exit

        # MTA2-swtr2
        vlan 1
           name "DEFAULT_VLAN"
           untagged 1-14,16-48
           no ip address
           no untagged 15
           exit
        vlan 100
           name "MTA Mgmt"
           untagged 15
           ip address 10.10.249.189 255.255.255.128
           exit

    I don't post the MTA2-bcsw[12] configuration because I haven't been successful with that one yet. Every time I configure VLANs on the MTA2-bcsw[12] Fa0/24 interface, the port on m_management_2 goes down because of receiving tagged BPDUs on an access port (there are no VLANs configured on MTA2-swmgmt1, because only VLAN 250 is allowed on that switch -- is that correct?). Can someone provide me some basic configuration for this topology?

    The second thing I want to ask about is the concept of the connection from MTA2-swmgmt1 to the MTA2-swtr[12] HP switches for the sake of management. How do I configure such ports on the HP switches (managed switch and manager switch)? Is my current configuration correct?
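    For the Cisco side, a hedged sketch of the MTA2-bcsw[12] port facing m_management_2, inferred only from the symptoms described (interface and VLAN numbers are assumptions). The tagged-BPDU shutdown suggests the two ends disagree about tagging on VLAN 250, so either keep both ends access, or make 250 the untagged/native VLAN on a trunk:

        ! option 1: match the far end's access port
        interface FastEthernet0/24
         switchport access vlan 250
         switchport mode access

        ! option 2: if the link must carry several VLANs, send 250 untagged
        ! interface FastEthernet0/24
        !  switchport trunk encapsulation dot1q
        !  switchport mode trunk
        !  switchport trunk native vlan 250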

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions, C (50 GB) and D (250 GB). I do research in information retrieval and need to work with a large corpus (more than 50 GB), and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say QEMU, VMware, etc.)? An alternative is using Wubi; in that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C, and my corpus on D (mounted in Linux), how will that affect the performance of my programs accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if either is a way out at all?

    Note: since my post in July 2010, I have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted with an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point make was waiting while mount.ntfs showed 100% CPU usage.
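    If the slow path really is ntfs-3g (the mount.ntfs process) handling the writes to the image file on drive D, one thing sometimes worth trying is its big_writes mount option, which cuts down the number of FUSE round trips on large sequential writes. A sketch; the device and mount point are placeholders for the actual NTFS partition:

        mount -t ntfs-3g -o big_writes /dev/sda2 /mnt/d

    If performance with big_writes is still unacceptable, that would argue for keeping the image (or the corpus itself) on a native Linux partition instead.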

  • Asus WL-520GU conflicting subnet (and/or IP) with 2Wire DSL

    - by Paula
    I have an Asus wireless router (WL-520GU) and an AT&T 2Wire for my DSL connection. When I try to browse anywhere, I just get an odd message from the Asus router (in Asus's usual broken English, bad formatting, and awful spelling): http://postimage.org/image/upxrjflcj

    I guess it's trying to say: "Your Asus router and your 2Wire have the same subnet mask." (It doesn't say whether that's good or bad, but it sounds like they must be different.) But for the "solution" it looks like it's trying to say: "Your Asus router and your 2Wire have the same IP address."

    My Asus has the defaults: 192.168.1.1 and 255.255.255.0. My 2Wire has: 192.168.1.66. I'm not seeing where the conflict(s) could be.

    The Asus firmware is v3.0.0.14. None of these problems occur with the old v3.0.0.8 firmware. Any ideas on how to fix this? (PLEASE don't say to run a totally different DD/Tomato firmware because it's "better". I need to fix THIS one problem, not try to convince my company to switch everything to an entirely different set of problems.)

  • Catastrophic Failure opening ODBC via Citrix

    - by Joshdan
    We recently had our Citrix server crash unexpectedly. When it came back up, there was a new issue: every ODBC connection fails with "Catastrophic Failure" (0x8000FFFF). The issue is limited to Citrix/ICA connections; logging in as the same user via RDP works as usual. The following code is my minimal test case (for wscript):

        ''// test_odbc.vbs
        strConn = "Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=c:\files\;"

        Set rs = CreateObject("ADODB.recordset")
        strSQL = "SELECT * FROM myFile.csv"
        wscript.echo "Press OK to Test"

        ''// This line breaks over Citrix, but not over Terminal Services
        ''// ----------------------
        rs.open strSQL, strConn, 3, 3
        ''// ----------------------

        wscript.echo rs("a")

    Any insight would be greatly appreciated. Windows Server 2003 SP1, Citrix MetaFrame Presentation Server 4.0. Clients include at least versions 10.2-11 running on 2000-Vista and OS X. The ODBC error happens whether a DSN is used or not, on at least Access, MS-SQL, and CSV, and on connections both through the SSL Gateway and direct. There have been a few users actually able to log in without trouble, but I can't pin down anything special about them.

  • Windows Server 2008 DHCP with RRAS

    - by Guillermo Prandi
    I have a Windows Server 2008 R2 which is a member of a domain but is placed in a remote location. The server is directly connected to the Internet. Clients need to access a particular insecure TCP service on this server (ports 9730 and 9731). Since clients have dynamic IP addresses I cannot know in advance, I thought it would be nice to have them connect through a VPN in order to access the insecure service, but ONLY to access that service, like this:

        Client ------> VPN TUNNEL ------> (Insecure service at Server)
          |
          \----> (Normal internet access)

    I'd enable the insecure ports in the firewall for VPN access only. For this I configured RRAS on the server and gave it a static IP address range (172.19.1.2 through 172.19.1.254) to serve the clients. First I thought I could use DHCP to assign the addresses, but I cannot use DHCP on my LAN connection (not allowed by the hosting service). I tried configuring DHCP bound to a Microsoft Loopback Adapter, but that's not supported as a DHCP source by RRAS.

    What I want to accomplish is to send specific DHCP options to the client (network mask, routing table, etc.). In particular:

    - Prevent the client from using the server as its default router (without changing the client's "use default gateway on remote network" setting).
    - Have the server be a route for its internal RRAS address only (172.19.1.1).
    - Prevent the client from using a 255.255.0.0 mask for the 172.19.x.x network (a 255.255.255.0 mask would be better).

    Can I do that with RRAS only? How? Currently, the only solution I can think of is to use DHCP on the LAN adapter but filter DHCP packets so they don't reach the provider's network. However, I'm not sure if that will work. Any suggestions are welcomed! Guille

  • Postgresql server will not start

    - by Claudiu
    I'm on Windows 7. I restarted my computer, then tried to connect to the database and got an error. I don't remember which one in particular, but it was some connection issue. I decided to try to restart the server, so I clicked "Restart server" in the Start menu. This blocked. After a few minutes I killed the process and tried again, only to get a "The service is starting or stopping. Please try again later." message. I rebooted the computer again, tried to start the server again, and got the same error. I killed the pg_ctl process and tried starting it manually, but that didn't work either:

        C:\Users\DrClaud>cscript "C:\Program Files\PostgreSQL\8.3\scripts\serverctl.vbs" start wait
        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        The PostgreSQL Server 8.3 service is starting..........................................................................
        The PostgreSQL Server 8.3 service could not be started.

        The service did not report an error.

        More help is available by typing NET HELPMSG 3534.

        The start command returned an error (2)

    Any ideas?
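    When the service wrapper reports nothing, the postmaster's own log is usually more informative, and a killed previous run can leave a stale lock file that blocks startup. A hedged sequence to try from an elevated prompt (paths assume the default 8.3 install layout; only delete the pid file if no postgres.exe is actually running):

        REM look at the most recent server log first
        dir "C:\Program Files\PostgreSQL\8.3\data\pg_log"
        REM if the log complains about an existing lock file, clear the stale pid
        del "C:\Program Files\PostgreSQL\8.3\data\postmaster.pid"
        net start "PostgreSQL Server 8.3"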

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync + cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this.

    The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, mostly in uninterruptible-sleep state).

    We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance, because it's unusable as it stands.

    The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of a way to hint ext3 like that.
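    One intermediate option short of btrfs: rsync can build the hard-link forest itself via --link-dest, which folds the cp -al pass into the transfer instead of walking the 20 TB tree twice. A sketch, with an illustrative directory layout:

        # newest snapshot goes to snap.0; files unchanged since snap.1 become hard links
        rsync -a --delete --link-dest=../snap.1 server:/srv/tree/ /backup/snap.0/

    The snapshot directories still need to be rotated (snap.1 -> snap.2, etc.) before each run, but only changed files are ever written, so the 12-hour local copy step disappears.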
