Search Results

Search found 8469 results on 339 pages for 'office 2011'.


  • ntpdate -d Server dropped Strata too high

    - by AndyM
    I cannot sync with an NTP source that's coming from an internal router/firewall. Can anyone help?

      ntpdate -d 192.168.92.82
       6 Jun 11:57:30 ntpdate[5011]: ntpdate [email protected] Tue Feb 24 06:32:26 EST 2004 (1)
      transmit(192.168.92.82)
      receive(192.168.92.82)
      transmit(192.168.92.82)
      receive(192.168.92.82)
      transmit(192.168.92.82)
      receive(192.168.92.82)
      transmit(192.168.92.82)
      receive(192.168.92.82)
      transmit(192.168.92.82)
      192.168.92.82: Server dropped: strata too high
      server 192.168.92.82, port 123
      stratum 16, precision -19, leap 11, trust 000
      refid [73.78.73.84], delay 0.02591, dispersion 0.00002
      transmitted 4, in filter 4
      reference time:      00000000.00000000  Thu, Feb  7 2036  6:28:16.000
      originate timestamp: d1972e03.0ae02645  Mon, Jun  6 2011 11:44:19.042
      transmit timestamp:  d197311b.0ffac1d2  Mon, Jun  6 2011 11:57:31.062
      filter delay:  0.02609  0.02591  0.02594  0.02596  0.00000  0.00000  0.00000  0.00000
      filter offset: -792.020 -792.020 -792.020 -792.020 0.000000 0.000000 0.000000 0.000000
      delay 0.02591, dispersion 0.00002
      offset -792.020152
       6 Jun 11:57:31 ntpdate[5011]: no server suitable for synchronization found

    Edit: The server I'm being asked to sync to is a firewall, and I've now been told that it is not syncing with anything. So I suppose I need to know whether I can force my server to sync with a server that is stratum 16, i.e. not synced. Is that possible?
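    Since stratum 16 is NTP's marker for "not synchronized at all", ntpdate and ntpd will always reject such a source, so the fix usually has to happen on the firewall rather than on the client. As a hedged sketch (assuming the firewall runs a standard ntpd whose ntp.conf can be edited; none of these lines come from the question), letting it serve its own local clock at a usable stratum looks roughly like this:

      # ntp.conf on the firewall (sketch)
      # Prefer real upstream servers if the firewall can reach the internet:
      server 0.pool.ntp.org iburst
      server 1.pool.ntp.org iburst

      # Fallback: serve the undisciplined local clock so clients see stratum 11
      # instead of 16 (only acceptable if "consistent but possibly wrong" time is OK)
      server 127.127.1.0
      fudge  127.127.1.0 stratum 10

    From the client, ntpdate -q 192.168.92.82 should then report a stratum below 16 and the sync should be accepted.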

    Read the article

  • pfSense Firewall or Linksys/Cisco router for small offices

    - by Tim Meers
    I'm about to start switching some networks around for multiple small offices. Each office has about 10 to 15 users and 10 to 15 computers, and a spread of generic routers and access points. The routers vary from being used as routers to just being access points for wireless. Nothing formal has really ever been implemented for any of the 10 offices. What I'm wanting is to set up a pfSense box for each office to configure things like: traffic shaping (for VoIP QoS), URL filtering, DHCP, static routing, and multiple VLANs. I'll then use some of the existing hardware for wireless, and maybe even integrate the wireless right into the firewall depending on the office layout. So my question: would it be better to do a full-blown firewall box, or to buy a new business-class or high-end consumer-class Linksys router to do the URL filtering, QoS and DHCP? Each option could allow for remote access and VPN for remote maintenance, and each would only cost a nominal amount of money for something decent, i.e. under $250.

    Read the article

  • playSMS sent SMS to queue

    - by user512213
    I installed playSMS on my system as described here: https://github.com/antonraharja/playSMS/wiki/Web-User-Interface-Installation. I use Kannel as a gateway and Kannel is running fine; I get the following status when I check the Kannel status in a browser:

      Kannel bearerbox version `1.4.3'. Build `Nov 24 2011 08:02:18', compiler `4.6.2'.
      System Linux, release 3.2.0-23-generic-pae, version #36-Ubuntu SMP Tue Apr 10 22:19:09 UTC 2012, machine i686.
      Hostname MSSRF-INCOIS, IP 127.0.1.1. Libxml version 2.7.8. Using OpenSSL 1.0.0e 6 Sep 2011.
      Compiled with MySQL 5.5.17, using MySQL 5.5.24. Using SQLite 3.7.9. Using native malloc.

      Status: running, uptime 0d 1h 8m 27s
      WDP: received 0 (0 queued), sent 0 (0 queued)
      SMS: received 0 (0 queued), sent 0 (0 queued), store size -1
      SMS: inbound (0.00,0.00,0.00) msg/sec, outbound (0.00,0.00,0.00) msg/sec
      DLR: 0 queued, using internal storage

      Box connections:
          wapbox, IP 127.0.0.1 (on-line 0d 1h 8m 2s)
          wapbox, IP 127.0.0.1 (on-line 0d 0h 18m 11s)

      SMSC connections:
          unknown AT2[/dev/ttyS0] (online 4106s, rcvd 0, sent 0, failed 0, queued 0 msgs)

    But when I try to send an SMS from playSMS it shows "Your SMS has been delivered to queue", yet the SMS is never received. Is there anything we missed during configuration? Can anyone advise me on how to proceed?
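    One way to narrow this down is to take playSMS out of the loop and submit a message straight to Kannel's smsbox HTTP interface. A hedged sketch follows; it assumes the default sendsms port 13013, a sendsms-user named "tester" with password "foobar" in kannel.conf, and typical log locations - substitute your own values:

      # Send a test message directly through Kannel, bypassing playSMS
      curl "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=+919999999999&text=kannel+test"

      # Watch the smsbox/bearerbox logs while the message goes out (paths vary by install)
      tail -f /var/log/kannel/smsbox.log /var/log/kannel/bearerbox.log

    If the direct submission also sits in the queue, the problem is between Kannel and the AT2 modem on /dev/ttyS0 rather than in playSMS.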

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is set up to proxy-pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here are a few example vhost blocks:

      <VirtualHost 192.168.168.101:443>
          ProxyPreserveHost On
          SSLProxyEngine On
          ProxyPass / https://192.168.168.111/
          ServerName dev1.site.com
      </VirtualHost>

      <VirtualHost 192.168.168.101:80>
          ProxyPreserveHost On
          ProxyPass / http://192.168.168.111/
          ServerName dev1.site.com
      </VirtualHost>

      <VirtualHost 192.168.168.101:443>
          ProxyPreserveHost On
          SSLProxyEngine On
          ProxyPass / https://192.168.168.111/
          ServerName dev2.site.com
      </VirtualHost>

      <VirtualHost 192.168.168.101:80>
          ProxyPreserveHost On
          ProxyPass / http://192.168.168.111/
          ServerName dev2.site.com
      </VirtualHost>

    I end up seeing the following error in the provisioner's error log:

      [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri /

    As well as the following entry in the destination QA machine's access log:

      192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
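    The \x16\x03\x01 bytes in the backend access log are the start of a raw TLS handshake, which suggests the front-end :443 vhosts above are not terminating SSL themselves (there is no SSLEngine or certificate in them), so the still-encrypted bytes get forwarded as if they were plain HTTP. A hedged sketch of what each front-end HTTPS vhost would normally need - the certificate paths are placeholders, not taken from the question:

      <VirtualHost 192.168.168.101:443>
          ServerName dev1.site.com
          SSLEngine On
          SSLCertificateFile    /etc/apache2/ssl/dev1.site.com.crt
          SSLCertificateKeyFile /etc/apache2/ssl/dev1.site.com.key

          ProxyPreserveHost On
          SSLProxyEngine On                  # re-encrypt towards the backend
          ProxyPass        / https://192.168.168.111/
          ProxyPassReverse / https://192.168.168.111/
      </VirtualHost>

    Note that serving dev1 and dev2 with different certificates on the same IP:443 also needs SNI support (Apache 2.2.12+ built against a recent OpenSSL) or separate IPs, otherwise only one certificate can be presented.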

    Read the article

  • Reverse ssh tunneling with Tomato

    - by Deivuh
    Since my ISP restricts some incoming connections, I can't access my home PC remotely. What I'm trying to do is make a reverse SSH connection from my home router (running Tomato firmware) to the office computer, so that I can access it remotely from the office over that open connection. What I'm doing is running the following from the router:

      ssh -R 12345:localhost:22 oUser@office

    Then I leave top running to keep the connection alive. From my office I then run:

      ssh hUser@localhost -p 12345

    but I get the following message with verbose on:

      OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 Jun 2010
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug1: Connecting to localhost [::1] port 19999.
      debug1: Connection established.
      debug1: identity file /home/oUser/.ssh/id_rsa type -1
      debug1: identity file /home/oUer/.ssh/id_rsa-cert type -1
      debug1: identity file /home/oUser/.ssh/id_dsa type 2
      debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
      debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
      debug1: identity file /home/oUser/.ssh/id_dsa-cert type -1
      ssh_exchange_identification: Connection closed by remote host

    I have password remote access enabled in Tomato's configuration, so I should be able to get in without having the public key in *authorized_keys*, but I've even tried adding it and still get the same result. I've done the same with my home computer, and it works perfectly, but it doesn't with the router. Am I doing something wrong? Thanks in advance.
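    For reference, a minimal sketch of the two ends of such a reverse tunnel (the -o keepalive options and the explicit 127.0.0.1 are suggestions, not details from the question). Note also that the verbose output above shows the client connecting to [::1] port 19999 rather than 12345, so it is worth confirming which port the tunnel actually bound and whether it is listening on IPv4, IPv6, or both:

      # On the home router: open the tunnel and keep it alive without leaving top running
      ssh -N -R 12345:localhost:22 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 oUser@office

      # On the office machine: confirm something is listening on the forwarded port...
      netstat -tln | grep 12345

      # ...then connect explicitly over IPv4 loopback
      ssh -4 -p 12345 hUser@127.0.0.1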

    Read the article

  • RAID md device is not removed from memory; how to overcome this problem?

    - by santhosha
    I created a RAID 10 array, then removed two member disks from md11 one by one. After that I went to edit contents that were mounted on it (it goes into a not-responding state). When I then try to remove the disks that are left, it shows "device or resource busy" (the array is not removed from memory). I've tried to terminate the processes holding it, but that does not work either. For 4 days now the resync has stayed at 8.0% and never changes.

      cat /proc/mdstat
      Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10]
      md11 : active raid10 sde1[3] sdj1[4]
            286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0]
            [=...................]  resync =  8.0% (23210368/286743936) finish=289392.6min speed=15K/sec

      mdadm -D /dev/md11
      /dev/md11:
              Version : 00.90.03
        Creation Time : Sun Jan 16 16:20:01 2011
           Raid Level : raid10
           Array Size : 286743936 (273.46 GiB 293.63 GB)
          Device Size : 143371968 (136.73 GiB 146.81 GB)
         Raid Devices : 4
        Total Devices : 2
      Preferred Minor : 11
          Persistence : Superblock is persistent
          Update Time : Sun Jan 16 16:56:07 2011
                State : active, degraded, resyncing
       Active Devices : 1
      Working Devices : 1
       Failed Devices : 1
        Spare Devices : 0
               Layout : near=2, far=1
           Chunk Size : 64K
       Rebuild Status : 8% complete
                 UUID : 5e124ea4:79a01181:dc4110d3:a48576ea
               Events : 0.23

          Number   Major   Minor   RaidDevice State
             0       0        0        0      removed
             1       0        0        1      removed
             4       8      145        2      faulty spare rebuilding   /dev/sdj1
             3       8       65        3      active sync               /dev/sde1

      umount /dev/md11
      umount: /dev/md11: not mounted

      mdadm -S /dev/md11
      mdadm: fail to stop array /dev/md11: Device or resource busy

      lsof /dev/md11
      COMMAND     PID USER   FD   TYPE DEVICE SIZE NODE NAME
      mount      2128 root    3r   BLK   9,11      4058 /dev/md11
      mount      5018 root    3r   BLK   9,11      4058 /dev/md11
      mdadm     27605 root    3r   BLK   9,11      4058 /dev/md11
      mount     30562 root    3r   BLK   9,11      4058 /dev/md11
      badblocks 30591 root    3r   BLK   9,11      4058 /dev/md11

      kill -9 2128
      kill -9 5018
      kill -9 27605
      kill -9 30562
      kill -3 30591

      mdadm -S /dev/md11
      mdadm: fail to stop array /dev/md11: Device or resource busy

      lsof /dev/md11
      COMMAND     PID USER   FD   TYPE DEVICE SIZE NODE NAME
      mount      2128 root    3r   BLK   9,11      4058 /dev/md11
      mount      5018 root    3r   BLK   9,11      4058 /dev/md11
      mdadm     27605 root    3r   BLK   9,11      4058 /dev/md11
      mount     30562 root    3r   BLK   9,11      4058 /dev/md11
      badblocks 30591 root    3r   BLK   9,11      4058 /dev/md11

      cat /proc/mdstat
      Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10]
      md11 : active raid10 sde1[3] sdj1[4]
            286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0]
            [=...................]  resync =  8.0% (23210368/286743936) finish=289392.6min speed=15K/sec
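    Two things stand out in the output above: the badblocks process (PID 30591) only ever received SIGQUIT (kill -3), and the same PIDs survive kill -9, which usually means they are stuck in uninterruptible I/O. A hedged sequence to try, assuming the usual md sysfs layout (the commands are generic, not from the question):

      # Actually SIGKILL the badblocks scan, then re-check who still holds the device
      kill -9 30591
      fuser -vm /dev/md11

      # If the holders are gone, pause the resync and stop the array
      echo idle > /sys/block/md11/md/sync_action
      mdadm --stop /dev/md11

      # Processes that survive SIGKILL are typically in state D (uninterruptible sleep)
      ps -o pid,stat,cmd -p 2128 5018 27605 30562 30591

    If the holders really are in state D waiting on the failed disks, the array usually cannot be torn down until the underlying devices come back or the machine is rebooted.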

    Read the article

  • Problems setting up a VPN: can connect but can't ping anyone

    - by Fernando
    This is my first time setting up a VPN. Clients can connect but can't ping other machines. This is certainly a routing problem, but I can't find the right way to configure it. Here is a sample layout of the two LANs I want to connect: I want machines in 192.168.1.0/24 to be able to reach 192.168.0.0/24 as if they were on the same network. For the VPN network I would like to use the 10.0.0.0/24 range. Here is my server.conf:

      proto udp
      port 1194
      dev tun
      server 10.0.0.0 255.255.255.0
      push "route 192.168.0.0 255.255.255.0 192.168.0.1"
      push "dhcp-option DNS 192.168.0.1"
      push "dhcp-option WINS 192.168.0.1"
      comp-lzo
      keepalive 10 120
      float
      max-clients 10
      persist-key
      persist-tun
      log-append /var/log/openvpn.log
      verb 6
      tls-server
      dh /etc/openvpn/keys/dh1024.pem
      ca /etc/openvpn/keys/ca.crt
      cert /etc/openvpn/keys/server.crt
      key /etc/openvpn/keys/server.key
      tls-auth /etc/openvpn/keys/mykey.key 0
      status /var/log/openvpn.stats

    And one of my clients, 192.168.1.2:

      client
      dev tap
      proto udp
      remote my.no-ip.address 1194
      route 192.168.1.0 255.0.0.0 192.168.1.1 3
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\ca.crt"
      cert "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\test1.crt"
      key "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\test1.key"
      tls-auth "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\mykey.key" 1
      ns-cert-type server
      cipher BF-CBC
      comp-lzo
      verb 1

    What exactly am I doing wrong? All machines can connect to OpenVPN, but ping doesn't work. In the client log I see the following error:

      Wed Feb 16 09:43:23 2011 OpenVPN ROUTE: OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options
      Wed Feb 16 09:43:23 2011 OpenVPN ROUTE: failed to parse/resolve route for host/network: 10.0.0.1

    Thanks!
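    A few things in the configs above commonly cause exactly these symptoms: the server uses dev tun while the client uses dev tap (they must match), the pushed route names the LAN gateway 192.168.0.1 (which clients cannot reach directly, so it is normally omitted and OpenVPN routes via the tunnel), and the client's manual route line uses a /8 netmask for what is a /24 network. A hedged sketch of a tun-based site-to-site layout - directives are standard OpenVPN, but names like the ccd file are placeholders:

      # server.conf (sketch)
      dev tun
      server 10.0.0.0 255.255.255.0
      push "route 192.168.0.0 255.255.255.0"     # no gateway: route it through the tunnel
      # to also reach the client-side LAN (192.168.1.0/24):
      route 192.168.1.0 255.255.255.0
      client-config-dir ccd
      # ccd/test1 (file named after the client certificate's CN) containing:
      #   iroute 192.168.1.0 255.255.255.0

      # client config (sketch)
      dev tun                                     # match the server
      # remove the manual "route 192.168.1.0 255.0.0.0 192.168.1.1 3" line entirely

    Both endpoints also need IP forwarding enabled, and hosts (or the default gateway) on each LAN need a return route sending the opposite LAN and 10.0.0.0/24 back to their local OpenVPN endpoint.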

    Read the article

  • Downloading a file from the internet with '&' in URL using wget

    - by matt_tm
    Hi, I'm trying to download a file from a URL that looks like this: http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf. Within the browser, this link prompts me to download a file called x.pdf irrespective of what DEF is (but 'x.pdf' has the right content). However, using wget I get the following:

      >wget.exe http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf
      SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
      syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc
      --2011-01-06 07:52:05--  http://pdf.example.com/filehandle.ashx?p1=ABC
      Resolving pdf.example.com... 99.99.99.99
      Connecting to pdf.example.com|99.99.99.99|:80... connected.
      HTTP request sent, awaiting response... 500 Internal Server Error
      2011-01-06 07:52:08 ERROR 500: Internal Server Error.

      'p2' is not recognized as an internal or external command, operable program or batch file.

    This is on a Windows Vista system.

    Edit 1:

      >wget.exe "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf"
      SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
      syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc
      --2011-02-06 10:18:31--  http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf
      Resolving pdf.example.com... 99.99.99.99
      Connecting to pdf.example.com|99.99.99.99|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 4568 (4.5K) [image/JPEG]
      Saving to: `filehandle.ashx@p1=ABC&p2=DEF.pdf'

      100%[======================================>] 4,568       --.-K/s   in 0.1s

      2011-02-06 10:18:33 (30.0 KB/s) - `filehandle.ashx@p1=ABC&p2=DEF.pdf' saved [4568/4568]
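    As the edit above shows, quoting the URL stops the Windows shell from treating & as a command separator; the remaining wrinkle is the awkward local filename, which wget's -O option can override. A hedged one-liner with the placeholder names from the question:

      wget.exe -O x.pdf "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf"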

    Read the article

  • Outlook errors and blank address book?

    - by Chasester
    I have a user with a fresh install of Windows 7 64-bit. Office 2010 was installed as well. Everything worked great until we installed Adobe Pro 9 and WordPerfect 12 (for Outlook we are not using Exchange). Then Outlook popped an error saying it could not start Outlook. I then set it to run in compatibility mode and Outlook started; I took compatibility mode off again and Outlook continued to start without difficulty. However, her address book went blank (the contacts are all there). In the properties of the Contacts folder, "include this folder in the address book" is checked (and grayed out), and the Outlook Address Book is listed when I look at the account properties. I tried creating a new profile, to no avail. I tried creating a new profile and creating a new PST file - to no avail. I tried uninstalling Office, removing the folders inside Roaming and Local, and reinstalling Office; I got the same "could not start Outlook" error, did the compatibility-mode bit to get it to start, but the address book continues to be empty. Has anyone run across this before? I'm thinking there must be some other preferences folder besides those in Roaming and Local, since her signature remained when I reinstalled Office.

    Read the article

  • IP Tables won't save the rule.

    - by ArchUser
    Hello, I'm using Arch Linux and I have an iptables rule set that I know works (from my other server). It's in /etc/iptables/iptables.rules, and it's the only rule set in that directory. I run /etc/rc.d/iptables save, then /etc/rc.d/iptables restart, but when I do iptables --list I just get ACCEPT policies on INPUT, FORWARD and OUTPUT.

      # Generated by iptables-save v1.4.8 on Sat Jan  8 18:42:50 2011
      *filter
      :INPUT DROP [0:0]
      :FORWARD DROP [0:0]
      :OUTPUT ACCEPT [216:14865]
      :BRUTEGUARD - [0:0]
      :interfaces - [0:0]
      :open - [0:0]
      -A INPUT -p icmp -m icmp --icmp-type 18 -j DROP
      -A INPUT -p icmp -m icmp --icmp-type 17 -j DROP
      -A INPUT -p icmp -m icmp --icmp-type 10 -j DROP
      -A INPUT -p icmp -m icmp --icmp-type 9 -j DROP
      -A INPUT -p icmp -m icmp --icmp-type 5 -j DROP
      -A INPUT -p icmp -j ACCEPT
      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -j interfaces
      -A INPUT -j open
      -A INPUT -p tcp -j REJECT --reject-with tcp-reset
      -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable
      -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
      -A INPUT -f -j DROP
      -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
      -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
      -A INPUT -i eth+ -p icmp -m icmp --icmp-type 8 -j DROP
      -A BRUTEGUARD -m recent --set --name BF --rsource
      -A BRUTEGUARD -m recent --update --seconds 600 --hitcount 20 --name BF --rsource -j LOG --log-prefix "[BRUTEFORCE ATTEMPT] " --log-level 6
      -A BRUTEGUARD -m recent --update --seconds 600 --hitcount 20 --name BF --rsource -j DROP
      -A interfaces -i lo -j ACCEPT
      -A open -p tcp -m tcp --dport 80 -j ACCEPT
      -A open -p tcp -m tcp --dport 10011 -j ACCEPT
      -A open -p udp -m udp --dport 9987 -j ACCEPT
      -A open -p tcp -m tcp --dport 30033 -j ACCEPT
      -A open -p tcp -m tcp --dport 8000 -j ACCEPT
      -A open -p tcp -m tcp --dport 8001 -j ACCEPT
      -A open -s 76.119.125.61 -p tcp -m tcp --dport 21 -j ACCEPT
      -A open -s 76.119.125.61 -p tcp -m tcp --dport 3306 -j ACCEPT
      -A open -p tcp -m tcp --dport 22 -j BRUTEGUARD
      -A open -s 76.119.125.61 -p tcp -m tcp --dport 22 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
      COMMIT
      # Completed on Sat Jan  8 18:42:50 2011
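    One likely gotcha on Arch's old initscripts: /etc/rc.d/iptables save writes the kernel's *current* (empty, accept-everything) rule set back over the file, so running save before the rules were ever loaded can wipe them. A hedged sketch of an order that should work, assuming the stock rc.d script and its default IPTABLES_CONF setting:

      # Load the saved rule set into the kernel first
      iptables-restore < /etc/iptables/iptables.rules

      # Verify it actually took effect
      iptables -L -n -v

      # Only now is it safe to let the initscript snapshot the running rules
      /etc/rc.d/iptables save
      /etc/rc.d/iptables restart

      # Also confirm the script points at the right file (path may differ on your system):
      grep IPTABLES_CONF /etc/conf.d/iptables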

    Read the article

  • Always failing to connect to Outlook Anywhere through TMG 2010 with certificate?

    - by Albert Widjaja
    Hi, I have successfully published Exchange ActiveSync using TMG 2010, and OWA internally only, but somehow when I tried to publish Outlook Anywhere it failed (as can be seen from https://www.testexchangeconnectivity.com).

    Settings: In IIS 7 I have unchecked "Require SSL" and set client certificates to "Ignore".

    Exchange CAS settings:

      ServerName                 : ExCAS02-VM
      SSLOffloading              : True
      ExternalHostname           : activesync.domain.com
      ClientAuthenticationMethod : Basic
      IISAuthenticationMethods   : {Basic}
      MetabasePath               : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc
      Path                       : C:\Windows\System32\RpcProxy
      Server                     : ExCAS02-VM
      AdminDisplayName           :
      ExchangeVersion            : 0.1 (8.0.535.0)
      Name                       : Rpc (Default Web Site)
      DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative.......
      Identity                   : ExCAS02-VM\Rpc (Default Web Site)
      Guid                       : 59873fe5-3e09-456e-9540-f67abc893f5e
      ObjectCategory             : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
      ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
      WhenChanged                : 18/02/2011 4:31:54 PM
      WhenCreated                : 18/02/2011 4:30:27 PM
      OriginatingServer          : ADDC01.domainad.com
      IsValid                    : True

    Test-OutlookWebServices results:

      1013  Error  When contacting https://activesync.domain.com/Rpc received the error: The remote server returned an error: (500) Internal Server Error.
      1017  Error  [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds.

    https://www.testexchangeconnectivity.com testing result:

      Checking the IIS configuration for client certificate authentication.
        Client certificate authentication was detected.
        Additional Details: Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication.

    Environment: Windows Server 2008 (HT-CAS), Exchange Server 2007 SP1, TMG 2010 Standard, Outlook 2007 client SP2. Any kind of help would be greatly appreciated. Thanks.

    Read the article

  • How do I access the web server on my desktop from my laptop?

    - by Steven
    I'm running Apache on my desktop and I would like to access a website on it from my laptop. This is some of the Apache config:

      NameVirtualHost 127.0.0.1:80
      <VirtualHost 127.0.0.1:80>
          ServerName mysite.com
          DocumentRoot I:/wamp/www/mysite/
      </VirtualHost>

      ServerName localhost:80

      <Directory />
          Options FollowSymLinks
          AllowOverride all
          Order deny,allow
          Deny from all
      </Directory>

    On my laptop I've added the following to the HOSTS file:

      10.0.0.3 mysite.com

    But accessing the page through mysite.com is not very successful. If I enter the IP address directly, I only get a Forbidden message. What do I need to do in order to get this to work?

    Update: I'm running WampServer 2.1 (Apache 2.2.17). Apache is up and running. I can ping 10.0.0.3 from the laptop, but I'm not able to reach http://mysite.com from it. IE gives me "403 Forbidden - The website declined to show this webpage". The only log that gets entries when trying to access the website from my laptop is access.log.

      access.log:
      10.0.0.4 - - [13/Jun/2011:10:14:04 +0200] "GET / HTTP/1.1" 403 202

      apache_error.log:
      [Mon Jun 13 10:08:16 2011] [error] VirtualHost localhost:0 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    Update 2: My Apache config has the following entry:

      AllowOverride all
      Order Deny,Allow
      Deny from all
      Allow from 127.0.0.1

    Could it be that this Allow from is stopping other computers accessing the page?
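    Yes - with Order Deny,Allow plus Deny from all and Allow from 127.0.0.1, only local requests are ever allowed, and binding the vhost to 127.0.0.1:80 has the same effect one layer lower. A hedged sketch of an Apache 2.2-style fix; the 10.0.0.0/24 range is assumed from the IPs in the question, so adjust it to the real LAN:

      # Listen on all interfaces, not just loopback
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName   mysite.com
          DocumentRoot "I:/wamp/www/mysite/"
          <Directory "I:/wamp/www/mysite/">
              Options FollowSymLinks
              AllowOverride All
              Order Deny,Allow
              Deny from all
              Allow from 127.0.0.1 10.0.0.0/24
          </Directory>
      </VirtualHost>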

    Read the article

  • Can I make two wireless routers communicate using the wireless?

    - by Dana Robinson
    I want to make a setup like this:

      cable modem <-cable- wireless router 1 <-wireless- wireless router 2 in another room <-cables- PCs in another room

    Basically, I want to extend my network access across the house and then have a bunch of network jacks available for my office PCs. Right now, I have a cable modem going to a wireless router in one room and a PC with a wireless PCI card in it in the office on the other side of the house. I use internet connection sharing with the other PCs in the office. The problem is that ICS is flaky, especially when I switch to VPN on the Windows box to access files at work. I picked up a wireless USB adapter that I thought I could share among the PCs I work on, but I'm not very happy with it, so I'm going to return it (NDISwrapper support for it is poor). Is this possible? My wireless experience so far has been pretty straightforward, so I have no idea what kind of hardware is available. I've looked at network extenders, but those just look like repeaters for signal strength. I want wired network jacks in my office.

    Read the article

  • access an IP restricted service from a dynamic IP (Broadband modem) on a windows machine

    - by Joel Alenchery
    Hi, I don't know if this is the correct place to ask this question, but here goes (please note that I am pretty much a newbie in terms of networking, and I work primarily on the Windows platform). I have been working on accessing and consuming some web services in C#/ASP.NET; these web services are IP restricted. Currently they allow access only from my work network (we have a static IP set up through which all our internet requests are routed). Every now and then we have people who go out and about and are stuck using a USB-dongle-based internet connection, and hence are not able to access these web services that they are working on. What I would like to do is to provide some way for these remote workers to access the IP-restricted web services using the static IP at our office. For example, when a remote worker tries to access a service, say http://exampleService.com, the request gets routed to some box at our office and then out to the actual service. That way the service always sees the static IP of the office and not the dynamic IP that the remote user is actually using. I have done a fair bit of googling, and it's difficult to search for, as most of the results come back for dynamic DNS, which is not really what I am looking for. I have also looked at a couple of posts on here, namely "Accessing IP restricted server from dynamic IP" (http://serverfault.com/questions/187231/accessing-ip-restricted-server-from-dynamic-ip), which does provide some insight, but that poster seems to have access to the source that does the IP restriction and is able to change the restrictions; in my case I don't have that access. Another one that looked interesting was "Static IP for dynamic IP" (http://serverfault.com/questions/136806/static-ip-for-dynamic-ip); the first answer seems exactly what I need, but I don't know how I would go about doing the same on a Windows machine. Any help would be really appreciated. (Sorry about being so noob-ish.) PS: Right now everyone is using RDC/LogMeIn to access an internet-connected machine in the office to manually check the web service and get work done, which is a very tedious process.
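    The usual way to do this without touching the service's IP whitelist is to relay the traffic through a machine that already has the office's static IP, for instance with an SSH tunnel or SOCKS proxy. A hedged sketch follows - the host names and ports are placeholders, and on Windows PuTTY's plink.exe offers the same -N/-D/-L switches as OpenSSH:

      # From the remote worker's laptop: open a SOCKS proxy through an office machine
      ssh -N -D 1080 user@office-gateway.example.com
      # ...then point the application or browser at SOCKS proxy localhost:1080

      # Or forward one specific service instead of proxying everything:
      ssh -N -L 8080:exampleService.com:80 user@office-gateway.example.com
      # ...and call http://localhost:8080/ in place of http://exampleService.com/

    Either way the web service only ever sees the office gateway's static address, which is what the IP restriction requires.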

    Read the article

  • Domino to Exchange 2007 (or 2010) Design Concerns?

    - by NickToyota
    Today we got the executive green light to proceed with changing from a Domino platform to Exchange. The business prefers Exchange as a messaging platform (even though IMO IBM Domino is fine - if it ain't broke, don't fix it - but it was not my call). I have been put in charge of making the Domino-to-Exchange migration go as smoothly as possible, and I have also been told to put together costs for this project. I have some questions and concerns regarding network design, licensing and costs.

    The current setup is as follows:

      1 HQ office (100 users), 1 secondary office (50 users), 5 branch offices (under 10 users each)
      5 different email domains
      Windows Server 2003 functional level with a few 2008 R2 servers
      Lotus Domino Notes servers (one in each office)
      IronMail appliance
      Public Domino web mail server
      Majority G5+ ProLiant servers
      Domino BlackBerry Enterprise license and server
      No VoIP phones

    My questions:

      What are the basic hardware requirements for Exchange 2007 or 2010? Can I simply purchase a single physical server?
      Will each office require an Exchange server, or possibly additional servers (roles)?
      How is email routed to the smaller branch offices?
      Standard or Enterprise licenses?

    The business has been running Domino (messaging and application services) for over 10 years and also wants Exchange to support email services, BlackBerry, Outlook Web Access, and possibly iPhone devices. Thank you, ServerFault universe.

    Read the article

  • Why would I be getting IXFR and AXFR transfer denied on my DNS server?

    - by danielj
    From everything I've researched and tried, it appears that my named.conf is configured correctly, including the allow-transfer section. Here is a sample of the errors. It is only happening with a couple of my secondary servers, but it is happening for every zone on the servers that are failing. One of the servers is attempting IXFR, the other AXFR; the result is the same:

      18-Mar-2011 14:27:51.372 security: error: client 84.234.24.90#59208: zone transfer 'juansgaranton.com/IXFR/IN' denied
      18-Mar-2011 14:32:18.015 security: error: client 174.37.196.55#50783: zone transfer 'cheshirecat.net/AXFR/IN' denied

    Here is the relevant part of named.conf:

      options {
          directory "/etc/bind";
          pid-file "/var/run/named/named.pid";
          files 4096;
          allow-transfer {
              140.186.190.103;
              84.234.24.90;
              207.246.95.34;
              203.20.52.5;
              140.186.190.103;
              127.0.0.1;
              174.37.196.55;
          };
      };

      logging {
          channel "bind" {
              file "/var/log/bind.log" versions 3;
              print-time yes;
              print-severity yes;
              print-category yes;
              severity info;
          };
          category lame-servers { null; };
          category "default" { "bind"; };
      };
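    One thing worth checking: in BIND, an allow-transfer statement inside a zone (or a view) overrides the global one in options, so a single zone-level or view-level entry can silently re-deny secondaries that the options block allows. A hedged sketch of a per-zone grant for comparison - the file path is a placeholder, not taken from the question:

      zone "juansgaranton.com" {
          type master;
          file "master/juansgaranton.com.db";   // placeholder path
          allow-transfer {
              84.234.24.90;                     // the IXFR secondary from the log
              174.37.196.55;                    // the AXFR secondary from the log
          };
      };

      # After editing, reload and re-test a transfer from one of the secondaries:
      #   rndc reconfig
      #   dig @master-server juansgaranton.com AXFR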

    Read the article

  • Ruby on Rails (Redmine) on Apache - 503 Error

    - by andrewtweber
    I am running a Ruby on Rails application called Redmine. It's been working fine, but today it's giving a 503 Service Temporarily Unavailable error. (It was initially set up by an employee who is now gone.) I checked the error log and it says:

      [Mon Nov 21 11:03:30 2011] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed
      [Mon Nov 21 11:03:30 2011] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)

    Here's a chunk of my Apache config:

      <VirtualHost *:80>
          ServerName redmine.{domain}.com
          RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
          RewriteRule ^/(.*)$ balancer://redminecluster%{REQUEST_URI} [P,QSA,L]
      </VirtualHost>

      <Proxy balancer://redminecluster>
          BalancerMember http://127.0.0.1:3000
      </Proxy>

    I found this link: http://www.redmine.org/boards/2/topics/20561, which suggests I simply need to "start the redmine server." I've tried /etc/init.d/redmine start, which gives me this output:

      => Booting Mongrel
      => Rails 2.3.11 application starting on http://0.0.0.0:3000

    The contents of /etc/init.d/redmine:

      cd /var/redmine
      sudo ruby script/server -d -e production

    One thing I immediately notice is that it says 0.0.0.0 instead of 127.0.0.1. In addition, running top or ps -ef shows no record of a "mongrel" or "redmine" process. I've also tried restarting Apache before and after starting Redmine. Not sure where to go from here.
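    Binding to 0.0.0.0 still covers 127.0.0.1, so the more telling symptom is that no mongrel process survives - the daemonized server is probably crashing right after boot. A hedged way to see why; the paths follow the init script above, and the log name is the Rails default, so it may differ:

      cd /var/redmine

      # Is anything actually listening where Apache expects it?
      netstat -tlnp | grep :3000

      # Run the server in the foreground so the failure is printed to the terminal
      ruby script/server -e production -b 127.0.0.1 -p 3000

      # Or check the production log for the crash the daemon swallowed
      tail -n 50 log/production.log

    Typical culprits after a reboot or upgrade are a stale PID/lock file, a database that isn't up yet, or a gem that no longer loads; the foreground run makes whichever it is visible immediately.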

    Read the article

  • Windows Server 2003 SBS domain in multiple sites

    - by E3 Group
    We have about 25 employees in our current office and are looking to open another office in another capital city housing about 15 employees. In our current office we are running a domain hosted by a 2003 SBS server, and I've been tasked by the boss with expanding our infrastructure to the new office in the cheapest way possible (cheapest in the short run, that is, because my boss doesn't think more than 6 months ahead). So I'm looking to get a second-hand server and have it run Server 2003 Std with Exchange Server 2003. These are the things it needs to do:

      Replicate shared folders that are hosted on the parent LAN.
      Deliver emails hosted on the parent Exchange server.
      Somehow link up with the parent domain controller and push the AD to the remote site.

    I'm pretty sure the third is impossible, but the DC would be available if a VPN connection is present, right? On that note, would I be looking at hardware VPN connections? I'm not sure how to deploy the new site, as this is my first time doing it and I'm making it especially difficult for myself, seeing as the AD and DC are on an SBS server. Would I first start by establishing a VPN connection and then joining the new server to the domain? Will things "just work" if I install Exchange onto the new server and point Outlooks to it? And how would I be able to replicate shared folders?

    Read the article

  • OpenVPN server throws an "access denied" error

    - by HackToHell
    OpenVPN refuses to start up and exits with this error ever since I upgraded Ubuntu from 11.04 to 11.10:

      Dec 14 19:12:38 oogle ovpn-server[32150]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul  4 2011
      Dec 14 19:12:38 oogle ovpn-server[32150]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
      Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open openvpn-status.log for WRITE
      Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open ipp.txt for READ/WRITE
      Dec 14 19:12:38 oogle ovpn-server[32150]: Diffie-Hellman initialized with 1024 bit key
      Dec 14 19:12:38 oogle ovpn-server[32150]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
      Dec 14 19:12:38 oogle ovpn-server[32150]: Error: private key password verification failed
      Dec 14 19:12:38 oogle ovpn-server[32150]: Exiting
      Dec 14 19:12:46 oogle ovpn-server[32201]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul  4 2011
      Dec 14 19:12:46 oogle ovpn-server[32201]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
      Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open openvpn-status.log for WRITE
      Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open ipp.txt for READ/WRITE
      Dec 14 19:12:46 oogle ovpn-server[32201]: Diffie-Hellman initialized with 1024 bit key
      Dec 14 19:12:46 oogle ovpn-server[32201]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
      Dec 14 19:12:46 oogle ovpn-server[32201]: Error: private key password verification failed
      Dec 14 19:12:46 oogle ovpn-server[32201]: Exiting
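    The "fopen: Permission denied" in that log means OpenVPN cannot even read server.key (and likewise cannot open its status/ipp files), which usually comes down to file ownership or permissions, a relative path resolved against the wrong working directory, or the user/group the daemon drops to after the upgrade. A hedged checklist, assuming the standard /etc/openvpn layout rather than anything confirmed by the question:

      # Who owns the key, and can the daemon read it?
      ls -l /etc/openvpn/server.key /etc/openvpn/openvpn-status.log /etc/openvpn/ipp.txt
      chown root:root /etc/openvpn/server.key
      chmod 600 /etc/openvpn/server.key

      # If server.conf uses relative filenames, anchor them (absolute paths or a 'cd' directive)

      # Check which privileges the config drops and which files it reopens afterwards
      grep -E '^(user|group|persist-key|persist-tun|status|ifconfig-pool-persist)' /etc/openvpn/server.conf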

    Read the article

  • Hudson authentication via wget returns HTTP error 302

    - by Rafael
    I'm trying to make a script to authenticate against Hudson using wget and store the authentication cookie. The content of the script is this:

      wget \
        --no-check-certificate \
        --save-cookies /home/hudson/hudson-authentication-cookie \
        --output-document "-" \
        'https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true'

    Unfortunately, when I run this script, I get:

      --2011-02-03 13:39:29--  https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true
      Resolving myhudsonserver... 127.0.0.1
      Connecting to myhudsonserver|127.0.0.1|:8443... connected.
      WARNING: cannot verify myhudsonserver's certificate, issued by `/C=Unknown/ST=Unknown/L=Unknown/O=Unknown/OU=Unknown/CN=myhudsonserver':
        Self-signed certificate encountered.
      HTTP request sent, awaiting response... 302 Moved Temporarily
      Location: https://myhudson:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F [following]
      --2011-02-03 13:39:29--  https://myhudsonserver:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F
      Reusing existing connection to myhudsonserver:8443.
      HTTP request sent, awaiting response... 404 Not Found
      2011-02-03 13:39:29 ERROR 404: Not Found.

    There's no error log in any of Hudson's Tomcat log files. Does anyone have any idea what might be happening? Thanks.
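    Two things worth trying, shown as a hedged sketch using the placeholder URL and credentials from the question: the Acegi form login normally expects a POST, and --keep-session-cookies is needed before wget will record the JSESSIONID session cookie; alternatively, Hudson accepts HTTP basic authentication (password or API token) on every request, which avoids the cookie dance entirely.

      # Option 1: POST the login form and keep the session cookie
      wget --no-check-certificate \
           --keep-session-cookies \
           --save-cookies /home/hudson/hudson-authentication-cookie \
           --post-data 'j_username=my_username&j_password=my_password&remember_me=true' \
           --output-document - \
           'https://myhudsonserver:8443/hudson/j_acegi_security_check'

      # Option 2: authenticate per request instead of via a stored cookie
      wget --no-check-certificate \
           --auth-no-challenge --user=my_username --password=my_password \
           --output-document - \
           'https://myhudsonserver:8443/hudson/api/xml'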

    Read the article

  • VNC error: "Could not connect to session bus: Failed to connect to socket"

    - by GJ
    I started a VNC server on display :1 on an Ubuntu machine. When I connect to it, I get a grey X window with the error message "Could not connect to session bus: Failed to connect to socket." The VNC log is:

      Xvnc Free Edition 4.1.1 - built Apr  9 2010 15:59:33
      Copyright (C) 2002-2005 RealVNC Ltd.
      See http://www.realvnc.com for information on VNC.
      Underlying X server release 40300000, The XFree86 Project, Inc

      Sun Mar 20 15:33:59 2011
       vncext:      VNC extension running!
       vncext:      Listening for VNC connections on port 5901
       vncext:      created VNC server for screen 0
      error opening security policy file /etc/X11/xserver/SecurityPolicy
      Could not init font path element /usr/X11R6/lib/X11/fonts/Type1/, removing from list!
      Could not init font path element /usr/X11R6/lib/X11/fonts/Speedo/, removing from list!
      Could not init font path element /usr/X11R6/lib/X11/fonts/misc/, removing from list!
      Could not init font path element /usr/X11R6/lib/X11/fonts/75dpi/, removing from list!
      Could not init font path element /usr/X11R6/lib/X11/fonts/100dpi/, removing from list!
      cat: /var/run/gdm/auth-for-link2-eGnVvf/database: No such file or directory
      gnome-session[24880]: WARNING: Could not make bus activated clients aware of DISPLAY=:1.0 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused
      gnome-session[24880]: WARNING: Could not make bus activated clients aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused
      gnome-session[24880]: WARNING: Could not make bus activated clients aware of SESSION_MANAGER=local/dell:@/tmp/.ICE-unix/24880,unix/dell:/tmp/.ICE-unix/24880 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused

      Sun Mar 20 15:34:10 2011
       Connections: accepted: 0.0.0.0::51620
       SConnection: Client needs protocol version 3.8
       SConnection: Client requests security type VncAuth(2)
       VNCSConnST:  Server default pixel format depth 16 (16bpp) little-endian rgb565
       VNCSConnST:  Client pixel format depth 16 (16bpp) little-endian rgb565
      gnome-session[24880]: Gtk-CRITICAL: gtk_main_quit: assertion `main_loops != NULL' failed
      gnome-session[24880]: CRITICAL: dbus_g_proxy_new_for_name: assertion `connection != NULL' failed

    Any ideas how to fix it?
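    The session-bus errors typically mean gnome-session was started inside the VNC display without a D-Bus session daemon of its own. A common fix is to launch the desktop through dbus-launch in the VNC startup script; a hedged sketch of ~/.vnc/xstartup for a GNOME desktop of that era (the exact session command varies by Ubuntu release):

      #!/bin/sh
      # ~/.vnc/xstartup - run the desktop under its own D-Bus session bus
      unset SESSION_MANAGER
      unset DBUS_SESSION_BUS_ADDRESS
      [ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
      exec dbus-launch --exit-with-session gnome-session

      # After editing, restart the server so the new script is used:
      #   vncserver -kill :1 && vncserver :1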

    Read the article

  • How to stop Cron from sending messages about errors

    - by Beck
    I'm getting these strange mails from cron:

      Return-Path: <[email protected]>
      Delivered-To: [email protected]
      Received: by domain.com (Postfix, from userid 0) id 6F944264D0; Mon, 10 Jan 2011 10:35:01 +0000 (UTC)
      From: [email protected] (Cron Daemon)
      To: [email protected]
      Subject: Cron <root@domain> lynx -dump http://www.domain.com/cron/realqueue
      Content-Type: text/plain; charset=ANSI_X3.4-1968
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <LOGNAME=root>
      Message-Id: <[email protected]>
      Date: Mon, 10 Jan 2011 10:35:01 +0000 (UTC)

      /bin/sh: lynx: not found

    I have these cron settings in the crontab file:

      SHELL=/bin/sh
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      */5 * * * * lynx -dump http://www.domain.com/cron/realqueue
      17 *  * * *   root  cd / && run-parts --report /etc/cron.hourly
      25 6  * * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
      47 6  * * 7   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
      52 6  1 * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    Lynx is installed on my Ubuntu box as well. Of course, domain.com stands in for my real domain; I've just replaced it. Thanks ;)
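    The mail body (/bin/sh: lynx: not found) says cron's restricted PATH can't find lynx, so every run fails and cron mails the error to root. A hedged fix, assuming lynx lives in /usr/bin (check with which); note also that if this is /etc/crontab rather than root's own crontab, the lynx line is missing the user field that the other lines have:

      # Find where lynx actually lives for the cron environment
      which lynx            # e.g. /usr/bin/lynx

      # Use the full path, and discard output so cron has nothing to mail
      */5 * * * * /usr/bin/lynx -dump http://www.domain.com/cron/realqueue >/dev/null 2>&1

      # Or silence cron mail for the whole crontab (errors included)
      MAILTO=""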

    Read the article

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

      [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
      [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for MPM worker:

      KeepAlive ON
      KeepAliveTimeout 2
      <IfModule mpm_worker_module>
          StartServers          5
          MinSpareThreads      10
          MaxSpareThreads      10
          ThreadLimit          64
          ThreadsPerChild      10
          MaxClients           20
          MaxRequestsPerChild 2000
      </IfModule>

    Here's my FastCGI Apache configuration:

      <IfModule mod_fastcgi.c>
          # One shared PHP-managed fastcgi for all sites
          Alias /fcgi /var/local/fcgi
          # IMPORTANT: without this we get more than one instance
          # of our wrapper, which itself spawns 20 PHP processes, so
          # that would be Bad (tm)
          FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
          <Directory /var/local/fcgi>
              # Use the + so we don't clobber other options that
              # may be needed. You might want FollowSymLinks here
              Options +ExecCGI
          </Directory>
          AddType application/x-httpd-php5 .php
          AddHandler fastcgi-script .fcgi
          Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
      </IfModule>

    Here's my FastCGI wrapper:

      #!/bin/sh
      PHPRC="/etc/php5/apache2"
      export PHPRC
      PHP_FCGI_CHILDREN=8
      export PHP_FCGI_CHILDREN
      exec /usr/bin/php-cgi

    Any help would be very much appreciated!
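    For what it's worth, MaxClients 20 with ThreadsPerChild 10 means at most 20 concurrent requests served by two worker processes, each request then queues for one of only 8 php-cgi children, and the 20-second idle timeout aborts whatever is still waiting - which matches the errors above. A hedged tuning sketch (the numbers are illustrative and must be sized to the machine's RAM, especially with ~70 MB per PHP child; remember MaxClients cannot exceed ServerLimit x ThreadsPerChild):

      <IfModule mpm_worker_module>
          StartServers          2
          ServerLimit          16
          ThreadsPerChild      25
          MaxClients          400
          MinSpareThreads      25
          MaxSpareThreads      75
          MaxRequestsPerChild   0
      </IfModule>

      # mod_fastcgi: a less aggressive timeout, still one wrapper class
      FastCgiConfig -idle-timeout 60 -maxClassProcesses 1
      # ...and in the wrapper, raise PHP_FCGI_CHILDREN (e.g. 16) if memory allows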

    Read the article

  • Webservice randomly dropping connections - possibly due to firewall nonevent data?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a WatchGuard Firebox), to a server in our office. All of a sudden the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick PHP/curl script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5-second timeout):

         http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
      1          0         0.000096        0.0342            0.0000              0.0000      0.0342
      2        200         0.000052        0.0332            0.1327              0.1751      0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure whether this shows that the connection has successfully passed the firewall, or whether the firewall could still be causing the issue. Our firewall is showing a series of "nondata event" log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google; I'm not sure whether they fit in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
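    For anyone wanting to reproduce this kind of per-phase timing without the PHP wrapper, curl can emit the same fields directly; a hedged one-liner with a placeholder URL is below. A failure between connect and pretransfer means the TCP handshake completed but the request (or TLS negotiation) then stalled, which points at something stateful in the path dropping the session mid-stream rather than refusing it outright.

      # Print HTTP code plus per-phase timings for 20 consecutive calls, 5 s cap each
      for i in $(seq 1 20); do
        curl -o /dev/null -s -m 5 \
             -w '%{http_code} %{time_namelookup} %{time_connect} %{time_pretransfer} %{time_starttransfer} %{time_total}\n' \
             https://webservice.example.office/endpoint
      done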

    Read the article
