Search Results

Search found 1146 results on 46 pages for 'greg rock'.


  • 301 redirect root domain to www subdomain on godaddy windows hosting account

    - by Greg
    If I type in domain.com and www.domain.com, they both show the same website but display different URLs in the address bar. I'd like visitors and search engines that type just "domain.com" to be redirected to "www.domain.com". I'm using IIS 7 on a GoDaddy hosting account. How do I redirect all requests for "domain.com" to "www.domain.com"? I have the default DNS setup: "domain.com" is my A record, and the CNAME "www" points to the A record.
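    A hedged sketch (an editorial addition, not from the question): with the IIS 7 URL Rewrite module, the usual answer is a canonical host name rule in web.config; "domain.com" is a placeholder for the real domain:

      <rule name="CanonicalHostName" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^domain\.com$" />
        </conditions>
        <action type="Redirect" url="http://www.domain.com/{R:1}" redirectType="Permanent" />
      </rule>

    redirectType="Permanent" is what makes this a 301 rather than a 302, which is what search engines need in order to treat "www" as the canonical host.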


  • Trying to run chrooted suPHP with UserDir, getting 500 server error

    - by Greg Antowski
    I've managed to get suPHP working fine with UserDir (i.e. PHP files run from the /home/$username/public_html directory), but I can't get it to work when I chroot it to the user's home directory. I've been following this guide: http://compilefailure.blogspot.co.nz/2011/09/suphp-chroot-gotchas.html and adapting it to my needs. I'm not creating vhosts; I just want PHP scripts to be jailed to the user's home directory. I've gotten to the part where you use makejail and set up a symlink. However, even with the symlink set up correctly, PHP scripts won't run. This is what's shown in the Apache error log:

      SoftException in Application.cpp:537: Could not execute script "/home/jimmy/public_html/test.php"
      [error] [client 127.0.0.1] Caused by SystemException in API_Linux.cpp:444: execve() for program "/usr/bin/php-cgi" failed: No such file or directory

    The thing is, if I try running either of the following commands in the terminal, it works without any issues:

      /home/jimmy/usr/bin/php-cgi /home/jimmy/public_html/test.php
      /usr/bin/php-cgi /home/jimmy/public_html/test.php

    I've been trying for hours to get this going, and documentation for this kind of stuff is almost non-existent. If anyone could help me out with this, I'd be extremely grateful.
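    One thing worth checking (an assumption on my part, not part of the original question): inside a chroot, execve() resolves "/usr/bin/php-cgi" relative to the jail, so both the binary and every shared library it links against must exist under /home/jimmy. A quick sketch of verifying that:

      # the path the jailed suPHP will actually exec:
      ls -l /home/jimmy/usr/bin/php-cgi

      # list the libraries the real binary needs, then mirror them into the jail
      ldd /usr/bin/php-cgi
      mkdir -p /home/jimmy/lib /home/jimmy/usr/lib
      # copy each library ldd reports, e.g. (paths are examples only):
      cp /lib/libc.so.6 /home/jimmy/lib/

    Running the binary from a normal shell works because that shell is not chrooted, which would explain why both terminal commands above succeed while the jailed exec fails.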


  • Glusterfs denied mount

    - by greg
    I'm using GlusterFS 3.3.2: two servers, a brick on each one. The volume is "ARCHIVE80". I can mount the volume on Server2; if I touch a new file, it appears inside the brick on Server1. However, if I try to mount the volume on Server1, I get an error: "Mount failed. Please check the log file for more details." The log gives:

      [2013-11-11 03:33:59.796431] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-0: changing port to 24011 (from 0)
      [2013-11-11 03:33:59.796810] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-1: changing port to 24009 (from 0)
      [2013-11-11 03:34:03.794182] I [client-handshake.c:1614:select_server_supported_programs] 0-ARCHIVE80-client-0: Using Program GlusterFS 3.3.2, Num (1298437), Version (330)
      [2013-11-11 03:34:03.794387] W [client-handshake.c:1320:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to set the volume (Permission denied)
      [2013-11-11 03:34:03.794407] W [client-handshake.c:1346:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to get 'process-uuid' from reply dict
      [2013-11-11 03:34:03.794418] E [client-handshake.c:1352:client_setvolume_cbk] 0-ARCHIVE80-client-0: SETVOLUME on remote-host failed: Authentication failed
      [2013-11-11 03:34:03.794426] I [client-handshake.c:1437:client_setvolume_cbk] 0-ARCHIVE80-client-0: sending AUTH_FAILED event
      [2013-11-11 03:34:03.794443] E [fuse-bridge.c:4256:notify] 0-fuse: Server authenication failed. Shutting down.

    How come I can mount the volume on one server but not on the other?
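    One hedged thing to check (a suggestion, not from the question): a "Permission denied" / "Authentication failed" handshake is often the volume's auth.allow option rejecting the client's source address. A sketch, with placeholder IPs:

      gluster volume info ARCHIVE80                        # look for an auth.allow entry
      gluster volume set ARCHIVE80 auth.allow 192.168.0.1,192.168.0.2

    If Server1's own IP (or the address it mounts from, e.g. 127.0.0.1) is missing from that list, mounting locally would fail in exactly this way.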


  • Setting Proxy Server for IE 10 on Windows 8 using pac file and Group Policy

    - by Greg Bray
    We currently use Group Policy to configure a proxy server PAC file for Windows XP and Windows 7 computers on our network. We are now starting to get requests for Windows 8, but have noticed that our current GPO does not work for setting the proxy server on Windows 8 clients or Server 2012. Is it possible to do this using a 2008 R2 domain controller, or would we need to update our domain to a 2012 server? I found a reference to creating new GPO settings for "Internet Explorer 10 and 11" and vague references to using RSAT on Windows 8 to set IE 10 settings via preferences, but nothing that talks about using Group Policy to manage proxy settings.
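    One workaround that is sometimes suggested (an assumption here, not something confirmed by the question) is to push the PAC URL with a Group Policy Preferences registry item instead of Internet Explorer Maintenance, since IE Maintenance settings stopped applying around IE 10. The registry write such a preference item performs, with a placeholder URL, looks like:

      reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" ^
          /v AutoConfigURL /t REG_SZ /d "http://proxy.example.com/proxy.pac" /f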


  • IIS7 url rewrite rules

    - by sympatric greg
    In a hosted environment, I will be utilizing subdomains (and virtual directories) for various coding projects. I have a rewrite rule that changes 'subdomain.domain.com/url' to 'domain.com/subdomain/url'. This worked fine, except that the browser couldn't find resources with paths generated by ResolveURL("~/something"). The server was using the Application Path of "/subdomain/", so based on the rewrite rule, the browser's request for "/subdomain/something" was being looked for in "/subdomain/subdomain/something", where it wasn't to be found. Either of these URLs was valid:

      http://www.domain.com/subdomain/something
      http://subdomain.domain.com/something

    I resolved this by adding another URL rewrite rule to the subdomain:

      <rule name="RemoveSuperDir">
        <match url="subdomain/(.*)" />
        <action type="Rewrite" url="{R:1}" />
      </rule>

    So for each subdomain that I might add, I will need to add such a rule. Is there a way to write a single rule at the domain level to resolve this issue?
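    A possible generalization (an untested sketch; "domain.com" is a placeholder): join the host header and the URL into a single condition input, so the subdomain captured from {HTTP_HOST} can be matched back against the leading path segment, then rewrite to whatever follows it:

      <rule name="RemoveSuperDirGeneric">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}{URL}" pattern="^([^.]+)\.domain\.com/\1/(.*)$" />
        </conditions>
        <action type="Rewrite" url="{C:2}" />
      </rule>

    The \1 back-reference inside the condition pattern is what ties the subdomain to the duplicated directory name, so a single rule covers every subdomain.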


  • Error installing pkgconfig via macports

    - by Greg K
    I installed MacPorts 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs, as I want to mount some directories in my OS X file system (like you can do with mount --bind in Linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to install just pkgconfig. Here's the debug output from sudo port install pkgconfig:

      $ sudo port -d install pkgconfig
      DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
      DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
      DEBUG: OS Platform: darwin
      DEBUG: OS Version: 10.3.0
      DEBUG: Mac OS X Version: 10.6
      DEBUG: System Arch: i386
      DEBUG: setting option os.universal_supported to yes
      DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
      DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
      DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
      DEBUG: adding the default universal variant
      DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
      DEBUG: Requested variant darwin is not provided by port pkgconfig.
      DEBUG: Requested variant i386 is not provided by port pkgconfig.
      DEBUG: Requested variant macosx is not provided by port pkgconfig.
      ---> Computing dependencies for pkgconfig
      DEBUG: Executing org.macports.main (pkgconfig)
      DEBUG: Skipping completed org.macports.fetch (pkgconfig)
      DEBUG: Skipping completed org.macports.checksum (pkgconfig)
      DEBUG: Skipping completed org.macports.extract (pkgconfig)
      DEBUG: Skipping completed org.macports.patch (pkgconfig)
      ---> Configuring pkgconfig
      DEBUG: Using compiler 'Mac OS X gcc 4.2'
      DEBUG: Executing org.macports.configure (pkgconfig)
      DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
      DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig'
      checking for a BSD-compatible install... /usr/bin/install -c
      checking whether build environment is sane... yes
      checking for gawk... no
      checking for mawk... no
      checking for nawk... no
      checking for awk... awk
      checking whether make sets $(MAKE)... no
      checking whether to enable maintainer-specific portions of Makefiles... no
      checking build system type... i386-apple-darwin10.3.0
      checking host system type... i386-apple-darwin10.3.0
      checking for style of include used by make... none
      checking for gcc... /usr/bin/gcc-4.2
      checking for C compiler default output file name...
      configure: error: C compiler cannot create executables
      See `config.log' for more details.
      Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
      DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 while executing "$procedure $targetname"
      Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
      Error: Status 1 encountered during processing.

    I have only recently installed Xcode 3.2.2 (prior to installing MacPorts). Am I right in thinking this is the issue here?

      configure: error: C compiler cannot create executables
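    Almost certainly, yes: "C compiler cannot create executables" means configure's own test compile failed before pkgconfig was ever the problem. A minimal sketch for testing the compiler outside MacPorts (an editorial addition):

      cat > /tmp/hello.c <<'EOF'
      int main(void) { return 0; }
      EOF
      /usr/bin/gcc-4.2 /tmp/hello.c -o /tmp/hello && echo "compiler OK"

    If that fails, the config.log referenced in the output (under the work directory shown above) should contain the real compiler, linker, or SDK error from the Xcode install.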


  • Forwarding all mail to a single dev box on IIS via virtual SMTP

    - by Greg R
    I am trying to set up a development environment for our web server. I would like all emails that are relayed by the server to go to a specific mailbox, regardless of who they were sent to. For example, some application on the server sends an email to [email protected]; I want that email to go to [email protected]. Is that possible to do with IIS/Virtual SMTP? Is there some other way of doing this? I don't have Exchange Server running, if that makes a difference. Any help would be greatly appreciated. Thanks a lot!


  • Microsoft Word 2007 Landscape Printing

    - by Greg Ogle
    When printing a document in landscape mode using Microsoft Word 2007, it prints portrait and scales (this varies a little per printer). I made a new document with just text, and the text is getting chopped even in print preview. It seems rather weird. Am I doing something wrong?


  • Ubuntu 11.10, using wget/curl fails with ssl

    - by Greg Spiers
    Note: see Edit 3 below for the solution.

    On a completely new install of Ubuntu I'm getting the following errors when using wget:

      $ wget https://test.sagepay.com
      --2012-03-27 12:55:12-- https://test.sagepay.com/
      Resolving test.sagepay.com... 195.170.169.8
      Connecting to test.sagepay.com|195.170.169.8|:443... connected.
      ERROR: cannot verify test.sagepay.com's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA':
        Unable to locally verify the issuer's authority.
      To connect to test.sagepay.com insecurely, use `--no-check-certificate'.

    I've tried installing ca-certificates and configuring the ca-certs, and they all appear to be set up in /etc/ssl/certs. The same issue exists for cURL:

      $ curl https://test.sagepay.com
      curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
      error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    Which leads me to believe it's something wrong with OpenSSL server-wide. wget and curl both work correctly locally on OS X, and I have confirmed with a few people that it's working on their servers, so I suspect it's nothing to do with the server I'm attempting to connect to. Any ideas or suggestions on things to try to narrow it down? Thank you.

    Edit: As requested, verbose output from curl:

      $ curl -Iv https://test.sagepay.com
      * About to connect() to test.sagepay.com port 443 (#0)
      *   Trying 195.170.169.8... connected
      * Connected to test.sagepay.com (195.170.169.8) port 443 (#0)
      * successfully set certificate verify locations:
      *   CAfile: none
          CApath: /etc/ssl/certs
      * SSLv3, TLS handshake, Client hello (1):
      * SSLv3, TLS handshake, Server hello (2):
      * SSLv3, TLS handshake, CERT (11):
      * SSLv3, TLS alert, Server hello (2):
      * SSL certificate problem, verify that the CA cert is OK. Details:
      error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
      * Closing connection #0
      curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
      error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    More details here: http://curl.haxx.se/docs/sslcerts.html

    Edit 2: Using the hash from your comment I see this:

      ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al 7651b327.0
      lrwxrwxrwx 1 root root 59 2012-03-27 12:48 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority.pem
      ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al Verisign_Class_3_Public_Primary_Certification_Authority.pem
      lrwxrwxrwx 1 root root 94 2012-01-18 07:21 Verisign_Class_3_Public_Primary_Certification_Authority.pem -> /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
      ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
      -rw-r--r-- 1 root root 834 2011-09-28 14:53 /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
      ubuntu@srv-tf6sq:/etc/ssl/certs$ more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
      -----BEGIN CERTIFICATE-----
      MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG
      A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz
      cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2
      MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV
      BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt
      YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN
      ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE
      BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is
      I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G
      CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i
      2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ
      2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ
      -----END CERTIFICATE-----

    But doing the steps myself I end up with a different hash:

      $ strace -o /tmp/foo.out curl -Iv https://test.sagepay.com
      $ grep ssl /tmp/foo.out
      open("/lib/x86_64-linux-gnu/libssl.so.1.0.0", O_RDONLY) = 3
      stat("/etc/ssl/certs/415660c1.0", {st_mode=S_IFREG|0644, st_size=834, ...}) = 0
      open("/etc/ssl/certs/415660c1.0", O_RDONLY) = 4
      stat("/etc/ssl/certs/415660c1.1", 0x7fff7dab07b0) = -1 ENOENT (No such file or directory)
      $ readlink -f /etc/ssl/certs/415660c1.0
      /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
      $ more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
      -----BEGIN CERTIFICATE-----
      (identical PEM block to the one shown above)
      -----END CERTIFICATE-----

    Any other ideas? Thank you for the help so far :)

    Edit 3: So it turns out that installing the ca-certificates package didn't install the one that I needed. I found this post about certificates being presented out of order, and that seems to be the case with my request to sagepay. The solution ended up being to install another CA certificate from Verisign. I'm not sure why this fixes the out-of-order issue, but I suspect the ordering was never really the problem and that I was in fact missing a certificate all along. The additional certificate is available in that post, but I didn't want to blindly trust it; I looked at the list of CA certificates from cURL's site, and it is listed there, so I do trust it. The certificate:

      Verisign Class 3 Public Primary Certification Authority
      =======================================================
      -----BEGIN CERTIFICATE-----
      MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkGA1UEBhMCVVMx
      FzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmltYXJ5
      IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVow
      XzELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAz
      IFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUA
      A4GNADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhEBarsAx94
      f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/isI19wKTakyYbnsZogy1Ol
      hec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0GCSqGSIb3DQEBAgUAA4GBALtMEivPLCYA
      TxQT3ab7/AoRhIzzKBxnki98tsX63/Dolbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59Ah
      WM1pF+NEHJwZRDmJXNycAA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2Omuf
      Tqj/ZA1k
      -----END CERTIFICATE-----

    I put this in a file at:

      /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    I then modified /etc/ca-certificates.conf and added the following line at the end:

      curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    After that I ran:

      sudo update-ca-certificates

    Looking into the /etc/ssl/certs directory, I see it correctly linked:

      $ ls -al | grep cURL
      lrwxrwxrwx 1 root root  69 2012-03-27 16:03 415660c1.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem
      lrwxrwxrwx 1 root root  69 2012-03-27 16:03 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem
      lrwxrwxrwx 1 root root 101 2012-03-27 16:03 Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem -> /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    And everything works!

      $ curl -I https://test.sagepay.com
      HTTP/1.1 200 OK...
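    As a postscript, a handy way to see exactly which chain a server presents and whether the local store verifies it (same host as above, nothing new assumed):

      openssl s_client -connect test.sagepay.com:443 -CApath /etc/ssl/certs < /dev/null
      # inspect the printed certificate chain and the final "Verify return code:" line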


  • Windows Question: RunOnce/Second Boot Issues

    - by Greg
    I am attempting to create a Windows XP SP3 image that will run my application on second boot. Here is the intended workflow:

      1) Run an image prep utility (I wrote) on Windows to add my RunOnce entries and clean a few things up.
      2) Reboot to Ghost and make the image file.
      3) Package into my ISO and distribute.
      4) The system is imaged by the user.
      5) On first boot, I have about 5 things that run, one of which is a driver updater (I wrote) for my own specific devices.
      6) One of the entries inside of HKCU/../runonce is a reg file, which adds another key to HKLM/../runonce. This is how second boot is acquired.
      7) As a result of the driver updater, the user is prompted to reboot.
      8) My application is then launched from HKLM/../runonce on second boot.

    This workflow works perfectly, except for a select few legacy systems containing devices that cause the Add Hardware wizard to pop up; that is when I begin to see problems. It's important to note that if I manually inspect the registry after the wizard pops up, it appears as I would expect, with all the first-boot scripts having run and the registry sitting in the state I would expect for a second-boot scenario. The problem comes when I click Next on the wizard: it seems to re-run the single entry I've added and re-executes the RunOnce scripts (only one script now, as the initial entries have already executed and been cleared out). This causes my application to open as if it were a second boot, but only when Next is clicked on the Add Hardware wizard. If I click Cancel and reboot, then it also works as expected. I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry. I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue that's presenting itself as a result of an overstretched image that's expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,
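    For reference, the second-boot hand-off described in step 6 amounts to a registry write like this (the value name and application path are hypothetical):

      reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce" ^
          /v LaunchMyApp /t REG_SZ /d "C:\MyApp\MyApp.exe" /f

    RunOnce entries are deleted before their commands run, so anything that re-imports the .reg file re-arms them, which may be consistent with the wizard-triggered re-run described above.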


  • Forcing DFS replication on Windows 2003 Web Edition SP2

    - by Greg B
    I have a pair of web servers running Windows Server 2003 Web Edition SP2 (build 3790). These servers are on a Gigabit LAN and are on the same subnet. I created a new link on the first server and added the second box as a target to the link. I configured replication as a ring. The second server gets some of the files on the scheduled replication, but this varies wildly, which makes me think it's not running fully. The folder on server 1 is 1.3 GB. Is there some way I can force replication (from a console?) and monitor the progress to see if/when/why it fails?
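    DFS replication on Server 2003 rides on FRS, so the FRS command-line tooling should apply; a hedged sketch for forcing a replication cycle and dumping state (the replica set and partner names are placeholders):

      ntfrsutl forcerepl %computername% /r "MyReplicaSet" /p partner.example.com
      ntfrsutl sets %computername%      rem dump replica set state to check progress and errors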


  • Setup Linksys 3200 remote access

    - by Greg
    I'm trying to set up remote access for my Linksys 3200 so that I can configure it through the WAN port. I have turned on remote access, but when I try to connect I get a 404 error. The settings I have are: [screenshot not included]. When I try to access xxx.xxx.xxx.xxx:9999 I just get a 404 error. I have allowed RDP access to a computer behind the router, and this works fine on the same IP address. Any ideas on what else I have to do to allow remote management access? UPDATE: I tried changing the port to 80 and it works; change it to any other number and it doesn't. The modem is set up with a DMZ to the router's IP. Why does it only work on port 80? BTW, I can't use port 80 because there is a website hosted behind the router.


  • Verification of files on backup media - after the backup

    - by Greg Sansom
    I have a system in place where I back up my work files daily to a portable hard drive. I actually have two portable hard drives: one is stored off-site, and I swap them regularly. I also keep my family photos and other historical files backed up, but I only back the photos up occasionally (i.e. when I have new photos). The backup media is for backup only, and it is unlikely I will ever read the files from the backup media unless a disaster occurs and I lose the master. It worries me that my backed-up files could become corrupt without me knowing it. It is also feasible that my master files could become corrupt, and eventually the corrupt files would be replicated to the backup media. I'm currently using Cobian Backup, but I'm open to alternatives. Is there a tool I can use to confirm that the backed-up files are identical to the files that were first copied? I know it would be possible to generate a checksum and periodically validate the backup files against the original checksum, but I'm looking for a tool which will do this automatically.
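    Absent a dedicated tool, the checksum comparison can at least be done by hand; a minimal sketch on Windows using the built-in certutil (file paths are placeholders):

      certutil -hashfile "C:\work\report.doc" SHA256
      certutil -hashfile "E:\backup\work\report.doc" SHA256

    If the two hashes match, the backup copy is bit-for-bit identical to the master; a verification tool would just automate this loop over every file and remember the original hashes.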


  • Macports Apache not starting at Mac OS X snow leopard boot [closed]

    - by greg
    I've run the launchctl load command, and the symlinks point to my /opt/local/etc/LaunchDaemons/org.macports.apache2/org.macports.apache2.plist, but it never starts. I can start it manually, and it works fine after that; it just won't load on startup. My server is named in my /opt/local/apache2/conf/httpd.conf; I had read that sometimes makes a difference. I've done the launchctl unload and load trick, all with no results. I'm out of ideas.
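    One thing worth ruling out (an assumption, not established by the question): launchctl load alone does not clear a persistent Disabled override, but the -w flag does, and it sticks across reboots:

      sudo launchctl unload /Library/LaunchDaemons/org.macports.apache2.plist
      sudo launchctl load -w /Library/LaunchDaemons/org.macports.apache2.plist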


  • Project Server 2007 Task Updates hangs on 'Loading Grid...'

    - by Greg Buehler
    A strange problem began occurring after applying MOSS2 (KB953334) and the August 2009 cumulative update to our Project server. When a user enters the 'Task Update' screen they are prompted to download a new ActiveX control. Upon refresh, and subsequent access attempts, the user is presented with a blank grid with the caption 'Loading Grid...' We have attempted to fix this issue by updating the 'Trusted Sites' list and changing the security settings according to KB818046. However, nothing seems to definitely fix the problem. Also, when the problem randomly fixes itself, it still occurs when viewing specific projects. Any ideas on a fix?


  • Cannot get Postgresql to start on Ubuntu Hardy

    - by Greg Arcara
    I am getting this error with PostgreSQL 8.4 on Ubuntu Hardy:

      $ ./postgres -D /usr/local/pgsql/data
      LOG: could not bind IPv4 socket: Cannot assign requested address
      HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
      WARNING: could not create listen socket for "localhost"
      FATAL: could not create any TCP/IP sockets

    Here is my hosts file content (I've been finding a lot of posts implicating this, so I'm including it up front):

      127.0.0.1 localhost
      127.0.1.1 Home-Dev
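    "Cannot assign requested address" on bind usually points at the address rather than the port; two quick checks (a sketch, with the data directory taken from the command above):

      # is anything already listening on 5432, and on which address?
      netstat -an | grep 5432

      # what address is postgres being told to bind to?
      grep listen_addresses /usr/local/pgsql/data/postgresql.conf

    If listen_addresses names an address that no interface currently has (or "localhost" resolution is broken), bind() fails with exactly this error.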


  • Read Linux-formatted (ext3) EBS volume mounted on Windows Server 2008 instance

    - by Greg Owen
    I've got a Windows Server 2008 R2 instance set up in Amazon EC2. I've also got some Ubuntu instances on the same EC2 account. I'd like to be able to mount the EBS volume from one of the Ubuntu instances onto the Windows instance as an external drive and then access that drive from the Windows instance. I've looked at tools like ext2fsd and ext2 IFS, but these haven't worked (I couldn't get the former to work, and the latter says that it supports Windows 2008 but gives an error when I try to install it, saying that it only supports up to Windows 2003). I know that there are all kinds of tools to view Linux partitions and that there are filesystems that are compatible with both Linux and Windows, but neither of those options works here (I want to be able to attach and detach the Ubuntu volumes on command, rather than have a permanent partition, and Ubuntu EBS volumes are ext3 by default). Anybody know a good tool I should use?


  • Connectivity good between VMs, but no connectivity between host and VM

    - by Greg Sansom
    I have a strange situation where connectivity (i.e. ping, access to shared folders and services) is fine between virtual machines, but not between the host and the virtuals. I've been using Hyper-V almost every day for years and have never had a problem like this. Note the following:

      - The host machine and all virtuals are running Windows Server 2008 R1.
      - All machines can connect to the web and the gateway router.
      - The subnet mask and gateway reported in ipconfig are identical on all machines.
      - I can access the virtuals via the Hyper-V snap-in only.

    Does anyone have suggestions about what might be wrong, or what diagnostic steps I can take?


  • Postfix SMTP auth not working with virtual mailboxes + SASL + Courier userdb

    - by Greg K
    So I've read a variety of tutorials and how-tos, and I'm struggling to make sense of how to get SMTP auth working with virtual mailboxes in Postfix. I used this Ubuntu tutorial to get set up. I'm using Courier-IMAP and POP3 for reading mail, which seems to be working without issue. However, the credentials used to read a mailbox are not working for SMTP. I can see from /var/log/auth.log that PAM is being used; does this require a UNIX user account to work? I'm using virtual mailboxes to avoid creating user accounts.

      li305-246 saslauthd[22856]: DEBUG: auth_pam: pam_authenticate failed: Authentication failure
      li305-246 saslauthd[22856]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=pam] [reason=PAM auth error]

    /var/log/mail.log:

      li305-246 postfix/smtpd[27091]: setting up TLS connection from mail-pb0-f43.google.com[209.85.160.43]
      li305-246 postfix/smtpd[27091]: Anonymous TLS connection established from mail-pb0-f43.google.com[209.85.160.43]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)
      li305-246 postfix/smtpd[27091]: warning: SASL authentication failure: Password verification failed
      li305-246 postfix/smtpd[27091]: warning: mail-pb0-f43.google.com[209.85.160.43]: SASL PLAIN authentication failed: authentication failure

    I've created accounts in userdb as per this tutorial. Does Postfix also use authuserdb? What debug information is needed to help diagnose my issue?

    main.cf:

      # TLS parameters
      smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
      smtpd_tls_key_file = /etc/ssl/private/smtpd.key
      smtpd_use_tls=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      # SMTP parameters
      smtpd_sasl_local_domain =
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_security_options = noanonymous
      broken_sasl_auth_clients = yes
      smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
      smtp_tls_security_level = may
      smtpd_tls_security_level = may
      smtpd_tls_auth_only = no
      smtp_tls_note_starttls_offer = yes
      smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
      smtpd_tls_loglevel = 1
      smtpd_tls_received_header = yes
      smtpd_tls_session_cache_timeout = 3600s
      tls_random_source = dev:/dev/urandom

    /etc/postfix/sasl/smtpd.conf:

      pwcheck_method: saslauthd
      mech_list: plain login

    /etc/default/saslauthd:

      START=yes
      PWDIR="/var/spool/postfix/var/run/saslauthd"
      PARAMS="-m ${PWDIR}"
      PIDFILE="${PWDIR}/saslauthd.pid"
      DESC="SASL Authentication Daemon"
      NAME="saslauthd"
      MECHANISMS="pam"
      MECH_OPTIONS=""
      THREADS=5
      OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

    /etc/courier/authdaemonrc:

      authmodulelist="authuserdb"

    I've only modified one line in authdaemonrc and restarted the service as per this tutorial. I've added accounts to /etc/courier/userdb via userdb and userdbpw and run makeuserdb as per the tutorial.

    SOLVED: Thanks to Jenny D for suggesting the use of rimap to auth against the localhost IMAP server (which reads the userdb credentials). I updated /etc/default/saslauthd to start saslauthd correctly (this page was useful):

      MECHANISMS="rimap"
      MECH_OPTIONS="localhost"
      THREADS=0
      OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    After doing this I got the following error in /var/log/auth.log:

      li305-246 saslauthd[28093]: auth_rimap: unexpected response to auth request: * BYE [ALERT] Fatal error: Account's mailbox directory is not owned by the correct uid or gid:
      li305-246 saslauthd[28093]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=rimap] [reason=[ALERT] Unexpected response from remote authentication server]

    This blog post detailed a solution: set IMAP_MAILBOX_SANITY_CHECK=0 in /etc/courier/imapd, then restart the Courier and saslauthd daemons for the config changes to take effect:

      sudo /etc/init.d/courier-imap restart
      sudo /etc/init.d/courier-authdaemon restart
      sudo /etc/init.d/saslauthd restart

    Watch /var/log/auth.log while trying to send email. Hopefully you're good!
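    For anyone debugging the same chain, the saslauthd side can be exercised directly, independent of Postfix (the username is the placeholder from the logs above; the socket path matches the OPTIONS line):

      testsaslauthd -u fred -p 'password' -s smtp \
          -f /var/spool/postfix/var/run/saslauthd/mux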


  • Can I reduce the CPU speed of my MacBook when on battery?

    - by Greg Hewgill
    I've got a MacBook with a Core 2 Duo CPU. I've got CoreDuoTemp installed, which can show the current speed of the CPU. It appears to always show:

      Mini    : 1.0 GHz
      Maxi    : 2.0 GHz
      Current : 2.0 GHz

    I believe my laptop would run longer on battery if it were to run at a maximum of 1 GHz. Is there a way to configure this, or is the CPU speed adjustment completely automatic?


  • Hyper-V Manager: right-clicking on remote VM crashes MMC snap-in

    - by Greg Bray
    I have a Windows Server 2008 R2 Enterprise SP1 machine that I log into and use to manage virtual machines running on multiple Hyper-V servers on our domain. Sometimes, when I right-click on a remote VM, the Hyper-V Manager will crash and display the following error message: [screenshot not included]. If I use the Actions menu on the lower right, it works just fine, but for some reason right-clicking causes MMC to stop working. Is there any way to fix this issue? Here are the full details of the error message:

      Description: Stopped working
      Problem signature:
        Problem Event Name: CLR20r3
        Problem Signature 01: mmc.exe
        Problem Signature 02: 6.1.7600.16385
        Problem Signature 03: 4a5bc808
        Problem Signature 04: Microsoft.Virtualization.Client
        Problem Signature 05: 6.1.0.0
        Problem Signature 06: 4ce7c9e3
        Problem Signature 07: 342
        Problem Signature 08: 1f
        Problem Signature 09: System.OverflowException
        OS Version: 6.1.7601.2.1.0.274.10
        Locale ID: 1033
      Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
      If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt


  • How do I recover from a Linux CentOS 4.6 Operating System Crash

    - by Greg Omebije
    Our x86 Linux server running CentOS 4.6 has crashed. The machine boots only to the GRUB prompt. We have tried using "rescue mode" to recover the system, but it hasn't worked. How can we fix this problem so that the machine boots normally? Failing that, how can we fix it to the point where we can recover our files from the server?

    Our Linux server configuration:

      Dell PowerEdge 1950
      Intel Xeon
      2 HDD (146 GB each)
      4 GB RAM
      Hardware and software RAID setup
      CentOS 4.6

    We used Sysrecord to boot the computer; here is the output of fdisk -l:

      Disk /dev/sda: 293.3 GB, 292326211584 bytes
      255 heads, 63 sectors/track, 35539 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x00000080

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1          13      104391   83  Linux
      /dev/sda2              14       17769   142625070   8e  Linux LVM
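    From the GRUB prompt it is sometimes possible to boot by hand and then repair from inside the running system. A hedged sketch for GRUB legacy; the kernel/initrd file names and the LVM root are guesses based on CentOS 4.6 defaults, so adjust them to whatever tab completion on (hd0,0)/ actually shows:

      grub> root (hd0,0)
      grub> kernel /vmlinuz-2.6.9-67.EL ro root=/dev/VolGroup00/LogVol00
      grub> initrd /initrd-2.6.9-67.EL.img
      grub> boot

    The bootable /dev/sda1 with Id 83 in the table above is consistent with a separate /boot partition, which is why the kernel path here carries no /boot prefix.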


  • How do I get excel to close completely after creating a macro in a personal workbook?

    - by Greg B
    I am using Microsoft Excel 2007 and have several macros in my personal.xlsb workbook, which I use often, so it is very convenient that Excel opens them automatically when it starts. What I don't like is when I click on the "X" in the upper right corner of the window Excel does not exit when I close the last visible workbook. I think that this is because personal.xlsb is still open (though hidden). There are several other questions here on Superuser that have people remove personal.xlsb or move it so it doesn't open on startup (question 65297) or change settings to have only one window show in the taskbar (question 86989). (Sorry there are no hyperlinks--apparently I need more reputation to add additional hyperlinks.) I would like to have personal.xlsb open when I open Excel, have each Excel window show in the taskbar but have Excel exit when I click the "X" on the last workbook that isn't personal.xlsb. Any thoughts on how to achieve this?

