Search Results



  • Basic 301 Redirection Help

    - by Marc
    I am trying to learn redirection for a WordPress site of my own, testing the concept by redirecting a single WordPress post on a dummy site. However, it doesn't seem to be working for me. I am trying to redirect www.perfectmatchmaker[dot]org/finding-the-right-matchmaker to www.perfectmatchmaker[dot]org/finding-the-perfect-matchmaker. I read that the way to do this is:

        Redirect 301 /old.html http://www.you[dot]com/new.html

    So this is what my .htaccess file currently looks like:

        # Use PHP5 as default
        AddHandler application/x-httpd-php5 .php
        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
        Redirect 301 /finding-the-right-matchmaker.html http://www.perfectmatchmaker.org/finding-the-perfect-matchmaker.html

    I've also tried removing the ".html". The redirection of the URL is finally working, but the URL shows no posts available. If I try to redirect the other post on the site by adding the following on the next line of the .htaccess file, I get an error that a "redirect loop" is occurring:

        redirect 301 /find-love-and-your-perfect-match-through-the-use-of-a-match-maker http://www.perfectmatchmaker.org/find-love-and-your-perfect-match

    Any help you can provide would be much appreciated. Thanks! Marc
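
    A note on the loop (my hypothesis, not something stated in the post): the new URL has no post behind it, so WordPress canonical-redirects it back to the old slug, which .htaccess then redirects forward again. The usual way out is to rename the post's slug in WordPress to the new path first, and only then redirect the old slug, anchored exactly so it cannot re-match - a minimal sketch:

        RedirectMatch 301 ^/finding-the-right-matchmaker/?$ http://www.perfectmatchmaker.org/finding-the-perfect-matchmaker
        RedirectMatch 301 ^/find-love-and-your-perfect-match-through-the-use-of-a-match-maker/?$ http://www.perfectmatchmaker.org/find-love-and-your-perfect-match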


  • DirectAccess Server firewall rules blocking ports

    - by StormPooper
    I have configured DirectAccess on my Server 2012 Essentials box and most of it works great - I can remotely access the server via RDP and the default IIS website on port 80. However, I can't access anything that uses other ports - for this example, the Team Foundation Server website. The only way to access it is via http://localhost:8080/tfs on the server directly; neither http://servername:8080/tfs nor http://192.168.1.100:8080/tfs works. I've tried adding the ports to the NAT exceptions using Set-NetNatTransitionConfiguration -IPv4AddressPortPool, and while that has allowed some ports used internally (Deluge, for example), it hasn't allowed me access to the URL. I think I've narrowed it down to the "DirectAccess Server Settings" Group Policy that is created when configuring DirectAccess. When I disable the link for this GPO, the TFS site works again, but the default IIS site stops working (RDP still works). I already have rules in the firewall on the server for TFS, and before enabling this Group Policy (so before configuring DirectAccess) I could access both sites. Does anybody have any suggestions for things I can change to allow access to both? I've uploaded the full GPO report and my Remote Access Configuration Summary for more details.
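
    One thing worth trying (an untested assumption on my part, since the GPO appears to swap in its own firewall profile): add an explicit inbound rule for the TFS port while the DirectAccess policy is linked, and see whether the GPO overrides it:

        New-NetFirewallRule -DisplayName "TFS 8080" -Direction Inbound -Protocol TCP -LocalPort 8080 -Action Allow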


  • Gitlab and Nginx not loading gitlab

    - by paperids
    I have just installed gitlab and nginx on Ubuntu LTS 12.04 using this guide: http://blog.compunet.co.za/gitlab-installation-on-ubuntu-server-12-04/ I installed this on another server last night and had absolutely no problems with it (sort of a test run to see how long it would take to get going). I am not getting any errors when restarting gitlab or nginx with /etc/init.d, and my error logs are empty. The only thing I know of to go on is the vhost config:

        upstream gitlab {
            server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
            listen localhost:80;
            server_name gitlab.bluringdev.com;
            root /home/gitlab/gitlab/public;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from defined root folder
                # @gitlab is a named location for the upstream fallback
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

            # if a file, which is not found in the root folder is requested,
            # then the proxy passes the request to the upstream (gitlab unicorn)
            location @gitlab {
                proxy_redirect off;
                # you need to change this to "https", if you set "ssl" on
                proxy_set_header X-FORWARDED_PROTO http;
                proxy_set_header Host gitlab.bluringdev.com:80;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://gitlab;
            }
        }

    If there's any other information that would be helpful, just let me know and I'll get it up asap.
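
    One detail that stands out (an observation of mine, not from the original thread): listen localhost:80 binds the vhost to the loopback address only, so requests arriving on the machine's public interface never reach it. If that is the problem, binding to all interfaces is the usual fix:

        listen 80;

    After changing it, sudo nginx -t && sudo /etc/init.d/nginx restart picks up the new config.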


  • How can I capture output from LFTP? (Output not written to STDOUT or STDERR?)

    - by jondahl
    I would like to get access to progress information from lftp. Currently, I'm using curl like so:

        curl http://example.com/file -o file -L 2> download.log

    This writes curl's progress information to the download.log file, which I can tail to get real-time progress. But the same approach doesn't work with lftp, with either stdout or stderr - I end up with an empty download.log file until the transfer is complete:

        lftp -e 'get http://example.com/file;quit' 2> download.log
        lftp -e 'get http://example.com/file;quit' 1> download.log

    When I don't redirect output, I see progress on the screen. When I do redirect output, I stop seeing progress on the screen, but nothing shows up in download.log. After the file transfer is complete, I see the final result like this - but nothing before:

        97618627 bytes transferred in 104 seconds (913.1K/s)

    Is lftp doing something unusual with its output - printing to screen without printing to stdout/stderr? Are there other ways of capturing screen output than redirecting stdout/stderr?
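
    A likely explanation (my assumption): lftp checks whether its output is a terminal and suppresses the interactive progress meter when it isn't, printing only the final summary. Forcing a pseudo-terminal around it captures the meter; script(1) is a simple way to do that:

        script -q -c "lftp -e 'get http://example.com/file; quit'" download.log
        tail -f download.log   # note: the captured lines include terminal control characters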


  • Oracle® Database Express Edition problem running on Win Server 2003 with MS SQL Server 2008 [closed]

    - by totoz
    Hi, I have MS SQL Server 2008 on Win Server 2003, and IIS is also running. I am trying to learn Oracle, so first I installed Oracle® Database Express Edition. I tried to connect via web browser to the Oracle server at the url http://127.0.0.1:8080/apex and got this error page in the browser:

        The page cannot be found
        The page you are looking for might have been removed, had its name changed, or is temporarily unavailable.
        Please try the following:
        - Make sure that the Web site address displayed in the address bar of your browser is spelled and formatted correctly.
        - If you reached this page by clicking a link, contact the Web site administrator to alert them that the link is incorrectly formatted.
        - Click the Back button to try another link.
        HTTP Error 404 - File or directory not found.
        Internet Information Services (IIS)
        Technical Information (for support personnel)
        - Go to Microsoft Product Support Services and perform a title search for the words HTTP and 404.
        - Open IIS Help, which is accessible in IIS Manager (inetmgr), and search for topics titled Web Site Setup, Common Administrative Tasks, and About Custom Error Messages.

    Why can I not log on to the Oracle home page?
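
    The 404 is an IIS-branded error page, which suggests IIS, not Oracle's embedded listener, is answering port 8080 (a hypothesis - both may be contending for the same port). Checking which process owns the port would confirm it:

        netstat -ano | findstr :8080
        tasklist /fi "PID eq <pid-from-netstat>"

    If IIS holds the port, either move the IIS site or reassign APEX's HTTP port from SQL*Plus with EXEC DBMS_XDB.SETHTTPPORT(8081);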


  • Troubles installing/starting Redis via Resque

    - by Craig Flannagan
    Trying to complete the instructions for Resque/Redis installation here: https://github.com/defunkt/resque/blob/master/README.markdown I am stuck where I try to start up Redis via Resque with the following command:

        Craig:/usr/local/src/resque$ rake redis:start
        (in /usr/local/src/resque)
        Detach with Ctrl+\  Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        (See full trace by running task with --trace)

    Rerunning with --trace (showing only part of the trace):

        Craig:/usr/local/src/resque$ rake redis:start --trace
        (in /usr/local/src/resque)
        ** Invoke redis:start (first_time)
        ** Execute redis:start
        Detach with Ctrl+\  Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        /Users/craigflannagan/.rvm/gems/ruby-1.9.2-head@foo/gems/rake-0.8.7/lib/rake.rb:995:in `block in sh'

    Not sure what is wrong here. By the way, when I did these instructions

        $ git clone git://github.com/defunkt/resque.git
        $ cd resque
        $ PREFIX=<your_prefix> rake redis:install dtach:install
        $ rake redis:start

    I wasn't sure whether I was supposed to run the first step from within the Rails project, or whether the git clone should create a new folder outside the Rails project (in this case, I chose to have the folder created outside the project).
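
    Exit status 127 is the shell's "command not found", so the relative path ../../bin/dtach almost certainly doesn't resolve from the directory rake runs the task in (my reading of the trace, not a confirmed diagnosis). Checking where dtach actually landed, and reinstalling with an explicit prefix, would be the first step:

        $ ls ../../bin/dtach; which dtach
        $ PREFIX=/usr/local rake redis:install dtach:install
        $ ls /usr/local/bin/dtach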


  • Webmin / Virtualmin running php as www-data, is locked out of viewing .htaccess and writing

    - by Kirill
    I've asked this on the virtualmin forums, but haven't had any help from there. Recently, "something" happened and the apache service has gone a bit weird. What it does: it runs all apache traffic as www-data and sometimes spawns the php5-cgi process as www-data. This is a problem because all the domain users own their directories, and the default permissions don't let www-data write to these folders (file uploads are dead) or read .htaccess (permalinks are broken in wordpress). I've googled this for about a week straight now, tried pretty much everything I could find, and achieved nothing. The only thing that I think might actually be the cause of all this is this page: http:// - i.imgur.com/NYW3x.png (got shut down by the spam filter). So I figured if I set it to "default", this might magically start working again, but all it does is "crash" apache (all websites time out). I figure it's something to do with the "mpm" module or something, but I can't find anything relevant in the settings to modify for it to work. Can someone please point me in the right direction? System info: Webmin version 1.580, Kernel and CPU Linux 2.6.35.4-rscloud on x86_64, Virtualmin version 3.90.gpl GPL, Ubuntu 10.04 LTS (Lucid). A couple of screenshots of top: http://i.imgur.com/U2DTK.png http://i.imgur.com/sNPKs.png
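
    On a stock Virtualmin setup, PHP normally runs via fcgid/suexec as the domain owner rather than as www-data (my understanding of its defaults, not something from the post). Confirming which user actually executes PHP, and whether fcgid and suexec are still loaded, narrows it down:

        ps aux | egrep '(php5-cgi|fcgid)' | grep -v grep
        apache2ctl -M | egrep '(fcgid|suexec|php5)'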


  • SuperMicro BMC on OpenSuSE Linux --cannot access from LAN

    - by Kendall
    Hi, I have an (old) SMC-001 IPMI device on an (old) X6DVL-EG2 motherboard. My problem is that I cannot access the BMC from LAN. I'm also getting some interesting output from ipmitool. First, the setup. I enabled Console Redirection in the BIOS and turned BIOS Redirection after POST to "disabled". I then modprobe'd ipmi_msghandler, ipmi_devintf and ipmi_si, and found ipmi0 under /dev. So far so good. Since I want console redirection over serial, I modified /boot/grub/menu.lst: http://pastebin.com/YYJmhusQ I then modified /etc/inittab as follows:

        S1:12345:respawn:/sbin/agetty -L 19200 ttyS1 ansi

    Networking I set as follows, using ipmitool:

        ipaddr:  192.168.3.164
        netmask: 255.255.255.0
        defgw:   192.168.3.1

    The above are correct for my environment. To test it I do:

        ipmitool -I open chassis power off

    which responds by powering off the machine. When I try to access from another computer on the network, however, I get an error message:

        host# ipmitool -I lanplus -H 192.168.10.164 -U Admin -a chassis power status
        Error: Unable to establish LAN session
        Unable to get Chassis Power Status

    "Admin" seems to be a valid user name:

        host# ipmitool -I open user list 1
        2   Admin   true   false   true   USER

    The interesting output from ipmitool I initially mentioned:

        host# ipmitool -I open lan set 1 access on
        Set Channel Access for channel 1 failed: Request data field length limit exceeded

    Also:

        newload4:/home/gjones# ipmitool channel info 1
        Channel 0x1 info:
          Channel Medium Type   : 802.3 LAN
          Channel Protocol Type : IPMB-1.0
          Session Support       : session-less
          Active Session Count  : 0
          Protocol Vendor ID    : 7154
        Get Channel Access (volatile) failed: Request data field length limit exceeded

    The output of "ipmitool -I open lan print 1" is here: http://pastebin.com/UZyL6yyE Any help/suggestions would be greatly appreciated; I've been working with this thing for a few hours now with no success.
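
    Two things stand out (observations and assumptions on my part). First, the BMC was configured as 192.168.3.164, but the failing command targets -H 192.168.10.164 - if that isn't a transcription slip, it's querying the wrong address. Second, "Session Support: session-less" suggests the LAN channel isn't enabled for authenticated sessions; the usual incantation is along these lines:

        ipmitool -I open channel setaccess 1 2 callin=on ipmi=on link=on privilege=4
        ipmitool -I open user enable 2
        ipmitool -I open lan set 1 access on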


  • Bind9 configured to start at boot, has to be started manually

    - by antik
    I've configured bind9 on my system and it works great when it runs. It's currently configured to run at runlevel 2 by setting:

        $ sudo update-rc.d bind9 enable 2

    This appears to have done its work:

        $ tree -f /etc/rc?.d | grep -e ".*bind9$"
        |-- /etc/rc0.d/K85bind9 -> ../init.d/bind9
        |-- /etc/rc2.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc3.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc4.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc5.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc6.d/K85bind9 -> ../init.d/bind9

    Booting the system, I believe I am at runlevel 2:

        $ runlevel
        N 2

    Given the above configuration, when the system is rebooted, bind does not come up. Only on occasion, for some reason, can I resolve hostnames immediately after startup; far more often than not, I cannot. I can interrogate the service's status:

        $ sudo /etc/init.d/bind9 status
        * could not access PID file for bind9

    When the service doesn't start, I can start it successfully via a terminal by issuing:

        $ sudo /etc/init.d/bind9 start

    And it works great from then on. Loopback configuration:

        $ ifconfig lo
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1872 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1872 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:220205 (220.2 KB)  TX bytes:220205 (220.2 KB)

    Do I have my startup misconfigured? (I'm used to Gentoo, so Ubuntu's model is still a little new to me.) I'm not seeing any log indication of a failed attempt to start at boot in syslog. Is there someplace else I should be looking? What else should I look into to get bind working at startup?
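
    A plausible cause (an assumption - the intermittent success fits a race): named starts at S15, possibly before the interfaces it wants to bind are fully up, and exits quietly. Capturing named's own messages and pushing its start later in the sequence would test that:

        $ grep -i named /var/log/syslog | head
        $ sudo update-rc.d -f bind9 remove
        $ sudo update-rc.d bind9 defaults 40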


  • First Request to IIS Express Fails with 503 Service Unavailable, Second Succeeds

    - by Chris Moschini
    Each time I start my ASP.Net MVC 3 app from Visual Studio 2010, IIS Express launches and IE spins waiting. The request fails with HTTP 503 Service Unavailable. I hit Refresh in IE, and the request succeeds. All subsequent requests succeed until I stop debugging. The next time I go to start debugging, the first request fails again. Has anyone else experienced this? In IISExpress\applicationhost.config I have:

        <site name="ProjectName" id="6">
          <application path="/" applicationPool="Clr4IntegratedAppPool">
            <virtualDirectory path="/" physicalPath="c:\users\chris\dropbox\code\2010\SolutionName\ProjectName" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation="*:80:laptop" />
          </bindings>
        </site>

    I have this in my hosts file:

        127.0.0.1 laptop

    And my Project is set to start with IIS Express, with Project Url set to http://laptop. It's very strange that only the first request fails, perhaps as though Visual Studio isn't waiting long enough for IIS Express to start? Is there some way to make it wait? Stopping debugging, making a change, and then starting again is one of the most common tasks I do, so adding another step to get there is pretty annoying.
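
    If the cause really is a startup race, one low-tech check (my suggestion, not a documented fix) is whether the 503 still occurs when IIS Express is already warm - start it manually against the same site before debugging, then launch the debugger:

        "%ProgramFiles%\IIS Express\iisexpress.exe" /site:ProjectName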


  • ip-up does not trigger when using built-in cisco vpn on mac osx lion

    - by Yasser Sobhdel
    I am using the Cisco VPN client on Lion and I want to make ip-up and ip-down work. There is no sign of any action being taken when I connect or disconnect this VPN connection, and I genuinely doubt whether the syntax has changed or whether this kind of connection even triggers ip-up. Logically, it should be set up over ppp, but when using the following guides and their instructions, there is no sign of any output in the log file: http://www.macfreek.nl/mindmaster/Modify_PPTP_Routing_Table http://www.aidanfindlater.com/use-vpn-for-specific-sites-on-mac-os-x Looking for errors (there is no sign of any) per the following page: http://hints.macworld.com/article.php?story=20060616150640529 I couldn't find the /var/log/ppp/vpnd.log log file. The scripts have also been given full permissions, 0755 (a+x, or even 777), using:

        sudo chmod a+x /etc/ppp/ip-up

    Any clue on how to debug this would be appreciated. I am totally confused; netstat -rn -f inet doesn't show the routes. Even when the routes are added manually, closing the VPN connection does not run ip-down, and the routes must be deleted manually.
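
    One possible explanation (my assumption about the OS X internals, not verified): the built-in Cisco IPSec client is not PPP-based - it is handled by racoon rather than pppd - so /etc/ppp/ip-up would never fire for it regardless of permissions. A quick check is whether connecting creates any ppp interface at all:

        ifconfig | grep -c '^ppp'    # 0 while connected means pppd (and thus ip-up) is not involved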


  • Cygwin - Repo with Separate Git/Working Dir Doesn't Work

    - by Kyle Lacy
    Since I've switched to OS X and Vim, I've found it easiest to manage all of my 'dotfiles' (all of my configuration files and miscellaneous scripts) with Git. Having already set up my dotfiles in a repo following this tutorial, I figured it would be easy enough to migrate all of my settings into my Cygwin setup on my Windows partition. Already having the repo on Github, I simply cloned it and moved all of the files over to my home directory, making it a mirror of my OS X home directory. Unfortunately, I cannot use the actual repo any further within Cygwin. The setup is unique from most normal git repos, in that the working directory and the git directory are in different locations. Specifically, the working directory is $HOME (/Users/kyle on OS X, /home/kyle in Cygwin), and the git repo is $HOME/.dotfiles.git. So, if I wanted to get the status of the repo, for example, I would type the following command (which I alias to reduce typing, of course):

        git --work-tree=$HOME --git-dir=$HOME/.dotfiles.git status -uno

    While this works fine on OS X, it refuses to work within Cygwin. Regardless of whether I use my alias or substitute $HOME by hand, I get the following git error:

        fatal: Not a git repository: /home/Kyle/dotfiles/.git/modules/.build/git

    I don't understand where this error comes from, but the path /home/Kyle/dotfiles was the original location of the git repo when I initially cloned it. Additionally, it's important to note that the repo relies heavily on submodules. If specifics are necessary, the repo in question can be found on GitHub. The commands I ran to set up the repo in Cygwin can also be found within the Readme file.
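
    The error path pointing at the original clone location suggests (my reading, not confirmed) that the submodules' .git files and config still record absolute gitdir/worktree paths from /home/Kyle/dotfiles, which break once the repo is moved. Hunting down those stale pointers would confirm it:

        # submodule working trees keep a one-line ".git" file with a "gitdir:" pointer
        find "$HOME" -maxdepth 3 -name .git -type f -exec grep -H gitdir {} \;
        # and the superproject's config may pin core.worktree to the old path
        git --git-dir="$HOME/.dotfiles.git" config --get core.worktree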


  • Issue configuring Oracle database for SSL

    - by Santhosha Kaldambe
    Hello, I want to set up Oracle for SSL communication (I am not using SSL authentication for the database user). As a first step, I generated a self-signed certificate using OpenSSL and added it to a wallet; the wallet location is specified in the server configuration. I created the listener and it starts, however it provides no services. The default (non-SSL) listener is working fine. When I execute LSNRCTL.EXE status SSLLISTENER, it gives the output below:

        STATUS of the LISTENER
        Alias                     SSLLISTENER
        Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
        Start Date                14-NOV-2009 01:47:08
        Uptime                    16 days 22 hr. 14 min. 3 sec
        Trace Level               off
        Security                  ON: Local OS Authentication
        SNMP                      OFF
        Listener Parameter File   C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora
        Listener Log File         c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml
        Listening Endpoints Summary...
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=2484)))
        The listener supports no services
        The command completed successfully

    Here is the exact content of the various files after configuration.

    1) tnsnames.ora:

        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    2) sqlnet.ora:

        SSL_VERSION = 0
        NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
        sqlnet.authentication_services= (NONE)
        tcp.validnode_checking = no
        tcp.invited_nodes=(PS0803.oraebs.com,PS2948,PS5098)
        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )

    3) listener.ora:

        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
            )
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
            )
          )
        SSLLISTENER =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484))
          )

    Thanks, Santhosh
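
    "The listener supports no services" normally means no database instance has registered with that listener: instances register with the default 1521 listener on their own, but a second listener has to be pointed at explicitly (my diagnosis from the symptom, not from the thread). A sketch of the usual fix, with a placeholder host:

        SQL> ALTER SYSTEM SET LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCPS)(HOST=yourhost)(PORT=2484))' SCOPE=MEMORY;
        SQL> ALTER SYSTEM REGISTER;
        -- then re-check: LSNRCTL.EXE status SSLLISTENER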


  • How do I locate the app generating this network traffic?

    - by Christopher Bartels
    I don't know what this process is doing on my computer. I run Windows 7 Professional with all its updates, running current non-free antivirus. I only see it in Resource Monitor, where you can see the Network Service process connected to bitum.nnov.ru. When my PC's network-traffic-generating apps are idle, this process is using the most network of all the idle processes. Screenshot hosted here: http://sss.proinbox.com/bitum-nnov-ru.jpg Does anyone recognize this? The page source mentions a control port and a stream port. Page source for http://bitum.nnov.ru:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        <title>DVR WebViewer</title>
        <meta http-equiv="Content-Type" content="text/html; charset=euc-kr">
        </head>
        <body topmargin="0" leftmargin="0">
        <OBJECT classid="clsid:EE479A40-C128-40DD-93DA-000556AF9607"
          codebase="CtrWeb.cab#version=1,0,2,2" width=875 height=585 align=center hspace=0 vspace=0>
        <param name="CmdPort" value="5920">
        <param name="StreamPort" value="5921">
        </body>
        </html>

    When I google this page's title, I see a number of other domains that host the same page. Whois:

        domain:        NNOV.RU
        nserver:       ns.kis.ru.
        nserver:       ns.nnov.ru. 78.25.80.210
        nserver:       ns1.kis.ru.
        nserver:       ns2.kis.ru.
        state:         REGISTERED, DELEGATED, VERIFIED
        org:           "Agentstvo Delovoj Svjazi", Ltd
        registrar:     RU-CENTER-REG-RIPN
        admin-contact: https://www.nic.ru/whois
        created:       1996.10.23
        paid-till:     2012.11.01
        free-date:     2012.12.02
        source:        TCI
        Last updated on 2012.06.16 04:20:46 MSK
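
    To pin the traffic to a concrete executable and hosting service rather than the generic "Network Service" label, an elevated command prompt can map the connection to a process (standard Windows tooling, not specific to this case):

        netstat -b -n -o            :: -b needs an elevated prompt; find the bitum.nnov.ru address
        tasklist /svc /fi "PID eq <pid-from-netstat>"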


  • CORS Fails on CloudFront Distribution with Nginx Origin

    - by kgrote
    I have a CloudFront distribution set up with an Nginx server as the origin (a Media Temple DV server, to be specific). I enabled the Access-Control-Allow-Origin: * header so fonts will work in Firefox. However, Firefox throws a CORS error for fonts loaded from this CloudFront/Nginx distribution. I created another CloudFront distribution, this time with an Apache server as the origin, and set Access-Control-Allow-Origin: * also. Firefox displays fonts from this origin without issue. I've set up a demo page here: http://kristengrote.com/cors-test/ When I perform a curl request for the same font file from each distribution, both files return almost exactly the same headers.

    Apache origin:

        HTTP/1.1 200 OK
        Server: Apache
        Content-Type: application/font-woff
        Content-Length: 25428
        Connection: keep-alive
        Date: Wed, 11 Jun 2014 23:23:09 GMT
        Last-Modified: Tue, 10 Jun 2014 22:15:56 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=2592000
        Expires: Fri, 11 Jul 2014 23:23:09 GMT
        Access-Control-Allow-Origin: *
        Access-Control-Allow-Methods: GET, HEAD
        Access-Control-Allow-Headers: *
        Access-Control-Max-Age: 3000
        X-Cache: Hit from cloudfront
        Via: 1.1 210111ffb8239a13be669aa7c59f53bd.cloudfront.net (CloudFront)
        X-Amz-Cf-Id: QWucpBoZnS3B8E1mlXR2V5V-SVUoITCeVb64fETuAgNuGuTLnbzAhw==
        Age: 487

    Nginx origin:

        HTTP/1.1 200 OK
        Server: nginx
        Content-Type: application/font-woff
        Content-Length: 25428
        Connection: keep-alive
        Date: Wed, 11 Jun 2014 23:15:23 GMT
        Last-Modified: Tue, 10 Jun 2014 22:56:09 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=2592000
        Expires: Fri, 11 Jul 2014 23:15:23 GMT
        Access-Control-Allow-Origin: *
        Access-Control-Allow-Methods: GET, HEAD
        Access-Control-Allow-Headers: *
        Access-Control-Max-Age: 3000
        X-Cache: Hit from cloudfront
        Via: 1.1 fa0dd57deefe7337151830e7e9660414.cloudfront.net (CloudFront)
        X-Amz-Cf-Id: E2Z3VOIfR5QPcYN1osOgvk0HyBwc3PxrFBBHYdA65ZntXDe-srzgUQ==
        X-Accel-Version: 0.01
        X-Powered-By: PleskLin
        X-Robots-Tag: noindex, nofollow

    So the only conclusion I can draw is that something about Nginx is preventing Firefox from recognizing CORS and allowing the fonts via CloudFront. Any ideas on what the heck is happening here?
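
    One hypothesis worth testing (mine, not from the post): CloudFront serves a single cached copy for requests with and without an Origin header unless Origin is whitelisted for forwarding, so Firefox may be handed a response that was cached for a non-CORS request. Comparing the two cases directly against the Nginx-backed distribution would show it (the hostname below is a placeholder):

        curl -sI http://<nginx-distribution>/font.woff | grep -i access-control
        curl -sI -H "Origin: http://kristengrote.com" http://<nginx-distribution>/font.woff | grep -i access-control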


  • NFS: Server says "authenticated mount request", but client sees "access denied"

    - by zigdon
    I have two machines, an NFS server (RHEL) and a client (Debian). The server has NFS set up, exporting a particular directory:

        server:~$ sudo /usr/sbin/rpcinfo -p localhost
           program vers proto   port
            100000    2   tcp    111  portmapper
            100000    2   udp    111  portmapper
            100024    1   udp    910  status
            100024    1   tcp    913  status
            100021    1   udp  53391  nlockmgr
            100021    3   udp  53391  nlockmgr
            100021    4   udp  53391  nlockmgr
            100021    1   tcp  32774  nlockmgr
            100021    3   tcp  32774  nlockmgr
            100021    4   tcp  32774  nlockmgr
            100007    2   udp    830  ypbind
            100007    1   udp    830  ypbind
            100007    2   tcp    833  ypbind
            100007    1   tcp    833  ypbind
            100011    1   udp    999  rquotad
            100011    2   udp    999  rquotad
            100011    1   tcp   1002  rquotad
            100011    2   tcp   1002  rquotad
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100005    1   udp   1013  mountd
            100005    1   tcp   1016  mountd
            100005    2   udp   1013  mountd
            100005    2   tcp   1016  mountd
            100005    3   udp   1013  mountd
            100005    3   tcp   1016  mountd

        server$ cat /etc/exports
        /dir *.my.domain.com(ro)

        client$ grep dir /etc/fstab
        server.my.domain.com:/dir /dir nfs tcp,soft,bg,noauto,ro 0 0

    All seems well, but when I try to mount, I see the following:

        client$ sudo mount /dir
        mount.nfs: access denied by server while mounting server.my.domain.com:/dir

    And on the server I see:

        server$ tail /var/log/messages
        Mar 15 13:46:23 server mountd[413]: authenticated mount request from client.my.domain.com:723 for /dir (/dir)

    What am I missing here? How should I be debugging this?
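
    A frequent cause with hostname-wildcard exports (a guess consistent with "authenticated mount request" followed by denial): mountd matches the client by reverse-resolving its IP, and any mismatch between the PTR result and *.my.domain.com denies the mount. Verifying resolution and re-exporting after edits is cheap:

        server$ showmount -e localhost        # what mountd thinks is exported
        server$ host <client-ip>              # must resolve back into *.my.domain.com
        server$ sudo exportfs -ra             # re-read /etc/exports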


  • Same domain only : smtp; 5.1.0 - Unknown address error 530-'SMTP authentication is required

    - by user124672
    I have a very strange problem after moving my netwave.be domain from WebHost4Life to Arvixe. I configured several email addresses, like [email protected] and [email protected]. For POP3 I can use mail.netwave.be, a mail server hosted by Arvixe. For SMTP, however, I have to use relay.skynet.be: Skynet (Belgacom) is one of the biggest internet providers in Belgium and blocks SMTP requests to external mail servers. So for years I've been using relay.skynet.be to send my messages with [email protected] as the sender, and it worked perfectly. After moving my domain to Arvixe, this is no longer the case. I can send emails to people without problems, and I have received emails too, so I suspect that side is OK. But I can't send emails from one user of my domain to another user. For example, if I send a mail from [email protected] to [email protected], relay.skynet.be picks up the mail just fine; a few seconds later, I get a 'Delivery Status Notification (Failure)' mail that contains:

        Reporting-MTA: dns; mailrelay012.isp.belgacom.be
        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.0.0 (permanent failure)
        Remote-MTA: dns; [69.72.141.4]
        Diagnostic-Code: smtp; 5.1.0 - Unknown address error 530-'SMTP authentication is required.' (delivery attempts: 0)

    Like I said, this only seems to happen when both the sender and recipient are addresses of a domain hosted by Arvixe. I have several accounts not related to Arvixe at all: I can use relay.skynet.be to send mail to [email protected] from these accounts, and likewise from [email protected] to these accounts - but not from one Arvixe account to another. I hope I have clearly outlined the problem and someone will be able to help me.
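
    Reading the bounce (my interpretation, not a confirmed diagnosis): the rejection comes from Arvixe's own server (69.72.141.4), which apparently refuses mail for a local mailbox when it arrives unauthenticated from an external relay. If Skynet permits port 587, submitting directly to Arvixe's server with SMTP AUTH would bypass the relay for same-domain mail; this tests whether authenticated submission is accepted:

        openssl s_client -connect mail.netwave.be:587 -starttls smtp
        # then: EHLO test, AUTH LOGIN, and try sending between the two netwave.be addresses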


  • SSL on nginx + unicorn got "Error 102 (net::ERR_CONNECTION_REFUSED)"

    - by panggi
    I tried to deploy my app on EC2 (opened ports: 22, 80, 443). App: Rails 3.2.2. Server: nginx 1.2.1, the latest unicorn gem, Ubuntu 12.04. Deployer: Capistrano. I followed the instructions in Railscasts: http://railscasts.com/episodes/335-deploying-to-a-vps (sorry, it's a Pro episode). Everything is fine over plain HTTP on port 80, but I get Error 102 after trying to use SSL. Here is the nginx.conf content:

        upstream unicorn {
          server unix:/tmp/unicorn.frontend.sock fail_timeout=0;
        }

        server {
          server_name beta.sukeru.com;
          listen 443 default;
          root /home/deployer/apps/appname/current/public;

          ssl on;
          ssl_certificate server.crt;
          ssl_certificate_key server.key;
          ssl_session_timeout 5m;
          ssl_protocols SSLv2 SSLv3 TLSv1;
          ssl_ciphers HIGH:!aNULL:!MD5;
          ssl_prefer_server_ciphers on;

          location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_redirect off;
            proxy_pass http://unicorn;
          }

          error_page 500 502 503 504 /500.html;
        }

    In production.rb I set config.force_ssl = true. Can anyone give a solution for this? :)
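
    ERR_CONNECTION_REFUSED happens before any TLS handshake - nothing accepted the TCP connection on 443 at all (my reading of that Chrome error code). So the first checks are whether nginx is actually listening there after its restart, and whether the handshake works locally:

        sudo netstat -tlnp | grep ':443'
        curl -vk https://127.0.0.1/
        sudo nginx -t && sudo tail /var/log/nginx/error.log

    If local connections succeed but remote ones are refused, the EC2 security group change likely didn't take effect for the running instance.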


  • Need help with automating a CMD java tool which queries Alexa AWS using batch

    - by Eli.C
    Hi everyone, I need to get all available info on 600 URLs from the "Alexa Web Information Service". I downloaded the java tool and I'm able to run a single query each time with a single switch/response group. I would like to ask how to write a batch file that would automate the process. The java tool runs from the CMD with the following:

        C:\> java UrlInfo (key1) (key2) (URL) (Response Group)

    UrlInfo is constant; key1 and key2 are constants; URL is a variable (I guess I need to read these from a file); Response Group is a variable (14 total, and I need to run each response group on each of the URLs once). The app returns data in clear text formatted as XML after each query. Here is an example:

        C:\> java UrlInfo (key1) (key2) www.url.com Rank
        Response:
        <?xml version="1.0"?>
        <aws:UrlInfoResponse xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/">
          <aws:Response xmlns:aws="http://awis.amazonaws.com/doc/2005-07-11">
            <aws:OperationRequest>
              <aws:RequestId>ec2b6-e8ae-b392</aws:RequestId>
            </aws:OperationRequest>
            <aws:UrlInfoResult>
              <aws:Alexa>
                <aws:TrafficData>
                  <aws:DataUrl type="canonical">url.com/</aws:DataUrl>
                  <aws:Rank>472906</aws:Rank>
                </aws:TrafficData>
              </aws:Alexa>
            </aws:UrlInfoResult>
            <aws:ResponseStatus xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/">
              <aws:StatusCode>Success</aws:StatusCode>
            </aws:ResponseStatus>
          </aws:Response>
        </aws:UrlInfoResponse>

    Any help would be really appreciated. Thanks and regards, Eli.C
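
    A minimal batch sketch (the file name and the response-group list are placeholders - substitute the 14 groups actually needed; urls.txt holds one URL per line):

        @echo off
        setlocal
        set KEY1=yourAccessKeyId
        set KEY2=yourSecretKey
        for /f "usebackq delims=" %%U in ("urls.txt") do (
          for %%G in (Rank,LinksInCount,SiteData) do (
            java UrlInfo %KEY1% %KEY2% %%U %%G >> results.xml
          )
        )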


  • OpenVPN Bridge LAN-to-LAN Configuration?

    - by Shad Reese
    I'm trying to configure an OpenVPN bridge LAN-to-LAN setup. Currently, I have the OpenVPN bridge server/client setup up and running. On the server side, the br-lan interface has tap0, eth0, and wlan0 in the bridge group. On the client side, the br-lan interface has eth0 and wlan0 in the bridge group; the client tap0 is outside of br-lan. Currently the two bridge groups are connected via the wlan0 interfaces (server side is the access point, client side is the wireless client). My goal is to connect the two bridge groups with a wireless VPN pipe. My network configuration:

        Server: br-lan: 10.4.96.50
        Client: br-lan: 10.4.96.75
                tap0:   10.4.96.100  <---- issued by the VPN server

    Unfortunately, I'm stuck with using a bridge instead of a routed OpenVPN setup. My question is how (if possible) do I add the client tap0 interface to the client bridge group, to ensure all traffic between the server/client bridge groups uses the VPN pipe?

    Server config file:

        config openvpn sample_server
            # Set to 1 to enable this instance:
            option enable 1
            option port 1194
            option proto udp
            option dev tap0
            option key /etc/easy-rsa/keys/server.key
            option dh /etc/easy-rsa/keys/dh1024.pem
            option ifconfig_pool_persist /tmp/ipp.txt
            option server_bridge "10.4.96.50 255.255.255.0 10.4.96.100 10.4.96.200"
            list push "redirect-gateway local def1"
            list push "dhcp-option DNS 10.4.96.14"
            option duplicate_cn 1
            option comp_lzo 1
            option max_clients 100
            option log /tmp/openvpn.log
            option verb 3

    Client config file:

        config 'openvpn' 'sample_client'
            option 'enable' '1'
            option 'client' '1'
            option 'dev' 'tap'
            option 'proto' 'udp'
            list 'remote' '10.4.96.50 1194'
            option 'status' /tmp/openvpn-status.log
            option 'log' /tmp/openvpn.log
            option 'ca' '/etc/easy-rsa/keys/ca.crt'
            option 'cert' '/etc/easy-rsa/keys/client.crt'
            option 'key' '/etc/easy-rsa/keys/client.key'
            option 'comp_lzo' '1'
            option 'verb' '5'

    Thanks in advance,
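
    On UCI-style configs like these, one common approach (a sketch under my assumptions about this setup, not a tested recipe) is to let OpenVPN run an up script that adds the freshly created tap device to the bridge; OpenVPN passes the device name as the script's first argument:

        # /etc/openvpn/up.sh -- hypothetical helper, referenced from the client config
        #!/bin/sh
        brctl addif br-lan "$1"

    and in the client config (option names assumed to map to OpenVPN's up/script-security directives):

        option 'up' '/etc/openvpn/up.sh'
        option 'script_security' '2'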


  • run script as another user from a root script with no tty stdin

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root and has no tty standard in. Below I give my four different attempts, which all fail.

    Attempt 1:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    This last line is not good enough, since training_command needs the environment for the training user to be set.

    Attempt 2:

        su - training -c 'training_command'

    This looks like the right thing (http://serverfault.com/questions/44400/run-a-shell-script-as-a-different-user) but gives 'standard in must be tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers (a la http://superuser.com/questions/119376/bash-su-script-giving-an-error-standard-in-must-be-a-tty) but I am reluctant and unsure of the consequences.

    Attempt 3:

        runuser - training -c 'training_command'

    This one gives 'runuser: cannot set groups: Connection refused'. I found no sense or resolution to this error.

    Attempt 4:

        ssh -p100 training@localhost 'source $HOME/.bashrc; training_command'

    This one is more of a joke, to show desperation. Even this one fails, with 'Host key verification failed' (the host key IS in known_hosts, etc.).

    Note: attempts 2, 3 and 4 all work as they should if I run the wrapper script from a root shell; problems only occur when the system service monitor (daemontools) launches it (no tty terminal, I guess). I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice. (This has also been posted on superuser: http://superuser.com/questions/434235/script-calling-script-as-other-user)
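
    Building on attempt 1 (a sketch, assuming the missing pieces are just HOME, USER, and the profile files): setuidgid drops privileges without touching the environment, so setting the basics explicitly and running a login shell as the target user may be enough, with no tty involved:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        # bash -l sources the profile files under the HOME we set, as the training user
        exec setuidgid training env HOME=/home/training USER=training \
            bash -l -c 'training_command'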


  • OpenAM throwing 302 0 behind haproxy, nginx

    - by Travis
    I'm having some issues with my deployment and was wondering if you can help. My setup is as follows: two OpenAM servers are set up behind a load balancer (HAProxy). That load balancer sits behind two reverse proxies (nginx), and the reverse proxies sit behind another load balancer (HAProxy). So a request goes through: HAProxy -> nginx -> HAProxy -> OpenAM. I can access the OpenAM web console through the reverse proxies without a problem; everything works fine at this level. However, when I access OpenAM through the load balancer in front of the reverse proxies, OpenAM throws a 302 error. The funny thing is that I can access host/openam/UI/Login and log in successfully - I even get the cookie and have access to my apps that are set up. However, immediately after the login, OpenAM throws a 302 redirect. I'm puzzled and cannot figure out what is going wrong. Does anyone have any idea? My config files are below.

    nginx config:

        server {
            listen 443;
            server_name oamlb1;

            location / {
                proxy_pass http://oamlb1.mydomain.com:8080;
                proxy_set_header X-Real-IP $remote_addr;
            }

            location /openam {
                proxy_pass http://oamlb1.mydomain.com:8080;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host oamlb1.mydomain.com:8080;
            }
        }

    haproxy config (this file is for the servers; the file for the reverse proxies is identical except it points to the reverse proxies):

        listen http_proxy :8090
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            server webA oamserver1.mydomain.com:18080 option forwardfor

    Thanks
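
    A hypothesis consistent with the symptoms (not a confirmed diagnosis): OpenAM compares the incoming Host against its configured server URL and redirects to its own FQDN when they differ. The /openam location pins Host to the backend name, so requests arriving via the outer balancer lose their original hostname. Preserving it is a one-line change:

        location /openam {
            proxy_pass http://oamlb1.mydomain.com:8080;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;   # keep the front-end hostname instead of the backend's
        }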


  • Unable to connect to Xend with virt-manager

    - by Majid Azimi
    I have installed debian 6.0.1a and all the XEN stuff, including the xen kernel, libvirtd, etc. But when I want to connect to xend, virt-manager shows me this:

        Verify that:
        - A Xen host kernel was booted
        - The Xen service has been started

    Details:

        Unable to open connection to hypervisor URI 'xen:///':
        unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied
        Traceback (most recent call last):
          File "/usr/share/virt-manager/virtManager/connection.py", line 971, in _try_open
            None], flags)
          File "/usr/lib/python2.6/dist-packages/libvirt.py", line 111, in openAuth
            if ret is None:raise libvirtError('virConnectOpenAuth() failed')
        libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied

    Here is the uname output:

        Linux debian 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux

    And xend and libvirtd are running:

        root@debian:/home/mazimi# /etc/init.d/libvirt-bin status
        Checking status of libvirt management daemon: libvirtd running.
        root@debian:/home/mazimi# /etc/init.d/xend start
        Starting Xen daemons: xenstored xenconsoled xend.

    Permissions for libvirt-sock:

        root@debian:/home/mazimi# ls -alih /var/run/libvirt/
        total 12K
        671017 drwxr-xr-x  3 root root    4.0K Apr 15 13:54 .
        654083 drwxr-xr-x 18 root root    4.0K Apr 15 13:54 ..
        670901 srwxrwx---  1 root libvirt    0 Apr 15 13:54 libvirt-sock
        670928 srwxrwxrwx  1 root libvirt    0 Apr 15 13:54 libvirt-sock-ro
        670870 drwxr-xr-x  2 root root    4.0K Apr 15 02:34 qemu

    We also have a group named libvirt in /etc/group. When running libvirtd in verbose mode, it behaves kind of strange:

        root@debian:/var/log/libvirt# /usr/sbin/libvirtd --verbose
        17:26:55.841: warning : qemudStartup:1832 : Unable to create cgroup for driver: No such device or address
        17:26:56.128: warning : lxcStartup:1900 : Unable to create cgroup for driver: No such device or address

    and waits infinitely.
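
    The socket is root:libvirt with mode 770, so "Permission denied" points at the desktop user not being a member of the libvirt group (my inference; the username below is taken from the paths in the listing and may differ):

        # add the user, then log out/in (or run `newgrp libvirt`) so the membership applies
        usermod -aG libvirt mazimi
        # then verify the connection outside virt-manager:
        virsh -c xen:/// list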


  • Remote Desktop to Server 2008R2 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation:

        Subject:
            Security ID:     NULL SID
            Account Name:    -
            Account Domain:  -
            Logon ID:        0x0
        Logon Type:          3
        Account For Which Logon Failed:
            Security ID:     NULL SID
            Account Name:    username
            Account Domain:  hostname
        Failure Information:
            Failure Reason:  Unknown user name or bad password.
            Status:          0xc000006d
            Sub Status:      0xc0000064
        Process Information:
            Caller Process ID:   0x0
            Caller Process Name: -
        Network Information:
            Workstation Name:       JESSE-PC
            Source Network Address: -
            Source Port:            -
        Detailed Authentication Information:
            Logon Process:            NtLmSsp
            Authentication Package:   NTLM
            Transited Services:       -
            Package Name (NTLM only): -
            Key Length:               0

    I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate; connecting from home I get a warning about that, but connecting from work I just get the logon error.
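
    Since XP Mode on the same machine works, one suspect (a guess on my part) is a local security policy on the work PC that changes NTLM behavior, e.g. "Network security: LAN Manager authentication level" or the NTLM session-security minimums. Comparing the values with a working client costs nothing:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v NtlmMinClientSec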


  • SSI includes not working on Debian with Apache

    - by Mike
    I'm trying to get SSI to work on Debian running Apache, however the .shtml files are not being parsed. From a PHP file with phpinfo() I can see that the following show up in the loaded modules section:

        mod_mime_xattr
        mod_mime
        mod_mime_magic

    In /etc/apache2/mods-enabled/mime.conf I have (among other things):

        AddType text/html .shtml
        AddOutputFilter INCLUDES .shtml

    In /etc/apache2/sites-enabled/domain.com.conf (for the virtual host in question) I have:

        <Directory /home/username/public_html>
            Options +Includes
            allow from all
            AllowOverride All
        </Directory>

    and for good measure, I added the following as well:

        <Directory />
            Options +Includes
        </Directory>

    In the user's .htaccess file, I tried adding:

        Options +Includes
        AddType text/html shtml
        AddHandler server-parsed shtml

    Nothing seems to work. How can I even debug this? Edit: Here is the output of ls /etc/apache2/mods-enabled/ in case this helps:

        actions.conf          dav_svn.load        proxy_balancer.load
        actions.load          deflate.conf        proxy.conf
        alias.conf            deflate.load        proxy_connect.load
        alias.load            dir.conf            proxy_http.load
        auth_basic.load       dir.load            proxy.load
        auth_digest.load      env.load            python.load
        authn_file.load       fcgid.conf          reqtimeout.conf
        authz_default.load    fcgid.load          reqtimeout.load
        authz_groupfile.load  mime.conf           rewrite.load
        authz_host.load       mime.load           ruby.load
        authz_user.load       mime_magic.conf     setenvif.conf
        autoindex.conf        mime_magic.load     setenvif.load
        autoindex.load        mime-xattr.load     ssl.conf
        cgi.load              negotiation.conf    ssl.load
        dav_fs.conf           negotiation.load    status.conf
        dav_fs.load           php5.conf           status.load
        dav.load              php5.load           suexec.load
        dav_svn.conf          proxy_balancer.conf
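
    Notably, the mods-enabled listing contains no include.load: mod_include, which actually implements SSI (the INCLUDES filter), is a separate module from the mod_mime* entries that appear in phpinfo(). If it really is absent, enabling it should be the whole fix:

        sudo a2enmod include
        sudo service apache2 restart
        apache2ctl -M | grep include   # confirm include_module is now loaded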

