Search Results

Search found 2130 results on 86 pages for 'serve u'.

Page 61/86 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Strange Apache WebDAV situation (OS X will connect, Ubuntu will not)

    - by mewrei
    So basically my situation is that I have an Apache 2.2 web server running on Linux on another box, configured to serve WebDAV. Now here's the weird part: I can access the server just fine from my Mac using the "Connect to Server" dialog (I even moved about 5GB of files over the connection). On my Ubuntu desktop, cadaver connects as well and lets me browse. However, when I try to use Xmarks (BYOS Edition) or the GNOME "Connect to Server" dialog, I get a 403 Forbidden error. My server uses digest authentication, if that makes any difference. Here's the relevant part of my apache2.conf file:

        <VirtualHost *:80>
            DocumentRoot "/path"
            <Directory "/path">
                Dav on
                AuthType Digest
                AuthName iTools
                AuthDigestDomain "/"
                AuthUserFile /path/to/WebDavUsers
                Options None
                AllowOverride None
                <LimitExcept GET HEAD OPTIONS>
                    require valid-user
                </LimitExcept>
                Order allow,deny
                Allow from All
            </Directory>
            <Directory "/path/*/Public">
                Options +Indexes
            </Directory>
            <Directory "/path/user">
                <LimitExcept GET HEAD OPTIONS>
                    require user user
                </LimitExcept>
            </Directory>
        </VirtualHost>
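
    One way to narrow this down (a suggested diagnostic, not from the original question): the failing clients may simply be sending a WebDAV method beyond GET/HEAD/OPTIONS before authenticating, and the <LimitExcept> block gates everything else behind digest auth. Replaying a PROPFIND by hand with curl, with and without credentials, shows whether the 403 comes from the auth layer or from something else; the username and URL below are placeholders.

        # unauthenticated PROPFIND - expect 401 (auth required), not 403
        curl -i -X PROPFIND -H "Depth: 1" http://server/path/

        # authenticated PROPFIND using digest auth
        curl -i --digest -u someuser -X PROPFIND -H "Depth: 1" http://server/path/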

    Read the article

  • What are useful .screenrc settings?

    - by gyaresu
    Basically, something like my own settings, which I've posted below. I'm looking for added functionality for the programme 'screen'. At the very least, have a look at the last line for a fantastic 'menu bar' at the bottom of a screen session.

        ## gyaresu's .screenrc 2008-03-25
        # http://delicious.com/search?p=screenrc

        # Don't display the copyright page
        startup_message off

        # tab-completion flash in heading bar
        vbell off

        # keep scrollback n lines
        defscrollback 1000

        # Doesn't fix scrollback problem on xterm because if you scroll back
        # all you see is the other terminals history.
        # termcapinfo xterm|xterms|xs|rxvt ti@:te@

        # These will let you use
        bind -c selectHighs 0 select 10  # these three commands are
        bind -c selectHighs 1 select 11  # added to the command-class
        bind -c selectHighs 2 select 12  # selectHighs
        bind -c selectHighs 3 select 13
        bind -c selectHighs 4 select 14
        bind -c selectHighs 5 select 15

        bind - command -c selectHighs    # bind the hyphen to command-class selectHighs

        screen -t rtorrent 0 rtorrent
        #screen -t tunes 1 ncmpc --host=192.168.1.4 --port=6600  # was for connecting to MPD music server.
        screen -t stuff 1
        screen -t irssi 2 irssi
        screen -t dancing 4
        screen -t python 5 python
        screen -t giantfriend 6 these_are_ssh_to_server_scripts.sh
        screen -t computerrescue 7 these_are_ssh_to_server_scripts.sh
        screen -t BMon 8 bmon -p eth0
        screen -t htop 9 htop
        screen -t hellanzb 10 hellanzb
        screen -t watching 3
        #screen -t interactive.fiction 8
        #screen -t hellahella 8 paster serve --daemon /home/gyaresu/downloads/hellahella/hella.ini

        shelltitle "$ |bash"

        # THIS IS THE PRETTY BIT
        # change the hardstatus settings to give an window list at the bottom of the
        # screen, with the time and date and with the current window highlighted
        hardstatus alwayslastline
        #hardstatus string '%{= mK}%-Lw%{= KW}%50>%n%f* %t%{= mK}%+Lw%< %{= kG}%-=%D %d %M %Y %c:%s%{-}'
        hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %d/%m %{W}%c %{g}]'

    Read the article

  • Edit text files over SSH using a local text editor

    - by Mikko Ohtamaa
    I work in various Linux and UNIX environments, and I'd like to solve the problem of editing remote configuration files over SSH elegantly. Instead of using terminal editors (nano), I'd like to open the file in a local text editor on my desktop (Sublime Text 2). CyberDuck, WinSCP and various other SFTP apps can do this. Using editors over X11 forwarding has also proven to be problematic. Archaic text editors like Vim or Emacs do not serve my needs well either; they could do this, but I prefer other text editing software. SSH mounts (FUSE) are also problematic unless they can happen on demand, triggered by the remote side. So here is what I hope to achieve:

        1. Have some kind of easily deployable shell script, etc., which I can copy to the remote server (let's call it mooedit)
        2. I run the mooedit command on the remote server, to which I am connected over SSH
        3. mooedit sends some kind of signal (over SSH) to my local desktop
        4. On my local desktop this signal is captured and it determines: 'a-ha! moo wants to edit a file on server X in folder Y'
        5. The file is transferred over SFTP to the local desktop (/tmp)
        6. The file is opened in a nice GUI text editor on the local desktop
        7. When Save is pressed, the local desktop notices the change in the file and SFTP sends the resulting file back to the server

    The question is: what signaling mechanisms does SSH provide for this? Any other methods to trigger a local text editor for a remote SSH file?
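
    As a sketch of one existing pattern that matches this flow (assuming the rmate helper script and a compatible editor plugin, such as Sublime Text's rsub): the "signal" is simply a TCP connection carried over an SSH reverse tunnel, so no extra ports need to be opened on either side.

        # on the desktop: connect with a reverse tunnel; the editor plugin
        # listens locally on port 52698
        ssh -R 52698:localhost:52698 user@remoteserver

        # on the server: rmate streams the file back through the tunnel,
        # and receives the edited copy when the local editor saves
        rmate /etc/nginx/nginx.conf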

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded and replicated configuration. The sharded configuration consists of 8 shards, each running on three different machines, thereby constituting a total of 24 shards. All 8 of these shards run in the same partition on each machine. The sharded and replicated version is again 8 shards, just like plain sharding, and all 8 mongods run on the same partition in each machine. But apart from this, each of these three machines now runs an additional 16 threads on another partition, which serve as the secondaries for the 8 mongods running on the other machines. This is the way I prepared a sharded and replicated configuration with data chunks having a replication factor of 3. The important point to note is that once the data has been loaded, it is not modified, so after the primaries and secondaries have synchronized, it doesn't matter which one I read from. To run the queries, I use an entirely different machine (let's call it config) which runs mongos, and this machine's only purpose is to receive queries and run them on the cluster. Contrary to my expectations, plain sharding with 8 threads on each machine (total = 3 * 8 = 24) is performing better for queries than the sharded + replicated configuration. I have a script written to perform the queries, so in order to time them I run time ./testScript and look at the result. I tried changing the read preference for the replicated cluster by logging in to the mongo shell of config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries with time ./testScript. The questions are:

        1. Where am I going wrong in the replication? Why is it slower than the plain sharding version?
        2. Does the db.getMongo().setReadPref('secondary') setting persist when I leave the shell and then run the query?

    All four machines are running Linux, and I have already increased ulimit -n to 2048 from the initial value of 1024 to allow more connections. The collections are properly distributed and all the mongods have an equal number of chunks. It goes without saying that the indices in both configurations are the same.
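
    On the second question, a point worth noting (not from the original post, and the database/collection names below are placeholders): setReadPref() applies to the connection object of the shell session where it is called, so a preference set interactively will not carry over to a separate invocation of the test script. A sketch of setting it inside the same run:

        # the read preference must be set in the same session that issues the query
        mongo --host config --eval "
            db.getMongo().setReadPref('secondary');
            printjson(db.getSiblingDB('mydb').mycoll.find({}).explain());
        "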

    Read the article

  • One Comcast Business Gateway, One Router, Two Web Servers

    - by Kevin Scheidt
    I have a Comcast business account with a router and a web server (info) attached. Behind the router there are multiple computers and a second web server (com) which also serves as a file server. (info) has two NICs in it: one direct to Comcast and one connected to the router. It needs to serve its websites to the world. It needs, however, to also be able to see all the internal computers and (com)'s served files. With just one NIC (the one connected to the router, not Comcast), (info) works fine, but no one outside can see it. (com) serves port 80, and (info) needs to handle port 80 as well. I have two domain names registered, and 5 static IPs from Comcast. Right now http://www.graceamazing.com, handled by (com), works fine, and http://www.graceamazing.com:1307, handled by (info), works fine. But as soon as I enable the 2nd NIC in (info), http://www.graceamazing.info runs extremely slow (horribly slow); however, http://www.graceamazing.com:1307 and .com still work fine. (com) has an IP address via the router, 70.89.233.41; (info) has an IP address of 70.89.233.46 via Comcast (2nd NIC) and a static internal IP of 192.168.x.100 behind the router. Any suggestions or changes to make that will make http://www.graceamazing.info perform with the same speed it has when going through http://graceamazing.com:1307? Is there a setting I should check / could have missed?
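
    One thing worth checking (my assumption, not from the original post, and only applicable if the dual-NIC box runs Linux — the post doesn't say): with two NICs, reply traffic for the Comcast-facing address may be leaving via the router's default route, and that kind of asymmetric routing often shows up as exactly this sort of extreme slowness. Source-based policy routing pins replies to the interface they arrived on; the interface and gateway values below are placeholders.

        # replies sourced from the Comcast-facing address use the Comcast gateway
        ip route add default via <comcast-gateway-ip> dev eth1 table 100
        ip rule add from 70.89.233.46 table 100
        ip route flush cache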

    Read the article

  • Very slow connection to Xserve via AFP or SMB

    - by Mhoffman13
    Help. File transfer and connection speeds to our Xserve are painfully slow from newly purchased iMacs. The Xserve is only used as a file server; it's running 10.4.11. The problem seems to happen only on brand new iMacs running 10.6.3. When connected either over AFP or SMB, copying files is many times slower than usual. Other machines on the network running either 10.4 or 10.5 have normal connection speeds. To try to rule out OS incompatibility, I connected the new iMac running 10.6 to another computer running 10.4 over the network; the file transfer speed was as fast as normal. So it seems the problem lies with the Xserve (maybe). The AFP logs (either access or error) don't show anything unusual. One thing that did look different: when the iMac was connected to the Xserve, the user had its ID listed as its IP address, whereas the other connected machines had an ID of broadcasthost. I also noticed that when connected from the new iMac I can only see one of the mirrors; when any other computer connects, both mirrors are shown. I tried a restart of the Xserve but the problem persists. Thanks in advance for any advice.
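
    Purely as a hedged suggestion (not from the original post): one client-side tweak commonly reported to help 10.6 machines talking to older AFP/SMB servers is disabling TCP delayed ACK, which can interact badly with some server stacks. It is easy to test and to revert:

        # on the 10.6 iMac; 0 disables delayed ACK
        sudo sysctl -w net.inet.tcp.delayed_ack=0

        # revert if it makes no difference (the OS X default is 3)
        sudo sysctl -w net.inet.tcp.delayed_ack=3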

    Read the article

  • IE9 appears to be ignoring RewriteRule in htaccess file

    - by mouli
    I have a site that uses SEF URLs and htaccess RewriteRules to serve up the pages. This has worked fine for several years, until the arrival of IE9. Now it appears that the links are not being rewritten and the site is dead in the water. I have tried different compatibility modes to no avail, I've played with the rewrite rules over and over, and I've tried different doctypes and a few other browser settings. I agree that in theory it cannot be a browser-specific problem if the problem is with the htaccess file, but this site works in IE8, Firefox and Chrome. I have run the RewriteRule through a validator and it looks fine. The site is www.marlboroughsounds.co.nz, a sample link is http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw, and the rewrite rule that's not working looks like this:

        RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L]

    The link fails and the server serves up a browser 404 page, not even the custom 404 I have for the site. Any ideas would be much appreciated as I am stumped.
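
    A diagnostic sketch (my suggestion, not from the original question): mod_rewrite never sees which browser made the request, so if IE9 really behaves differently, something must differ in the request itself. Replaying the URL with an IE9 User-Agent string isolates whether the server or the browser is at fault:

        # compare the response headers with and without an IE9 user agent
        curl -I http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw
        curl -I -A "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)" \
            http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw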

    Read the article

  • Internet Radio Station for University

    - by ryan
    I am trying to help my university student radio station rethink the setup of the way they stream music, but I have some questions regarding the use of Ubuntu to stream it. Currently, the radio station uses two Windows machines: one is used to stream the radio station and serve the website, and the other is used by rotating DJs to select songs and create playlists. The computer used by DJs feeds mono into the sound card of the server, and the server streams the feed online. The requirements:

        - Ideally I would like to maintain a two-computer setup: one computer as server, and another that is used to select and play music by rotating DJs.
        - I would like to use Ubuntu for the server.
        - I would like to use Windows for the other machine.
        - The server should be able to stream song information.

    First, is there a way to somehow get the song information from an analog feed? Second, what is the best streaming server for radio? I have encountered SHOUTcast, Icecast, and Darwin, but I don't know where to begin in attempting to gauge them. Finally, if anyone has any tips or pointers about small internet radio station management/setup, they would be appreciated, as this is my first radio station and I am eager to hear of past experiences.
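
    As a sketch of one common Ubuntu pairing (my suggestion, not from the question): Icecast as the streaming server and DarkIce as the source client that captures the analog line-in from the sound card and pushes it to Icecast. Song information cannot be extracted from an analog feed automatically, but source clients like DarkIce let stream metadata be set separately. Package names are for Ubuntu; the config details are placeholders.

        sudo apt-get install icecast2 darkice
        # /etc/darkice.cfg then points its input section at the ALSA device
        # receiving the DJ machine's feed, and its icecast2 output section
        # at the local Icecast server and mount point.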

    Read the article

  • Using IIS7 as a reverse proxy

    - by Eric Petroelje
    I'm setting up a server at home to host a few small websites. One of them is .NET based and needs IIS; the others are PHP based and need Apache. So, I have both IIS 7 and Apache 2.2.x installed on my server, with IIS on port 80 and Apache running on port 8080. I would like to set up IIS to work as a reverse proxy, forwarding the requests for the Apache sites to port 8080 and serving the requests for the .NET site itself, based on the host headers. Like this:

        www.mydotnetsite.com/*  -> IIS -> serve from IIS
        www.myapachesite.com/*  -> IIS -> forward to Apache on port 8080
        www.myothersite.com/*   -> IIS -> forward to Apache on port 8080

    I did a bit of googling and it seemed like the Application Request Routing feature would do what I needed, but I can't seem to get it to work the way I want it to. I can get it to forward ALL traffic to the Apache server, and I can get it to forward traffic with a specific URL pattern to the Apache server, but I can't seem to get it to forward based on the host headers (e.g. "forward all requests for www.myapachesite.com to localhost:8080"). So the question is, how would I go about configuring ARR to do this? Or do I need a different tool? I'm also open to using Apache as the reverse proxy and forwarding the .NET site requests to IIS instead, if that's easier (running Apache on port 80 and IIS on 8080).
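
    A sketch of how this is typically done with ARR plus the URL Rewrite module (hostnames are the examples from the question; treat the exact rule as an assumption to verify): the host-header match belongs in a rewrite condition, not in the URL pattern, which is why pattern-only attempts forward everything or nothing.

        <rewrite>
          <rules>
            <rule name="ProxyApacheSites" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^www\.(myapachesite|myothersite)\.com$" />
              </conditions>
              <!-- requires ARR installed with proxying enabled -->
              <action type="Rewrite" url="http://localhost:8080/{R:1}" />
            </rule>
          </rules>
        </rewrite>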

    Read the article

  • nginx server over https using up all available file handles

    - by mmr
    Hi all, so I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file, in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs states that this error is due to overflowing the available number of file handles, i.e., "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:

        nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
        nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
        nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
        nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
        nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
        nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
        nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED)
        nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED)
        nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED)
        nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED)

    Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder: it looks like it's in some kind of loop to pull all available files. Any idea what's going on, and how to fix it?
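
    A reading of that lsof output (my interpretation, offered as an assumption): the handles come in pairs — an outgoing connection to localhost:https and the matching accepted connection — which is the signature of nginx proxying a request back to its own listener, burning two descriptors per iteration until the limit is hit. The thing to hunt for is a proxy_pass whose target resolves to the address and port nginx itself is bound to, e.g.:

        # the loop: nginx listening on 443 proxying to itself
        server {
            listen 443 ssl;
            location / {
                proxy_pass https://localhost;   # resolves back to this same listener
            }
        }
        # the fix is pointing proxy_pass at the backend's real port instead,
        # e.g. proxy_pass http://localhost:4567; for a default Sinatra app
        # (grep -rn proxy_pass /etc/nginx/ is a quick way to audit the targets)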

    Read the article

  • What are some techniques to monitor multiple instances of a piece of software?

    - by Geo Ego
    It was recommended that I ask this question here by a member of StackOverflow. I have a piece of self-serve kiosk software that will be running at multiple sites, and I'd like to monitor their status remotely. The kiosk application itself is pretty much finished. I am now in the process of creating a piece of software that will monitor all of the kiosks from a central location so that the customer can view particular details remotely (for instance, how many bills are in the acceptor's cash cartridge, what customer is currently logged in, etc.). Because I am in such an early stage of development, my options are quite open. I understand that I'm not giving very many qualifications, but I'd like to try to get a good variety of potential solutions. Some details:

        - Kiosk software is a VB6 app running on Windows Embedded
        - Monitoring software will be run on a modern desktop version of Windows (either XP, Vista, or 7)
        - Database is SQL Server 2008

    My initial idea was to develop a .NET app that would simply report the last database transaction for each kiosk at a set interval (say every second or so), but I'd really like the kiosk software to report its status in real time. I'm not exactly sure where to begin in terms of what modifications may need to be made to the kiosk software, and what the monitoring software will require. Links to articles on these topics would be most welcome.
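
    For the polling fallback described above, a minimal sketch of the per-kiosk "last transaction" query (the table and column names are invented for illustration; the real schema will differ):

        -- latest activity per kiosk, as a cheap liveness/status check
        SELECT KioskId, MAX(TransactionTime) AS LastSeen
        FROM KioskTransactions
        GROUP BY KioskId;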

    Read the article

  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment:

        CORE SERVER - Server 2012 Datacentre (all below are Hyper-V servers)
            Server1: Cloud-DC01 (Active Directory Domain Services for mycloud.local)
            Server2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode)
            Server3: Cloud-SG01 (Remote Desktop Gateway)

        CORE SERVER 2 - Server 2012 Datacentre (all below are Hyper-V servers)
            Server1: Cloud-DC02 (Active Directory Domain Services for mycloud.local)
            Server2: Cloud-TS01 (Remote Desktop Session Host for Company A)
            Server3: Cloud-TS02 (Remote Desktop Session Host for Company B)
            Server4: Cloud-TS03 (Remote Desktop Session Host for Company C)

    What I thought about doing was setting up each organisation in its own OU (perhaps creating the OU structure based on the Exchange 2010 tenant OU structure so the accounts are linked). Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and is authenticated, so they are pushed onto the correct server (based on session collections in 2012). I won't lie, this is something I have come up with quite quickly, so there may well be something glaringly obvious that I am missing. Any feedback would be appreciated.

    Read the article

  • Mercurial internal Setup on Windows 7 - Exception happened during processing of request from ...

    - by Sad0w1nL1ght
    Hi, I have 1 central repository and many local ones. On my machine I have a local and the central repository too. I can clone/commit/update/push/pull very easily between the local and central repositories on my local machine, but when I want to make a clone from another machine it gets an error:

        listening at http://MyLocalMachine:8000/ (bound to *:8000)
        ----------------------------------------
        Exception happened during processing of request from ('192.168.0.194', 49319)
        Traceback (most recent call last):
          File "SocketServer.pyc", line 558, in process_request_thread
          File "SocketServer.pyc", line 320, in finish_request
          File "mercurial\hgweb\server.pyc", line 47, in __init__
          File "SocketServer.pyc", line 615, in __init__
          File "BaseHTTPServer.pyc", line 329, in handle
          File "BaseHTTPServer.pyc", line 323, in handle_one_request
          File "mercurial\hgweb\server.pyc", line 79, in do_GET
          File "mercurial\hgweb\server.pyc", line 70, in do_POST
          File "mercurial\hgweb\server.pyc", line 63, in do_write
          File "mercurial\hgweb\server.pyc", line 127, in do_hgweb
          File "mercurial\hgweb\hgweb_mod.pyc", line 86, in __call__
          File "mercurial\hgweb\hgweb_mod.pyc", line 118, in run_wsgi
        ErrorResponse
        ----------------------------------------

    The command line which started the central repo:

        hg serve -R TT -n TTZoli

    The command from the remote machine for cloning:

        hg clone --pull http://MyLocalMachine:8000/TT

    Config for the central repo (the 'test' user is the one I'm trying to access the central repo with):

        [ui]
        username = MyLocalUserName
        username = test <[email protected]>

        [web]
        push_ssl = false

    Config for the remote repo:

        [ui]
        username = test <[email protected]>

        [web]
        push_ssl = false

    I'm not sure if it's relevant, but my firewall is turned off on both machines, and the files in the /hg folder are not versioned on the server, except hgignore. Could you please suggest some ideas? What could be the problem? Thanks in advance!
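
    Two hedged things worth testing (not stated in the original post): duplicate username lines in [ui] do not grant access to two users — the later assignment simply wins — and hg serve restricts some operations from remote clients unless the web section explicitly allows them. A minimal sketch for the central repo's hgrc, plus running the server verbosely to see the real error text:

        [web]
        push_ssl = false
        # allow remote operations; '*' means any user, including anonymous
        allow_push = *

        # restart the server with verbose/debug output to see why the
        # request is rejected:
        #   hg serve -R TT -n TTZoli -v --debug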

    Read the article

  • Add static route through DHCP

    - by MathieuK
    I'm trying to get an OS X Lion Server to provide a static route to its clients (all OS X Lion) over DHCP. I can't get the client to actually apply the static route. So far, I've managed to get the DHCP server (bootpd) to serve DHCP option 33 (static_route) in its DHCP offers by editing /etc/bootpd.plist and adding something like:

        <key>dhcp_option_33</key>
        <data>[some base64 goes here]</data>

    ...and restarting the DHCP service. On the client, I've managed to get it to actually request the DHCP option by adding option 33 to the DHCPRequestedParameterList key:

        <key>DHCPRequestedParameterList</key>
        <array>
            ... keys snipped for brevity ...
            <integer>33</integer>
        </array>

    ...and rebooting the client. This makes the client request the static_route option from the DHCP server (I can see the proper output in ipconfig getpacket en0), but it doesn't actually apply the rule. Has anyone ever succeeded in applying static_route options on OS X clients through DHCP?
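
    For reference on the payload itself (background knowledge, not from the question): option 33 carries a sequence of destination/router IP pairs, 8 bytes per route. A sketch of generating the base64 for a route to 10.1.2.0 via 192.168.1.1, to paste into the <data> element:

        # 4 bytes destination, 4 bytes router
        printf '\x0a\x01\x02\x00\xc0\xa8\x01\x01' | base64
        # -> CgECAMCoAQE=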

    Read the article

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with WebSockets. Does anyone know of a WebSocket server (open source or paid) that provides a durable store of the WebSocket "channel"? All of the examples that I have found do not address durability: if a WebSocket server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for WebSockets 101 information; that is readily available and understood. I'm looking for a server (open source or paid) that supports WebSockets and has a durable store for the WebSocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes:

        1. Support failover scenarios contemplated by the WebSocket Network Working Group, http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly, so that missed messages are sent when a client connects to a failover server).
        2. Support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer... but that is not what I am looking for.

    EDIT: So, after some research, the following installed options seem to be the most robust:

        - Kaazing
        - Migratory (http://migratory.ro)

    Hosted services that seem "real":

        - Pusher (great API but no history feature yet)
        - PubNub (has history)

    All of the above services have graceful fallback to other communication methods if WebSockets are not available. I was not able to find any open source project that provided "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.

    Read the article

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend forwards to apache2 as the default backend (or nginx; it's apache in this local test), and to the node.js app if the URL has socket.io in the request AND the domain name matches. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io    # if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl      # if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor    # this sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check    # forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor    # this sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check    # forwards to the node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but it fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js) because the request goes to apache (it should go to the node.js app that serves those files). The weird thing is that if I access the socket.io file directly, after refreshing a few times, it loads; so I suppose HAProxy is "caching" the forwarding for the client when it makes the first request and reaches the apache server. Any suggestion of how this can be solved? Or what I can try or look at?
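
    A hedged observation on the config above (my reading; verify against the HAProxy docs for your version): url_dom matches domain names inside the URL rather than the request path, so requests for /socket.io/... may never satisfy the is_soio ACL, and since the two-ACL condition requires both to match, those requests fall through to the default backend — the intermittent success then looks like caching but is really inconsistent matching. A sketch of a path-based ACL instead:

        acl is_soio path_beg /socket.io
        use_backend chat_backend if is_chat is_soio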

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, we're looking to serve a range of content using Amazon S3 as a store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate the users when they access the web application running in Tomcat. Based on their authentication we are able to tell whether the user has access to paid-for content or just the free stuff. So I envision the flow of a request being something like this:

        1. Authenticated request to Tomcat
        2. If the user is a "paid" user, display links to premium content
        3. Direct requests for paid content back through Tomcat, to prevent direct access by non-paying users
        4. Tomcat makes the request to S3 through a web cache to keep our costs down
        5. Content is returned to the user

    As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache when we publish fresh content to S3. So to confirm my proposal: Client Request -> Tomcat -> Web Cache -> S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to a feed describing content it should invalidate. I'd appreciate some general feedback on this approach, as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.
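
    As one concrete possibility for the "web cache" box (a sketch under my own assumptions; the bucket name, port, paths, and cache sizes are placeholders): nginx's proxy_cache can sit between Tomcat and S3 and serve repeat requests from local disk, so S3 is only billed on cache misses.

        proxy_cache_path /var/cache/s3proxy levels=1:2 keys_zone=s3cache:10m
                         max_size=10g inactive=7d;

        server {
            listen 8081;
            location / {
                proxy_pass http://mybucket.s3.amazonaws.com;
                proxy_cache s3cache;
                proxy_cache_valid 200 24h;   # serve cached hits for a day
            }
        }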

    Read the article

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi all, there are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc.). After sifting through the config files, I figured I'd give nginx a try. Nginx serves files fine from its home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403 Forbidden. The error message in nginx's log file is pretty clear - it can't read the file:

        2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com"

    I've opened up this directory to everyone:

        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 mobile

    And all the files in it:

        $ ls -lah mobile/
        total 24
        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 .
        drwxr-xr-x  71 me  me      2.4K Dec 31 20:41 ..
        -rw-r--r--@  1 me  me      6.0K Jan  2 18:58 .DS_Store
        -rwxrwxrwx   1 me  admin   2.1K Jan  4 14:22 index.html
        drwxrwxrwx   5 me  admin   170B Dec 31 20:45 nbproject
        drwxrwxrwx   5 me  admin   170B Jan  2 18:58 script

    And yet, I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read the files.
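
    One detail the listing above doesn't show (my suggestion, not from the post): to open a file, every directory on the path needs the execute (traverse) bit for the accessing user, and home directories like /Users/me are typically mode 700 or 750, which blocks "nobody" long before it reaches mobile/. A quick way to test and fix:

        # reproduce the failure as the worker user; each path component needs +x
        sudo -u nobody stat /Users/me/Documents/workspace/mobile/index.html

        # grant traverse on each parent directory (no read access needed)
        chmod o+x /Users/me /Users/me/Documents /Users/me/Documents/workspace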

    Read the article

  • SSH freeze when UFW is enabled

    - by Cristian Vrabie
    I have a small Ubuntu 10.10 server and I recently noticed a weird behavior (not sure if it was happening before). If I have ufw enabled (with default deny all incoming, allow all outgoing, allow all HTTP, allow all on a random port I use for SSH), then when I perform some actions in an SSH session, the SSH console completely freezes. The server continues to work, and if I close the console I can start another SSH session. This happens no matter where I log in from (tried from another Ubuntu machine and a Mac). The actions are fairly reproducible; for example, vim-ing some config files (though vim-ing other files works), cat-ing some other file, etc. The freeze never happens if ufw is disabled. Any idea what's going on? Thanks! Cristian

    Addition: if you're wondering, yes, I have TcpKeepAlive set to yes, and I doubt it's related (it would happen with ufw disabled too). As requested, my ufw configuration is below. Also, I don't know if it has something to do with it, but the server has 2 IPs: on one the SSH domain is configured, and the other serves HTTP (via apache2).

        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To            Action      From
        --            ------      ----
        19922/tcp     ALLOW IN    Anywhere
        9418/tcp      ALLOW IN    Anywhere
        80/tcp        ALLOW IN    Anywhere
        443/tcp       ALLOW IN    Anywhere
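
    A diagnostic sketch (my suggestion): since the freeze only occurs with ufw enabled, turning logging up and watching for drops while reproducing it should show whether packets belonging to the established SSH session are being discarded — the classic culprits when bulk output freezes but new sessions work are state-tracking or MTU/fragmentation drops.

        sudo ufw logging medium
        # in one session, tail the firewall log...
        sudo tail -f /var/log/ufw.log
        # ...while reproducing the freeze (the vim/cat commands) from a
        # second SSH session, and look for BLOCK entries on the SSH port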

    Read the article

  • 403 with Apache and Symfony on Ubuntu 10.04

    - by Dominic Santos
    I'm trying to run Symfony on my Apache installation (I'm using XAMPP for the whole package) and it keeps giving me a 403 error every time I try to access my website. I've got vhosts set up as follows:

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/opt/lampp/htdocs"
            DirectoryIndex index.php
            <Directory "/opt/lampp/htdocs">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName servername.localhost
            DocumentRoot /home/me/web/server/web
            DirectoryIndex index.php
            Alias /sf "/lib/vendor/symfony/data/bin/web/sf"
            <Directory "/home/me/web/server/web">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

        <Directory "/lib/vendor/symfony/data/bin/web/sf">
            Allow from All
        </Directory>

    I've also added "127.0.0.1 servername.localhost" to my hosts file. When I try to access "servername.localhost" it just gives me a 403 error. I've chmod'd 777 the symfony directory and my website directory in my home directory, and used './symfony project:permissions' to let Symfony check that permissions are set up correctly, but still no result. If I move my website directory into "/opt/lampp/htdocs" then it will be served from there, but it still has problems accessing the Symfony stuff, such as the debug toolbar. Any help would be appreciated.
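
    A check worth running before anything else (my suggestion, not from the post): Apache needs execute permission on every directory from / down to the DocumentRoot, and home directories usually break that chain even when the final directory is 777. namei prints the permission bits for each path component in one shot:

        namei -m /home/me/web/server/web/index.php
        # any directory on the path missing 'x' for the user Apache runs as
        # (check the User directive in httpd.conf; XAMPP typically uses a
        # non-root user such as daemon or nobody) will produce a 403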

    Read the article

  • How Can I Make Apache Stop Serving ALL Unknown File Types (like .php~)?

    - by user223304
    I am coming from IIS and moving to Apache and recently found out that Apache by default serves up files of an unknown file extension as PURE TEXT. This can be an issue if a user uses certain programs that back up .php files as .php~. Then the .php~ file becomes completely readable by simply navigating to it in a browser. To make matters worse these .php~ files are often considered 'hidden' in the linux environment from the user so some may not even know they exist. Bots have been created around this fact that scour the internet looking for popular file name backups and extracting potentially secure info from them. I already know how to stop serving up .php~ files or any specific file extensions. I also know not to use any editors that would save backup files like this. My question is, how can I stop this default Apache behavior of serving up ANY non-MIME file type at all? I just don't like the this behavior and would like to stop it. I don't want it serving up .aspx~, .html~, .bob, .carl, no extension or anything else that is not a real MIME type. I know that I can probably go and use a directive to first Deny access to all file types. Then add the ones I want to serve out one by one. But I'm wondering if there's an easier/quicker way. Thanks for any help.

    Read the article

  • Apache-style multiviews with Nginx

    - by Kenn
    I'm interested in switching from Apache/mod_php to Nginx for some non-CMS sites I'm running. The sites in question are either completely static HTML files or simple PHP, but the one thing they have in common is that I'm currently using Apache's mod_negotiation to serve them up without file extensions. I'm not concerned with actual content negotiation; I'm using this just so I don't have to use file extensions in my URLs. For example, the file at /info/contact.php is accessed via a URL of just /info/contact The actual file is a .php file in that location, but I don't use the extension in the URLs. This gives me slightly shorter, cleaner URLs and also doesn't expose what's essentially a meaningless implementation detail to the user. In Apache, all this takes is enabling mod_negotiation and adding +MultiViews to the Options for the site. In Nginx I gather I'll be rewriting somehow but being new to Nginx, I'm not exactly sure how to do it. These sites are currently working fine proxied from Nginx to Apache, but I'd like to try running them solely with Nginx/fastcgi. They work fine this way as long as I'm using the extensions, so the fastcgi aspect is working great. My concern now is just with removing those extensions. It's important to keep in mind that the filename is not always in the URL, in the case of subdirectories. That is, /foo/bar should look for /foo/bar.php or /foo/bar/index.php /foo/ should look for /foo/index.php Is there a simple way to achieve this with Nginx or should I stick with proxying to Apache?

    Read the article

  • Can Subject Alternative Name accommodate multiple virtual mail domains?

    - by Lawrence
    I am currently running a Postfix server with self-signed certificates serving one mail domain, mycompany.com; the mail server is mail.mycompany.com and so is the CN of the certificate. Now I need to add a new domain, mycompany.net, to the same server. Since the users already have the root of the old certificate, I'd like to reuse that. However, I'd like to issue a new certificate so that users pointing Outlook/Thunderbird at the SMTP server as mail.mycompany.net do not get warnings. If I understand correctly, if I issue a new certificate with CN=mail.mycompany.com and a subjectAltName=DNS:mail.mycompany.net, and have Postfix serve this, the client will not complain either way about the CN not matching the target host name. Am I correct in this assumption, or am I misunderstanding the concept of Subject Alternative Name? Just to avoid conversation: I do not want users on mycompany.net addresses to use the mycompany.com server name, because I might have to split the setup into two different locations (not a technical issue), and I want to produce an easily migratable setup.
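
    One caveat worth noting (general TLS behavior, not from the question): many clients ignore the CN entirely once any subjectAltName is present, so the .com name should be listed in the SAN as well. A sketch of generating such a CSR with OpenSSL (the inline-config trick assumes a bash-like shell; key and file names are placeholders):

        openssl req -new -key mail.key -out mail.csr \
          -subj "/CN=mail.mycompany.com" \
          -reqexts SAN \
          -config <(cat /etc/ssl/openssl.cnf \
                    <(printf "\n[SAN]\nsubjectAltName=DNS:mail.mycompany.com,DNS:mail.mycompany.net"))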

    Read the article

  • hdfs configuration

    - by Ananymous
    I am a newbie trying to set up an HDFS system to serve my data (I don't plan to use MapReduce) at my lab. So far I have read the cluster setup documentation, but I am still confused. Several questions:

        1. Do I need to have a secondary namenode?
        2. There are 2 files, masters and slaves. Do I really need these 2 files even though I just want HDFS? If I need them, what should go in there? I assume my namenode goes in masters and the datanodes in slaves. Do I need slave nodes?
        3. What configuration files are needed for the namenode, secondary namenode, datanodes and clients? (I assume core-site.xml is needed for all 4.)

    In addition, can someone suggest a good configuration model? Sample configurations for the namenode, secondary namenode, datanode, and client would be very helpful. I am getting confused because it seems most of the documentation assumes I want to use MapReduce, which isn't the case.
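
    As a starting point, a minimal HDFS-only configuration sketch for the Hadoop 1.x era this question describes (hostnames and paths are placeholders; with no MapReduce, mapred-site.xml can be left alone and only the HDFS daemons started):

        <!-- core-site.xml, on every node and client -->
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://namenode-host:9000</value>
          </property>
        </configuration>

        <!-- hdfs-site.xml, on the namenode and datanodes -->
        <configuration>
          <property><name>dfs.name.dir</name><value>/data/dfs/name</value></property>
          <property><name>dfs.data.dir</name><value>/data/dfs/data</value></property>
          <property><name>dfs.replication</name><value>3</value></property>
        </configuration>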

    Read the article

  • ProxyPass for specific vhost with mod_rewrite

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
            <IfModule mod_rewrite.c>
                # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
                RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
                RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
                RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
            </IfModule>
        </VirtualHost>

    This makes it so that www.foo.server.company.com will serve the document root of server.company.com:/home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it applied to that one site. I tried something like:

        <VirtualHost *:80>
            <Directory /home/www/foo>
                UseCanonicalName Off
                ProxyPreserveHost On
                ProxyRequests Off
                ProxyPass /services http://www-test.foo.com/services
                ProxyPassReverse /services http://www-test.foo.com/services
            </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
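
    A sketch of one way around the restriction (my suggestion; ProxyPass with a path is only valid at server/virtual-host scope or in a <Location>, not in a <Directory>): since the vhost is shared, a host-conditional mod_rewrite rule with the [P] flag can proxy just the one site's /services path while keeping the dynamic document-root scheme intact. Requires mod_proxy and mod_proxy_http.

        <IfModule mod_rewrite.c>
            # proxy /services only for the foo site; everything else untouched
            RewriteCond %{HTTP_HOST} ^www\.foo\.server\.company\.com$
            RewriteRule ^/services/(.*)$ http://www-test.foo.com/services/$1 [P,L]
        </IfModule>
        # rewrite redirect/location headers coming back from the backend
        ProxyPassReverse /services http://www-test.foo.com/services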

    Read the article
