Search Results

Search found 2130 results on 86 pages for 'serve u'.


  • Getting 502 instead of 503 when all backend servers are down running HAProxy behind Apache

    - by scarba05
    I'm testing HAProxy as a dedicated load balancer behind Apache 2.2, replacing our current configuration where we use Apache's load balancer. In our current, Apache-only set-up, if all the backend (origin) servers are down, Apache serves a 503 Service Unavailable message. With HAProxy I get a 502 Bad Gateway response instead. I'm using a simple reverse-proxy rewrite rule in Apache:
      RewriteRule ^/(.*) http://127.0.0.1:8000/$1 [last,proxy]
    In HAProxy I have the following (running in the default tcp mode):
      defaults
          log global
          option tcp-smart-accept
          timeout connect 7s
          timeout client 60s
          timeout queue 120s
          timeout server 60s
      listen my_server 127.0.0.1:8000
          balance leastconn
          server backend1 127.0.0.1:8001 check observe layer4 maxconn 2
          server backend1 127.0.0.1:8001 check observe layer4 maxconn 2
    Testing a connection directly to the load balancer while the backend servers are down:
      [root@dev ~]# wget http://127.0.0.1:8000/ test.html
      --2012-05-28 11:45:28-- http://127.0.0.1:8000/
      Connecting to 127.0.0.1:8000... connected.
      HTTP request sent, awaiting response... No data received.
    So presumably this comes down to HAProxy accepting the connection and then closing it.
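
    One way to get a 503 rather than a dropped connection is to run the listener in HTTP mode, so HAProxy itself can answer when every backend is down. A minimal sketch, assuming the stock error-page path and two distinct backend ports (the duplicated server line above looks like it was meant to name a second backend):
      listen my_server 127.0.0.1:8000
          mode http
          balance leastconn
          errorfile 503 /etc/haproxy/errors/503.http   # assumed path to a 503 error page
          server backend1 127.0.0.1:8001 check maxconn 2
          server backend2 127.0.0.1:8002 check maxconn 2   # hypothetical second backend port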

    Read the article

  • Looking for equivalent of ProxyPassReverseMatch in Apache to fix missing trailing forward slash issue

    - by Alex Man
    I have two web servers, www.example.com and www.userdir.com. I'm trying to make www.example.com the front-end proxy server for requests in the format http://www.example.com/~username, such as http://www.example.com/~john/, so that it sends an internal request for http://www.userdir.com/~john/ to www.userdir.com. I can achieve this in Apache with
      ProxyPass /~john http://www.userdir.com/~john
      ProxyPassReverse /~john http://www.userdir.com/~john
    The ProxyPassReverse is necessary because without it a request like http://www.example.com/~john without the trailing forward slash will be redirected to http://www.userdir.com/~john/, and I want my users to stay in the example.com space. Now, my problem is that I have a lot of users and I cannot list all those user names in httpd.conf. So I use
      ProxyPassMatch ^(/~.*)$ http://www.userdir.com$1
    but there is no such thing as ProxyPassReverseMatch in Apache. Without it, whenever the trailing forward slash is missing from the URL, one is redirected to www.userdir.com, and that's not what I want. I also tried the following to add the trailing forward slash
      RewriteCond %{REQUEST_URI} ^/~[^./]*$
      RewriteRule ^/(.*)$ http://www.userdir.com/$1/ [P]
    but then it renders a page with broken images and CSS because they are linked to http://www.example.com/images/image.gif while it should be http://www.example.com/~john/images/image.gif. I have been googling for a long time and still can't figure out a good solution for this. Would really appreciate it if anyone can shed some light on this issue. Thank you!
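
    One approach worth sketching (an assumption, not a tested fix): redirect the slash-less form externally first, then proxy, so the browser's relative links resolve under /~john/; a single plain-prefix ProxyPassReverse can then fix any Location headers the backend emits.
      RewriteEngine On
      # send the client an explicit redirect that adds the missing slash
      RewriteRule ^/(~[^/]+)$ http://www.example.com/$1/ [R=301,L]
      # proxy everything under /~user to the backend
      RewriteRule ^/(~.*)$ http://www.userdir.com/$1 [P,L]
      # rewrite backend redirects back into example.com space
      ProxyPassReverse / http://www.userdir.com/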

    Read the article

  • Internet Radio Station for University

    - by ryan
    I am trying to help my university student radio station rethink the way they stream music, but I have some questions regarding the use of Ubuntu to stream it. Currently the radio station uses two Windows machines: one streams the station and serves the website, and the other is used by rotating DJs to select songs and create playlists. The computer used by the DJs feeds mono into the sound card of the server, and the server streams the feed online.
    - Ideally I would like to maintain a two-computer setup: one computer as the server, and another used by rotating DJs to select and play music.
    - I would like to use Ubuntu for the server.
    - I would like to use Windows for the other machine.
    - The server should be able to stream song information.
    First, is there a way to somehow get the song information from an analog feed? Second, what is the best streaming server for radio? I have encountered SHOUTcast, Icecast, and Darwin, but I don't know where to begin in attempting to gauge them. Finally, if anyone has any tips or pointers about small internet radio station management and setup, they would be appreciated, as this is my first radio station and I am eager to hear of past experiences.

    Read the article

  • One Comcast Business Gateway, One Router, Two Web Servers

    - by Kevin Scheidt
    I have a Comcast business account with a router and a web server (com) attached. Behind the router there are multiple computers and a second web server (info), which also serves as a file server. (info) has two NICs in it: one direct to Comcast and one connected to the router. It needs to serve its websites to the world, but it also needs to be able to see all the internal computers and (com)'s served files. With just one NIC (the one connected to the router, not Comcast), (info) works fine but no one outside can see it. (com) serves port 80 and (info) needs to handle port 80 as well. I have two domain names registered and five static IPs from Comcast. Right now http://www.graceamazing.com, handled by (com), works fine, and http://www.graceamazing.com:1307, handled by (info), works fine. But as soon as I enable the second NIC in (info), http://www.graceamazing.info runs extremely slowly (horribly slowly); however, http://www.graceamazing.com:1307 and .com still work fine. (com) has an IP address of 70.89.233.41 via the router; (info) has an IP address of 70.89.233.46 via Comcast (second NIC) and an internal IP of 192.168.x.100, static, behind the router. Any suggestions or changes that would make http://www.graceamazing.info perform with the same speed it has when going through http://graceamazing.com:1307? Is there a setting I should check / could have missed?
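
    If (info) is a Linux box, one hypothetical thing to rule out is asymmetric routing: with both NICs up, replies to requests arriving on the Comcast-facing NIC may leave through the router instead. A policy-routing sketch, where the gateway and interface name are placeholders:
      # reply out the Comcast NIC for traffic addressed to 70.89.233.46
      ip route add default via 70.89.233.1 dev eth1 table 100   # assumed Comcast gateway / interface
      ip rule add from 70.89.233.46 table 100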

    Read the article

  • nginx server over https using up all available file handles

    - by mmr
    Hi all, so I have an nginx server that's working over https with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Checking the logs shows that this error is due to overflowing the available number of file handles, i.e., "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:
      nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
      nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
      nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
      nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
      nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
      nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
      nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED)
      nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED)
      nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED)
      nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED)
    Increasing the number of files that can be opened is no help, because nginx just blows right past that limit too. And no wonder: it looks like it's in some kind of loop, pulling in every available file handle. Any idea what's going on, and how to fix it?
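
    Those paired localhost:https connections suggest nginx may be proxying requests back to its own SSL listener. A hypothetical check, assuming the Sinatra app listens on its own port (4567 below is a placeholder):
      server {
          listen 443 ssl;
          server_name example.com;              # placeholder
          location / {
              # must point at the application server, not back at nginx's own
              # https listener, or every request spawns another localhost:443
              # connection until the file-handle limit is hit
              proxy_pass http://127.0.0.1:4567;
          }
      }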

    Read the article

  • Edit text files over SSH using a local text editor

    - by Mikko Ohtamaa
    I am working in various Linux and UNIX environments. I'd like to elegantly solve the problem of editing remote configuration files over SSH. Instead of using terminal editors (nano), I'd like to open the file in a local text editor on my desktop (Sublime Text 2). CyberDuck, WinSCP and various other SFTP apps can do this. Using editors over X11 forwarding has also proven to be problematic, and archaic text editors like Vim or Emacs do not serve my needs well. They could do this, but I prefer other text editing software. SSH mounts (FUSE) are also problematic unless they can happen on demand, triggered by the remote side. So what I hope to achieve:
    - Have some kind of easily deployable shell script etc. which I can copy to the remote server (let's call it mooedit)
    - I run the mooedit command on the remote server to which I have connected over an SSH connection
    - mooedit sends some kind of signal (over SSH) to my local desktop
    - On my local desktop this signal is captured and it determines "a ha! moo wants to edit a file on server X in folder Y"
    - The file is SFTP-transferred to the local desktop (/tmp)
    - The file is opened in a nice GUI text editor on the local desktop
    - When Save is pressed, the local desktop notices the change in the file and SFTP sends the resulting file back to the server
    The question is: what signaling mechanisms does SSH provide for this? Any other methods to trigger a local text editor for a remote SSH file?
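
    A sketch of one signaling mechanism, assuming an rmate/rsub-style helper plays the role of mooedit (port 52698 is just the convention those tools use, not something SSH mandates): SSH remote port forwarding carries the "open this file" request back to a listener on the desktop, and the same tunnel carries the saved file back.
      # on the desktop: connect with a reverse tunnel; Sublime Text 2 with the rsub
      # plugin (or any rmate-compatible listener) waits on local port 52698
      ssh -R 52698:localhost:52698 user@remote-server

      # on the server: the helper streams the file through the tunnel and writes
      # it back when the local editor saves
      mooedit /etc/nginx/nginx.conf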

    Read the article

  • IE9 appears to be ignoring RewriteRule in htaccess file

    - by mouli
    I have a site that uses SEF URLs and htaccess RewriteRules to serve up the pages. This has worked fine for several years until the arrival of IE9. Now it appears that the links are not being rewritten and the site is dead in the water. I have tried different compatibility modes to no avail, and I've played with the rewrite rules over and over, tried different doctypes and a few other browser settings. I agree that in theory it cannot be a browser-specific problem if the problem is with the htaccess file, but this site works in IE8, Firefox and Chrome. I have run the RewriteRule through a validator and it looks fine. The site is www.marlboroughsounds.co.nz, a sample link is http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw, and the rewrite rule that's not working looks like this:
      RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L]
    The link fails and Apache serves up a browser 404 page, not even the custom 404 I have for the site. Any ideas would be much appreciated as I am stumped.
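
    A hypothetical first debugging step (Apache 2.2 syntax) is to log exactly what mod_rewrite does for a request sent from IE9, which shows whether the rule is being skipped or the request simply never matches:
      # in the main server config (RewriteLog is not allowed in .htaccess)
      RewriteLog /var/log/apache2/rewrite.log
      RewriteLogLevel 3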

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, we're looking to serve a range of content using Amazon S3 as the store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate users when they access the web application running in Tomcat; based on their authentication we can tell whether a user has access to paid-for content or only the free stuff. So I envision the flow of a request being something like this:
    - Authenticated request to Tomcat
    - If the user is a "paid" user, display links to premium content
    - Direct requests for paid content back through Tomcat, to prevent direct access to it by non-paying users
    - Tomcat makes the request to S3 through a web cache to keep our costs down
    - Content is returned to the user
    As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache when we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to a feed listing the content it should invalidate. I'd appreciate some general feedback on this approach as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.
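
    A minimal sketch of the "web cache" hop, assuming an nginx instance sits between Tomcat and S3 (bucket host, sizes and lifetimes are placeholders); Tomcat would send its S3 fetches to 127.0.0.1:8081 instead of hitting the bucket directly:
      proxy_cache_path /var/cache/nginx/s3 keys_zone=s3cache:50m max_size=10g inactive=7d;
      server {
          listen 127.0.0.1:8081;
          location / {
              proxy_pass https://my-bucket.s3.amazonaws.com;   # placeholder bucket host
              proxy_cache s3cache;
              proxy_cache_valid 200 7d;   # re-fetch from S3 at most weekly
          }
      }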

    Read the article

  • How to have SSL on Amazon Elastic Load Balancer with a Gunicorn EC2 server?

    - by Riegie Godwin
    I'm a self-taught back-end engineer, so I'm learning all of this as I go along. For the longest time I've been using basic authentication for my users. Many developers advise against this approach since each request contains the username and password in clear text; anyone with the right skills can sniff the connection between my iOS application and my Django/Gunicorn server and obtain the password. I don't want to put my users' credentials at risk, so I would like to implement a more secure way of authenticating, and SSL seems to be the most viable option. My server doesn't serve any static content or anything crazy of that sort; all it does is send and receive JSON responses to and from my iOS application. Here is my current topology: iOS application ------ Amazon Elastic Load Balancer ------- EC2 instances running HTTP Gunicorn. Gunicorn runs on port 8000. I have a CNAME record from GoDaddy for the Amazon Elastic Load Balancer DNS, so instead of using the long DNS name to make requests, I just use server.example.com. To interact with my servers I send and receive requests to server.example.com:8000/. This setup works and has been solid, but I need something more secure. I would like to set up SSL between my iOS application and my Elastic Load Balancer. How can I go about doing this? Since I am only sending JSON responses to my application, do I really need to buy a certificate from a CA or can I create my own? (Browsers will not be interacting with my servers; they are only designed to send JSON responses to my iOS application.)
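
    A hedged sketch of terminating SSL at the ELB with the AWS CLI (the names and certificate ARN are placeholders): the load balancer listens on 443 and still speaks plain HTTP to Gunicorn on port 8000 behind it.
      aws iam upload-server-certificate --server-certificate-name api-cert \
          --certificate-body file://server.crt --private-key file://server.key
      aws elb create-load-balancer-listeners --load-balancer-name my-elb \
          --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8000,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/api-cert"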

    Read the article

  • Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx)

    - by Finglish
    Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx) I am attempting to use nginx to serve private files from django. For X-Access-Redirect settings I followed the following guide http://www.chicagodjango.com/blog/permission-based-file-serving/ Here is my site config file (/etc/nginx/site-available/sitename): server { listen 80; listen 443 default_server ssl; server_name localhost; client_max_body_size 50M; ssl_certificate /home/user/site.crt; ssl_certificate_key /home/user/site.key; access_log /home/user/nginx/access.log; error_log /home/user/nginx/error.log; location / { access_log /home/user/gunicorn/access.log; error_log /home/user/gunicorn/error.log; alias /path_to/app; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass http://127.0.0.1:8000; proxy_connect_timeout 100s; proxy_send_timeout 100s; proxy_read_timeout 100s; } location /protected/ { internal; alias /home/user/protected; } } I then tried using the following in my django view to test the download: response = HttpResponse() response['Content-Type'] = "application/zip" response['X-Accel-Redirect'] = '/protected/test.zip' return response but instead of the file download I get: 403 Forbidden nginx/1.1.19 Please note: I have removed all the personal data from the the config file, so if there are any obvious mistakes not related to my error that is probably why. My nginx error log gives me the following: 2012/09/18 13:44:36 [error] 23705#0: *44 directory index of "/home/user/protected/" is forbidden, client: 80.221.147.225, server: localhost, request: "GET /icbdazzled/tmpdir/ HTTP/1.1", host: "www.icb.fi"

    Read the article

  • What are some techniques to monitor multiple instances of a piece of software?

    - by Geo Ego
    It was recommended that I ask this question here by a member of StackOverflow. I have a piece of self-serve kiosk software that will be running at multiple sites, and I'd like to monitor their status remotely. The kiosk application itself is pretty much finished. I am now in the process of creating a piece of software that will monitor all of the kiosks from a central location so that the customer can view particular details remotely (for instance, how many bills are in the acceptor's cash cartridge, what customer is currently logged in, etc.). Because I am in such an early stage of development, my options are quite open. I understand that I'm not giving very many qualifications, but I'd like to get a good variety of potential solutions. Some details:
    - Kiosk software is a VB6 app running on Windows Embedded
    - Monitoring software will run on a modern desktop version of Windows (either XP, Vista, or 7)
    - Database is SQL Server 2008
    My initial idea was to develop a .NET app that would simply report the last database transaction for each kiosk at a set interval (say every second or so), but I'd really like the kiosk software to report its status in real time. I'm not exactly sure where to begin in terms of what modifications may need to be made to the kiosk software, and what the monitoring software will require. Links to articles on these topics would be most welcome.

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded and replicated configuration. The sharded configuration consists of 8 shards, each running on three different machines, thereby constituting a total of 24 shards. All 8 of these shards run in the same partition on each machine. The sharded and replicated version is again 8 shards, just like plain sharding, and all 8 mongods run on the same partition of each machine. But apart from this, each of these three machines now runs an additional 16 mongod processes on another partition, which serve as the secondaries for the 8 mongods running on the other machines. This is the way I prepared a sharded and replicated configuration with data chunks having a replication factor of 3. The important point to note is that once the data has been loaded, it is not modified, so after the primaries and secondaries have synchronized it doesn't matter which one I read from. To run the queries, I use an entirely different machine (let's call it config) which runs mongos; this machine's only purpose is to receive queries and run them on the cluster. Contrary to my expectations, plain sharding with 8 shards on each machine (total = 3 * 8 = 24) is performing better for queries than the sharded + replicated configuration. I have a script written to perform the query, so to time it I run time ./testScript and look at the result. I tried changing the read preference for the replicated cluster by logging into the mongo shell on config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries with time ./testScript. The questions are:
    - Where am I going wrong in the replication?
    - Why is it slower than the plain sharding version?
    - Does the db.getMongo().setReadPref('secondary') setting persist when I leave the shell and then perform the query?
    All four machines are running Linux and I have already increased ulimit -n to 2048 from the initial value of 1024 to allow more connections. The collections are properly distributed and all the mongods have an equal number of chunks. It goes without saying that the indices in both configurations are the same.
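
    On the read-preference question, a hedged sketch: a preference set in an interactive shell applies only to that shell's own connection, so the setting has to live inside whatever script time ./testScript actually drives (collection name and query below are placeholders):
      // inside the test script run against mongos on the config machine
      db.getMongo().setReadPref('secondary');
      db.testCollection.find({ user_id: 42 }).itcount();   // hypothetical query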

    Read the article

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi all, there are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc.). After sifting through the config files, I figured I'd give nginx a try. Nginx serves files fine from its home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403 Forbidden. The error message in nginx's log file is pretty clear: it can't read the file:
      2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com"
    I've opened up this directory to everyone:
      drwxrwxrwx   6 me  admin   204B Dec 31 20:49 mobile
    And all the files in it:
      $ ls -lah mobile/
      total 24
      drwxrwxrwx   6 me  admin   204B Dec 31 20:49 .
      drwxr-xr-x  71 me  me      2.4K Dec 31 20:41 ..
      -rw-r--r--@  1 me  me      6.0K Jan  2 18:58 .DS_Store
      -rwxrwxrwx   1 me  admin   2.1K Jan  4 14:22 index.html
      drwxrwxrwx   5 me  admin   170B Dec 31 20:45 nbproject
      drwxrwxrwx   5 me  admin   170B Jan  2 18:58 script
    And yet, I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read them.
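
    A hedged thing to check: every ancestor directory, not just mobile/, needs the execute (search) bit for the worker's user, and on a default Mac setup ~/Documents often blocks other users, which produces exactly this "Permission denied" on open(). A quick test and a possible fix:
      # fails with "Permission denied" if any directory on the path blocks traversal
      sudo -u nobody ls /Users/me/Documents/workspace/mobile/
      # grant traversal (not read) on each ancestor directory
      chmod o+x /Users/me /Users/me/Documents /Users/me/Documents/workspace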

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and it will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now and don't want to anymore (static IP addresses, disabling unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?
    1. Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if it is actually the best solution.
    2. Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot?
    I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with it. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
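
    On the MAC-address question, a hedged sketch of a first-boot hook for the PXE image (interface names and paths assume a stock CentOS layout): stamp the real hardware address into each ifcfg file so one image can serve every box.
      #!/bin/sh
      # e.g. run once from rc.local on first boot of a freshly imaged machine
      for dev in eth0 ib0; do
          cfg=/etc/sysconfig/network-scripts/ifcfg-$dev
          mac=$(cat /sys/class/net/$dev/address)
          [ -f "$cfg" ] && sed -i "s/^HWADDR=.*/HWADDR=$mac/" "$cfg"
      done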

    Read the article

  • Mercurial internal Setup on Windows 7 - Exception happened during processing of request from ...

    - by Sad0w1nL1ght
    Hi, I have one central repository and many local ones. On my machine I have a local and a central repository too. I can clone/commit/update/push/pull very easily between the local and central repository on my local machine, but when I try to make a clone from another machine I get an error:
      listening at http://MyLocalMachine:8000/ (bound to *:8000)
      ----------------------------------------
      Exception happened during processing of request from ('192.168.0.194', 49319)
      Traceback (most recent call last):
        File "SocketServer.pyc", line 558, in process_request_thread
        File "SocketServer.pyc", line 320, in finish_request
        File "mercurial\hgweb\server.pyc", line 47, in __init__
        File "SocketServer.pyc", line 615, in __init__
        File "BaseHTTPServer.pyc", line 329, in handle
        File "BaseHTTPServer.pyc", line 323, in handle_one_request
        File "mercurial\hgweb\server.pyc", line 79, in do_GET
        File "mercurial\hgweb\server.pyc", line 70, in do_POST
        File "mercurial\hgweb\server.pyc", line 63, in do_write
        File "mercurial\hgweb\server.pyc", line 127, in do_hgweb
        File "mercurial\hgweb\hgweb_mod.pyc", line 86, in __call__
        File "mercurial\hgweb\hgweb_mod.pyc", line 118, in run_wsgi
      ErrorResponse
      ----------------------------------------
    The command line which started the central repo:
      hg serve -R TT -n TTZoli
    The command from the remote machine for cloning:
      hg clone --pull http://MyLocalMachine:8000/TT
    Config for the central repo (the second username is the user I'm trying to access the central repo with):
      [ui]
      username = MyLocalUserName
      username = test <[email protected]>
      [web]
      push_ssl = false
    Config for the remote repo:
      [ui]
      username = test <[email protected]>
      [web]
      push_ssl = false
    I'm not sure if it's relevant, but my firewall is turned off on both machines, and the files in the /hg folder are not versioned on the server, except hgignore. Could you please suggest some ideas? What could be the problem? Thanks in advance!
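
    A hedged thing to try on the cloning side: hg serve -R exposes the single repository at the server root, so the clone URL normally omits the repo name; the served repo's hgrc can also state read/push permissions explicitly. A sketch (values are assumptions):
      # clone the root URL; "hg serve -R TT" publishes the TT repo at /, not at /TT
      hg clone --pull http://MyLocalMachine:8000/

      # optional: in TT/.hg/hgrc on the serving machine
      [web]
      push_ssl = false
      allow_read = *
      allow_push = *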

    Read the article

  • how many sites IIS 6 can handle

    - by Sarah Nasir
    Is there a limit to creating sites in IIS? I have searched, and some forums discuss this and say there is no limit. Someone mentioned that he has created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally I feel that whatever the limit of IIS, the resources will run out well before the limit is reached. How do big sites like Blogger and WordPress handle a huge number of sites on their servers? Questions:
    1) Is there an upper limit for IIS 6.0? If yes, what is it?
    2) What is a good number of requests IIS should serve for a decent server? (I am not talking about dynamic requests on the server or logs.)
    3) Is there a way I can do a test run on my cloud to test the capability of my server? What factors should I keep in view: DB requests, page size, disk reads/writes, etc.?
    Responses would be highly appreciated.
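
    For question 3, a hedged example of a simple load test run from a separate machine (the URL and counts are placeholders); it measures requests per second for one representative page rather than the raw number of sites:
      ab -n 10000 -c 100 http://mytestsite.example.com/default.aspx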

    Read the article

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend forwarding to apache2 as the default backend (or nginx; it's Apache in this local test), and to the Node.js app if the URL contains socket.io in the request AND the request is for the chat domain name. I have something like:
      global
          log 127.0.0.1 local0
          log 127.0.0.1 local1 notice
          maxconn 4096
          user haproxy
          group haproxy
          daemon
      defaults
          log global
          mode http
          maxconn 2000
          contimeout 5000
          clitimeout 50000
          srvtimeout 50000
      frontend all 0.0.0.0:80
          timeout client 5000
          default_backend www_backend
          acl is_soio url_dom(host) -i socket.io   # if the request contains socket.io
          acl is_chat hdr_dom(host) -i chaturl     # if the request comes from chaturl.com
          use_backend chat_backend if is_chat is_soio
      backend www_backend
          balance roundrobin
          option forwardfor                        # this sets X-Forwarded-For
          timeout server 5000
          timeout connect 4000
          server server1 localhost:6060 weight 1 maxconn 1024 check   # forwards to apache2
      backend chat_backend
          balance roundrobin
          option forwardfor                        # this sets X-Forwarded-For
          timeout queue 50000
          timeout server 50000
          timeout connect 50000
          server server1 localhost:5558 weight 1 maxconn 1024 check   # forwards to node.js app
    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but the socket.io files (www.chaturl.com/socket.io/socket.io.js) fail to load because they are routed to Apache when they should go to the Node.js app that serves them. The weird thing is that if I request the socket.io file directly, after refreshing a few times it loads, so I suppose HAProxy is "caching" the forwarding decision for the client when it makes the first request and reaches the Apache server. Any suggestions on how this can be solved, or what I can try or look at?
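
    A hedged guess at the routing part: url_dom does not take a (host) argument, so that ACL probably never matches the way intended; matching the socket.io handshake on the request path is the usual approach. A possible fragment for the frontend:
      acl is_chat hdr_dom(host) -i chaturl.com
      acl is_soio path_beg -i /socket.io
      # send socket.io traffic for the chat domain to node.js
      use_backend chat_backend if is_chat is_soio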

    Read the article

  • IIS7 Binary Stream Error

    - by WDuffy
    I'm struggling to understand the problem I'm seeing, so please accept my apology if the question is vague. I'm running a classic ASP app in IIS7 and everything seems to work fine except for one issue that has me stumped. Basically, files can be downloaded from the server, which is done using the sendBinary method of the Persits ASP Upload component. This component works fine for uploading etc.; it's just the downloading I have a problem with. The strange thing is, I cannot have a pure ASP page that serves the binary file. Everything works fine in IIS6, but in IIS7 there is a strange problem. For example, this does not work:
      <%
      SET objUpload = server.createObject("Persits.Upload")
      objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true
      SET objUpload = NOTHING
      %>
    However, if I put anything in front of the ASP code, the file is served fine:
      serve this
      <%
      SET objUpload = server.createObject("Persits.Upload")
      objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true
      SET objUpload = NOTHING
      %>
    I can also write something to the response stream first and it works fine:
      <%
      Response.Write("server this")
      SET objUpload = server.createObject("Persits.Upload")
      objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true
      SET objUpload = NOTHING
      %>
    Does anyone have any ideas of what could be causing this, or has anyone run into a similar situation? I'm sure it has something to do with the setup in IIS 7.
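
    A speculative workaround sketch, not a confirmed IIS7 fix: if the symptom is that the response is only handed over once something has already been written, flushing a trivial byte before calling sendBinary mirrors what the stray text in front of the code appears to be doing.
      <%
      ' hypothetical workaround: prime the response stream before SendBinary
      Response.Buffer = True
      Response.Write(" ")
      Response.Flush
      Set objUpload = Server.CreateObject("Persits.Upload")
      objUpload.SendBinary "D:\sites\file.pdf", True, "application/octet-binary", True
      Set objUpload = Nothing
      %>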

    Read the article

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with WebSockets. Does anyone know of a WebSockets server (open source or paid) that provides a durable store of the WebSocket "channel"? All of the examples that I have found do not address durability: if a WebSockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). I'm happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for WebSockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports WebSockets and has a durable store for the WebSocket data, so that in the event that a server fails, a new server can take over where the original one left off. Two main purposes:
    1. support failover scenarios contemplated by the WebSockets Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server)
    2. support scenarios where new subscribers must receive all past messages that were published
    Of course this can be handled at the application layer, but that is not what I am looking for. EDIT: So, after some research, the following installed options seem to be the most robust:
    - Kaazing
    - Migratory (http://migratory.ro)
    Hosted services that seem "real":
    - Pusher (great API but no history feature yet)
    - PubNub (has history)
    All of the above services have graceful fallback to other communication methods if WebSockets are not available. I was not able to find any open source option that provides "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.

    Read the article

  • Add static route through DHCP

    - by MathieuK
    I'm trying to get an OS X Lion Server to provide a static route to its clients (all OS X Lion) over DHCP, but I can't get the client to actually apply the static route. So far, I've managed to get the DHCP server (bootpd) to serve DHCP option 33 (static_route) in its DHCP offers by editing /etc/bootpd.plist, adding something like:
      <key>dhcp_option_33</key>
      <data>[some base64 goes here]</data>
    and restarting the DHCP service. On the client, I've managed to get it to actually request the option by adding 33 to the DHCPRequestedParameterList key:
      <key>DHCPRequestedParameterList</key>
      <array>
        ... keys snipped for brevity ...
        <integer>33</integer>
      </array>
    and rebooting the client. This makes the client request the static_route option from the DHCP server (I can see the proper output in ipconfig getpacket en0), but it doesn't actually apply the rule. Has anyone ever succeeded in applying static_route options on OS X clients through DHCP?
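
    For reference, a hedged helper for producing the option 33 payload: the classic static_route option is just concatenated pairs of destination and router IPv4 addresses, base64-encoded for the plist (the addresses below are placeholders).
      python -c "import base64, socket; print(base64.b64encode(socket.inet_aton('10.1.2.0') + socket.inet_aton('192.168.1.1')))"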

    Read the article

  • Wildcard DNS entry to match lang subdomain

    - by Adam Benayoun
    Hey, we have a website www.example.com pointing to x.x.x.1 and a system of multiple minisites, all on subdomains of example.com, pointing to x.x.x.2. Basically what we have in place is a wildcard DNS entry that matches any possible subdomain; once a request reaches x.x.x.2, the Apache vhost intercepts it and redirects it to a PHP script which in turn knows which minisite to serve. On www.example.com, however, we serve content which is translated into several languages. Until a few weeks ago you could switch languages by clicking on a flag and you'd be served the translated content. The only problem is that the URL wouldn't change, and SEO-wise this isn't the best solution. Now, I cannot change the way subdomains are handled (being redirected to x.x.x.2) since we have hundreds, if not thousands, of minisites live. I have to come up with a solution to have language.example.com resolve to x.x.x.1, and then a rewrite rule that would basically rewrite the language subdomain into a URL, in order to pass the language parameter to example.com. One solution is to list all possible languages as DNS entries right before the wildcard DNS entry. The other solution, which I am almost sure is not feasible, is to have some kind of regex in a DNS entry matching all subdomains with two letters (en|es|fr|cn|cl etc...). Any ideas?
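
    DNS has no regex matching, so the explicit-records route is the workable one; a hedged sketch of the rewrite half on x.x.x.1 (the lang query parameter and PHP front controller are assumptions about the site):
      <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com en.example.com es.example.com fr.example.com cn.example.com cl.example.com
          RewriteEngine On
          # capture the two-letter language subdomain and pass it as a parameter
          RewriteCond %{HTTP_HOST} ^([a-z]{2})\.example\.com$ [NC]
          RewriteRule ^/(.*)$ /index.php?lang=%1&path=$1 [QSA,L]
      </VirtualHost>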

    Read the article

  • Can Subject Alternative Name accommodate multiple virtual mail domains?

    - by Lawrence
    I am currently running a Postfix server with self-signed certificates serving one mail domain, mycompany.com; the mail server is mail.mycompany.com and so is the CN of the certificate. Now I need to add a new domain, mycompany.net, to the same server. Since the users already have the root of the old certificate, I'd like to reuse that. However, I'd like to issue a new certificate so that users pointing Outlook/Thunderbird's SMTP at mail.mycompany.net do not get warnings. If I understand correctly, if I issue a new certificate with CN=mail.mycompany.com and subjectAltName=DNS:mail.mycompany.net and have Postfix serve it, the client will not complain either way about the CN not matching the target host name. Am I correct in this assumption, or am I misunderstanding the concept of Subject Alternative Name? Just to avoid discussion: I do not want users with mycompany.net addresses to use the mycompany.com server name, because I might have to split into two different locations (not a technical issue), and I want to produce an easily migratable setup.
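
    One caveat worth hedging: many clients ignore the CN entirely once any subjectAltName is present, so it is safer to list both host names in the SAN. A sketch of the request with OpenSSL 1.1.1 or newer (the key/CSR file names are placeholders):
      openssl req -new -key mail.key -out mail.csr \
          -subj "/CN=mail.mycompany.com" \
          -addext "subjectAltName=DNS:mail.mycompany.com,DNS:mail.mycompany.net"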

    Read the article

  • SSH freeze when UFW is enabled

    - by Cristian Vrabie
    I have a small Ubuntu 10.10 server and I recently noticed some weird behavior (not sure if it was happening before). If I have ufw enabled (with default deny all in, allow all out, allow all HTTP, allow all on a random port I use for SSH), then when I perform certain actions in an SSH session, the SSH console completely freezes. The server continues to work, and if I close the console I can start another SSH session. This happens no matter where I log in from (tried from another Ubuntu machine and a Mac). The actions are fairly reproducible: for example, vim-ing some config files (though vim-ing other files works), cat-ing some other file, etc. The freeze never happens if ufw is disabled. Any idea what's going on? Thanks! Cristian
    Addition: if you're wondering, yes, I have TCPKeepAlive set to yes, and I doubt it's related (it would happen with ufw disabled too). As requested, my ufw configuration is below. Also, I don't know if it matters, but the server has two IPs: the SSH domain is configured on one, and the other serves HTTP (via apache2).
      Status: active
      Logging: on (low)
      Default: deny (incoming), allow (outgoing)
      New profiles: skip
      To            Action     From
      --            ------     ----
      19922/tcp     ALLOW IN   Anywhere
      9418/tcp      ALLOW IN   Anywhere
      80/tcp        ALLOW IN   Anywhere
      443/tcp       ALLOW IN   Anywhere
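
    A hedged first step: turn ufw logging up while reproducing the freeze and watch what gets dropped; a session that hangs on output-heavy commands but recovers on reconnect often means some reply packets are being discarded.
      sudo ufw logging medium
      sudo tail -f /var/log/ufw.log     # reproduce the vim/cat freeze in another session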

    Read the article

  • 403 with Apache and Symfony on Ubuntu 10.04

    - by Dominic Santos
    I'm trying to run symfony on my Apache installation (I'm using XAMPP for the whole package) and it keeps giving me a 403 error every time I try to access my website. I've got vhosts set up with the following:
      <VirtualHost *:80>
          ServerName localhost
          DocumentRoot "/opt/lampp/htdocs"
          DirectoryIndex index.php
          <Directory "/opt/lampp/htdocs">
              AllowOverride All
              Allow from All
          </Directory>
      </VirtualHost>
      <VirtualHost *:80>
          ServerName servername.localhost
          DocumentRoot /home/me/web/server/web
          DirectoryIndex index.php
          Alias /sf "/lib/vendor/symfony/data/bin/web/sf"
          <Directory "/home/me/web/server/web">
              AllowOverride All
              Allow from All
          </Directory>
      </VirtualHost>
      <Directory "/lib/vendor/symfony/data/bin/web/sf">
          Allow from All
      </Directory>
    I've also added "127.0.0.1 servername.localhost" to my hosts file. When I try to access "servername.localhost" it just gives me a 403 error. I've chmod'd 777 the symfony directory and my website directory in my home directory, and used './symfony project:permissions' to let symfony check that the permissions are set up correctly, but still no result. If I move my website directory into "/opt/lampp/htdocs" then it is served from there, but it still has problems accessing the symfony stuff such as the debug toolbar. Any help would be appreciated.
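
    A hedged check, since the site serves fine once moved under /opt/lampp/htdocs: the XAMPP daemon user usually cannot traverse a home directory, and a 403 on the vhost is exactly what that looks like. A quick test and possible fix (the "daemon" user name is an assumption about the XAMPP build):
      # fails with "Permission denied" if the web server cannot reach the DocumentRoot
      sudo -u daemon ls /home/me/web/server/web
      # grant traversal on each ancestor directory
      chmod o+x /home/me /home/me/web /home/me/web/server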

    Read the article
