Search Results

Search found 13411 results on 537 pages for 'proxy servers'.

Page 373 of 537

  • Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

    - by Max Williams
    ADMISSION: I've posted this same question on Stack Overflow before realising it's probably better suited to Super User, but it kind of depends on the answer: if it turns out to be a problem in my nginx config, it's definitely Super User; if it turns out to be a problem in my Rails config (or code) then it's arguably Stack Overflow. I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except, the javascript and css files all fail to load: looking in the Network tab in Chrome web tools, I can see that it is trying to load them via an https url. E.g., one of the non-working file urls is https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216 I have these set up (or at least I think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for css and js files. If I click on this in the Network tab, it takes me to the above url, which redirects to the http version. So, the redirect seems to be working in some sense, but not when the files are loaded by an https page. Like I say, I thought I had this covered in the second try_files directive in my config below, but maybe not. Can anyone see what I'm doing wrong? Thanks, Max. Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block:

    server {
      listen 443 ssl;
      keepalive_timeout 70;
      ssl_certificate     /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt;
      ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key;
      ssl_session_cache shared:SSL:10m;
      ssl_session_timeout 10m;
      ssl_protocols SSLv3 TLSv1;
      ssl_ciphers RC4:HIGH:!aNULL:!MD5;
      ssl_prefer_server_ciphers on;
      server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
      root /home/max/work/charanga/elearn_container/elearn;

      # ensure that we serve css, js, other statics when requested as SSL,
      # but if the files don't exist (i.e. any non /basket controller)
      # then redirect to the non-https version
      location / {
        try_files $uri @non-ssl-redirect;
      }

      # securely serve everything under /basket (/basket/checkout etc)
      # we need general too, because of the email/username checking
      location ~ ^/(basket|general|cmw/account/check_username_availability) {
        # make sure cached copies are revalidated once they're stale
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        # this serves Rails static files that exist without running other rewrite tests
        try_files $uri @rails-ssl;
        expires 1h;
      }

      location @non-ssl-redirect {
        return 301 http://$host$request_uri;
      }

      location @rails-ssl {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 180;
        proxy_next_upstream off;
        proxy_pass http://127.0.0.1:3000;
        expires 0d;
      }
    }

    #upstream elrs {
    #  server 127.0.0.1:3000;
    #}

    server {
      listen 80;
      server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
      root /home/max/work/charanga/elearn_container/elearn;
      access_log /home/max/work/charanga/elearn_container/elearn/log/access.log;
      error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug;
      client_max_body_size 50M;
      index index.html index.htm;

      # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6
      # (i.e. those *without* SV1 in their user-agent string)
      gzip on;
      gzip_http_version 1.1;
      gzip_vary on;
      gzip_comp_level 6;
      gzip_proxied any;
      gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html

      # make sure gzip does not lose large gzipped js or css files
      # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
      gzip_buffers 16 8k;

      # Disable gzip for certain browsers.
      #gzip_disable "MSIE [1-6].(?!.*SV1)";
      gzip_disable "MSIE [1-6]";

      # blank gif like it's 1995
      location = /images/blank.gif { empty_gif; }

      # don't serve files beginning with dots
      location ~ /\. { access_log off; log_not_found off; deny all; }

      # we don't care if these are missing
      location = /robots.txt   { log_not_found off; }
      location = /favicon.ico  { log_not_found off; }
      location ~ affiliate.xml { log_not_found off; }
      location ~ copyright.xml { log_not_found off; }

      # convert urls with multiple slashes to a single /
      if ($request ~ /+ ) {
        rewrite ^(/)+(.*) /$2 break;
      }

      # X-Accel-Redirect
      # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead
      location /zips { internal; root /var/www/apps/e_learning_resource/shared/assets; }
      location /tmp  { internal; root /; }
      location /mnt  { root /; }

      # resource library thumbnails should be served as usual
      location ~ ^/resource_library/.*/*thumbnail.jpg$ {
        if (!-f $request_filename) {
          rewrite ^(.*)$ /images/no-thumb.png break;
        }
        expires 1m;
      }

      # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here
      location ~ "lesson viewer.dcr" {
        rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break;
      }

      # we need this rule so we don't serve the older lessonviewer when the rule below is matched
      location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break;
      }
      location ~ v6lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break;
      }
      location ~ lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/lessonViewer.swf break;
      }
      location ~ lgn111.dat { empty_gif; }

      # try to get autocomplete school names from memcache first, then
      # fall back to rails when we can't
      location /schools/autocomplete {
        set $memcached_key $uri?q=$arg_q;
        memcached_pass 127.0.0.1:11211;
        default_type text/html;
        error_page 404 =200 @rails;  # 404 not really! Hand off to rails
      }

      location / {
        # make sure cached copies are revalidated once they're stale
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        # this serves Rails static files that exist without running other rewrite tests
        try_files $uri @rails;
        expires 1h;
      }

      location @rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 180;
        proxy_next_upstream off;
        proxy_pass http://127.0.0.1:3000;
        expires 0d;
      }
    }
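    One quick way to narrow this down (a sketch, assuming curl is installed on the dev machine; -k only skips verification of the self-signed cert) is to compare the headers nginx returns over https for an image that works and a stylesheet that doesn't:

      curl -kI https://cmw-local.co.uk/images/blank.gif
      curl -kI https://cmw-local.co.uk/stylesheets/cmw-logged-out.css
      # compare the status lines: a 200 for the image but a 301 (with a Location
      # header) for the css shows which location block each request is hitting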

    Read the article

  • Finding Locked Out Users

    - by Bart Silverstrim
    Active Directory up to 2008 network (our servers are a mix of 2008, 2003...). I'm looking for a quick way to query AD to find out what users are locked out, preferably from a batch or script file, to monitor for possible issues with either user accounts being attacked by an automated attack or just anomalies in the network. I've Googled and my Google-fu has failed; I found a query off Microsoft's own knowledge base that cites a string to use on Server 2003 with the management snap-in's saved queries (http://support.microsoft.com/kb/555131), but when I entered it, the query returned 400 users that a spot-check showed did NOT have a checkmark in the "Account is locked out" box under "Account". In fact, I don't see anything wrong with their accounts. Is there a simple utility (WiseSoft BulkADUsers apparently uses this method behind the scenes, since its results were also wrong) that will give a count of locked-out users and possibly their user object names? Script? Something?

    Read the article

  • Create a web interface that starts VM's on the server side

    - by xdragonforce
    I am creating a service called TorCompute which allows for creating virtual Tor Hidden Service servers on my infrastructure (I say infrastructure; by that I mean just the 3x 32GB RAM machines waiting to be purchased). What I would like is for users to log on, access the web interface and spin up a VM with a variable amount of RAM/disk space. I've written the PHP for logging in and have a nice web interface; I just need the code to actually create the VMs! How would I go about doing this? Thanks, Stewart. EDIT: I will run them with VMware Server, with vmrun.
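    As a rough sketch of what the PHP backend would end up shelling out to (the hostname, credentials and datastore path below are placeholders, and the exact flags vary between vmrun versions), VMware Server 2.x exposes start/stop/list operations through the vmrun utility:

      # start an existing VM registered on the local VMware Server 2.x host
      vmrun -T server -h https://127.0.0.1:8333/sdk -u admin -p 'secret' \
        start "[standard] torvm01/torvm01.vmx"
      # confirm it is running
      vmrun -T server -h https://127.0.0.1:8333/sdk -u admin -p 'secret' list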

    Read the article

  • Enable a program in Windows to run multiple times?

    - by user135490
    I've got this legacy software that only allows you to run one copy at a time; it detects that you have another session open and won't allow you to open a second instance. The problem is that this is a CPU-intensive program and it only uses a single core. Are there any hacks or tweaks so I can trick it and open more than one instance? This would allow me to retire about 5 servers... I'm using Windows 2008 R2. I had to use CFF Explorer to enable the use of more than 2GB of RAM, as the program crashes when it tries to use more than 2GB.

    Read the article

  • Backing up data (including mysqldumps) to S3

    - by seengee
    We have a web app on a number of servers and we want to add an additional layer of redundancy by backing up the key data to S3. The key data is the MySQL database and a folder containing dynamically created site assets - predominantly images. Some kind of rsync-based solution would initially seem the best plan. A couple of years ago we played with s3cmd (in particular s3cmd sync) with some success, but we didn't find it particularly reliable, although this may have changed since. It's occurred to me, though, that an rsync solution might not work particularly well with a single db.sql file created with mysqldump, and I assume this means the whole database gets transferred each time; with multiple databases of over 1GB this is going to add up to a lot of traffic (and $s) very quickly. With the image files I could simply transfer files modified within the last day, which would be far simpler. What approach should I look at?
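    One common pattern (a sketch only - bucket name, paths and database names are placeholders, and it assumes s3cmd is already configured) is to upload a dated, compressed dump per database rather than rsyncing one growing .sql file, and to sync the asset folder separately:

      #!/bin/bash
      # nightly backup: one gzipped dump per database, plus an asset sync
      DATE=$(date +%F)
      for DB in customer_db1 customer_db2; do
        mysqldump --single-transaction "$DB" | gzip > "/backups/${DB}-${DATE}.sql.gz"
      done
      s3cmd put /backups/*-"${DATE}".sql.gz s3://my-backup-bucket/mysql/
      s3cmd sync --delete-removed /var/www/assets/ s3://my-backup-bucket/assets/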

    Read the article

  • Why do I get this mail server configuration error?

    - by Francesco
    I get this report: "The configuration of your mail servers and your DNS are not ok! The report of the test is: mail.mydomain.com. -> mydomain.com -> 78.47.63.148 -> static.148.63.47.78.clients.your-server.de. Spam recognition software and RFC821 4.3 (also RFC2821 4.3.1) state that the hostname given in the SMTP greeting MUST have an A record pointing back to the same server." I have an A record that points mail.mydomain.com to 78.47.63.148 (which is the IP address assigned to my VPS). All other records are fine, so what's wrong, and what record should I create to make it right? Thanks
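    The checker is comparing the forward (A) and reverse (PTR) lookups for the greeting hostname, so a quick way to see exactly what it sees (assuming dig is available) is:

      dig +short A mail.mydomain.com    # should return 78.47.63.148
      dig +short -x 78.47.63.148        # the PTR record; currently the provider's generic static.148...your-server.de name
      # the PTR for a VPS address is normally set in the hosting provider's control
      # panel rather than in your own DNS zone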

    Read the article

  • Windows Server 2008 R2 Maximum Processor Limit Confusion

    - by Stevoni
    As I was looking through the Windows Server 2008 R2 specifications, I saw that the maximum number of supported processors is 64 sockets for the Datacenter edition. This puts the maximum number of cores at 256 (if all sockets are quad cores), which I think is just silly, but whatever. And now the questions: How does one set something like that up? (Obviously not for me, but humor me.) Are there multiple dual-socket motherboards running in a giant case with a ton of memory? How does the OS see all of the CPUs if they're on different boards? What would be a real-world example of a need to have 64 sockets attached to one operating system vs 32 two-socket servers?

    Read the article

  • What should a hosting company do to prepare for IPv6?

    - by Josh
    At the time of writing The IPv4 Depletion Site estimates there are 300 days remaining before all IPv4 addresses have been allocated. I've been following the depletion of IPv4 addresses for some time and realize the "crisis" has been going on for many years and IPv4 addresses have lasted longer than expected, however... As the systems administrator for a small SaaS / website hosting company, what steps should I be taking to prepare for IPv6? We run a handful of CentOS and Ubuntu Linux systems on managed hardware in a remote datacenter. All our servers have IPv6 addresses but they appear to be link local addresses. Our primary business function is website hosting on a proprietary website CMS system. One of my concerns is SSL certificates; at the moment every customer with an SSL certificate gets a dedicated IPv4 IP address. What else should I be concerned about / what action should I take to be prepared for IPv4 depletion?
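    A first sanity check (a sketch; the interface name is an example) is to confirm whether the datacenter actually routes IPv6 to the hosts, since fe80:: link-local addresses alone mean there is no global IPv6 connectivity yet:

      ip -6 addr show eth0                 # global addresses start with 2000::/3; fe80:: is link-local only
      ip -6 route show                     # is there a default route?
      ping6 -c 3 ipv6.google.com           # only works once a global address and route exist
      dig AAAA www.customer-site.example   # do any hosted sites publish AAAA records yet?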

    Read the article

  • Linux Mint on VirtualBox has no internet connection

    - by Martin
    Physical machine: Mac Mini running OS X 10.7.5. Virtual machine: Linux Mint 15 Cinnamon 64-bit, using VirtualBox. I am trying to get the Linux guest to connect to the internet. For some reason I can access the host computer at 192.168.2.100; it's set up to show me a web page that says "It works", and this I can see from the Linux VM. But I cannot reach anything on the internet: either Firefox says "Unable to connect", or when I run nslookup google.com I get "connection timed out; no servers could be reached". My settings in VirtualBox under the Network tab are: Bridged Adapter, vnic0, Promiscuous Mode: Allow All. If it makes a difference, the host has a WiFi connection to the home router. What have I missed?
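    Since the host's web page is reachable but nothing beyond it, the usual suspects are the default gateway and DNS inside the guest; a quick check from the Mint VM (a sketch) is:

      ip route                # is there a "default via ..." line pointing at the home router?
      cat /etc/resolv.conf    # is any nameserver configured?
      ping -c 3 8.8.8.8       # works while name lookups fail -> DNS problem
      ping -c 3 google.com    # both fail                     -> routing/bridging problem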

    Read the article

  • Connecting to Aerohive APs from laptops running Windows 7 using authentication from a Windows 2008 domain server

    - by user264116
    I have deployed a wireless network using Aerohive access points, 2 of which are set up as RADIUS servers. I want my users to be able to use the same user name and password they use when they log onto our domain. They are able to do this from Android devices or computers running Windows 8, but it will not work on Windows 7 machines. How do I remedy this situation, keeping in mind that the machines are personal machines, not company owned, and I will have no way to change their hardware or software?

    Read the article

  • HTTPS vs. VPN for communication between business partners?

    - by Andrew H
    A business partner has asked to set up a site-to-site VPN just so that a few servers can communicate with each other over HTTPS. I'm convinced this isn't necessary, or even desirable. To be fair it must be part of a wider policy, potentially even a legal requirement. However I'd like to convince them to simply offer an IP to us (and us only) and a port of their choosing for HTTPS. Has anyone had a similar experience, or had to come up with a cast-iron argument against a VPN? Allow me to expand a little - we have a web service that initiates a connection to the partner's corresponding service using an encrypted HTTP connection. The connection uses a client certificate to authenticate. The connection is firewalled so only our IPs can contact the service. So why is a VPN necessary?
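    For the discussion with the partner it can help to demonstrate what the existing channel already provides; for example (a sketch - the hostname and file names are placeholders), the mutually authenticated TLS handshake can be shown with:

      openssl s_client -connect partner.example.com:443 \
        -cert our-client.crt -key our-client.key
      # the output shows the negotiated protocol and cipher, and whether the client
      # certificate was requested and accepted - i.e. encryption plus mutual
      # authentication without any site-to-site tunnel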

    Read the article

  • IIS 6.0 mitigating BEAST

    - by D3l_Gato
    Recently, my PCI assessor informed me that my servers are vulnerable to BEAST and failed me. I did my homework and I want to change our webservers to prefer RC4 ciphers over CBC. I followed every guide I could find: I changed the reg keys for the weaker-than-128-bit encryption to Enabled = 0, completely removed the reg keys for the weaker encryptions, downloaded IIS Crypto and unchecked everything but the RC4 128 ciphers and Triple DES 168. My webserver still prefers AES256-SHA. Is there a trick in IIS 6.0 to get your webservers to prefer RC4 ciphers that I am not figuring out? It seems like in IIS 7 they made this very easy to fix, but that doesn't help me now!
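    Independently of the registry changes, the cipher the server actually negotiates can be verified from any machine with OpenSSL (a sketch; replace the hostname):

      # offer only RC4-SHA and see whether the handshake succeeds at all
      openssl s_client -connect myserver.example.com:443 -cipher RC4-SHA
      # offer the default list and read the "Cipher" line to see what the server picks
      openssl s_client -connect myserver.example.com:443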

    Read the article

  • SSH Server Timeout on port 22 but not on higher port

    - by mikelberger
    If I run an SSH server on my Windows 2008 server box on the default port 22 I always get Operation Timed Out on the client. If I run it on another port (say 2222) it works fine. I've opened up the firewall. Netstat shows that the server is listening on the correct port. I have used two different Windows SSH servers (freeSSHd and WinSSHD) and they both have the same result. What else could be causing the difference between running the SSH server on port 22 versus port 2222?
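    A timeout (rather than "connection refused") usually means something is silently dropping the packets before they reach the service; comparing the two ports from an outside client makes that visible (a sketch, run from a remote Linux or Mac box):

      nc -vz my.server.example.com 2222   # connects, so routing to the box is fine
      nc -vz my.server.example.com 22     # times out: something between the client and the SSH service drops port 22
      # compare results from inside the LAN and from the internet to tell a local
      # Windows Firewall rule from an upstream (router/ISP) filter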

    Read the article

  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). Ex: www.google.com.nyud.net:8080 points to one of their cache servers, which looks at the Host header and returns www.google.com. I'm using Namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than Namecheap (like Route 53)?
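    Once a wildcard record is actually live you can check how deep it matches from the command line (a sketch; DNS wildcards match multiple labels as long as no more specific record exists):

      dig +short CNAME foo.mydomain.com
      dig +short CNAME www.google.com.mydomain.com
      # both should return the load balancer's hostname; an NXDOMAIN here means the
      # wildcard record was never saved or hasn't propagated from the DNS host yet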

    Read the article

  • Setting Rails up on a Linode - Nginx Issue

    - by rctneil
    I am extremely new to this so please don't shoot me down. I have set up a Linode running Ubuntu; it is all sort of working except Nginx. I am following this guide: http://rubysource.com/deploying-a-rails-application/ and this for nginx: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid. When I go to my IP, I get a 500 Internal Server Error. I have tried starting nginx and it looks like it starts fine. I run ps awx | grep nginx and I get:

    308 ?      Ss   0:00 nginx: master process /usr/sbin/nginx
    2309 ?     S    0:00 nginx: worker process
    2311 ?     S    0:00 nginx: worker process
    2312 ?     S    0:00 nginx: worker process
    2313 ?     S    0:00 nginx: worker process
    2850 pts/0 S+   0:00 grep --color=auto nginx

    I really am not sure what else to do to get it running. Any help? Neil
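    With master and worker processes running, the 500 is coming from nginx (or from whatever it proxies to) rather than from a failed startup, so the config test and error log are the next things to check (a sketch; log paths vary by install):

      sudo nginx -t                              # syntax-check the config nginx is actually loading
      sudo tail -n 50 /var/log/nginx/error.log   # the reason for the 500 is usually spelled out here
      # if nginx hands requests to a Rails app server, check that app's own log as
      # well, e.g. log/production.log in the deployed application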

    Read the article

  • Windows 2008 Group Policy Setting? - Migration Headache

    - by DevNULL
    I have a small domain of users that I just migrated from a Linux domain running OpenLDAP. Our new servers are running Windows 2008 Standard. I've installed Active Directory and everything is working perfectly... except that the initial user privileges are pretty restrictive and I need to loosen them up a bit. For example, once users log in to their workstations, they can create new files and folders but cannot modify existing files or start. I basically want to open it all up except for software installations. Can someone please help with this migration headache?

    Read the article

  • Connecting to my SMTP server

    - by Joseph Silvashy
    I have a few questions. I just installed an SMTP server on my Ubuntu server, and I want to know how to connect to it from a different machine... I'm not really clear. I tried: telnet my.servers.ip.address 25. I think it's running on port 25, but I don't know where to find out; it's not in the conf file anywhere. Additionally, does it need to be an FQDN, or can I just access my mail server via its IP address? I know that the service works on the machine because I'm able to run echo test | mail -s "test" [email protected]. Any help debugging or understanding this would be appreciated, thanks guys!
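    To see what is actually listening, and on which address (a sketch; the daemon behind "SMTP" on Ubuntu is typically Postfix or Exim, depending on which package was installed):

      sudo netstat -tlnp | grep :25   # shows the listening address and the daemon name/PID
      telnet localhost 25             # a 220 banner should appear if the MTA is up
      # if it only listens on 127.0.0.1, remote connections to port 25 will fail even
      # though local delivery with `mail` works; the listening address is set in the
      # MTA's own config (e.g. inet_interfaces for Postfix)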

    Read the article

  • I'll be setting up a dedicated web server at work soon, my first non hobby server - What should I know?

    - by Rogue Coder
    I've been running my own dedicated server with CentOS and a LAMP stack for 2-3 years now, but it's only been hosting my own websites, which aren't super important. However, I will soon be setting up a Linux web server and Linux database server at work, and I'm wondering what are some important things I should be doing. It's an internal server only, so only people in the company can access it. Should I get a slave server for both of my servers for backups? If I do this, how many backups should I be keeping and how often should those backups be done? Right now on my current server I run a nightly cron job to back up my MySQL databases (usually 40MB files once compressed), and bi-weekly cron jobs to back up my web root. I just store these files on my local computer via FTP. Also, for an internal server like this, should I look at using lighttpd or Nginx to increase performance, or will Apache be fine?
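    The nightly-dump approach carries over directly; a minimal sketch (the paths, hostnames and two-week retention are placeholders) that keeps copies on a second machine instead of a personal workstation:

      # on the web/db server, run nightly from cron
      mysqldump --all-databases --single-transaction | gzip > /backups/db-$(date +%F).sql.gz
      find /backups -name 'db-*.sql.gz' -mtime +14 -delete   # keep two weeks of dumps
      rsync -a /backups/  backupuser@backuphost:/srv/backups/$(hostname)/db/
      rsync -a /var/www/  backupuser@backuphost:/srv/backups/$(hostname)/webroot/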

    Read the article

  • Bad switch duplicates my IP

    - by tacoen
    I have a large LAN covering a wide area, with many switches and APs on it. At some point I couldn't ping my servers, and the network reported that the IP address was duplicated. I used arpwatch and found out that one of the switches was flip-flopping the IP. I isolated that troublesome switch using its MAC address. But since this is a large LAN, I doubt this will be the last such case. Is there any software or hardware that I can use to prevent this kind of error? Sorry for my bad English.
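    Besides leaving arpwatch running, most Linux distributions ship arping, which can do an on-demand duplicate-address check from any host on the segment (a sketch; the interface and address are examples), while on managed switches the longer-term protection is usually port security or dynamic ARP inspection:

      arping -D -c 3 -I eth0 192.168.10.5
      # -D is duplicate address detection: any reply means another device is
      # answering for that IP, and the reply shows the offending MAC address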

    Read the article

  • How can I measure TCP timeout limit on NAT firewall for setting keepalive interval?

    - by jmanning2k
    A new (NAT) firewall appliance was recently installed at $WORK. Since then, I'm getting many network timeouts and interruptions, especially for operations which would require the server to think for a bit without a response (svn update, rsync, etc.). Inbound SSH sessions over VPN also timeout frequently. That clearly suggests I need to adjust the TCP (and ssh) keepalive time on the servers in question in order to reduce these errors. But what is the appropriate value I should use? Assuming I have machines on both sides of the firewall between which I can make a connection, is there a way to measure what the time limit on TCP connections might be for this firewall? In theory, I would send a packet with gradually increasing intervals until the connection is lost. Any tools that might help (free or open source would be best, but I'm open to other suggestions)? The appliance is not under my control, so I can't just get the value, though I am attempting to ask what it currently is and if I can get it increased.
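    On the Linux servers, once a rough idle limit is known (or as a conservative guess while waiting for the firewall's actual value), the kernel keepalive timers can be set below it; the sketch below assumes the limit is somewhere above five minutes:

      # send the first keepalive probe after 5 minutes idle, then every 60 seconds
      sudo sysctl -w net.ipv4.tcp_keepalive_time=300
      sudo sysctl -w net.ipv4.tcp_keepalive_intvl=60
      sudo sysctl -w net.ipv4.tcp_keepalive_probes=5
      # ssh can also send application-level keepalives regardless of the kernel settings:
      ssh -o ServerAliveInterval=60 user@host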

    Read the article

  • Is it possible to host a web server from behind a NAT

    - by iamrohitbanga
    My PC is behind a NAT router that has a public IP address. If I want to host a website, then I believe I need a domain name, which I can purchase from some site that would pledge to resolve all DNS requests for that domain name and return the IP address of my NAT router (assuming I do not want to host my domain name on their servers). Now I want to host a web server on my computer. What changes should be made to the NAT router's configuration to forward all HTTP requests for example.com to my PC on the internal network? Is the above strategy correct? Is it commonly used?

    Read the article

  • WGet from one site on a server to another site on the same server

    - by JoshReedSchramm
    Hey all, I've recently been asked to administer a couple of Ubuntu boxes running web servers. I'm a dev by trade, so if this question is fairly noob please forgive. We have about a dozen sites running on this box. Two of our sites need to talk back and forth over a RESTful API. Unfortunately we are having issues with the sites' connections to each other via wget. When we run wget manually from the command line on the server, pointing to a site also hosted on that server, it hangs and eventually times out. If we do the same thing from outside the server to the same site, it works. Is there something that could be preventing sites on the same server from communicating with each other? The same thing happens when pinging the site from the server.
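    One way to tell a DNS/NAT loopback problem from a firewall rule (a sketch; the site name is a placeholder) is to request the site from the server itself while forcing the connection to the local address:

      # ask for the site by name but connect to the loopback address directly
      curl -I -H 'Host: siteb.example.com' http://127.0.0.1/
      # compare with the normal path through the public DNS name
      curl -I http://siteb.example.com/
      # if the first works and the second hangs, the server cannot reach its own
      # public IP (hairpin NAT or an edge firewall), and pointing the name at the
      # LAN IP in /etc/hosts is a common workaround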

    Read the article

  • Recommendation for tuning hundreds of SQL databases

    - by wayne
    I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 or more catalogs, I can't have someone manually profile and index each one as required by its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine.

    Read the article

  • Manage Dell workstations with OpenManage Essentials (OME)

    - by Jonathan Rioux
    How can I manage Dell workstations with OpenManage Essentials? First, is it even possible? I've read that only Dell servers can be managed with OME. I would like to inventory each Dell workstation I have in my environment and be able to see its service tag with warranty expiration, etc. Or which product must I use to do this? There are so many Dell management products - OMCI, OMCC, ITA, etc.! I am so lost with all these products.

    Read the article

  • Which universal or driverless printing solution do you use/recommend?

    - by Matt
    I'm in need of a driverless printing solution for Microsoft Terminal Services 2003/2008, mainly to support clients who connect over broadband into our hosted servers. We were hoping that the MSTS 2008 thinprint feature would be the answer, but unfortunately it performs poorly in the print area: the files are too large. I found the following slightly outdated URL: http://www.msterminalservices.org/software/Printing/ It lists a number of products, but I have no experience with any of them. I'd like a product that works and is easy to install (as our clients are remote and not particularly tech savvy), and ideally I'd pay for just a server license rather than one for every client. What is your experience/recommendation, and what tips can you offer me in regard to TS printing? Thanks in advance.

    Read the article
