Search Results

Search found 2349 results on 94 pages for 'webdev webserver'.

Page 22/94

  • Rackspace Cloud Sites: Compute Cycles exploding. Very expensive

    - by Jaap
    Since last week my compute cycles (CC) have gone through the roof (Rackspace Cloud Sites). Normally I stay under 10,000 cycles per month; this month I already have more than 75,000. I don't have more visitors and I did not change anything in the code. I looked in the raw log files, but that didn't help either. This explosion of CC has already cost me more than 750 USD, and it's still counting. Does anyone know what to do? I contacted Rackspace last week, but still no solution or answer. Looks like Rackspace is liking the money! Help! Thanks.

    Read the article

  • How is gzipped content transferred on the web?

    - by PJ
    I heard that static content like CSS and JavaScript is better delivered gzipped, and Content Delivery Networks (CDNs) always do so. However, I don't understand how the format works. First, when I tried making a gzipped file on the command line, the file extension was .gz, which is different from .css and .js. How do browsers recognize which files are gzipped and which are not? Second, how do browsers "decompress" these files? I dragged my index.html.gz onto my browsers, but none of them worked. How does gzipped content work in the real world, and what do I need to do if I want to serve CSS/JavaScript gzipped?
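
    For context, compression is negotiated through HTTP headers, not file extensions; the URL keeps its .css or .js name. A minimal sketch, assuming Apache with mod_deflate enabled and curl at hand (the host name is a placeholder):

        # The browser advertises support in the request; a configured server
        # answers with "Content-Encoding: gzip" and a compressed body.
        curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://example.com/style.css

        # Apache side (httpd.conf or .htaccess), compressing text assets on the fly:
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/css application/javascript
        </IfModule>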

    Read the article

  • How can I migrate websites between two identical servers?

    - by edude05
    I have two servers with a nearly identical (software) configuration. We are upgrading the web servers to run Windows Server 2008 R2; one already does, but the main one (which currently hosts the sites) is still on Windows Server 2008. The old server is ns.mydomain.com and the new server is ns1.mydomain.com. Since DNS automatically fails over to ns1.mydomain.com, I'd like a way to move all the vhosts to the new server. Is there an automatic way to move or recreate all the vhosts on the new server? I have already figured out how to migrate the DNS records, and since both servers are on the same private network, migrating the website data isn't a big issue. Every site runs PHP and MySQL, and the MySQL server is external, so the databases won't have to be moved. Thanks
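
    Assuming the vhosts are Apache-style ones defined in a httpd-vhosts.conf on Windows (an assumption; all paths below are placeholders), a sketch of moving them over the private network:

        :: copy the vhost definitions from the old server (ns) to the new one (ns1)
        robocopy \\ns\c$\Apache\conf\extra \\ns1\c$\Apache\conf\extra httpd-vhosts.conf
        :: mirror the document roots; /MIR mirrors, /Z resumes interrupted copies
        robocopy \\ns\c$\www \\ns1\c$\www /MIR /Z /R:2 /W:5

    If the sites are IIS sites instead, the appcmd or Web Deploy (msdeploy) tooling on Server 2008 R2 covers the same ground.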

    Read the article

  • How can I find out whether a port is open?

    - by Roman
    I have installed the Apache server on my Windows 7 computer. I was able to display the default index.php by typing http://localhost/ in the address line of my browser. However, I am still unable to see this page by typing the IP address of my computer, neither locally (from the same computer) nor globally (from another computer connected to the Internet). I was told that I need to open port 80. I did so (in the way described here), but it did not solve the problem. First of all, I would like to check which ports are open and which are not. For example, I am not sure that port 80 was closed before I tried to open it, and I am also not sure that it is open now. I tried to run a very simple web server written in Python on port 81 and it worked, even though I never opened that port, so it seems to have been open by default. If 81 is open by default, why isn't 80? Or is it?

    ADDITIONAL INFORMATION:

    1. In my httpd.conf file I have "Listen 80".
    2. This site tells me that port 80 on my computer is open.
    3. I get different responses from http://myip:80 and http://myip:81. In the latter case the browser (Chrome) tells me the link is broken; in the first case I get: Forbidden - You don't have permission to access / on this server.
    4. IE says "The website declined to show this webpage".
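
    To check which ports are actually listening on the Windows 7 box, a hedged sketch (the PID value is a placeholder):

        rem list all listening sockets with their owning process IDs
        netstat -ano | findstr LISTENING
        rem map a PID from that output to a process name
        tasklist /FI "PID eq 1234"

    If port 80 shows up as LISTENING, the "Forbidden" response above points at Apache's own access configuration for the document root (Allow/Deny directives) rather than at the firewall.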

    Read the article

  • Web page layout breaks when moved to live

    - by Moses
    I want to preface my question with the fact that I'm only a front-end web developer, so please excuse my gross lack of knowledge in this area. My company has three web servers: one for development (IIS 5), one for staging (IIS 5), and one for live deployment (IIS 6). Staging is an exact mirror of live. When I compare the staging and live web pages side by side in Firefox (3.6), the pages are identical. However, when I compare the staging and live pages in Internet Explorer (8), there are major differences: in staging the squares for bulleted lists are small, in live they are big; in staging the borders for tables are thick, in live they are thin; in staging an ASP-generated image is the proper height, in live the image is cropped at the bottom by about 10px. In the end, the live layout ends up broken because of these small differences, but why? Does the fact that live is on IIS 6 and staging is on IIS 5 account for the variance in display, and is there any way I can change this server-side? Also, is there any reason why Firefox displays both correctly and IE displays both incorrectly?
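
    One thing that may be worth ruling out (an assumption, not a diagnosis): IE8 chooses a document mode per site, and Compatibility View settings can differ between two host names even when the markup is identical, which produces exactly this kind of IE-only variance. Pinning the mode in the page itself takes the guesswork out; a sketch, placed early in the <head>:

        <!-- force IE8 to render both staging and live in the same document mode -->
        <meta http-equiv="X-UA-Compatible" content="IE=8" />

    The same value can also be sent server-side as a custom HTTP response header in the IIS site properties, which avoids touching the markup.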

    Read the article

  • Experience in migrating from Apache to nginx?

    - by Julien
    I'd like to get some feedback about a migration from Apache to nginx. My goal is to reduce the memory footprint of the web server. Currently, I use the following modules/features on Apache: multiple virtual hosts, Server Side Includes, and FastCGI. Please share your experience: problems during migration, benefits after migration (was it worth it?), useful modules for nginx, etc.
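
    For reference, those three features map fairly directly onto nginx directives; a minimal sketch, with the document root and the FastCGI backend address as assumptions:

        # one server block per virtual host
        server {
            listen       80;
            server_name  example.com;
            root         /var/www/example.com;

            ssi on;    # Server Side Includes

            # FastCGI, e.g. a PHP-FPM backend on port 9000
            location ~ \.php$ {
                include        fastcgi_params;
                fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass   127.0.0.1:9000;
            }
        }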

    Read the article

  • Hosting a server for websites, FTP and random use at home?

    - by Zolomon
    I'm wondering what the best option is if I want to move all my hosted websites (from a hosting company) to a server at my own home. Basically, my needs are:

    - be able to host websites using PHP/ASP.NET (haven't really decided yet - both would be preferred!)
    - enable FTP so I can create accounts for my family members to access the server for file handling
    - SSH
    - SSL for secure connections (this is something you have to buy/apply for per domain; I'm not sure if there are any server-side settings that have to be made)
    - be able to stream video
    - remote desktop
    - host home-brew applications that can run as services
    - use MySQL/SQLite/SQL for relational database storage

    What should I think about before I buy a server? What hardware will I need, and what will limit my server? I basically want to learn networking better, as I'm a software and web developer but haven't had the resources to acquire any serious toys until now. At the time of writing, most of my websites get around 60 visits/day, so I don't expect them to be very demanding. Is there something I haven't thought of that I should have? What OS would you suggest I run? FreeBSD vs Windows Server vs ?

    Read the article

  • yum list installed: getting versions of all installed packages on CentOS 5.4

    - by Andy
    I have a list of packages installed with yum on CentOS 5.4:

        [root@server ~]# yum list installed
        ...
        Installed Packages
        GConf2.x86_64          2.14.0-9.el5        installed
        ImageMagick.x86_64     6.2.8.0-4.el5_1.1   installed
        MAKEDEV.x86_64         3.23-1.2            installed
        MySQL-python.x86_64    1.2.1-1             installed

    I would like to download these RPMs locally using yumdownloader --resolve MySQL-python-1.2.1-1.x86_64 etc. However, the package formatting is different (MySQL-python.x86_64 1.2.1-1 vs MySQL-python-1.2.1-1.x86_64), so I am unable to download them using the above command. I don't want to have to parse the output of yum list installed, and I also don't want to use the contents of /var/log/yum.log*, as I'll have to account for erased packages and version discrepancies. However, /var/log/yum.log* does have the formatting I require:

        May 25 14:58:15 Installed: groff-1.18.1.1-11.1.x86_64
        May 25 14:58:15 Installed: bzip2-1.0.3-4.el5_2.x86_64

    Any suggestions?
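
    A sketch of one way to get names in exactly the form yumdownloader expects, by querying rpm directly instead of parsing yum output:

        # print every installed package as name-version-release.arch
        rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n'

        # or feed the whole list straight into yumdownloader
        rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | xargs yumdownloader --resolve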

    Read the article

  • Change Directory Browsing Page in IIS 7.5

    - by Gabriel Ryan Nahmias
    NOTE: This post is tagged ASP Classic, but really that's just one of the languages in which I could write it. I need assistance with configuring IIS 7.5. I have found many scripts and ideas for this, but I require that it not be a "drop-in" replacement; it must work globally for any possible directory from one codebase. Here are several links related to this goal:

    - http://mvolo.com/get-nice-looking-directory-listings-for-your-iis-website-with-directorylistingmodule: the best example of what I want, and the one I can't seem to follow through with.
    - http://www.daleanderson.ca/edb/: an example of a "drop-in" replacement (at least it's oriented toward that purpose). It still has viable code that could be useful as the main file that processes directory traversal.
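
    For comparison, the stock IIS 7.5 listing (which the module in the first link replaces) is just a web.config switch; a sketch, with the showFlags values chosen as an illustration:

        <!-- enable IIS's built-in directory listing for this site or folder -->
        <configuration>
          <system.webServer>
            <directoryBrowse enabled="true" showFlags="Date, Time, Size, Extension" />
          </system.webServer>
        </configuration>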

    Read the article

  • My website was infected and I restored a backup of the uninfected files; how long until it is un-marked as dangerous?

    - by Cyclone
    My website www.sagamountain.com was recently infected by a malware distributor (or at least I think it may have been). I have removed all external content: Google ads, Firefly chat, etc. I uploaded a backup from a few weeks ago, when there was no issue, and I patched the SQL injection hole. Now, how long will it take to be unmarked as dangerous, and where can I contact Google? I am not sure if this is the right place to post this, but since it may have been a server issue I may as well. Can base64 code be injected into sites across the whole server by a virus, or only via SQL injection? Thanks for the help; viruses freak me out. Also, is there an online virus scanner that can scan my page and tell me what is wrong?

    Read the article

  • How to set global PATH on OS X Server 10.6.6?

    - by Adam Lindberg
    I'm running Tomcat on OS X Server 10.6.6 under the normal Web component that comes with the OS. This has worked fine so far, but I need to add some entries to the $PATH environment variable for programs the web server must be able to reach (more specifically, I'm running Hudson under Tomcat, which needs access to build tools that I have installed). Tomcat and the Web component seem to run as the user _appserver, which has a different $PATH than the administrator account. What's the proper way to add a global entry to the $PATH in OS X Server for the Web component? Preferably it should be done only once, so that both the _appserver and Administrator users see the same $PATH. EDIT: Adding the path to /etc/paths or /etc/paths.d/somefile didn't work; Tomcat and Hudson still do not see those directories.
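
    One Tomcat-level workaround worth a try (independent of the OS-level paths files, and assuming a standard Tomcat layout): catalina.sh sources an optional setenv.sh at startup, so the PATH can be extended for the Tomcat process itself no matter which user runs it. A sketch, with the tools directory as a placeholder:

        # $CATALINA_BASE/bin/setenv.sh -- create it if it does not exist
        # Extend PATH for Tomcat (and therefore Hudson) regardless of login shell.
        export PATH="/usr/local/bin:/path/to/build/tools:$PATH"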

    Read the article

  • How to use ssl_verify_client=ON on one virtual server and ssl_verify_client=OFF on another?

    - by Alexander Artemenko
    I want to force SSL client verification for one of my virtual hosts, but I get a "No required SSL certificate was sent" error when trying to GET something from it. Here are my test configs:

        # defaults
        ssl_certificate         /etc/certs/server.cer;
        ssl_certificate_key     /etc/certs/privkey-server.pem;
        ssl_client_certificate  /etc/certs/allcas.pem;

        server {
            listen       1443 ssl;
            server_name  server1.example.com;
            root         /tmp/root/server1;
            ssl_verify_client off;
        }

        server {
            listen       1443 ssl;
            server_name  server2.example.com;
            root         /tmp/root/server2;
            ssl_verify_client on;
        }

    The first server replies with an HTTP 200, but the second returns "400 Bad Request, No required SSL certificate was sent, nginx/1.0.4". Is it perhaps impossible to use ssl_verify_client per server on the same IP? Should I bind these servers to different IPs, and will that solve my problem?

    Read the article

  • Windows 2003 - logging off the Administrator account makes web applications inaccessible

    - by Saravana
    Consider a web application installed on a Windows Server 2003 SP2 machine under the admin account. The application is accessible on the server as well as over the network when at least one session of the admin account is logged in. If there are no active sessions of the admin account, the web application is not accessible via the network, nor locally when logged in with another user account. What would cause the web application to be inaccessible when there's no Administrator session? Please suggest anything that might help find the solution.

    Read the article

  • Is anyone using Node.js as an actual web server?

    - by Jeremy
    I am trying to convince myself to pick it up and start developing with it, but I want to know if anyone has experienced stability issues or anything of the sort. I understand it isn't "production" quality like Apache or IIS. I figure that for a small site (a maximum of 200 concurrent connections) it should be fine. Is that a fair assumption?

    Read the article

  • Do all web caches understand the "Cache-Control" HTTP header?

    - by chris_l
    I'd like to avoid the "Expires" header and use "Cache-Control" only - or maybe the other way around. The headers will account for a significant percentage of my traffic, so I'd prefer not to use both. AFAIK, the "Cache-Control" header was standardized in HTTP 1.1, but are there still web caches/proxies in use that don't understand it? Note: this could help answer part of my Stack Overflow (bounty) question.
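
    For context, the two headers express the same freshness lifetime in different ways; a sketch of equivalent response headers (the date and lifetime are placeholders):

        # HTTP/1.0 style: an absolute expiry date
        Expires: Thu, 01 Dec 2011 16:00:00 GMT

        # HTTP/1.1 style: a relative lifetime in seconds (30 days here);
        # HTTP/1.1 caches prefer max-age when both headers are present
        Cache-Control: public, max-age=2592000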

    Read the article

  • Books on web server technology

    - by tushar
    I need to understand web server technology: how packets are received, how the server responds, how to make sense of httpd.conf files, and what terms like proxy or reverse proxy actually mean. I could not find any resources, so please suggest an ebook or website. By "server" I don't mean a specific one (Apache or nginx...); in short, I want a book on the basics of web servers. I already asked this on Stack Overflow and Webmasters; in a nutshell, the answer I got was that it's better to ask it here, so please help me out.
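
    As a small illustration of one of the terms asked about: in Apache's httpd.conf a reverse proxy is just a couple of directives that make the server fetch content from a backend on the client's behalf. A sketch, assuming mod_proxy and mod_proxy_http are loaded and with the backend address as a placeholder:

        # forward requests for /app to an internal backend and fix up
        # redirect headers coming back from it
        ProxyPass        /app http://127.0.0.1:8080/app
        ProxyPassReverse /app http://127.0.0.1:8080/app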

    Read the article

  • Can't ping self

    - by Paddy
    I have a wireless internet connection set up on my Mac (OS X 10.5.6). I am connected to the internet and everything is running smoothly. I recently discovered some quirky behaviour while setting up the Apache web server: when I typed my dynamic IP (http://117.254.149.11/) into the web browser to visit my site, it just timed out. In Terminal I tried pinging localhost and it worked:

        $ ping localhost
        PING localhost (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.063 ms
        64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms
        64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.044 ms

    But if I ping my IP it just times out:

        $ ping 117.254.149.11
        PING 117.254.149.11 (117.254.149.11): 56 data bytes
        ^C
        --- 117.254.149.11 ping statistics ---
        10 packets transmitted, 0 packets received, 100% packet loss

    Pinging any other site works, though. I am completely stumped. Any help would be greatly appreciated.
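
    One check worth making (an assumption about the cause, not a certainty): on a typical home wireless setup the dynamic public IP belongs to the router, not to the Mac itself, so the machine sits behind NAT and cannot be reached at that address without port forwarding. Comparing the interface's own address with the public one shows whether that is the case; the interface name en1 is an assumption (AirPort is usually en1 on Macs of that era):

        # show the wireless interface's own address -- a private 192.168.x.x or
        # 10.x.x.x address here means the public IP is the router's, not the Mac's
        ifconfig en1 | grep "inet "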

    Read the article

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of the site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit: I'm not as familiar with Fedora as I'd like, I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I like the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. Right now I'm interested in just getting FTP and Apache to play nice in a multi-user environment.) PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user that FTP saves files as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group read/write access to everything, as Apache would expect - but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
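
    A sketch of the shared-group arrangement described in the PS, under the assumption that Apache runs as the apache user and the FTP users are local accounts used by vsftpd; the group name, user name, and docroot path are placeholders:

        # create a shared group and put Apache plus each web user in it
        groupadd webdev
        usermod -aG webdev apache
        usermod -aG webdev alice

        # group-writable docroot; the setgid bit makes new files inherit the group
        chgrp -R webdev /var/www/vhosts/alice
        chmod -R 2775   /var/www/vhosts/alice

        # label the tree so SELinux lets httpd read it (Fedora/CentOS)
        semanage fcontext -a -t httpd_sys_content_t "/var/www/vhosts(/.*)?"
        restorecon -Rv /var/www/vhosts

    Depending on the policy version, vsftpd may additionally need an SELinux boolean (for example allow_ftpd_full_access) toggled with setsebool -P before it can write into those directories.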

    Read the article

  • How do I protect large file downloads through PHP and/or Apache?

    - by Eric
    We have some large files (1-8GB) that are not publicly accessible. Currently we're serving them up through a PHP script that buffers the files in 1MB chunks and writes it to the output. It's incredibly CPU intensive and slows the server down when only a few downloads are active. We want to move the file transfer work to Apache or a more efficient method. We are using cookie authentication. FTP downloads are out unless there's some way to authenticate FTP sessions through the existing PHP session cookie. Ideally we'd like something where we can use PHP to hide the link to the file while it passes off the file transfer work to Apache, which is no doubt far more efficient at HTTP file transfers than PHP. We want to be able to resume downloads as well. Any help is appreciated.
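
    One common pattern for exactly this, assuming the third-party mod_xsendfile module is available for Apache (an assumption; lighttpd and nginx have equivalents): PHP performs the cookie/session check and then hands the transfer to Apache with an X-Sendfile header, so Apache streams the file and handles Range requests (resuming). A sketch with placeholder paths and a hypothetical download.php:

        # Apache vhost sketch (requires mod_xsendfile)
        XSendFile     On
        XSendFilePath /srv/protected-files

        <?php
        // download.php sketch: authenticate, then delegate the transfer to Apache
        session_start();
        if (empty($_SESSION['user'])) { header('HTTP/1.1 403 Forbidden'); exit; }
        $file = '/srv/protected-files/big-archive.iso';   // placeholder
        header('X-Sendfile: ' . $file);
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . basename($file) . '"');
        // no readfile() loop here: Apache serves the bytes, including Range requests
        ?>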

    Read the article

  • Localhost name error with Linux machines

    - by coderex
    Hi. CASE 1: I have an Ubuntu machine with the hostname midhun.local. I can access it at http://midhun.local/svn, but it can't be accessed from other machines (both Windows and Linux) through this hostname; http://192.168.1.192/svn works, however. CASE 2: I have another machine (Windows) with the hostname myname, serving on port 555. In this case I can access https://myname:555/svn from other Windows machines with that same URL, but from a Linux machine the same URL does not work; https://192.1.168.111:555/svn works instead. How can I solve this problem? I need to reach each machine by the same name across platforms. How is that possible on a LAN? Thanks in advance!
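
    A low-tech workaround for consistent names on a small LAN (a sketch; the cleaner fix is an internal DNS zone, and the addresses below are simply the ones quoted above): add the mappings to every client's hosts file, which Windows, Linux and OS X all consult:

        # /etc/hosts on Linux and OS X,
        # C:\Windows\System32\drivers\etc\hosts on Windows
        192.168.1.192    midhun.local    midhun
        192.1.168.111    myname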

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to users who have a valid SSL client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, set up with gitolite on https://my.server/git/) we need an exception, since many git clients don't support SSL client certificates. I have succeeded in requiring client cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows:

        SSLCACertificateFile ssl-certs/client-ca-certs.crt

        <Location />
            SSLVerifyClient require
            SSLVerifyDepth 2
        </Location>

        # this works
        <Location /foo>
            SSLVerifyClient none
        </Location>

        # this does not
        <Location /git>
            SSLVerifyClient none
        </Location>

    I have also tried an alternative solution, with the same results:

        # require authentication everywhere except /git and /foo
        <LocationMatch "^/(?!git|foo)">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    In both cases, a user without a client certificate can access my.server/foo/ without trouble, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works fine.

    The ScriptAlias problem: gitolite is set up using the ScriptAlias directive, and I have found that the problem occurs with any similar ScriptAlias:

        # Gitolite
        ScriptAlias /git/    /path/to/gitolite-shell/
        ScriptAlias /gitmob/ /path/to/gitolite-shell/
        # My test
        ScriptAlias /test/   /path/to/test/script/

    Note that /path/to/test/script is a file, not a directory; the same goes for /path/to/gitolite-shell/. My test script simply prints out the environment, super simple:

        #!/usr/bin/perl
        print "Content-type:text/plain\n\n";
        print "TEST\n";
        @keys = sort(keys %ENV);
        foreach (@keys) {
            print "$_ => $ENV{$_}\n";
        }

    It seems that if I go to https://my.server/test/someLocation, any SSLVerifyClient directives are applied that sit in Location blocks matching /test/someLocation or just /someLocation. If I have the following config:

        <LocationMatch "^/f">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    then https://my.server/test/foo requires a client certificate, but https://my.server/test/somethingElse/foo does not. Note that this only seems to apply to SSL configuration. The following has no effect whatsoever on https://my.server/test/foo:

        <LocationMatch "^/f">
            Order allow,deny
            Deny from all
        </LocationMatch>

    However, it does block access to https://my.server/foo. This presents a major problem for cases where I have a project running at https://my.server/project (which has to require SSL client certificate authorization) and a git repository for that project at https://my.server/git/project, which cannot require an SSL client certificate. Since the /git/project URL also gets matched against /project Location blocks, such a configuration seems impossible given my current findings. Question: why is this happening, and how do I solve my problem? In the end, I want to require SSL client certificate authorization for the whole server except for /git and /someLocation, with as little configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added). Note: I rewrote my question (instead of just adding more updates at the bottom) to take my new findings into account and hopefully make this clearer.

    Read the article

  • Securing ColdFusion for internet facing server

    - by Goyuix
    What do I need to do to tighten down a ColdFusion server for internet-facing apps? The only thing that specifically came to mind was to restrict the CFIDE and JRunScripts directories to a local subnet. Are there settings in the ColdFusion Administrator I can tweak to make the applications more secure?
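
    As a sketch of the one step already mentioned, assuming ColdFusion is fronted by Apache (on IIS the equivalent is an IP-and-domain restriction on the same virtual directories), with the subnet as a placeholder:

        # allow the CF administrator UI only from the local subnet (Apache 2.2 syntax)
        <Location /CFIDE/administrator>
            Order deny,allow
            Deny from all
            Allow from 192.168.1.0/24
        </Location>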

    Read the article

  • Using URL rewrite module for http to https redirect

    - by johnnyb10
    Following ruslany's suggestion on the URL Rewrite Tips page here, I'm trying to use URL Rewrite to redirect http:// requests for my site to https://. I've written and tested the rule using a test site I set up, and so now the final piece is to create a second site (http) to redirect to my https site. (I need to use a second site because I don't want to uncheck the "Require SSL encryption" checkbox on my existing site.) I'm an IIS newbie so my question is: how do I do this? Should I create a site with the same name and host header, only it will be bound to http? Will IIS let me create a site with the same name? I don't want to screw anything up with my existing site (which is a SharePoint site, currently used by external users). That site currently has http and https bound to it. So my assumption is that, using ISS (not SharePoint), I will create a new site (http only) with the same name and host header as my existing site, and add the URL Rewrite rule to the http site. And then I guess I should remove the http binding from my existing site? Does that seem correct? Any advice, gotchas, etc., would be appreciated. Thanks.
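
    For reference, a sketch of how the redirect rule itself usually looks in the web.config of the HTTP-only site (this mirrors the commonly published URL Rewrite pattern; treat it as an illustration rather than the exact rule from the linked tips page):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HTTP to HTTPS redirect" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
                        redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>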

    Read the article
