Search Results

Search found 1087 results on 44 pages for 'serving'.

Page 19/44

  • Apache won't start after creating symbolic link

    - by Carlin
    I'm installing Apache for the first time and trying to serve some web pages on localhost. Apache's default document root is /var/www/html/, but I don't have permission to write there. Rather than change ownership of the entire directory, I removed the /html/ folder from /var/www/, recreated it in my home directory, and made a symbolic link with ln -s /home/me/html/ /var/www/, hoping Apache would keep its default path and follow the link into my home directory. But when I start the service with service httpd start I get: Job failed. See system journal and 'systemctl status' for details.
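
    A diagnostic sketch for this symptom, assuming a systemd-based distro with SELinux (which the quoted error message suggests); the paths mirror the question and the SELinux type shown is the standard one for web content:

        # See why httpd actually failed:
        systemctl status httpd.service
        journalctl -u httpd

        # Apache must be able to traverse every directory above the link
        # target, so the home directory needs search (execute) permission:
        chmod o+x /home/me

        # On SELinux systems the target also needs a web-content context:
        sudo semanage fcontext -a -t httpd_sys_content_t "/home/me/html(/.*)?"
        sudo restorecon -Rv /home/me/html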

    Read the article

  • Nginx static files exclude one or some file extensions

    - by Evgeniy
    I'm serving up a static site via nginx with this location block:

        location ~* \.(avi|bin|bmp|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
            root /var/www/html1;
            access_log off;
            expires 1d;
        }

    My goal is to exclude requests like http://connect1.webinar.ru/converter/task/. A full URL looks like http://mydomain.tld/converter/task/setComplete/fid/34330/fn/7c2cfed32ec2eef6788e728fa46f7a80.ppt.swf. Although these URLs end in a static-looking extension, they are not static files but script requests, so this location block breaks them. What is the best way to handle this? Can I add an exclusion for this URL path, or exclude the specific double extensions (.ppt.swf, .pptx.swf) from this nginx location? Thanks.
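
    A sketch of one standard nginx approach, assuming the dynamic handler is reachable from a separate location: a ^~ prefix location takes precedence over regex locations, so everything under /converter/ skips the static block entirely (the fastcgi_pass target is a placeholder):

        # Matched before the regex location; ^~ stops regex matching
        # for this prefix, so .ppt.swf URLs here stay dynamic.
        location ^~ /converter/ {
            fastcgi_pass 127.0.0.1:9000;   # backend address is an assumption
            include fastcgi_params;
        }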

    Read the article

  • Proxying MMS Stream on a LAN

    - by Matthew Iselin
    A variety of users on our LAN would like to listen to an MMS stream, and in the interest of conserving bandwidth (and because our WAN connection is not fast at all) I was wondering if it was possible to set up a service which proxies the stream from the WAN and provides it to LAN computers, thus only downloading the stream once and then distributing it to clients. Any ideas? I have a Linux box serving as our LAN-WAN router, so it'd be ideal if something could sit on it and proxy the stream, but I also have Linux and Windows workstations. A free solution would be preferred.
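
    One commonly suggested sketch, assuming VLC on the Linux router is acceptable: pull the MMS stream once over the WAN and re-serve it to the LAN over HTTP (the source URL and port are placeholders):

        # Pull the WAN stream once, relay it as MPEG-TS over HTTP on port 8080:
        cvlc mms://wan.example.com/stream \
             --sout '#standard{access=http,mux=ts,dst=0.0.0.0:8080}'

    LAN players would then open http://<router>:8080/ instead of the WAN URL, so the stream crosses the slow WAN link only once.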

    Read the article

  • Using Lighttpd: apache proxy or direct connection?

    - by Halfgaar
    Hi, I'm optimizing a site by using lighttpd for the static media. A commonly recommended setup is to have Apache proxy static requests through to the lighttpd server. But does that tie up an Apache thread/process per request? In my setup I've noticed that all my Apache processes get used up even though they aren't doing anything CPU-wise. Since introducing lighttpd, Munin shows the number of Apache processes needed has dropped significantly; however, I have clients connect directly to lighty, precisely to keep Apache workers from being occupied serving static media. My question is: when using Apache as a proxy instead, does each proxied request also consume an Apache process/worker?
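
    For reference, under the prefork and worker MPMs a proxied request does occupy an Apache process/worker for its full duration, which is exactly why the direct-connection route frees them up. A sketch of that direct variant, assuming the static media gets its own hostname so browsers bypass Apache entirely (hostname, IP, and paths are examples):

        # lighttpd.conf: serve static.example.com directly, no Apache in the path
        server.port = 80
        server.bind = "192.0.2.10"     # a second IP, so Apache keeps its own
        $HTTP["host"] == "static.example.com" {
            server.document-root = "/var/www/static"
        }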

    Read the article

  • Multiple WAN interfaces in same subnet on Sonicwall NSA220?

    - by Ttamsen
    Hi, all. I see a bunch of related questions, so I'm hesitant to ask, but: I have a Sonicwall NSA220 serving as firewall/router between two internal subnets and two external WAN connections. In some locations these are two separate ISPs; in others, the same ISP with multiple circuits. The problem is that one ISP has been unable to provide a unique subnet for each WAN interface. Is there any way to bond the two WAN interfaces into a single virtual interface and then use source routing to send each internal subnet out the appropriate physical interface? Or, failing that, to use traffic shaping to give each internal network an appropriate share of the bandwidth? I haven't found anything in the docs, but it seemed worth asking. Thanks for any help! -Steve.

    Read the article

  • Re-streaming RTMP stream

    - by Yvan JANSSENS
    I have a set of local RTMP stream servers on my network, but I want them to be reachable from outside. The bandwidth is too narrow to serve multiple clients directly from the stream servers on my network, so the idea is to pull the local RTMP streams on a computer serving as a gateway, which in turn pushes them to a hosted streaming provider. Network policy restrictions make it impossible for the stream sources to push directly to the outside server. Scheme of what I'm trying to accomplish:

        Internal network          |  External network
                                  |
        ------------      ------------      -----------------------
        | internal | <--- | Gateway  | ---> | streamserver outside |
        | streams  |      ------------      -----------------------
        ------------                                   ^
                                                       |
                                                  -----------
                                                  | clients |
                                                  -----------

    My question now is: which application can pull a live stream from an RTMP source (Flash Media Server) and push it to another one (the Flash Media Server at the hosting provider)?
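
    A sketch of one common answer, assuming ffmpeg (with RTMP support) running on the gateway box; the URLs and stream names are placeholders:

        # Pull from the internal FMS and push to the hosted FMS without
        # re-encoding (-c copy), one process per stream:
        ffmpeg -i rtmp://internal-fms/live/mystream \
               -c copy -f flv \
               rtmp://hosted-provider.example.com/live/mystream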

    Read the article

  • php not working with apache install

    - by fivelitresofsoda
    Hi, I have a CentOS server. I have installed Apache 2.2.17 from a .tar.gz file, and I've also installed PHP 5.3 from the IUS repository. Both installed fine: Apache is functioning and so is PHP 5.3. However, when I put a phpinfo.php file in the directory Apache serves files from, it doesn't work. I can only get PHP to work if I use yum install httpd, which installs an older version of Apache. Any ideas? Thanks.
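
    A hedged guess at the likely gap, assuming mod_php: the IUS PHP package is built and wired up for the yum-installed httpd, so the source-built Apache never learns about the PHP module. The custom build's httpd.conf would need something like this (the module path is an example):

        # Load the PHP module and map .php files to it:
        LoadModule php5_module modules/libphp5.so
        AddHandler php5-script .php
        DirectoryIndex index.php index.html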

    Read the article

  • How do I configure IIS so my Web.config is determined by URL?

    - by Scott Stafford
    I am running a test rig with IIS6 serving an ASP.NET (and Sharepoint) web site. We have several clients, and so we have custom root Web.config files for each client. For this test rig, I want to just serve straight from the Trunk of our source control. However, I'd like to be able to select different root Web.config files based on the URL (or port or whatever) I use to access the site, so I can just use one checkout of the source and run all the sites with their appropriate settings. Is this possible?

    Read the article

  • What Defines an AD Object as "Inactive"

    - by Malnizzle
    I am going to use some DSQUERY/DSMOVE scripts to clean up my AD domain. One option is to move inactive objects to an OU that has restrictive GPOs applied to it, something like: DSQUERY computer -inactive 10 | DSMOVE -newparent <distinguished name of target OU>. My question is: what value defines an object, both user and computer, as "inactive" for a period of time? For computer accounts, is it the last time someone logged on to the machine, and for users, the last time the account logged on to a computer? But what if, for example, I had a web server that wasn't rebooted or logged into for a couple of months but remained powered on and functioning normally? Would it be flagged as "inactive" even though it's still serving web pages? Thanks for the help!
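
    For what it's worth, a sketch based on the documented dsquery semantics: -inactive takes a number of weeks and is evaluated against the replicated lastLogonTimestamp attribute, which any authentication updates, not just interactive logons. Domain-joined machines keep authenticating on their own (secure-channel maintenance, for instance), so a powered-on member server generally won't show as inactive. The target DN below is a made-up example:

        REM Find computers with no authentication for ~10 weeks and move them:
        dsquery computer -inactive 10 | dsmove -newparent "OU=Stale,DC=example,DC=com"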

    Read the article

  • bad printer isolation on print server or better way?

    - by Joseph
    I have noticed that when a printer or driver screws up on a Windows server it usually locks up or kills the print spooler and everyone can't print until it is fixed. Usually we have to put the troublesome printer on another server so when it fails, it doesn't take the whole group with it. That is assuming we ever figure out which printer is the problem. Is there a way to have it so that one bad apple doesn't ruin the bunch? Even if it is another form of printer serving, that would work as long as it's not hard for the user to find a printer and install drivers.

    Read the article

  • Virtual DNS recommended setup...

    - by luison
    Hi. We are new to virtualization, which we are setting up with Proxmox VE (OpenVZ + KVM). I am a bit lost about the recommended DNS forwarder configuration, especially in the OpenVZ (Virtuozzo-type) environment. Our intention was to have a small dnsmasq running in one of the VMs, acting as a backup DHCP server and serving our in-office local addresses (and PCs) via the additional resolv file that dnsmasq supports. But I've read that all VMs should point their DNS at the host machine, in which case it would make more sense to run it there. My problem is that I would like to keep as few apps as possible on the host, so that a reinstall of the environment (Proxmox VE) and a machine restore can be as quick as possible. Does anyone have a similar setup? Does it make sense to have the first virtual machine run the local DNS forwarder? Also... dnsmasq seems to want root permissions when running in an OpenVZ container... does anyone know of workarounds for that?
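
    A sketch of the in-container setup described above; the dnsmasq options are real, the values are examples, and the vzctl capability workaround is an assumption about why dnsmasq wants root inside OpenVZ:

        # dnsmasq.conf inside the container (values are examples):
        resolv-file=/etc/resolv.dnsmasq      # upstream resolvers to forward to
        addn-hosts=/etc/hosts.office         # extra in-office name records
        dhcp-range=192.168.1.50,192.168.1.150,12h   # backup DHCP pool

        # On the Proxmox host, grant the container the capabilities dnsmasq
        # may need to start and drop privileges (CTID 101 is a placeholder):
        vzctl set 101 --capability setuid:on --capability net_admin:on --save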

    Read the article

  • Separating two networks

    - by Farhan Ali
    I have two routers, R1 and R2. R1 (a stock Linksys router running DD-WRT) is connected to the internet, runs a DHCP server, and serves a network of 5 devices/PCs on 192.168.1.0/24. R1 also provides internet access to R2. R2 (an Ubuntu Server 12.04 machine) gets its internet from R1, has 3 PCs attached, and runs a DHCP server for 172.22.22.0/24. My requirement is that the clients on both sides should not talk to each other at all, with the exception that R1 clients may access the R2 router itself through its 192.168.1.x address. At the moment R2 clients can ping R1 clients, which is unacceptable, whereas R1 clients cannot ping R2 clients, which is OK. I believe iptables could be set up to do this, but I don't know how.
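
    A minimal iptables sketch on R2: drop forwarded traffic from R2's clients toward R1's LAN. Internet traffic is unaffected (public destinations don't match), and R1 clients can still reach R2's own 192.168.1.x address, since that is INPUT traffic rather than FORWARD:

        # On R2 (Ubuntu): block R2's clients from reaching R1's LAN hosts
        iptables -A FORWARD -s 172.22.22.0/24 -d 192.168.1.0/24 -j DROP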

    Read the article

  • when to use squid on server side?

    - by ajsie
    So I have set up Apache serving my PHP pages. I've read about Squid but don't understand why/how I should use it to speed up my web server. From what I've learned, Squid sits on the same network (or another one) and caches content requested by web browsers; when another browser wants the same page, Squid returns the locally cached copy and never sends a request to the Apache server (faster response time for the client, reduced load for the server). So it seems that Squid is for the client side (web browser) and has nothing to do with the server side (Apache). But then some people tell others how they have sped up Apache using Squid, so I'm confused. Can Squid be used on the server side too? And how would it work?
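
    The server-side use the question is circling is Squid's reverse-proxy ("accelerator") mode: Squid faces the clients and caches on Apache's behalf. A minimal sketch in squid.conf syntax, assuming Apache moves to port 8080 on the same box (the site name is a placeholder):

        # Squid listens where the web server used to:
        http_port 80 accel defaultsite=www.example.com
        # Apache becomes the origin server behind Squid:
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache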

    Read the article

  • Is there a way to batch create DNS slave zones on a new slave DNS server?

    - by Josh
    I currently have a DNS server acting as the master for a number of our domains. I want to set up a brand new secondary DNS server. Is there any way to automatically have BIND on the new server act as a secondary for all the domains on the primary server? In case it matters, I have Webmin on the primary server. I believe Webmin can create a matching secondary zone on another server when you create a new master zone, but I don't know of any way to batch-create secondary zones for a number of existing master zones. Maybe I'm missing something. Is there a way to "batch create" DNS slave zones on a brand new slave DNS server for all the zones on an existing master?
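
    A hedged sketch of one scripted approach, assuming the master's zones are declared in /etc/named.conf with each zone statement starting a line, and GNU grep on the box; the master's IP and output path are placeholders:

        MASTER_IP=192.0.2.1   # placeholder for the existing master
        grep -oP '^zone\s+"\K[^"]+' /etc/named.conf |
        while read z; do
            printf 'zone "%s" {\n    type slave;\n    file "slaves/%s.db";\n    masters { %s; };\n};\n' \
                "$z" "$z" "$MASTER_IP"
        done >> /etc/named/slave-zones.conf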

    Read the article

  • solr reverse proxy Apache2

    - by Steven
    I am trying to set up Apache2 as a reverse proxy for Solr. Apache and Solr are on the same machine, and Apache also serves other content as a regular web server. The solrsearch config file in /etc/apache2/conf.d/:

        # Proxy specific settings
        ProxyRequests Off
        ProxyPreserveHost Off
        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /solrsearch http://localhost:8983/solr/collection1/browse
        ProxyPassReverse /solrsearch http://localhost:8983/solr/collection1/browse

    Now requesting http://localhost/solrsearch gives me the first page of http://localhost:8983/solr/collection1/browse, but with a broken layout (as if the CSS were missing). Apache's error.log shows: File does not exist: /var/www/solr, referer: http://192.168.1.150/solrsearch
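
    A sketch of the usual fix, reading from the quoted error log: the Solr browse page links its CSS and images under /solr/..., which the /solrsearch rule never matches, so Apache looks for them on its own disk. Proxying the whole /solr path as well would cover those assets:

        ProxyPass        /solr http://localhost:8983/solr
        ProxyPassReverse /solr http://localhost:8983/solr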

    Read the article

  • How does one guarantee a remote client the same local IP address every time when connecting to a VPN?

    - by Joe Carroll
    I need to configure a VPN for secure remote access to a PACS serving DICOM radiological images. The DICOM standard requires that any client accessing the PACS use a fixed IP address that is pre-registered in the PACS. I haven't implemented this solution before and would appreciate any guidance. I believe it should be possible to use RADIUS on the server to authenticate users connecting to the VPN and, through it, assign each user their own specific local subnet IP address, which would be registered with the PACS. The server runs Windows Server 2003 R2 Enterprise Edition SP2 and the VPN device is a FortiGate 60C. What would be the best and/or simplest way to set this up?
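
    For what it's worth, the standard RADIUS attribute for handing a user a fixed tunnel address is Framed-IP-Address. A sketch in FreeRADIUS users-file syntax purely as an illustration (a Windows IAS setup would configure the same attribute through its policy UI; the name and address are made up):

        # Each VPN user gets a fixed address the PACS can pre-register:
        drjones  Cleartext-Password := "change-me"
                 Framed-IP-Address = 10.10.10.21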

    Read the article

  • How to set up an SSL Cert with Subject Alternative Name

    - by Darren Oster
    To test a specific embedded client, I need to set up a web server serving a couple of SSL (HTTPS) sites, say "main.mysite.com" and "alternate.mysite.com". These should be handled by the same certificate, with a Subject Name of "main.mysite.com" and a Subject Alternative Name of "alternate.mysite.com". This certificate needs to be in an authority chain back to a 'proper' CA (such as GoDaddy, to keep the cost down). My question is, are there any good tutorials on how to do this, or can someone explain the process? What sort of parent certificate do I need to purchase from the CA provider? My understanding of SSL certificates is limited, but as Manuel said in Fawlty Towers, "I learn...". I'm happy to work in Windows (IIS) or Linux (Apache) (or even OSX, for that matter). Thanks in advance.
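
    On the purchasing side, what's needed is an ordinary multi-domain (SAN/UCC) certificate rather than any special "parent" certificate; the CA signs whichever SANs appear in the request and its product allows. A sketch of generating a CSR that carries both names, assuming OpenSSL with bash on Linux or OSX (the openssl.cnf path varies by platform):

        # Key + CSR with a SAN extension; process substitution appends a
        # [SAN] section to the stock OpenSSL config for this one request:
        openssl req -new -nodes -newkey rsa:2048 \
            -keyout mysite.key -out mysite.csr \
            -subj "/CN=main.mysite.com" \
            -reqexts SAN \
            -config <(cat /etc/ssl/openssl.cnf \
                      <(printf '[SAN]\nsubjectAltName=DNS:main.mysite.com,DNS:alternate.mysite.com\n'))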

    Read the article

  • JSP / Tomcat / Apache setup overview on Fedora Core

    - by Richard T
    Hi folks, for someone with so much Java experience, boy do I feel clueless - thanks in advance for your help in my grokking the present (Feb 2010) JSP environment. Here's what I am hoping to learn: Do I understand correctly that most people use Apache to front-end their Tomcat servers, so that Apache talks directly to web clients and proxies the Tomcat servers? Do I understand correctly that Apache isn't capable of serving JSP directly, but requires a servlet container like Tomcat? Is there an RPM package for Fedora Core so I don't have to build one myself? Or does Fedora Core's package installer do a good job on this one from source code? (Some do, some don't!) While I'm here asking questions: does Tomcat come with a working example that one can start hacking on as a way to get started quickly? If not, got a good suggestion? Thanks folks, RT
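
    A sketch of the usual front-end wiring, assuming mod_proxy_ajp (one of the standard connectors for this): Apache forwards servlet/JSP traffic to Tomcat's AJP connector, which listens on port 8009 by default, while Apache keeps serving everything else itself. On the last question, a stock Tomcat install ships an examples webapp under webapps/ that is handy to start hacking on.

        # Apache httpd.conf snippet (the /app context name is an example):
        <Location /app>
            ProxyPass ajp://localhost:8009/app
        </Location>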

    Read the article

  • What kind of server hardware is roughly necessary to serve website to 10k users?

    - by jcmoney
    I've been looking at VPSes, and the specs offered for entry-level setups seem somewhat surprising to me. I'm new to this topic, but many VPS plans offer less than 512MB of memory while my laptop has 4GB, so I'm curious: what does it actually take in terms of hardware to serve, say, 10k users (say 5k daily active users)? I figure a large number of factors can sway this a lot, but just for benchmarking, say the site is a social networking site written in PHP using MySQL + Apache that isn't doing anything unusual like serving lots of media. So essentially a very basic Facebook, minus the absurd number of photos and videos. What about 100k users (50k daily active)? 1 million (500k daily active)? Thanks in advance.

    Read the article

  • How to choose size for a cloud server (rackspace)

    - by Emil
    We're going to test the Rackspace cloud next week to see how it works with our web app. It's a LAMP environment with a lot of MySQL databases. How do I choose the "right" server size? On Rackspace I can choose slices with 256, 512, 1024, 2048 or 4096 MB of memory, and so on. Right now we don't have a lot of traffic (approx. 1000 visitors/day), but I thought the whole "cloud" idea was not to be limited, and to auto-scale. Update: What I'm looking for now is a specification of what I need. I know it's complex; I'm looking for examples, case studies, etc. It would be interesting to hear something like "Yes, we're serving 10,000 daily requests without spikes on a LAMP stack with only one 2 GB slice".

    Read the article

  • Apache only transferring partial content from a Samba share

    - by thaBadDawg
    I have an Apache server running on CentOS 5.3 that currently hosts 12 sites with no known issues. (I say this to point out that, up to this point, my Apache installation has performed flawlessly.) I'm adding a new site where the DocumentRoot of the new VirtualHost is a Samba share. At the command line of the server I can cp video.m4v ~ and the whole file is copied properly to my home directory. But when I try to access the file from IE/Firefox/Safari/Chrome, only a partial result of 33k comes back. The same thing happens with my image and audio files. If I copy the files from the share so they are local to the server and serve them from there, they transfer fine. Any ideas?
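
    A sketch of the usual suspect for this exact symptom, assuming the truncation comes from Apache's zero-copy I/O misbehaving on network filesystems; Apache's own documentation flags sendfile and mmap as unreliable on network-mounted DocumentRoots (the path is an example):

        # In the VirtualHost or global config for the Samba-backed site:
        <Directory "/mnt/samba/site">
            EnableSendfile Off
            EnableMMAP Off
        </Directory>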

    Read the article

  • How would I put together a site requiring several TB? [closed]

    - by acidzombie24
    Let's say I have a site with unmetered 100Mbps bandwidth (I assume that's bits?) and the RAM I require. Most plans I see offer HDDs that hold 250GB to 1TB. But what happens if I compile/generate enough data that I require 10TB or 25TB? (I'd likely have two servers, but...) I wouldn't be serving all of that data (well, not to the public), so a CDN wouldn't make sense. What do I do in this scenario? Do I need to get a custom plan from a hosting provider? (If so, how do I find them?) Are there services that allow me to mount remote drives (that sounds wrong unless it's a CDN, so maybe not)? Are there hosts that deal specifically with unmetered bandwidth and provide lots of disk space? Math says ~1TB is the most I'll ever need, but if I happen to need more I'd like to know my options.

    Read the article

  • php processes owned by ppid 1 after X amount of time

    - by Kristopher Ives
    I have a CentOS server running WHM that uses FastCGI (mod_fcgid) running PHP 5.2.17 on Apache 2.0 with SuExec. When I start Apache it begins fine and serves requests. If I run ps in a terminal as root, I see the PHP processes owned by their httpd parent processes. After X amount of time - it varies, but typically not much longer than a few hours - the server begins spawning PHP jobs owned by the init process ID (1).

    Example of a good listing (PPID, PID, command):

        12918 18254 /usr/bin/php
        12918 18257 /usr/bin/php
        12918 18293 /usr/bin/php
        12918 18545 /usr/bin/php
        12918 18546 /usr/bin/php
        12918 19016 /usr/bin/php
        12918 19948 /usr/bin/php

    Then later something like:

        1  6800 /usr/bin/php
        1  6801 /usr/bin/php
        1  7036 /usr/bin/php
        1  8788 /usr/bin/php
        1 10488 /usr/bin/php
        1 10571 /usr/bin/php
        1 10572 /usr/bin/php

    The PHP processes owned by PID 1 never get cleaned up. Why would these processes be running? We don't use setsid or anything beyond basic PHP in the code this server runs. Cheers & thanks
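
    A hedged sketch of one frequently cited cause: if PHP is allowed to manage its own FastCGI children, mod_fcgid only tracks the wrapper it spawned, and the children it never knew about get reparented to init (PPID 1) when the wrapper exits. A conventional fcgid wrapper script guards against this (the path and numbers are examples):

        #!/bin/sh
        # Let mod_fcgid, not PHP, own process management:
        export PHP_FCGI_CHILDREN=0
        # Recycle each process after N requests instead of living forever:
        export PHP_FCGI_MAX_REQUESTS=1000
        exec /usr/bin/php-cgi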

    Read the article

  • Activesync/OWA Desktop Client

    - by prestomation
    At my company we have Exchange 2k3 with OWA public, serving up ActiveSync and webmail. There is no POP3 or IMAP support from our admins, and Outlook 2k3's RPC over HTTP is also disabled. Is there a desktop client that can connect to ActiveSync or OWA? If my iPod touch can connect to ActiveSync, why can't my PC? I'd prefer a Linux daemon that could simply forward emails to my Gmail address, but I guess I'll take what I can get. Thanks. EDIT: In case it was not clear, our Exchange server is hidden completely behind a firewall, and a second Exchange server has only the ActiveSync and HTTPS ports opened to the world.

    Read the article

  • Windows/Samba connection error

    - by Gomibushi
    I have a Linux file server serving up /home for Linux and Windows users. I was able to connect from my Windows client but not from a DC; then suddenly I could connect from the DC too. The Linux servers run Centrify clients and as such are part of the domain. Everything is on the same subnet. This is what log.smbd says, repeatedly:

        [2010/02/11 11:25:57, 0] lib/util_sock.c:read_data(534)
          read_data: read failure for 4 bytes to client 192.168.200.3. Error = Connection reset by peer

    On Windows it appeared as an "unknown error". EDIT: the error code is "0x80004005". We are developing a system that depends on the Samba share and are worried this will appear again; it would be nice to pinpoint the root cause. Any ideas what this might be? Places to look?

    Read the article
