Search Results

Search found 11671 results on 467 pages for 'man pages'.

Page 114/467 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Best practice for extending a legacy site with another server-side programming platform

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have one week to put dynamic data from one intranet company database onto this site: 2-3 pages. I contacted the company site administrator and she showed me the administrative part - the CMS only allows inserting HTML blocks and managing the site map (the site is deployed on a machine that is inside the company, fully accessible and upgradeable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine that hardly anyone has ever heard of. I also don't want to contact the original developer team, because I'm not sure they are still present and capable of extending this old site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages and refer to the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?
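
    A minimal sketch of the embed described above (the helper host name and page are placeholders, not real addresses):

        <!-- In an HTML block managed through the legacy CMS -->
        <iframe src="http://helper-server/IntranetData.aspx"
                width="100%" height="600" frameborder="0">
            Your browser does not support iframes.
        </iframe>

    One pitfall to plan for: an iframe's height does not grow with its content, so long pages either scroll inside the frame or need a small script that resizes the frame from the child page. And if the helper site ends up on a different host name than the CMS pages, same-origin restrictions will keep the two pages' scripts from talking to each other.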

    Read the article

  • Why is Routes.rb not loading the IPs from cache?

    - by Christian Fazzini
    I am testing this locally. My IP is 127.0.0.1 and the ip_permissions table is empty. When I browse the site, everything works as expected. Now I want to simulate browsing the site with a banned IP, so I add the IP to the ip_permissions table via:

        IpPermission.create!(:ip => '127.0.0.1', :note => 'foobar', :category => 'blacklist')

    In the Rails console I clear the cache via Rails.cache.clear. I browse the site - I don't get sent to pages#blacklist. If I restart the server and then browse the site, I do get sent to pages#blacklist. Why do I need to restart the server every time the ip_permissions table is updated? Shouldn't it be fetched from the cache? My routes look like:

        class BlacklistConstraint
          def initialize
            @blacklist = IpPermission.blacklist
          end

          def matches?(request)
            @blacklist.map { |b| b.ip }.include? request.remote_ip
          end
        end

        Foobar::Application.routes.draw do
          match '/(*path)' => 'pages#blacklist', :constraints => BlacklistConstraint.new
          ....
        end

    My model looks like:

        class IpPermission < ActiveRecord::Base
          validates_presence_of :ip, :note, :category
          validates_uniqueness_of :ip, :scope => [:category]
          validates :category, :inclusion => { :in => ['whitelist', 'blacklist'] }

          def self.whitelist
            Rails.cache.fetch('whitelist', :expires_in => 1.month) { self.where(:category => 'whitelist').all }
          end

          def self.blacklist
            Rails.cache.fetch('blacklist', :expires_in => 1.month) { self.where(:category => 'blacklist').all }
          end
        end
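
    A minimal sketch of one likely fix, assuming the cause is that routes.rb runs once at boot, so BlacklistConstraint.new captures @blacklist before the table ever changes and never looks it up again:

        # Look the list up per request instead of in the constructor.
        # IpPermission.blacklist still goes through Rails.cache, so the
        # 'blacklist' cache key must also be deleted whenever the table
        # changes (Rails.cache.delete('blacklist') after create/destroy).
        class BlacklistConstraint
          def matches?(request)
            IpPermission.blacklist.map { |b| b.ip }.include? request.remote_ip
          end
        end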

    Read the article

  • WordPress website getting hung up for 10-15 seconds

    - by synergy989
    Problem: I have a website (which loads fine): http://testupg.videve.com/ Go to the blog section and it begins to load, gets hung up, then loads: http://testupg.videve.com/blog/ It's all one WordPress install. The only pages that get hung up are the blog page itself and any article page; the video pages load fine, etc. What I have done and noticed: if I disable all plugins, the blog page loads fine. I narrowed it down to the DisplayBuddy plugins (Featured Posts, Carousel, Slider) - as soon as I enable any single one of them, the site loads slowly. The thing that doesn't make sense is that I disabled the sidebar (deleted it from the template) and the article page still loaded slowly, even though it has ZERO instances of these plugins; I disable the plugin and the article page loads fine. How on earth can this be? I hope the case above gives a little insight. One other thing: the video pages use a custom taxonomy, so I am wondering if it's something to do with the default WordPress taxonomy. Any help is GREATLY appreciated - I have been at this thing for hours and it's time to call in some support. Cheers

    Read the article

  • Testing paginated UIScrollView on iPad

    - by Piotr Czapla
    I'm creating a magazine reader (something like iGizmo on iPad). I have two scroll views: one that paginates over articles and a second that paginates through the pages inside an article. I'd like to check the memory usage of my app after scrolling through 20 pages, so I decided to create an automated UI test that scrolls right 20 times and then checks the memory footprint at the end of the test. I need that info to have some metrics before I start optimizing memory usage. And here is the thing: I can't make the UI automation move past the second page. My automation code looks like this:

        var window = UIATarget.localTarget().frontMostApp().mainWindow();
        var articleScrollView = window.scrollViews()[0];
        articleScrollView.scrollRight(); // do you know any command to wait until the first scroll ends?
        articleScrollView.scrollRight(); // this one doesn't work

    I guess that I need to wait for the first scroll to end before I can run another one, but I don't know how to do that, as each page is just an image. (I don't have anything else on the pages yet.) Any idea?
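
    A minimal sketch, assuming the second swipe is ignored because it fires while the first paging animation is still running; UIATarget's delay() pauses the script between gestures:

        var target = UIATarget.localTarget();
        var scrollView = target.frontMostApp().mainWindow().scrollViews()[0];

        for (var page = 0; page < 20; page++) {
            scrollView.scrollRight();
            target.delay(2); // give the paging animation time to settle
        }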

    Read the article

  • Is there any way to configure what reCAPTCHA is actually displaying?

    - by trejder
    Is there any way to control what kind of image is displayed to the user in reCAPTCHA, or what kind of puzzle he/she is required to solve? I have noticed at least two significant changes to what reCAPTCHA serves (and I must admit that I don't much like these changes): For years reCAPTCHA served two words from scanned books, and the user was required to solve one of them. They were clearly readable (even the "second" ones, which could be omitted), and humans had nearly no problem solving them. For the past few months, I have noticed a significant change on all of my sites that use reCAPTCHA. They started to show a combination of a computer-generated string of digits and something that looks to me like a street/house number photographed for Google Street View. These are even easier to solve, but most importantly, it started to happen more and more often that the user is obliged to solve both of them. Now I have noticed another change/regression. Some of my sites remain at this so-called "level 2" (as above) and some have started serving two words again ("level 1"?). And again, there are more and more situations where solving both words is required. But, most importantly, on this "level" the words are nearly impossible to solve (on my old mobile devices with a 3.5'' display I need 5-6 attempts to pass!). They're cluttered, written in some strange font, mostly in italics, with a lot of black and white stains or drops on the letters, etc. Plus, reCAPTCHA has stopped being consistent - some of my pages still serve "level 2" while others are "killing" end users with the need to solve "level 3". Is there any way I can control this - force it to use only "level 2" on all my pages? (Of course, I'm using exactly the same piece of code to serve reCAPTCHA on all my pages.) Note that I'm not asking for something like in this question. I don't want to change what reCAPTCHA shows (to disable words in favor of only numbers, for example). I only want to control which "version" of puzzles (among those described above) reCAPTCHA shows, and I want to make it the same on all my sites.

    Read the article

  • Sys. engineer has decided to dynamically transform all XSLs into DLLs on website build process. DLL

    - by John Sullivan
    Hello. OS: Windows XP. Here is my situation. I have a browser-based application that is "wrapped" in a Visual Basic application. Our Senior Systems Engineer has decided to generate DLL files from all of our XSL pages (many of which have duplicate names) when building a new instance of the website, and to have the active server pages (ASPX) use the DLLs instead. This has created a "known issue" in which ~200 DLL naming conflicts occur and, thus, half of our application is broken. I think a solution to this problem is that, thankfully, we generate the names of the DLLs and link them up with our application dynamically. Therefore we can do something kludgy like generate a hash and append it to the end of the DLL file name when we build our website, then always reference the DLL that had the hash appended to its name. Aside from outright renaming the DLLs, is there another way to register multiple DLLs with the same name for one application? I think the answer is "No, only between different applications, using a special technique." Please confirm. The other question on my mind is whether this whole idea is good practice: converting our XSL pages (which we use constantly - every time our web app sends a response) into DLL functions called from an ASPX page to do what the XSL page did, when before we were just sending an XML response to an XSL page via ASPX.
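
    A minimal sketch of the hash-suffix kludge described above (this helper is hypothetical, not part of the actual build system): derive a stable suffix from each XSL file's full path, so two stylesheets that share a file name still compile to distinct DLLs.

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        static class XslDllNaming
        {
            // Map an XSL file's full path to a unique, repeatable DLL name,
            // e.g. ...\Orders\Report.xsl -> Report_1A2B3C4D.dll
            public static string DllNameFor(string xslFullPath)
            {
                using (SHA1 sha1 = SHA1.Create())
                {
                    byte[] digest = sha1.ComputeHash(Encoding.UTF8.GetBytes(xslFullPath.ToLowerInvariant()));
                    string suffix = BitConverter.ToString(digest, 0, 4).Replace("-", "");
                    return Path.GetFileNameWithoutExtension(xslFullPath) + "_" + suffix + ".dll";
                }
            }
        }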

    Read the article

  • Global variable doesn't change value in JavaScript

    - by user1856906
    My project is composed of 2 HTML pages: 1) index.html, which contains the login and the registration form, and 2) user_logged.html, which contains all the features of a logged-in user. Now, what I want to do is check whether the user is really logged in, to avoid the case where a user pastes a URL in the browser and can see the pages of another user. As of now, if a user pastes this URL in the browser: www.user_loggato.html?user=x#profile it is as if they are logged in as user x, and that is not nice. My HTML pages both use JS files that contain scripts. I decided to create a global variable called logged, initialized to false, and to change the variable to true when the login is successful. The problem is that the variable remains false. Here is the code (written in the file a.js):

        var logged = false;

    while in the file b.js I have:

        function login() {
            // if successful
            logged = true;
            window.location.href = "user_loggato.html?user=" + JSON.parse(str).username + "#profilo";
        }

    With some alerts I found that my variable logged is always false. Why? If I have not explained this well, or if some information needed to answer my question is missing, let me know.
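
    A minimal sketch of one way to make the flag survive, assuming the reason it resets is that navigating to user_loggato.html reloads the page and re-runs a.js from scratch (every full page load gets a fresh set of globals):

        function login() {
            // ... after a successful login response ...
            sessionStorage.setItem("logged", "true");
            window.location.href = "user_loggato.html?user=" + JSON.parse(str).username + "#profilo";
        }

        // In the script loaded by user_loggato.html:
        var logged = sessionStorage.getItem("logged") === "true";

    Note that a client-side flag alone cannot protect one user's pages from another; the server still has to verify the session on every request.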

    Read the article

  • Where to start with the development of first database driven Web App (long question)?

    - by Ryan
    Hi all, I've decided to develop a database-driven web app, but I'm not sure where to start. The end goal of the project is three-fold: 1) to learn new technologies and practices, 2) to deliver an unsolicited demo to management showing how information that the company stores as office documents spread across a cumbersome network folder structure can be consolidated and made easier to access and maintain, and 3) to show my co-workers how Test Driven Development and prototyping via class diagrams can be very useful and reduce future maintenance headaches. I think this ends up being a basic CMS, for which I have generated a set of features, see below.

        1) Create a database to store the site structure (organized as a tree with a 'project group'-project structure).
        2) Pull the site structure from the database and display it as a tree using basic front-end technologies.
        3) Add administrator privileges/tools for modifying the site structure.
        4) Auto-create the required sub-pages when an admin adds a new project.
        4.1) There will be several sub-pages under each project, and the content for each sub-page is different.
        5) Add user privileges for assigning read and write access to sub-pages.

    What I would like to do is use Test Driven Development and class diagramming as part of my process for developing this project. My problem: I'm not sure where to start. I have read up on unit testing and UML, but never used them in practice. Also, having never worked with databases before, how do I incorporate these items into the models and unit tests? Thank you all in advance for your expertise.
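
    As a minimal sketch of feature 1 (an assumed, platform-neutral schema, since no database has been chosen yet), the 'project group'-project tree fits in a single self-referencing table:

        -- A NULL parent_id marks a top-level project group; each project
        -- and each auto-created sub-page hangs off its parent row.
        CREATE TABLE site_node (
            id        INT          NOT NULL PRIMARY KEY,
            parent_id INT          NULL REFERENCES site_node(id),
            name      VARCHAR(100) NOT NULL,
            node_type VARCHAR(20)  NOT NULL  -- 'group', 'project' or 'subpage'
        );

    A first unit test could then assert, for example, that adding a project under a group makes it appear in the tree that is pulled back out for display.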

    Read the article

  • how to use q.js promises to work with multiple asynchronous operations

    - by kimsia
    Note: This question is also cross-posted in the Q.js mailing list over here. I had a situation with multiple asynchronous operations, and the answer I accepted pointed out that using promises via a library such as q.js would be more beneficial. I am convinced and want to refactor my code to use promises, but because the code is pretty long, I have trimmed the irrelevant portions and exported the crucial parts into a separate repo. The repo is here and the most important file is this. The requirement is that I want pageSizes to be non-empty after traversing all the dragged-and-dropped files. The problem is that the FileAPI operations inside the getSizeSettingsFromPage function cause getSizeSettingsFromPage to be async, so I cannot place checkWhenReady() like this:

        function traverseFiles() {
            for (var i = 0, l = pages.length; i < l; i++) {
                getSizeSettingsFromPage(pages[i], calculateRatio);
            }
            checkWhenReady(); // this always returns 0.
        }

    This works, but it is not ideal; I would prefer to call checkWhenReady just ONCE, after all the pages have undergone the function calculateRatio successfully:

        function calculateRatio(width, height, filename) {
            // .... code
            pageSizes.add(filename, object);
            checkWhenReady(); // works, but ideally called once AFTER all pages are processed
            // ..... more code...
        }

    How do I refactor the code to make use of promises in Q.js?
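
    A minimal sketch with Q.js (Q.defer and Q.all are Q's real API; the wrapper is an assumption about how getSizeSettingsFromPage can be adapted):

        // Wrap the callback-style function so each page yields a promise.
        function getSizeSettings(page) {
            var deferred = Q.defer();
            getSizeSettingsFromPage(page, function (width, height, filename) {
                calculateRatio(width, height, filename); // fills pageSizes
                deferred.resolve(filename);
            });
            return deferred.promise;
        }

        function traverseFiles() {
            var promises = [];
            for (var i = 0, l = pages.length; i < l; i++) {
                promises.push(getSizeSettings(pages[i]));
            }
            // Q.all resolves once every per-page promise has resolved.
            Q.all(promises).then(function () {
                checkWhenReady(); // runs exactly once, after all pages are done
            });
        }

    With this shape, calculateRatio no longer needs its own checkWhenReady call; the Q.all handler takes over that job.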

    Read the article

  • Proxy cache zone static is unknown

    - by AnApprentice
    I'm working to set up a reverse proxy cache. In nginx.conf I added the following:

        location /blog {
            # Reverse proxy: cache the blog pages from Heroku
            proxy_cache STATIC;
            proxy_cache_valid 200 10m;
            proxy_cache_valid 404 1m;
            proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
            rewrite ^/blog$ /;
            rewrite ^/blog/(.*)$ /$1;
            proxy_pass http://whispering-retreat-1.herokuapp.com;
            break;
        }

    However, when trying to restart nginx I received the following error:

        $ /opt/nginx/sbin/nginx -s stop
        nginx: [emerg] "proxy_cache" zone "STATIC" is unknown in /opt/nginx/conf/nginx.conf:182

    Any idea what the problem is with using STATIC? I just want to cache the blog pages so it doesn't hit Heroku every time, which is horribly slow. Thanks
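
    A minimal sketch of the likely missing piece, assuming the STATIC zone was never declared: proxy_cache_path is what defines the zone name that proxy_cache refers to, and it must sit at the http {} level, outside any server {} block (the cache path and sizes below are placeholders):

        http {
            proxy_cache_path /var/cache/nginx/static levels=1:2
                             keys_zone=STATIC:10m inactive=24h max_size=1g;
            ...
        }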

    Read the article

  • Server with IIS and Apache - how to SSL encrypt Apache with IIS

    - by GAThrawn
    I have a Windows Server 2003 box already set up and working with IIS 6. IIS is set to serve a site over both HTTP and HTTPS connections using default ports. For various reasons I need to set up Apache on the same server, and it needs to serve its pages to end users as SSL-encrypted HTTPS pages. Neither IIS nor Apache is (or is ever likely to be) particularly high-traffic or high-usage. The way I see it, there are two possible ways this could be done: either export the SSL cert from IIS, set it up in Apache, and have Apache serve the HTTPS connections itself over a non-default port; or use IIS to proxy Apache in some way over its existing SSL setup. Which is going to be easiest to set up, configure, maintain and run? Which is going to work best? Has anyone done this sort of thing before? Any tips or things to look out for?
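
    A minimal sketch of the first option (the paths, host name and port 8443 are assumptions; the certificate would need to be exported from IIS with its private key and converted to PEM format):

        Listen 8443
        <VirtualHost *:8443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile "C:/Apache/conf/ssl/server.crt"
            SSLCertificateKeyFile "C:/Apache/conf/ssl/server.key"
            DocumentRoot "C:/Apache/htdocs"
        </VirtualHost>

    Since IIS already owns port 443 on the box, Apache has to take a non-default port unless the two servers can bind to different IP addresses.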

    Read the article

  • proxy pass for activeMQ

    - by user1172482
    I have an Apache server that I'm trying to use to proxy access to my ActiveMQ admin page. I am able to load the initial landing page properly, but I can't seem to load any of the sub-pages (Queues, Connections, etc.). My ProxyPass rules on the Apache server are the following:

        ProxyPass /foo http://10.5.124.108:8161/admin
        ProxyPassReverse /foo http://10.5.124.108:8161/admin

    The ActiveMQ installation included an activemq-httpd.conf file in /etc/httpd/conf.d/. Proxy connections there are enabled:

        ProxyRequests On
        ProxyVia On
        <Proxy *>
            Allow from all
            Order allow,deny
        </Proxy>

        ProxyPass /admin http://localhost:8161/admin
        ProxyPassReverse /admin http://localhost:8161/admin
        ProxyPass /message http://localhost:8161/admin/send
        ProxyPassReverse /message http://localhost:8161/admin/send

    From what I've read, the ProxyPass rules should be recursive (the rule for /foo should also work for /foo/bar). Is there something else that I'm missing here that's preventing me from accessing pages beyond the initial admin landing page?
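
    One hedged sketch of a likely cause: ProxyPassReverse only rewrites redirect headers (Location and friends), not links inside the HTML itself, so the console's absolute links such as /admin/queues.jsp escape the /foo prefix and miss the proxy. Mounting the proxy at the same path the application expects sidesteps the problem:

        # Use the app's own /admin prefix on the front server instead of
        # /foo (the queues.jsp link above is illustrative; the point is
        # that the external and internal prefixes must match).
        ProxyPass        /admin http://10.5.124.108:8161/admin
        ProxyPassReverse /admin http://10.5.124.108:8161/admin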

    Read the article

  • Impersonation on IIS 7.0 passes the machine credentials for Crystal Reports

    - by pknox
    On a 32-bit Windows 2008 server running the Donor2 Application in the Classic .NET Managed Pipeline mode, configured for Windows Integrated Authentication and Impersonation, all of the .NET pages are passing the authenticated user’s credentials [DomainName\UserName]. This is the correct, expected behavior. The Crystal Reports pages, instead of passing the authenticated user’s credentials, are passing the IIS Server’s credentials [DomainName\MachineName$]. One of the very frustrating aspects of this situation is that I have another server which, as far as I can tell, is configured identically. That server, when loading Crystal Reports, is passing the authenticated user’s credentials [DomainName\UserName] as expected. I have obviously missed something, but I have no idea what it could be.

    Read the article

  • Looking for a very bare and basic blog system?

    - by Shedo Chung-Hee Surashu
    Does anyone know of a user-hosted blog system that can be an alternative to WordPress? It should only have the bare necessities of a blog; I'll just take it from there. Specifically:

        - Admin account (for posting, editing, etc.)
        - Archive system
        - Posting system with a character limit (for the Read More links)
        - Accepts comments from other users (requiring only the user's name and email and/or website, then the actual comment)
        - Pages (allows me to create pages for custom content)

    The reason I want this is that WordPress is so bloated that there are a ton of features I don't need. I'd be mostly satisfied with a blogging system that has the above feature set, and I'll just add my own features as I need them along the way.

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (a client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: What can I do to fix this (other than caching more stuff)? And why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked files, if that matters.

    Read the article

  • Can't access internet using a domain joined computer outside the domain environment

    - by Mike Walsh
    We had an unused box at work, so I took it home. It had been joined to the domain and hasn't been unjoined. When I try to use it at home (logging in with a local admin account) I can't seem to access internet pages. It gets the correct IP and gateway for the local network and the correct DNS servers for the home ADSL connection. I can happily ping the home router (which doesn't have any tricky firewall settings), but I can't ping outside, get any DNS names to resolve, or (obviously) get any web pages. Is there some problem here with this machine having been joined to the domain?

    Read the article

  • World Wide Publishing Service very slow to restart on IIS? Why?

    - by StacMan
    Every now and then we restart our IIS server by restarting the "WWW Publishing Service". On most systems this would usually take only a minute or two, but here it can often take up to 10 minutes to stop the service and restart it. Does anyone know a way to work out what is taking so much time to stop the service? After reading around the net I've learned this may be due to locked resources held by users and/or lower-level IIS cached items being recycled, but I can't seem to work out where I can validate whether this is true or not. Also, if anyone knows how to fix or speed this up, that would be excellent. We have a large codebase which contains over 280 aspx pages across the site. Our main domain contains about 100 aspx pages, whilst the subdomains contain 15 or 20 each. Some specs: code is written in C# and runs on .NET Framework 2.0; the server is Windows Web Server 2008 with IIS7; the DB runs SQL Server 2008 Standard.

    Read the article

  • What's the best ramdisk for Windows?

    - by Dan Fabulich
    Just Googling for "windows ramdisk" returns a lot of stuff; some free, some paid. Can someone recommend a good one? (I'm willing to pay, but only if it's really worth paying for.) Here's what I'm looking for:

        - Works on XP and Vista
        - Supports 64-bit and 32-bit
        - I'd like it to support more than 2GB of space (is that even possible on a 32-bit operating system?)
        - I think I want something that can avoid swapping the ramdisk pages. (Though swapping would give me more free space; maybe I'll want to swap the ramdisk pages eventually? Is this configurable?)

    Read the article

  • nginx + php-fpm “504 Gateway Time-out” after compiling with curl support

    - by Brian
    We recently switched to managing PHP with php-fpm. It was working great, but is now giving me issues. The most recent change was to install libcurl-devel and re-compile PHP (5.3.3) using --with-curl. Now I'm getting 504 timeouts with nginx and the pages won't load; HTML pages load fine, and phpinfo() loads as well. I tried backing out the changes and re-compiling without curl support, but am still not having any luck. I also tried adjusting request_terminate_timeout per some of the other posts here on SF, without change. This is on a test machine that has no other clients hitting it. I also tried switching to a unix socket instead of TCP - same result. What am I missing here? Am I barking up the wrong tree with curl?

    Read the article

  • Strategies for very fast delivery of webpages.

    - by Cherian
    I run the website Cucumbertown, with an initial payload of nearly 9KB zipped. All my JS is delay-loaded with RequireJS; Modernizr is the only exception. All my web pages are cached by nginx and only 10-15% of hits go to the backend proxy, with the cache bypassed for logged-in users via proxy_cache_bypass - so for an anonymous user it's nearly always a cache hit. I have some basic OS tuning in place: initcwnd 15 on the default route and net.ipv4.tcp_slow_start_after_idle set to 0. Despite everything being cached and the large initcwnd, my pages still take 2.5-3 seconds. [YSlow score and Page Speed screenshots omitted.] Are there strategies that can help deliver web pages even faster than this - around 1 second for a 10KB payload? Notes: my servers run out of a fairly good Linode data center in Fremont.
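
    For reference, a minimal sketch of the two tunings mentioned above (the gateway address is a placeholder):

        # Raise the initial congestion window so more of the page fits in
        # the first round trip, and stop the window from decaying back
        # down while a keepalive connection sits idle.
        ip route change default via 192.0.2.1 dev eth0 initcwnd 15
        sysctl -w net.ipv4.tcp_slow_start_after_idle=0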

    Read the article

  • Apache misbehaving (returning 404s)

    - by OC2PS
    CentOS 6.4 64-bit, Apache 2.4.6, PHP-FPM 5.5.4. The homepage from the root loads fine: http://csillamvilag.com But all other pages return 404 (the CMS is WordPress). I am, however, able to access and log into the WordPress backend. Additionally, Menalto Gallery 3 seems to load OK (http://csillamvilag.com/kepek/), but all OpenCart pages return 404 (http://csillamvilag.com/shop/ or http://csillamvilag.com/shop/hu/). Apache runs as user apache, and all relevant WordPress and OpenCart files are owned by user apache. I suspect it might be a rewrite issue, but I checked .htaccess for both WordPress and OpenCart and they look OK. For example, the WordPress/root .htaccess is:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>
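
    A hedged thing to check (an assumption, not a confirmed diagnosis): on Apache 2.4 that .htaccess is silently ignored unless the document root permits overrides, which would produce exactly this pattern of the front page working while every rewritten URL returns 404. The path below is a placeholder for the actual document root:

        <Directory "/var/www/html">
            AllowOverride All
            Require all granted
        </Directory>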

    Read the article

  • A PDF viewer for large margins in fullscreen

    - by jmn
    I am looking for a way to pleasantly read PDF files on my widescreen (22" 1680x1050) monitor. My problem with all the PDF viewer applications I have tried is that they do not handle wide and high margins well. If I go to fullscreen mode in my viewer and zoom in so that the extra margins are cropped, I can view the pages nicely; the annoyance, however, is that I have to reposition the page every time I navigate to another one. I am sure there must be a way to make a PDF viewer that solves this problem, and perhaps there is one you know of? I am aware of something called PDF Reflow in Acrobat Reader, but that only works with certain specific (tagged) files. I want a PDF viewer with a smarter zoom/next-page function, or an automatic margin-crop function. Is there such a thing?

    Read the article

  • how to print labels from UPS printer on UPS website

    - by paynes_bay
    I have several computers in my office that have UPS printers attached to them. On most of these computers, if you go to ups.com, log in, and print a shipping label, it prints out just fine. The website somehow selects the appropriate printer and prints to it; it doesn't present a prompt asking you to select the printer, the number of pages, etc. - it just prints. Only problem: there's one computer on which it's not doing this, and I don't know why. I can see the printer in Printers and Faxes and can print test pages from the Properties tab, so the printer clearly works - it just isn't printing from UPS's website. Any ideas?

    Read the article

  • Apache won't start after creating symbolic link

    - by Carlin
    I'm installing Apache for the first time and trying to display some web pages on localhost. Apache's default path for serving web pages is /var/www/html/, but I don't have permission to write there. Rather than change ownership of the entire directory, I decided to get rid of the /html/ folder in /var/www/ and create it in my home directory. Then I made a symbolic link:

        ln -s /home/me/html/ /var/www/

    hoping Apache would serve web pages from my home directory while keeping the default path and following the symbolic link to my home directory. When I go to start the Apache service with service httpd start, I get:

        Job failed. See system journal and 'systemctl status' for details.
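
    A minimal sketch of where to look first (assumptions: a systemd-based distro with SELinux enforcing, which the systemctl hint suggests; paths match the question):

        systemctl status httpd.service   # shows the actual startup error
        journalctl -u httpd.service      # full log for the unit
        # Apache needs execute (traverse) permission on every directory
        # in the path to the new docroot:
        chmod o+x /home/me
        # With SELinux enforcing, the relocated tree also needs the right
        # file context before httpd may read it:
        chcon -R -t httpd_sys_content_t /home/me/html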

    Read the article

  • My virtualhost not working for non-www version

    - by johnlai2004
    I have a development web server (Ubuntu + Apache) that can be accessed via the URL glacialsummit.com. For some reason, http://www.glacialsummit.com serves pages from the /srv/www/glacialsummit.com/ directory, but http://glacialsummit.com serves pages from the /var/www/ directory. Here's what some of my virtualhost config files look like.

    File /etc/apache2/sites-enabled/glacialsummit.com:

        <VirtualHost 97.107.140.47:80>
            ServerAdmin [email protected]
            ServerName glacialsummit.com
            ServerAlias www.glacialsummit.com
            DocumentRoot /srv/www/glacialsummit.com/public_html/
            ErrorLog /srv/www/glacialsummit.com/logs/error.log
            CustomLog /srv/www/glacialsummit.com/logs/access.log combined
        </VirtualHost>

        <VirtualHost 97.107.140.47:443>
            ServerAdmin [email protected]
            ServerName glacialsummit.com
            ServerAlias www.glacialsummit.com
            DocumentRoot /srv/www/glacialsummit.com/public_html/
            ErrorLog /srv/www/glacialsummit.com/logs/error.log
            CustomLog /srv/www/glacialsummit.com/logs/access.log combined
            SSLEngine on
            SSLCertificateFile /etc/ssl/localcerts/www.glacialsummit.com.crt
            SSLCertificateKeyFile /etc/ssl/localcerts/www.glacialsummit.com.key
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost 97.107.140.47:80>
            ServerAdmin [email protected]
            ServerName project.glacialsummit.com
            ServerAlias www.project.glacialsummit.com
            DocumentRoot /srv/www/project.glacialsummit.com/public_html/
            ErrorLog /srv/www/project.glacialsummit.com/logs/error.log
            CustomLog /srv/www/project.glacialsummit.com/logs/access.log combined
        </VirtualHost>

        ## I have many other vhosts in this file that work fine

    File /etc/apache2/sites-enabled/000-default:

        <VirtualHost 97.107.140.47:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    File /etc/apache2/ports.conf:

        NameVirtualHost 97.107.140.47:80
        Listen 80
        <IfModule mod_ssl.c>
            # SSL name based virtual hosts are not yet supported, therefore no
            # NameVirtualHost statement here
            Listen 443
        </IfModule>

    How do I make http://glacialsummit.com serve web pages from /srv/www/glacialsummit.com/public_html just like http://www.glacialsummit.com?
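
    A hedged first step for debugging (apache2ctl -S is a real switch; the interpretations are assumptions): dump Apache's parsed vhost map to see which vhost is the default for 97.107.140.47:80, and verify that the bare domain actually resolves to that address - if glacialsummit.com's DNS points at a different IP than www.glacialsummit.com's, the request lands on whichever vhost is the default for that other address, e.g. the /var/www one.

        apache2ctl -S            # show how Apache mapped names to vhosts
        host glacialsummit.com   # compare with: host www.glacialsummit.com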

    Read the article
