Search Results

  • Iframe pages on Facebook do not show in Internet Explorer 9 - Windows 7 64-bit

    - by Morten
    Have this very irritating problem with Internet Explorer 9 and Facebook. If I go to Facebook and view a page with iframes (like IFBML pages), it will not show up in Internet Explorer 9. It shows up in Firefox 4 and Chrome 10, but not in Internet Explorer 9. I run Windows 7 64-bit SP1 (Danish). The strange thing is that I own three different PCs, they all run Windows 7 64-bit SP1, and all of them have this issue. Can't figure out what causes it. I have tried the following:

    - Uninstalled AVG antivirus and installed Microsoft Antivirus - no change
    - Updated Windows with SP1 - no change
    - Updated from Internet Explorer 9 beta to Internet Explorer 9 final edition - no change
    - Emptied the cache and temp files in Internet Explorer 9 - no change
    - Made www.facebook.com a trusted site in Internet Explorer 9 - no change
    - And a lot of other things I can't remember, I guess... but nothing seems to work.

    As I'm using quite a lot of my working time developing Facebook fan pages, it is frustrating not to be able to test them in Internet Explorer 9. BTW - it is Internet Explorer 9 32-bit, not 64-bit. Any clues?

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2py tags mixed together. To do this, I would like the web server to pass the file through Web2py, then PHP. I found a method to call PHP from Web2py via Python (based on this method for running PHP on top of Django), but that approach loses the benefits of any server optimizations from mod_php or FastCGI, like caching and multi-threaded operation: a new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2py (Python) and PHP tags in the same file? Note: I am not looking for methods of serving PHP-only and Web2py-only files from the same server/domain. I prefer solutions for Apache2 or Cherokee, but I'm open to other web servers. Background info: I prefer to develop in Web2py, but we have a pre-existing system written in PHP. I would like to augment the PHP system with some of Web2py's features, like Auth authentication/user management and the T() internationalization object. It would also make it much easier to port the PHP project to Web2py if it could be done piecemeal. Since the PHP project consists of many files, it would help greatly if they did not need modification.

  • IIS serving pages extremely slowly

    - by mos
    TL;DR: IIS 7 on WS2008R2 serves pages really slowly; everyone assumes it's because it's IIS, and that we should have gone with an Apache solution on Linux. I have no idea where to start debugging the problem. I work in a nearly all-MS shop with a bunch of fellow programmers who think Linux is the One True Way. Management recently added a Windows machine with IIS to serve Target Process (a third-party agile system), but the site runs extremely slowly. Everyone, to a man, assumes it's because it's on IIS, and if only management would grow a brain and get some Linux servers in here, we could really start cleaning things up! ...Right. Everyone "knows" IIS isn't fit to serve .txt files. ...Well, as the only non-Microsoft-hater in the bunch, I am apparently the only one who thinks maybe the Linux guy who hated being told to set up the IIS server may have screwed things up. I'd like to go fix it, but I don't have any clue as to where to start, as I am not a sysadmin. Help?

  • wget recursively download from pages with lots of links

    - by Shadow
    When using wget with the recursive option turned on, I get an error message when it tries to download a file. It thinks the link is a downloadable file, when in reality it should just follow it to get to the page that actually contains the files (or more links to follow) that I want.

        wget -r -l 16 --accept=jpg website.com

    The error message ends with: "... since it should be rejected." This usually occurs when the website link it is trying to fetch ends with a SQL statement. The problem, however, doesn't occur when using the very same wget command on that link alone. I want to know how exactly it is trying to fetch the pages. I guess I could always poke around the source, although I don't know how messy the project is. I might also be missing exactly what "recursive" means in the context of wget. I thought it would run through and follow each link, getting the files with the extension I have requested. I posted this over at Stack Overflow, but they pointed me over here. :) Hoping you guys can help.
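
    A hedged guess at what is happening, based on wget's documented recursion rules: pages have to be downloaded and parsed before their links can be followed, and anything that does not match the --accept list is deleted again afterwards; URLs carrying query strings can fail the suffix match outright and get refused with "since it should be rejected". The pattern below is illustrative, not a confirmed fix:

        # keep jpg files, but also accept the dynamic page URLs so wget will
        # fetch and parse them for links (non-matching pages are deleted
        # again after parsing)
        wget -r -l 16 --accept "jpg,*.php*" http://website.com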

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have two folders within the root of the site:

        demo/
        live/

    Any requests that return 404s or 403s within the demo folder need to load one set of pages for the Apache ErrorDocuments, e.g.

        ErrorDocument 404 /statuses/demo-404.html
        ErrorDocument 403 /statuses/demo-403.html

    and live needs to go to similarly named files:

        ErrorDocument 404 /statuses/live-404.html
        ErrorDocument 403 /statuses/live-403.html

    So far I have tried placing an .htaccess file in both directories with the ErrorDocument directives pointing to the specific files. The 404 works fine and references the correct page. However, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory. The logs indicate the following:

        [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Is this correct? Would Apache revert to the default because it is trying to look for the .htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
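
    A sketch of one way around this, assuming the main server or vhost config is editable: when Apache cannot read a directory's .htaccess, per-directory config from that point down never loads, so ErrorDocument directives set higher up in the vhost still apply. Paths below are the ones from the question:

        <Directory /home/abstract/public_html/demo>
            ErrorDocument 404 /statuses/demo-404.html
            ErrorDocument 403 /statuses/demo-403.html
        </Directory>
        <Directory /home/abstract/public_html/live>
            ErrorDocument 404 /statuses/live-404.html
            ErrorDocument 403 /statuses/live-403.html
        </Directory>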

  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems fine and running, except I cannot get past 403 errors on both HTML and PHP pages. I have used chmod and chown, changed ownership in the config file, done everything possible, and have been stuck for 2 days. I'd appreciate any help; here's hoping it's a stupid error on my part. Here is the log file with debug options on:

        2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408
        GET /index.html HTTP/1.1
        Host: 10.0.1.8
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cache-Control: max-age=0
        2011-02-21 11:23:13: (response.c.241) run condition
        2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI
        2011-02-21 11:23:13: (response.c.301) Request-URI  : /index.html
        2011-02-21 11:23:13: (response.c.302) URI-scheme   : http
        2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8
        2011-02-21 11:23:13: (response.c.304) URI-path     : /index.html
        2011-02-21 11:23:13: (response.c.305) URI-query    :
        2011-02-21 11:23:13: (response.c.349) -- sanatising URI
        2011-02-21 11:23:13: (response.c.350) URI-path     : /index.html
        2011-02-21 11:23:13: (response.c.470) -- before doc_root
        2011-02-21 11:23:13: (response.c.471) Doc-Root     : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.472) Rel-Path     : /index.html
        2011-02-21 11:23:13: (response.c.473) Path         :
        2011-02-21 11:23:13: (response.c.521) -- after doc_root
        2011-02-21 11:23:13: (response.c.522) Doc-Root     : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.523) Rel-Path     : /index.html
        2011-02-21 11:23:13: (response.c.524) Path         : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.541) -- logical -> physical
        2011-02-21 11:23:13: (response.c.542) Doc-Root     : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.543) Rel-Path     : /index.html
        2011-02-21 11:23:13: (response.c.544) Path         : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.561) -- handling physical path
        2011-02-21 11:23:13: (response.c.562) Path         : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.608) -- access denied
        2011-02-21 11:23:13: (response.c.609) Path         : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.128) Response-Header:
        HTTP/1.1 403 Forbidden
        Content-Type: text/html
        Content-Length: 345
        Date: Mon, 21 Feb 2011 16:23:13 GMT
        Server: lighttpd/1.4.28

    Here is the directory listing. I used chown to set ownership to lighttpd:lighttpd:

        [root@localhost lighttpd]# ls -al
        total 40
        drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 .
        drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 ..
        -rwxrwxrwx 1 lighttpd lighttpd   10 Feb 20 08:32 index.html
        -rwxrwxrwx 1 lighttpd lighttpd   20 Feb 21 10:48 index.php
        -rwxrwxrwx 1 lighttpd lighttpd   20 Feb 21 10:39 info.php

    Requested commands:

        [root@localhost lighttpd]# ls -ld / /srv /srv/www
        drwxr-xr-x 22 root     root     4096 Feb 21 04:39 /
        drwxrwxrwx  3 lighttpd lighttpd 4096 Feb 20 07:38 /srv
        drwxrwxrwx  3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www
        [root@localhost lighttpd]# ps auxZ | grep lighttpd
        root:system_r:httpd_t  lighttpd  3842  0.0  0.2  48368  896 ?  S  12:24  0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
        root:system_r:unconfined_t:SystemLow-SystemHigh  root  3845  0.0  0.2  61152  764 pts/0  R+  12:24  0:00 grep lighttpd
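
    One hedged observation: the ps auxZ output shows lighttpd running confined as httpd_t, which on CentOS means SELinux file contexts matter regardless of the 777 mode bits. A quick way to test that theory, with paths taken from the question:

        # relabel the docroot with the type the httpd policy expects
        chcon -R -t httpd_sys_content_t /srv/www/lighttpd
        # or restore the distribution's default labels instead
        restorecon -Rv /srv/www
        # to confirm SELinux is the cause at all, temporarily go permissive
        setenforce 0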

  • Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on PHP pages, content is chunked

    - by FamousInteractive
    I'm running into a problem with a new production server to which I'm transferring projects. The HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, such as stylesheet and JavaScript files, even HTML pages, which are served with the same Content-Type (text/html) as the PHP output, are compressed! The projects use the following rules (from HTML5 Boilerplate) in the .htaccess:

        <IfModule mod_deflate.c>
          # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/
          <IfModule mod_setenvif.c>
            <IfModule mod_headers.c>
              SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
              RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
            </IfModule>
          </IfModule>
          # HTML, TXT, CSS, JavaScript, JSON, XML, HTC:
          <IfModule filter_module>
            FilterDeclare   COMPRESS
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $text/html
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $text/css
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $text/plain
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $text/xml
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $text/x-component
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $application/javascript
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $application/json
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $application/xml
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $application/xhtml+xml
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $application/rss+xml
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $application/atom+xml
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $application/vnd.ms-fontobject
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $image/svg+xml
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $image/x-icon
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $application/x-font-ttf
            FilterProvider  COMPRESS DEFLATE resp=Content-Type $font/opentype
            FilterChain     COMPRESS
            FilterProtocol  COMPRESS DEFLATE change=yes;byteranges=no
          </IfModule>
        </IfModule>

    We have a testing machine that runs the same Apache, OS, and PHP versions. On that machine the compression works just fine on the PHP output. I've checked and compared the Apache and PHP config files; they're all the same as far as I can tell. I've tried several manners of outputting the content from PHP, using output buffering or just plain echoing the content. Same thing, no compression. Example response headers for PHP output:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Accept-Ranges: bytes
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: public
        Pragma: no-cache
        Vary: User-Agent
        Keep-Alive: timeout=5, max=98
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

    Example response headers for a CSS file:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT
        Vary: Accept-Encoding,User-Agent
        Content-Encoding: gzip
        Cache-Control: public
        Expires: Fri, 25 May 2012 23:30:59 GMT
        Content-Length: 714
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/css; charset=utf-8

    Does anyone have a clue, or has anyone experienced the same "problem"? Thanks!
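
    A minimal alternative worth testing, assuming mod_deflate is loaded: AddOutputFilterByType attaches the DEFLATE filter directly by response type and sidesteps the mod_filter FilterProvider chain, which is the part of the boilerplate most sensitive to Apache version differences:

        <IfModule mod_deflate.c>
            # compress by response Content-Type, bypassing FilterProvider
            AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
        </IfModule>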

  • nginx + PHP-FPM -> 404 on PHP pages - file not found

    - by Mahesh
    The error log shows:

        *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

    My server block:

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6
            server_name .site.com;
            root /var/www/site;
            error_page 404 /404.php;
            access_log /var/log/nginx/site.access.log;
            index index.html index.php;

            if ($http_host != "www.site.com") {
                rewrite ^ http://www.site.com$request_uri permanent;
            }

            location ~* \.php$ {
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 4k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            }

            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            location ~ /(libraries|setup/frames|setup/libs) {
                deny all;
                return 404;
            }

            location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ {
                alias /var/www/site/images/missing.gif;
                # i need to modify this to show only missing files. right now
                # it is showing missing for all the files.
            }

            location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
                access_log off;
                expires 20d;
            }

            location /user_uploads/ {
                location ~ .*\.(php)?$ {
                    deny all;
                }
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The php-fpm config is default and untouched. The problem is a little strange for me: error pages show "File not found" only when the missing URL is a .php file. Other missing files correctly fall through to the 404.php file:

        site.com/test     = calls 404.php
        site.com/test.php = File not found.

    I keep searching and making changes, but it hasn't solved the problem.
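
    A sketch of the usual fix for exactly this symptom, using only stock nginx directives: make nginx check that the script exists before handing the request to PHP-FPM, and let error_page intercept upstream error responses. Placement assumes the php location block above:

        location ~* \.php$ {
            try_files $uri =404;            # missing .php files now hit error_page 404
            fastcgi_intercept_errors on;    # let error_page handle FastCGI error responses
            fastcgi_pass 127.0.0.1:9000;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }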

  • UIScrollView with pages enabled and device rotation/orientation changes (MADNESS)

    - by jbrennan
    I'm having a hard time getting this right. I've got a UIScrollView with paging enabled. It is managed by a view controller (MainViewController), and each page is managed by a PageViewController, with its view added as a subview of the scrollView at the proper offset. Scrolling is left-right, for a standard-orientation iPhone app. It works well - basically exactly like the sample provided by Apple, and also like the Weather app provided with the iPhone. However, when I try to support other orientations, things don't work very well. I've supported every orientation in both MainViewController and PageViewController with this method:

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return YES;
        }

    However, when I rotate the device, my pages become quite skewed, and there are lots of drawing glitches, especially if only some of the pages have been loaded, then I rotate, then scroll more, etc. Very messy. I've told my views to support auto-resizing with

        theView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;

    but to no avail. It seems to just stretch and distort my views. In my MainViewController, I added this in an attempt to resize all my pages' views:

        - (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation {
            self.scrollView.contentSize = CGSizeMake(self.scrollView.frame.size.width * ([self.viewControllers count]), self.scrollView.frame.size.height);
            for (int i = 0; i < [self.viewControllers count]; i++) {
                PageViewController *controller = [self.viewControllers objectAtIndex:i];
                if ((NSNull *)controller == [NSNull null]) continue;
                NSLog(@"Changing frame: %d", i);
                CGRect frame = self.scrollView.frame;
                frame.origin.x = frame.size.width * i;
                frame.origin.y = 0;
                controller.view.frame = frame;
            }
        }

    But it didn't help too much (because I lazily load the views, so not all of them are necessarily loaded when this executes). Is there any way to solve this problem?

  • jQuery document.ready + ASP.NET ContentPlaceHolder causes Visual Studio IntelliSense problems

    - by Konstantin
    Hi! I want to execute JavaScript when the document is ready, without much syntax overhead. The idea is to use Site.Master and a ContentPlaceHolder:

        <script type="text/javascript">
            $(document).ready(function () {
                <asp:ContentPlaceHolder ID="OnReadyScript" runat="server" />
            });
        </script>

    and in inherited pages just write plain code:

        <asp:Content ID="Content3" ContentPlaceHolderID="OnReadyScript" runat="server">
            $("#Login").focus();
        </asp:Content>

    It works fine, but Visual Studio complains and gives warnings. The warning in the master page is "Expected expression" at the <asp:ContentPlaceHolder line. In inherited pages the warning is "Could not find 'OnReadyScript' in the current master page or pages". I tried using Writer.Write in the master page to render the script tag and wrap the code:

        <% Writer.Write(@"<script type=""text/javascript"">$(document).ready(function () {"); %>
        <asp:ContentPlaceHolder ID="OnReadyScrit" runat="server" />
        <% Writer.Write(@"});"); %>

    but page rendering terminates after the opening script tag is rendered. The HTML basically ends with

        <script type="text/javascript">

    How can I make it work?
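
    One hedged workaround: keep the placeholder outside any script tag so the designer can parse both files, and let each content page emit a complete script block. The IDs are the ones from the question; the cost is repeating the ready-handler boilerplate per page:

        <%-- Site.Master: placeholder sits outside any <script> element --%>
        <asp:ContentPlaceHolder ID="OnReadyScript" runat="server" />

        <%-- content page: emits a self-contained script block --%>
        <asp:Content ID="Content3" ContentPlaceHolderID="OnReadyScript" runat="server">
            <script type="text/javascript">
                $(document).ready(function () {
                    $("#Login").focus();
                });
            </script>
        </asp:Content>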

  • Design Pattern for error handling in ASP.NET 3.5 site

    - by Kevin
    I am relatively new to ASP.NET programming, and web programming in general. We have a site we recently ported from .NET 1.1 to 3.5. Currently we have two methods of error handling: either catching the error during data load on a page and displaying the formatted error in a label on the page, or redirecting to a generic error page. Both of these are somewhat annoying, as right now I'm trying to redesign how our errors are displayed. We are soon moving to master pages, and I'm wondering if there is a way to "build in" an error-handling control. What I mean by this is an ASP.NET user control I've designed that simply gets passed the error string returned from the server. If an error occurs, the page would not display its content and would instead display the error control. This gives us the ability to retain the current banner/navigation during an error (which we don't get with the generic error page), and it keeps me from having to add the control to every .aspx page we have (which I have to do with the label-per-page system). Does something like this make sense? Ultimately I just want the error control added in a single place, with all other pages having access to it directly. Is this something master pages help with? Thanks!

  • Showing an array of certain pages with WP_Query in WordPress

    - by Ragnar
    What I'm trying to achieve is showing only certain pages in a loop - three certain pages, not more, not less. I've tried many things, but I just can't complete it.

        <?php $special_tabs = new WP_Query(array('post_type' => 'page', 'post_in' => array(100,102,104))); ?>

    What this does, from what I understand, is show an array of pages and include those IDs as well.

        <?php $special_tabs = new WP_Query(array('page_id' => 100)); ?>

    And this shows only ONE certain page; it doesn't support showing an array of different IDs. I'm sure I am not the first person to have such specific needs, and I'm certain there is a relatively simple way to achieve this, but I just can't come up with a solution or find one. Can anybody please help me with this? Many thanks in advance!
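
    For what it's worth, a hedged sketch of the query as WordPress documents it: the parameter name is post__in (two underscores, not one as in the snippet above), and selecting pages by ID also requires post_type. The IDs are the ones from the question:

        <?php
        // select exactly these three pages, preserving the given order
        $special_tabs = new WP_Query( array(
            'post_type'      => 'page',
            'post__in'       => array( 100, 102, 104 ),
            'posts_per_page' => 3,
            'orderby'        => 'post__in',
        ) );
        while ( $special_tabs->have_posts() ) {
            $special_tabs->the_post();
            the_title();
        }
        wp_reset_postdata();
        ?>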

  • Helping Rails Newbies identify version-specific information on web pages

    - by corprew
    I am trying to help some people getting started with Rails identify which version the advice found on web pages corresponds to, and am seeking advice and/or guides on how to do it, so they don't have to rely on me and/or waste time trying outdated advice. Narrative: I am helping some people get up to speed on Rails development, and their stock response to running into problems is searching Google for advice. They're using 2.3.5 and thinking of moving to 3. The problem they're running into is that there's a lot of advice out there specific to older Rails versions (2.2, for example, being popular) that isn't identified as such. I can usually figure out when the pages are old pretty easily, but they can't (yet). It seems like random web page authors don't identify which version they're using when they're writing about the current version, and not all pages are dated. This seems to be a general problem that will get worse - current unadorned advice is usually 2.3.5 and older unadorned advice is 2.2.x at this point, but people are moving / will be moving to version 3 over the next while, and newbies will be stuck looking at a bunch of deprecated/incompatible 2.3.x advice without realizing which version it is. Any advice / pointers / telltales?

  • Separate HTML pages for each screen in jQuery Mobile

    - by vrs
    I am a newbie to jQuery Mobile. So far, whatever examples I've found contain only one HTML page for the whole application, with multiple div tags, where each page/screen is defined as a div tag with data-role="page", optionally with a header and footer. Based on user actions, we hide some divs (pages) and show only the expected page. This multi-page template also seems to be the standard design, according to some blogs. Are there other ways to design this? What I would like to have is multiple HTML pages, for example one for login, one for home, one for contact, etc. Otherwise it is difficult to understand/code/debug issues, especially for people from a Java background like me. So what I want is some kind of MVC design with jQuery Mobile, with each view/screen as a separate HTML file associated with one JS controller. Can we have multiple HTML pages in a jQuery Mobile app? If possible, how do we pass data and maintain session between them? Any samples are most welcome. Thanks in advance. Note: I also don't want server-side includes; my app contains 10 to 15 screens, and each page will make a web service call, fetch some data, and map it to the UI.
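
    A hedged sketch of the multi-file approach jQuery Mobile supports out of the box: each screen lives in its own HTML file, and ordinary links between them are fetched via Ajax and injected into the current DOM, so transitions and history keep working. File names are illustrative:

        <!-- login.html: one data-role="page" container per file -->
        <div data-role="page" id="login">
            <div data-role="header"><h1>Login</h1></div>
            <div data-role="content">
                <!-- loaded via Ajax into the running document -->
                <a href="home.html" data-role="button">Home</a>
            </div>
        </div>

    Since the first document's scripts stay loaded across Ajax navigations, data can be passed between screens through a shared JS object, URL parameters, or $.mobile.changePage options.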

  • Optimizing landing pages

    - by Oleg Shaldybin
    In my current project (Rails 2.3) we have a collection of 1.2 million keywords, and each of them is associated with a landing page, which is effectively a search results page for the given keyword. Each of those pages is pretty complicated, so it can take a long time to generate (up to 2 seconds with a moderate load, and even longer during traffic spikes, on current hardware). The problem is that 99.9% of visits to those pages are new visits (via search engines), so caching on the first visit doesn't help much: that visit will still be slow, and the next visit could be weeks later. I'd really like to make those pages faster, but I don't have too many ideas on how to do it. A couple of things that come to mind:

    - Build a cache for all keywords beforehand (with a very long TTL, a month or so). However, building and maintaining this cache can be a real pain, and the search results on the page might be outdated or even no longer accessible.
    - Given the volatile nature of this data, don't try to cache anything at all, and just try to scale out to keep up with traffic.

    I'd really appreciate any feedback on this problem.

  • Setting up Apache to serve HTTPS pages

    - by zac
    I am trying to set up a site using VMware Workstation, Ubuntu 11.10, and Apache2. The site works fine, but now the HTTPS pages are not showing up. For example, if I try to go to https://www.mysite.com/checkout I just see the message:

        Not Found
        The requested URL /checkout/ was not found on this server.

    I don't really know what I am doing and have tried a lot of things to get the SSL certificates set up right. A few things I have: in my httpd.conf I just have

        ServerName localhost

    In my ports.conf I have:

        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443 http
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443 http
        </IfModule>

    In /etc/apache2/sites-available/default-ssl:

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            .... (truncated)

    In sites-available/default I have:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            #DocumentRoot /home/magento/site/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
            #<Directory /home/magento/site/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key
            ServerAdmin webmaster@localhost
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
            #<Directory /home/magento/site/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I also have a file in sites-available set up for my site URL, www.mysite.com, so in /etc/apache2/sites-available/mysite.com:

        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot /home/magento/mysite.com
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /home/magento/mysite.com/ >
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /home/magento/logs/apache.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
        </VirtualHost>

        <VirtualHost *:443>
            ServerName mysite.com
            DocumentRoot /home/magento/mysite.com
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /home/magento/mysite.com/ >
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /home/magento/logs/apache.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
        </VirtualHost>

    Thanks for any help getting this set up! As is probably obvious from this post, I am pretty lost at this point.
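
    A hedged observation on the config as quoted: the *:443 vhost for mysite.com never enables SSL, so even when it is reached it cannot speak HTTPS, and name-based selection on port 443 needs NameVirtualHost. A sketch of the missing pieces, reusing the certificate paths from default-ssl above:

        # ports.conf
        NameVirtualHost *:443

        # sites-available/mysite.com
        <VirtualHost *:443>
            ServerName mysite.com
            DocumentRoot /home/magento/mysite.com
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key
        </VirtualHost>

    On Ubuntu, `a2enmod ssl` and `a2ensite` must also have been run for any of this to load.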

  • What to do with random pages after a 301 redirect?

    - by Alex
    Hello, I did a standard 301 redirect for a domain, but the original domain has about 300 pages that have some strength. It doesn't make sense to point them all back to the new home page, because the individual pages are about specific topics. Also, the new domain doesn't have the same pages, so where should the original pages redirect to? I would like them to rank for the same topics they used to, but without the original domain giving them strength, they will just stop ranking and die off. What should I do? Thanks, Alex

  • What is the name for landing pages that are one long page?

    - by blunders
    Really don't see them much anymore, but here's an example of what I mean. From comments: these are "high pressure" sales pages, designed to overload the user with information and sell them on the belief that what they're buying is what they need; they normally have a lot of testimonials, highlighted text, etc. The pages I'm talking about are not user friendly; they're aggressive sales pitches designed to target users who want to believe the web page they just landed on will solve their problems for an "affordable" price. Here's an example: www_landingpagecashmachine_com (remove the underscores, since I'm attempting to avoid linking to a site like that...). Bonus points if you're able to tell me the name of the guy/company that popularized these types of pages; I recall hearing about his company years ago, after he died in a crash while racing on a track with his Ferrari club on the west coast of the US. (Update: it appears Corey Rudl was the guy's name, and his company was called "The Internet Marketing Center." Even with that info, I've still been unable to find the name for these types of pages.)

  • Should I, and how do I, incorporate microdata into my ASP.NET website with 47 pages?

    - by Jason Weber
    I have an ASP.NET (VB) site with 47 pages. The problem is that it's in 10 different languages, although 98% of visitors just use English. I have 5 master pages. I've read Google Webmaster Tools, but I'm still confounded. I'm reading that microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my about.aspx page are:

        <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation.

    Am I doing this right, and if not, how would I do it? Should I use itemprop="url" or other rich snippets for every link on my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance to help improve my SEO and SERPs would be greatly appreciated!
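
    One hedged note on the markup as quoted: itemprop attributes only mean something inside an element that declares itemscope with an itemtype, so a wrapper is needed before parsers will pick the properties up at all. A sketch using schema.org's Organization type around the quoted sentence (the choice of "location" here is an assumption about the best-fitting property):

        <div itemscope itemtype="http://schema.org/Organization">
            <span itemprop="name">USS Vision Inc.</span> is headquartered in
            <span itemprop="location">Detroit, Michigan, USA</span>. We integrate machine vision
            <a href="http://www.ussvision.com/services/" itemprop="url">services</a>.
        </div>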

  • Should I Use WordPress Category Archives or Regular Pages When Considering SEO?

    - by user1151640
    I've built a WordPress site based on posts and category archives (no pages). The menu links to different category archive pages that have a description, an image, and the relevant posts. Now that almost everything is finished, I've started to worry and wonder whether that was a good decision from an SEO standpoint. Will Google consider category archives a bad choice for sitelinks compared to regular pages?

  • Should I disallow (robots.txt) archive/author pages whose links are already available on the front page?

    - by WPRookie82
    I am working on a simple WordPress blog where, when an article is published, it appears on ALL of these pages:

    - Homepage - headline (clickable) + 3-line summary
    - Parent category page - headline (clickable) + 3-line summary
    - Child category page - headline (clickable) + 3-line summary
    - Author page - headline (clickable)
    - sitemap.xml

    I've been told that I should add all author pages to my robots.txt, under Disallow, so that search engine bots do not spider /author/*, since all links on these pages are available elsewhere. Is this a good approach, or is rel=nofollow better, or should I not worry about this at all?
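
    For reference, a minimal sketch of the robots.txt rule being weighed here; whether it is wise is the question itself, this only shows the syntax:

        User-agent: *
        Disallow: /author/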

  • What is an easy way to see how often recently added pages are viewed in Google Analytics?

    - by cboettig
    Google Analytics makes it very easy to see the number of views of the most-viewed pages, but I cannot figure out how to see the number of views a particular page has received, or the number of views of recently added pages (e.g., blog posts). Is it possible to sort the pageviews list by the date the page was added? Can this be done without having to externally create a list of recent pages and use the Analytics API?

  • Can I benefit from links to pages on my site which have a `noindex` meta tag?

    - by Noam
    I'm trying to understand if/how I can benefit from people linking to pages on my site that have a noindex meta tag. Two actions I'm considering:

    - Removing the robots.txt disallow for these pages, to make sure inner links get the propagated link juice.
    - Adding a canonical tag pointing to the most similar page that doesn't have a noindex meta tag.

    Are these valid approaches that might help? Any others I should consider?
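
    For concreteness, a sketch of the two tags under discussion as they would sit in a page head (the canonical URL is illustrative). Note that "noindex, follow" explicitly permits crawlers to follow the page's links even though the page itself stays out of the index, which only works if robots.txt allows the page to be crawled at all:

        <meta name="robots" content="noindex, follow">
        <link rel="canonical" href="http://example.com/similar-indexable-page">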

  • How to batch remove spamming users and pages they created on MediaWiki?

    - by Problemania
    I'm trying to clean up a MediaWiki instance which has been subjected to spamming and vandalism for a period of time. The current status is that there is a large number of users who only created spam pages but typically did not alter legitimate pages. And there are fewer than 10 users which I know are legitimate and created a small number of legitimate pages. Abstractly, my idea for fixing this messy situation is to find the complete list of users that are not in that small set of legitimate users, use the RenameUser extension to rename them all to a single Spammer user, and use the Nuke extension to mass-delete all pages that user created. Any practical advice on how to proceed? Since there are hundreds of spammer users, how do I effectively rename them? It seems the Renameuser extension does not support automated batch renaming of users from a list or file.
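
    A hedged alternative that skips the renaming step entirely, using MediaWiki's bundled maintenance scripts: build a list of page titles created by the spam accounts (scriptable against the API), then feed it to deleteBatch.php. The list file name is illustrative, and the path assumes a standard install:

        # delete every title listed in spam_pages.txt, one per line,
        # logging the given reason for each deletion
        php maintenance/deleteBatch.php -r "spam cleanup" spam_pages.txt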

  • Does Google count backlinks from the homepage to inside pages?

    - by SharkTheDark
    I have a site with good PR, and my inside pages are getting an increase in PR even though no links point to them except from my homepage. Does that mean Google counts ALL links on my homepage, including links to inside pages? Does it calculate an inside page's PR from a link coming from my own domain's homepage, too? Also, if inside pages that got high PR from the homepage link back to the homepage, will that increase the homepage's PR further, since those links should count too? By the Google PR algorithm formula - by the calculations on Wikipedia and in the Stanford PR algorithm explanation (where it was originally developed) - it counts those links, and it also counts the post-increase backlink again, going around the circle a few times (it stops because of the damping factor d, 0.85), but it counts them. Does anyone know if this is correct?
