Search Results

Search found 11896 results on 476 pages for 'smart pro'.

Page 256/476

  • Restricting A Directory Through .htaccess

    - by Whitechapel
    I'm trying to put all of my FTP accounts into a folder at /public_html/ftp and password protect it so search bots can't crawl their private files. I'm also trying to redirect all site traffic from non-www to www. I keep getting 500 errors when accessing the site, and I also need to redirect www.vivalanation.com/ftp to www.vivalanation.com/ftp/, because /ftp without the trailing slash just errors out. Here is my .htaccess in the /public_html/ftp folder:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        AuthName "FTP Access"
        AuthType Basic
        AuthUserFile /home1/vivalst/.htpasswds/public_html/ftp/passwd
        Require valid-user

    I created a passwd file in /.htpasswds/public_html/ftp. And here is my basic .htaccess in the root of /public_html/:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
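    A minimal sketch of one common arrangement, assuming the AuthUserFile path above is correct and readable (a wrong path there is itself a frequent cause of 500 errors): keep only the authentication directives in the subdirectory and leave the non-www redirect to the root .htaccess, since repeating the rewrite with RewriteBase / inside /ftp is another common source of errors. mod_dir normally appends the trailing slash to directory requests on its own.

        # /public_html/ftp/.htaccess -- sketch, not a verified fix
        AuthType Basic
        AuthName "FTP Access"
        AuthUserFile /home1/vivalst/.htpasswds/public_html/ftp/passwd
        Require valid-user

        # Directory requests get the trailing slash appended (mod_dir default,
        # needs AllowOverride Indexes or All to be permitted here)
        DirectorySlash On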

    Read the article

  • Googlebot visit but no cache update - why?

    - by Mick
    I have made a new plain vanilla HTML website. I have been making regular modifications to it on an almost daily basis. The site is hosted by HostMonster, and as part of their service they offer "awstats" to let you know assorted details of visitors to the site. One thing is puzzling me. According to awstats, a "robot/spider" calling itself "Googlebot" visited my site as recently as today (28th June 2011), but when I find my site on Google (e.g. by searching for "full reserve banking") the cache is dated only the 5th June. I always thought that a visit from the Google robot was synonymous with a cache update. Am I wrong? Or have I accidentally put something in the site telling Google that nothing has been updated? EDIT: It seems a moderator has removed the name of my website, so there is now no chance that anyone could check whether I had made some error on my site :-( ... but anyway, in answer to paulmorriss' question, here is what awstats was telling me:

    Read the article

  • how did Google Analytics kill my site?

    - by user1813359
    Yesterday I created a Google Analytics profile for one of my sites and included the JS block in the layout template. What happened next was very strange. Within about 2 minutes, the site had become unreachable. I had been checking the AWStats page for the site when I thought to set up GA. After that had been done, I clicked on the link for 404 stats, which opens in a new tab. It churned for a long while and then showed a nearly blank page, similar to what Firefox shows when it chokes on a badly-formatted XML page, except there was no error message. But I was logged into the server and could see that that page has a 4.01 Transitional DTD. Strange! I tried viewing source but it just churned endlessly. I then tried "inspect element" and was able to see an error message having to do with some internal Firefox lib. Unfortunately, I neglected to copy that. :-( All further attempts to load anything on the site would time out. Firebug's Net panel showed no request being made. Chrome would time out. So, I deleted the GA profile, removed the JS block, and cleared the server cache. No joy. I then removed all Google cookies and disabled JS. Still nothing. No luck in any other browser. And now my client couldn't access the site. Terrific. I was able to use wget while logged into another server. The retrieved page was fine, and did not contain the GA JS block. However, the two servers are on the same network. (Perhaps a clue.) The server itself was fine. Ping and traceroute looked great. I could SSH in. I tailed the access log and tried a browser request. Nothing. But I forgot to quit, and a minute or so later I saw a request from someone else being logged. Later, I could see that requests had been served all day to some people. Now, 24 hours later, the site works once again, but is still unreachable by the client (who is in another city). So, does anyone have some insight into what's going on? Does this have something to do with Google's CDN? I don't know very much about how GA works, but what I'm seeing reminds me of DNS propagation issues. And why the initial XML error? And why the heck was the site just plain unreachable? What did Google do to my site?! Sorry for the length but I wanted to cover everything.

    Read the article

  • Customer escalated to a claim without sending the item back? [closed]

    - by kavoir.com
    She claims that she sent the item back, but after more than a month I still haven't received it. I don't know where I can find the tracking number, so I don't know if she really sent it or not. Now she has escalated the dispute to a claim and PayPal is asking me for information. The reason is "Not as described". So how do I respond to this? In the dispute, we agreed that I would issue a full refund as soon as I received the package she returned to us, but we never received the package she claims to have sent back. Now she has escalated this to a claim and PayPal is asking me for documents. How can I provide any documents that would prove she hasn't sent the package? Thanks!

    Read the article

  • What is the most secure environment for multiple CMS sites? [closed]

    - by Brian Gulino
    I wish to run about 50 Joomla or WordPress low-traffic websites on one server, or part of a server. Each website will be managed by its own naive owner, who will be able to access the Joomla or WordPress backend of that website. I am concerned about security and isolation, as my users will periodically get into trouble by not protecting their sites properly. Two alternatives I know of exist. The first: run one Linux system with multiple websites under Apache, follow current Joomla and WordPress security tips, and increase the isolation of the individual sites by using mpm-itk, which allows each website to run as its own user. The alternative is to run virtualization software such as the Xen hypervisor, so that each site has its own virtual Linux system. I lack the experience needed to make this decision and I am asking which path to take. Obviously, there may be other alternatives that I haven't considered.
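    A sketch of the mpm-itk approach mentioned above, assuming the module is installed; the site path, user and group names are placeholders:

        <VirtualHost *:80>
            ServerName site01.example.com
            DocumentRoot /var/www/site01
            # mpm-itk: requests for this vhost run as a dedicated unprivileged
            # user, so a compromised Joomla/WordPress install cannot write into
            # the other sites' files
            AssignUserID site01 site01
        </VirtualHost>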

    Read the article

  • Thumbnail preview rotator on mouse-over - what is it called?

    - by Gerben
    I've been searching for hours now but can't find a tutorial on this. I have these thumbnails on my homepage which are the first images of their corresponding posts. What I want is that when I mouse over a post on the homepage, the corresponding thumbnail should also show/rotate the other images of that particular post, a bit like a sneak-peek image rotator. Does anyone know where I can get a tutorial on this? What is this called? It seems like I'm searching for the wrong keywords on Google, as I can't find anything.
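    This effect is usually described as a hover (rollover) thumbnail rotator or image cycler, which may help as a search term. A minimal jQuery sketch of the idea; the class name and the data-frames attribute are made up (e.g. <div class="post-thumb" data-frames="1.jpg,2.jpg,3.jpg">):

        $('.post-thumb').hover(function () {
            var $img   = $(this).find('img');
            var frames = ($(this).data('frames') || '').split(',');
            if (frames.length < 2) { return; }
            var i = 0;
            // cycle through the post's other images every 800 ms while hovering
            this._rotator = setInterval(function () {
                i = (i + 1) % frames.length;
                $img.attr('src', frames[i]);
            }, 800);
        }, function () {
            clearInterval(this._rotator);                  // stop on mouse-out
            var frames = ($(this).data('frames') || '').split(',');
            if (frames[0]) { $(this).find('img').attr('src', frames[0]); }  // restore first image
        });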

    Read the article

  • After replacing all tables in an old website with divs, what other steps should I take?

    - by guisasso
    I designed a website a few years back; it ranks pretty well, the customer is happy, no problems there. I took one of the pages and manually replaced all the tables with divs, used structured data, and got the page to look exactly the same. I would like to know what other steps I should take to improve, or at least not hurt, this page's rank, or whether perhaps I should just not bother altogether. What are best practices here? The page is not live yet. Thanks.

    Read the article

  • How to run WordPress and Java web app running on Tomcat on the same server?

    - by Chantz
    I have to run a WordPress site served via Apache2 and a Java-based webapp running on Tomcat on the same server. When users come to example.com or example.com/public-pages they need to be served from WordPress, but when they come to example.com/private-pages they need to be served from Tomcat. I have asked this question on Server Fault, where they suggested using a different port, a different IP, or a sub-domain. I want to go for the different-port solution since it means I need to buy only one SSL certificate. I tried the reverse proxy method by having the following in my default-ssl.conf:

        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName localhost:443
            DocumentRoot /var/www
            <Directory /var/www>
                # For WordPress
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass /private-pages ajp://localhost:8009/
            ProxyPassReverse /private-pages ajp://localhost:8009/
            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
        </VirtualHost>

    As you may have noticed, I am using mod_proxy_ajp in Apache2 for this; Tomcat is listening on port 8009 and serving content. Now when I go to example.com/private-pages I see the content from Tomcat, but two issues are happening. 1. All my static resources are getting 404-ed, so none of my images, CSS or JS get loaded. I see that the browser is requesting the resources using the URL example.com/css/*. This clearly will not work, because it translates to example.com:80/css/* instead of example.com:8009/css/*, and there are no such resources in the WordPress directory. 2. If I go to example.com/private-pages/abcd I am somehow kicked to the WordPress site (which obviously displays a 404 page). I can understand why #1 is happening but have no clue why #2 is happening. Regardless, if there is another clean solution for resolving this, I would appreciate y'alls help.
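    A sketch of the usual fix for issue #1, assuming the webapp can be deployed under a /private-pages context path on Tomcat (e.g. as private-pages.war): keep the proxied prefix identical on both sides, so the relative asset and link URLs the webapp emits stay under /private-pages instead of falling through to the WordPress document root.

        # Sketch only -- requires the webapp to live at the /private-pages
        # context path on Tomcat rather than at the ROOT context
        ProxyPass        /private-pages ajp://localhost:8009/private-pages
        ProxyPassReverse /private-pages ajp://localhost:8009/private-pages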

    Read the article

  • Duplicate page content and the Google index

    - by Kit Sunde
    I have static pages with dynamically expanding content that Google is indexing. I also have deep links into virtually duplicate pages which pre-expand the relevant section of content. It seems like Google is ignoring all my specialized pages and not putting them in the index, even after going through Webmaster Tools, crawling them and submitting them to the index manually. I also use the Google API for integrating search on the site, and the deep-linked pages won't show up. Is there a good solution for this?

    Read the article

  • I think there is a problem with my url encoding

    - by TheGateKeeper
    I took someone's advice and started encoding my image sources, but Google doesn't seem to be able to decode them. I probably did something wrong, because basically Google is taking the full path as the image's name. See this page as an example. If you go to the topmost thumbnail and do "Save as", you will see the path is not being decoded. Should I stop encoding, or am I doing it wrong? Should I encode only the image name itself? Thanks!
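    A sketch of the "encode only the image name itself" approach (this is not the asker's code; the function name is made up): escape just the final path segment, so the slashes survive and crawlers see a conventional path with a safely escaped file name.

        // encodeImageSrc('/images/my photo.jpg') -> '/images/my%20photo.jpg'
        function encodeImageSrc(path) {
            var parts = path.split('/');
            parts[parts.length - 1] = encodeURIComponent(parts[parts.length - 1]);
            return parts.join('/');
        }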

    Read the article

  • htaccess url rewrite

    - by user761396
    I used to have these rewrite rules:

        RewriteRule ^([^/]+)\.htm$ index.php?c=$1 [NC]
        RewriteRule ^([^/]+)\.htm/([0-9.]+)$ index.php?c=$1&amt=$2 [NC]

    Now I have to change them to:

        RewriteRule ^1/([^/]+)\.htm$ index.php?c=$1 [NC]
        RewriteRule ^1/([^/]+)\.htm/([0-9.]+)$ index.php?c=$1&amt=$2 [NC]
        RewriteRule ^2/([^/]+)\.htm$ index2.php?c=$1 [NC]
        RewriteRule ^2/([^/]+)\.htm/([0-9.]+)$ index2.php?c=$1&amt=$2 [NC]

    (The difference is the added subdirectory.) My question is how to redirect my old URLs to the 1/ subdirectory. Thank you.
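    A sketch (untested) of a redirect that could sit above the new rules in the root .htaccess: it sends the old flat URLs, with or without the numeric suffix, to their new location under 1/ with a permanent redirect.

        RewriteEngine on
        RewriteRule ^([^/]+\.htm(/[0-9.]+)?)$ /1/$1 [R=301,L]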

    Read the article

  • Serve up syntactic XHTML5 using the text/html MIME type?

    - by cboettig
    I have a site currently written with HTML5 tags. I'd like to be able to parse the site as XML, with support for namespaces, etc., to facilitate programmatic extraction of data. Currently I have

        <!DOCTYPE html>
        <meta charset="utf-8">

    which I gather is equivalent in HTML5 to explicitly setting the content type as

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    for my current setup. In order to serve XML it sounds like the right thing to do is

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">

    Should I also change my Content-Type to

        <meta http-equiv="content-type" content="application/xhtml+xml; charset=iso-8859-1" />

    or is that not necessary? What is the advantage of having the content type be "application/xhtml+xml"? What is the disadvantage? (It sounds like it may break Internet Explorer's rendering of the site, but maybe that information is out of date now?) Many thanks!
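    One point worth keeping in mind: a <meta http-equiv> element can only echo the content type; the MIME type user agents actually act on is the Content-Type header the server sends. A sketch of how that is commonly switched in Apache, assuming the XML-parsable copies get their own file extension:

        # Sketch only: serve .xhtml files as XHTML, leave .html as text/html
        AddType application/xhtml+xml .xhtml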

    Read the article

  • What causes Google Analytics tags to work on some machines but not on others?

    - by Dallas
    The title of this question says it all. I am trying to update my code from the deprecated _getTracker() method to _createTracker(), but am experiencing inconsistent results. I have tried the traditional and asynchronous methods using a JSP include, but they all have the same result. My pageviews, and those of others in the office, all show up in Analytics. I have tried various test cases, but the client's visits are just not registering at all, while mine show up. The client has tried on multiple machines, and I have walked through it with them step by step, so I know it's not just user error. I know that JavaScript being turned off will cause the tags not to work, but I am wondering what else might cause the tags to not be recognized. I would appreciate any and all ideas.
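    For reference, a sketch of the two ga.js forms being compared; the account ID is a placeholder, and the traditional lines assume ga.js has already been loaded:

        // Traditional syntax, using the newer factory method
        var tracker = _gat._createTracker('UA-XXXXXXX-1');
        tracker._trackPageview();

        // Asynchronous equivalent (the _gaq queue may be filled before ga.js loads)
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
        _gaq.push(['_trackPageview']);

    Snippet differences rarely explain machine-to-machine inconsistency on their own; ad-blocking or privacy extensions and proxies that strip the ga.js request are more common reasons a particular visitor never registers.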

    Read the article

  • Is there a way to setup Clicktale tag in Google Tag Manager?

    - by Cubius
    Since GTM doesn't support the document.write() method, the standard ClickTale code doesn't work. Is there a workaround for this? A ClickTale employee sent me these instructions: replace the document.write JS line with the following: document.body.appendChild(externalScript); Example:

        <!-- ClickTale Bottom part -->
        <script type='text/javascript'>
        var externalScript = document.createElement('script');
        var scrSrc = document.location.protocol=='https:'?
            'https://clicktalecdn.sslcs.cdngc.net/':
            'http://cdn.clicktale.net/';
        scrSrc += 'www11/ptc/xxx-xxx-xxx-xxx.js';
        externalScript.src = scrSrc;
        externalScript.type = 'text/javascript';
        document.body.appendChild(externalScript);
        </script>
        <!-- ClickTale end of Bottom part -->

    I am not sure what to do with this. Has someone tried something like this?

    Read the article

  • Good site building for little kids [closed]

    - by guy mograbi
    I am teaching kids to write 3D games with Unity. Now I want to publish their games online along with some other stuff. I don't want to teach them HTML, CSS, etc. I don't mind buying the domain (after a bad experience with "TK" domains I concluded buying one is better), so all I need is hosting, ideally with a builder that has a nice interface. I couldn't find one that seems to be the right fit. Can you recommend one? Static HTML hosting will do, but I'd prefer PHP support and a DB, just in case we need to implement a login mechanism.

    Read the article

  • Mirroring of Apps across servers

    - by user1038814
    We wish to host multiple apps across multiple servers. What we are looking for (ideally) is an existing solution that will work. For example, normally we'd follow a route like this for failover:

    1. The app is installed on one server along with the MySQL database.
    2. The app is also installed on a second server, and rsync is used to mirror the files over to the second server and ensure consistency.
    3. MySQL is installed with a master-slave setup.
    4. We use a service such as DNS Made Easy which has DNS failover: if one server goes down, it automatically routes traffic to the backup server.

    We have done the above a few times and generally it's fine. The issue I have here is that the above is for one app. What I would like to look at is how we can manage this for multiple apps, and whether there is a layer (such as VMware) that has complete mirroring built in at the OS level. For example, how do web hosts currently do it when they ensure that more than one machine is running a bunch of hosted websites? If you were running hosting and you had 200 clients on a server, you would want the same clients across two or more servers with everything mirrored. Any advice would be much appreciated.
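    A sketch of the file-mirroring step from item 2 above, typically run from cron (the host name and paths are examples): push each app's files from the primary to the standby, deleting files that were removed on the primary.

        rsync -az --delete /var/www/app1/ standby.example.com:/var/www/app1/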

    Read the article

  • Why am I getting this message "Some important page is blocked by robots.txt"?

    - by Rounak
    My site's URL is www.hackinguniverse.org. For a while now, Google Webmaster Tools has been showing a message that says "Some important page is blocked by robots.txt". My robots.txt is:

        User-agent: Mediapartners-Google
        Disallow:

        User-agent: *
        Disallow: /search
        Allow: /

        Sitemap: http://www.hackinguniverse.org/feeds/posts/default?orderby=updated

    For other information: I host this website on Blogger. I have some other sites too where I had only included "/search" in Disallow, just like in this robots.txt file, but those sites are OK; no such message is showing for them. So why am I getting the message telling me that I have blocked some important page via robots.txt?

    Read the article

  • SEO - folder or file [closed]

    - by ErmSo
    Possible Duplicate: Should I use a file extension or not? I'm creating a website with a number of pricing options. Each price plan has its own page and there is also a comparison page. As far as SEO is concerned, which of the following is better, or does it not make a difference?

        Option one - folders:
        /pricing/plans
        /pricing/plans/free

        Option two - files:
        /pricing/plans.php
        /pricing/free-plan.php

    Read the article

  • Site returning 404 header to google, not sure why

    - by Damon
    A Drupal site that works fine for regular users returns a 404 Not Found error when I try to use the W3C validator on it; it is also not being indexed by Google at all (which is the main issue, but I suspect there is a connection). It is an https:// site with an .htaccess rule to redirect any http:// request to https://. I had it running in Google Webmaster Tools and thought it was fine, but it turns out I had not added the https domain. After adding the https domain, it is also returning the header as:

        HTTP/1.1 404 Not Found
        Date: Mon, 15 Oct 2012 19:37:43 GMT
        Server: Apache
        Expires: Sun, 19 Nov 1978 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0

    robots.txt just has:

        User-agent: *
        Crawl-delay: 10
        # Files
        Disallow: /cron.php

    How can I check what the issue is here?
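    A quick way to narrow this down is to compare the headers returned for different User-Agent strings; a sketch (the URL and UA strings are illustrative):

        # -I fetches headers only, -A sets the User-Agent
        curl -I https://www.example.org/ -A "Mozilla/5.0"
        curl -I https://www.example.org/ -A "Googlebot/2.1 (+http://www.google.com/bot.html)"

    If the bot-like request gets the 404 while the browser-like one gets 200, something in the stack (the .htaccess redirect rules, a Drupal module, or a proxy) is treating non-browser clients differently.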

    Read the article

  • best way to import 500MB csv file into mysql database?

    - by mars
    I have a 500 MB CSV file that needs to be imported into my MySQL database. I've made a PHP file where I can upload the CSV file; it analyses the fields and does the actual importing, but it can only handle small files, 5 MB max. That means about 100 uploads, and uploading is pretty slow. Is there another way? I have to repeat this process every month because the data in the file changes every month; it's about 12,000,000 lines :D
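    One common alternative is to skip the PHP upload step entirely and have MySQL read the file in one pass with LOAD DATA INFILE; a sketch, where the table name, file path and CSV options are assumptions about the data:

        LOAD DATA LOCAL INFILE '/path/to/monthly_export.csv'
        INTO TABLE monthly_data
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES;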

    Read the article

  • Google not showing any pages from my site in the index after three months [on hold]

    - by Alex Coisman
    Despite my having a sitemap and using Google Webmaster Tools, it has been over 3 months and my site has not been added to the Google index at all. Here's the site: www.famouslefthandedpeople.com As far as I know, I have done everything correctly. However, there must be something I am overlooking that is preventing Google from indexing the site. I do not have a robots.txt file, so allow/disallow isn't the issue. Although the content of the site is sparse, it is original and not duplicated internally or externally, so Panda/Penguin should not be a problem. I have reviewed the answers at Why isn't my website in Google search results? and I don't think they apply here. If it matters, I am using WordPress to create the site. What other factors should I be looking at in order to troubleshoot this?

    Read the article

  • 100% APC Fragmentation - Cacherouter & Pressflow install

    - by granttoth
    My APC cache shows 100% fragmentation. I'm not quite sure I understand what is going on here. For testing I jacked the available memory up to 512 MB. After a day the total available free memory shows 73%, but I still have 100% fragmentation. Would you gurus please look at my settings and offer your advice? I have read suggestions to disable apc.stat when possible, but when I do, the site crashes. I am using the Pressflow build of Drupal 6 with the cacherouter module installed. Edit: (added screenshot) http://i.imgur.com/DqZEX.png
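    The settings that usually matter for fragmentation are the ones below; a sketch of an apc.ini, with example values rather than a recommendation for this particular server:

        apc.shm_size = 512M   ; total shared memory available to the cache
        apc.ttl      = 0      ; 0 is commonly suggested: when the cache fills, APC
                              ; expunges it wholesale instead of evicting entries
                              ; piecemeal, which is what shows up as fragmentation
        apc.stat     = 1      ; keep checking file mtimes; required here, since
                              ; turning it off crashed the site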

    Read the article

  • How to create repeatable table with unique ID's using jQuery

    - by milbert
    I need to create a table structure that can be "copied" and populated with a new set of data. However, each table must have unique IDs for functions that must access them later. For example:

        <table class="main">
          <thead><tr><th class="header"></th></tr></thead>
          <tbody>
            <tr class="row"><td class="col0"></td><td class="col1"></td></tr>
          </tbody>
        </table>

    My current thought is to use jQuery to load the table from a separate HTML file into a variable. Using this saved table I could then create a function that copies it, traverses the table to add an ID to each section where information will need to be appended from a separate data source, and returns this new table. I am new to jQuery and feel like I may be missing an easier/better way to accomplish this. Any help on this subject would be appreciated.
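    A minimal sketch of the clone-and-stamp approach described above; the #tables container, the data fields and the counter are assumptions:

        var tableCount = 0;
        function addTable($template, data) {
            tableCount += 1;
            var $copy = $template.clone();
            // stamp unique IDs on every part that later functions must address
            $copy.attr('id', 'main-' + tableCount);
            $copy.find('.header').attr('id', 'header-' + tableCount).text(data.header);
            $copy.find('.col0').attr('id', 'col0-' + tableCount).text(data.col0);
            $copy.find('.col1').attr('id', 'col1-' + tableCount).text(data.col1);
            $('#tables').append($copy);
            return $copy;
        }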

    Read the article

  • Simple mod_rewrite Question

    - by user5358
    Hello, I want everything that looks like this: /1/2/3/4/5/[...] to redirect to this: /index.php?u=/1/2/3/4/5/[...] unless the requested string is a specific file. So anything that doesn't have a ".", I want to redirect to "index.php?u=[...]". I'll then parse the URI segments in PHP to determine what the user is requesting. I've been looking around for how to do this, but I have only a very rough understanding of regular expressions and have been unable to find an example of how to do it. Thanks!
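    A sketch (untested) of the usual .htaccess pattern for this: any request whose path contains no dot is handed to index.php with the original path in the "u" parameter, while requests for real files (which contain a dot) pass through untouched.

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !\.
        RewriteRule ^(.*)$ index.php?u=/$1 [L,QSA]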

    Read the article

  • Google Scholar Related Question

    - by Art
    I have just requested that Google Scholar collect papers from my personal web site: http://cs.uic.edu/~asmirnov/publications.html I was wondering if I did everything right:

    1. I submitted a request on the form provided on the Scholar web site.
    2. I published the papers in PDF on my web site.

    Is there anything else needed for Google to index my web site? Other questions are:

    1. The first paper's link points not just to the paper but to the whole issue.
    2. Are there any tags to be added to my web site? If so, which, and how do I add them?
    3. What are the exporting options available on the Google Scholar web site and how do they work?

    Thank you very much for being patient with me and my questions.
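    On question 2, Google Scholar's inclusion guidelines describe Highwire-Press-style meta tags that can be added to each individual paper's page (Scholar indexes the paper pages and PDFs, not just the list page); a sketch, where every value is a placeholder:

        <meta name="citation_title" content="Example Paper Title">
        <meta name="citation_author" content="Lastname, Firstname">
        <meta name="citation_publication_date" content="2011">
        <meta name="citation_pdf_url" content="http://example.org/papers/example.pdf">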

    Read the article
