Search Results

Search found 9717 results on 389 pages for 'pro'.


  • Tracking 502 Bad Gateway errors

    - by dasickle
    I moved my WordPress site to WP Engine and now I constantly get 502 errors. I spoke with support and they said it's because I have a lot of DB queries. I ran some tests: my front page only has 95 queries and a page size of about 500 KB, and most inner pages are around 60 queries. All queries are very short. Some people tell me it's common with WP Engine because they run nginx. Why do I keep getting these errors, and is there a way to track how many of them happen on a daily basis? P.S. The WP Engine log is empty, so I can't see the 502s there.
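
    A minimal sketch of one way to get a daily count, assuming you can download the raw nginx access logs from the host (the log file name here is hypothetical): parse each line for the date and status code and tally the 502s.

        <?php
        // count_502.php - tally 502 responses per day from an nginx
        // access log in combined format (file name is a placeholder).
        $counts = array();
        $handle = fopen('access.log', 'r');
        while (($line = fgets($handle)) !== false) {
            // e.g. 1.2.3.4 - - [15/Aug/2012:10:00:00 +0000] "GET / HTTP/1.1" 502 ...
            if (preg_match('#\[(\d{2}/\w{3}/\d{4}):[^\]]*\] "[^"]*" (\d{3})#', $line, $m)
                && $m[2] === '502') {
                $day = $m[1];
                $counts[$day] = isset($counts[$day]) ? $counts[$day] + 1 : 1;
            }
        }
        fclose($handle);
        foreach ($counts as $day => $n) {
            echo "$day: $n\n";
        }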

    Read the article

  • Changed Plesk server hostname: what DNS settings get modified?

    - by NRGdallas
    We recently changed our Plesk server's main URL from siteold.com to sitenew.com. Many websites had their NS records set to ns1.siteold.com. Does Plesk automatically update that to ns1.sitenew.com, or should I change the GoDaddy settings? Attempting to change them there gives "Nameserver Not Registered"; is this simply a propagation delay? Lastly, when adding a new domain to Plesk, does one simply need to point that site's nameserver in GoDaddy to ns1.sitenew.com, or to ns1.newdomain.com? (Does Plesk have a centralized nameserver, or does each site acquire its own?)
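
    One way to see what the registry and the Plesk box are actually serving, assuming dig is available (the hostnames are the placeholders from the question):

        # What nameservers does the registry currently report for the domain?
        dig NS sitenew.com +short
        # Does the Plesk server's nameserver answer authoritatively for a hosted zone?
        dig @ns1.sitenew.com somehosteddomain.com +short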

    Read the article

  • Schema.org for Product Reviews

    - by Lynda
    I have product reviews on a site and I am adding schema.org markup to them. Here is the code I am using:

        <div class="blockquote-wrap">
          <blockquote itemprop="review" itemscope itemtype="http://schema.org/Review">
            <span itemprop="reviewBody">Text of the review itself.</span>
            <cite><span itemprop="author">Author Name</span>, Location of Author</cite>
          </blockquote>
        </div>

    This is all the reviews are. When I test the page using Google's Structured Data Testing Tool I receive this error: "Error: Incomplete microdata with schema.org." My question is: what required data is missing? I don't see which properties are required on the Schema.org page for reviews.
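
    For reference, a sketch of the same block with the two properties Google's tool most often flags on reviews, itemReviewed and reviewRating; the property choice is an educated guess here, and the values are placeholders:

        <blockquote itemprop="review" itemscope itemtype="http://schema.org/Review">
          <span itemprop="itemReviewed" itemscope itemtype="http://schema.org/Product">
            <meta itemprop="name" content="Product Name">
          </span>
          <span itemprop="reviewBody">Text of the review itself.</span>
          <cite><span itemprop="author">Author Name</span>, Location of Author</cite>
          <span itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
            <meta itemprop="ratingValue" content="5">
          </span>
        </blockquote>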

    Read the article

  • Massive 404 attack with non-existent URLs: how can I prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never existed. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin), plus probes for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless waste of resources?

    EDIT: I've never used the question to thank for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following:

    - a honeypot;
    - a script that listens for suspect URLs on the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header;
    - a script that rewards legitimate users, on the same custom 404 page, in case they end up clicking one of those URLs.

    In less than 24 hours I was able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
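
    A minimal sketch of the kind of 404 logger described in the edit, assuming a custom 404 page written in PHP; the pattern list and email address are placeholders:

        <?php
        // Custom 404 handler: return a real 404 and send an email when
        // the missing URL looks like a probe.
        header("HTTP/1.1 404 Not Found");
        $uri = $_SERVER['REQUEST_URI'];
        $suspect = array('wp-admin', 'wp-login', 'viewtopic.php', 'cpanel');
        foreach ($suspect as $needle) {
            if (stripos($uri, $needle) !== false) {
                $msg = "404 probe: $uri\n"
                     . "IP: " . $_SERVER['REMOTE_ADDR'] . "\n"
                     . "UA: " . (isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-');
                mail('you@example.com', 'Suspect 404', $msg);
                break;
            }
        }
        ?>
        <h1>Page not found</h1>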

    Read the article

  • Is the content on the front page considered a duplicate of the post?

    - by yibe
    I asked this same question on Stack Overflow, but it was closed as off-topic, so I am posting it here. In WordPress blogs, the front page displays many posts, in full or as excerpts. When the link to a post is clicked, the content is rendered with another template file (single.php). Is the content displayed on the front page considered a duplicate of the post pages? Does it harm SEO in any way?
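
    One common mitigation, sketched here assuming a standard theme loop: show excerpts on the front page so the listing doesn't repeat the full post body verbatim.

        <?php
        // In the theme's front-page loop (e.g. index.php): print the
        // title and an excerpt instead of the full content, so the
        // listing differs from what single.php renders.
        if (have_posts()) {
            while (have_posts()) {
                the_post();
                the_title('<h2>', '</h2>');
                the_excerpt(); // instead of the_content()
            }
        }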

    Read the article

  • Find the last match of a string automatically

    - by jowan
    I want entry IDs that are 7 digits long. When the first entry is created it gets ID 0000001, and I want every new entry to get the previous ID plus one. I have a bunch of code and am still confused about how to implement it:

        $str_rep  = "0000123";
        $str_rep2 = "0005123"; // my ID strings can also look like this
        $str_rep3 = "0009123"; // or like this
        // I created an array to do it automatically, but it did not work:
        $match_number = array(1,2,3,4,5,6,7,8,9);
        // so I do it manually:
        $get_str  = strstr($str_rep, "1");
        $get_str2 = strstr($str_rep2, "5");
        $get_str3 = strstr($str_rep3, "9");
        // Result
        echo $get_str . "<br>";
        echo $get_str2 . "<br>";
        echo $get_str3 . "<br>";

    Thanks in advance
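
    For what the question seems to be after, a sketch that avoids string matching entirely: cast the last ID to an integer, add one, and pad back to 7 digits (the starting value is taken from the question):

        <?php
        // Next 7-digit ID from the last stored one.
        $last = "0000123";
        $next = str_pad((string)((int)$last + 1), 7, "0", STR_PAD_LEFT);
        echo $next; // prints 0000124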

    Read the article

  • Why is MediaWiki auto-linking the word “files”?

    - by dfrankow
    Our MediaWiki installation is auto-linking the word "files", so

        Here are some files: a, b, c

    results in the word "files" being linked to http://ourhost/mediawiki/files. Why is that happening, and how do I make it stop? I can use the nowiki tag, but perhaps it does not surprise you that the word "files" appears often, and it is aggravating to use that tag all the time. Here is some info on our MediaWiki installation from Special:Version (yes, it's old).

    Installed software:

        MediaWiki  1.16.5
        PHP        5.2.14-pl0-gentoo (apache2handler)
        MySQL      5.0.84

    Installed extensions (parser hooks):

        GoogleDocs4MW (Version 1.1) - adds a tag for displaying Google Docs spreadsheets (Jack Phoenix)
        SyntaxHighlight (Version 1.0.8.6) - provides syntax highlighting using GeSHi (Brion Vibber, Tim Starling, Rob Church and Niklas Laxström)
        WebServiceSequenceDiagram (Version 1.0) - renders inline sequence diagrams using websequencediagrams.com (Eddie Olsson)

    Other: the MWSearch plugin (Kate Turner and Brion Vibber). Extension functions: efLucenePrefixSetup. Parser extension tags: gallery, googlespreadsheet, html, nowiki, pre, sequencediagram, source and syntaxhighlight. Parser function hooks: anchorencode, basepagename, basepagenamee, defaultsort, displaytitle, filepath, formatdate, formatnum, fullpagename, fullpagenamee, fullurl, fullurle, gender, grammar, int, language, lc, lcfirst, localurl, localurle, namespace, namespacee, ns, nse, numberingroup, numberofactiveusers, numberofadmins, numberofarticles, numberofedits, numberoffiles, numberofpages, numberofusers, numberofviews, padleft, padright, pagename, pagenamee, pagesincategory, pagesize, plural, protectionlevel, special, subjectpagename, subjectpagenamee, subjectspace, subjectspacee, subpagename, subpagenamee, tag, talkpagename, talkpagenamee, talkspace, talkspacee, uc, ucfirst and urlencode.
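
    For reference, the nowiki workaround mentioned above looks like this in wikitext:

        Here are some <nowiki>files</nowiki>: a, b, c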

    Read the article

  • What is the average page size for a single-page application (SPA)? [on hold]

    - by Emmanuel Istace
    I'm developing a single-page application with a lot of CSS and JavaScript. For now the page is 1.3 MB, composed of 5 sections. Here are the rounded stats:

        Document:   10 KB
        Style:      60 KB
        Images:     450 KB (already compressed; includes a big gallery of thumbnails)
        JavaScript: 700 KB - 600 KB of "framework" (jQuery, jQuery UI, Bootstrap, Modernizr, Waypoints, ...) plus 100 KB of custom JS
        Fonts:      125 KB

    And the site is not finished yet (it will include the Google Maps API, and some others...). My questions are:

    - Do you have any statistics about the average weight of an SPA?
    - As this is the whole website, do you think it's acceptable?
    - Is lazy loading (for images) a solution, and what would the impact be for SEO?
    - Is the "200 KB rule" of Google still relevant?
    - Do you know good tools to detect which JavaScript code is not used during the execution of a page, to help optimize those 700 KB of framework JS?
    - Can a caching strategy be an answer? (See the sketch below.)
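
    On the last question, a sketch of a simple client-side caching policy, assuming Apache with mod_expires enabled and .htaccess overrides allowed; the lifetimes are illustrative, not recommendations:

        <IfModule mod_expires.c>
          ExpiresActive On
          ExpiresByType image/jpeg             "access plus 1 month"
          ExpiresByType image/png              "access plus 1 month"
          ExpiresByType text/css               "access plus 1 week"
          ExpiresByType application/javascript "access plus 1 week"
        </IfModule>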

    Read the article

  • Content-light website and Google: telling Google it's a listings site (as opposed to a shop, reviews, or restaurants)

    - by Doug Firr
    I have a listings-style website. Due to the nature of listings, the site is content-light: each page is typically less than 50 words, but there are many pages. The site in question has had a ton of media coverage, and so has some great inbound links from places like Wired, Fast Company, the Canadian Broadcasting Corporation, and many other bloggers, media websites, and recycling-niche authors (it's a recycling site). But Google really ignores it: traffic from search is very low, less than 5% of all traffic. I know that using markup you can tell Google whether your site is a restaurant, article, review, shop, local business, or one of a few other categories (https://www.google.com/webmasters/markup-helper/u/0/). Is there a way to tell Google that my site is a listings site? I suspect, but do not know for sure, that part of the problem is that Google simply does not know what my site is. It's a crowdmap where people post curb alerts. The information is useful to people, but it is presented in a short, concise way: a pin on a map, a picture, and a short description. Adding anything further is not necessary for the site's intended purpose. First question: how best to tell the search engines what my site is (listings, and not some spammy website)? And any recommendations for improving our site's search presence? You can take a look here if interested: http://tinyurl.com/lxg4hn7
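
    There is no "listings" category in Google's markup helper, but schema.org does have a generic ItemList type; a sketch of what that might look like in microdata (the type choice is an assumption, and the content is placeholder text):

        <div itemscope itemtype="http://schema.org/ItemList">
          <h2 itemprop="name">Curb alerts near you</h2>
          <ul>
            <li itemprop="itemListElement">Old couch - Main St.</li>
            <li itemprop="itemListElement">Bike frame - 5th Ave.</li>
          </ul>
        </div>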

    Read the article

  • Chinese bots in my forum

    - by TdotThomas
    I have a small community forum that doesn't really get posts or any real traffic. The only thing that happens regularly is bots with Chinese IPs signing up with gibberish usernames. Most bots don't make it past the CAPTCHA, but some do. I try to stay on top of this by banning IPs and IP ranges, but it doesn't really seem to help. The bots never post anything, so what are they doing? Should I be worried? Should I keep banning IPs, or is it futile?
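
    If the banning continues, a sketch of doing it at the web-server level instead of inside the forum software, assuming Apache 2.2-style access control in .htaccess; the range shown is an arbitrary example, not a recommendation:

        Order Allow,Deny
        Allow from all
        Deny from 220.181.0.0/16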

    Read the article

  • Google search preview shows content not on the website

    - by SDG
    My website's Google search entry is messed up: in the preview in the Google search results, I see things like cracks, serials, and random IP addresses. I scanned all the files and my computer for viruses and malware and could not find anything. I also tried downloading and re-uploading all the content from a friend's computer, and still that content persists. I also scanned the source code of all files, but the content does not appear in any file. Google also does not detect any malware on the website, as seen in their Webmaster Tools. I have searched using the same keywords in other search engines such as Bing and Yahoo, and the search results there are fine. I am quite clueless as to what the cause could be and what a possible remedy would be.
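
    A quick check worth doing here, on the hunch that the server is cloaking (serving different content only when the visitor looks like Googlebot): fetch the page twice and diff the output. The domain is a placeholder.

        # Fetch as a normal client, then as Googlebot, and compare.
        curl -s http://example.com/ > normal.html
        curl -s -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://example.com/ > asbot.html
        diff normal.html asbot.html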

    Read the article

  • How an offline main domain can influence traffic on an active subdomain

    - by danie7L T
    The website design is for a company active in 3 different areas. As an example, let's use the following structure:

        www.example.com
        sub1.example.com
        sub2.example.com
        sub3.example.com

    sub2.example.com and sub3.example.com are ready to go live, but www.example.com really isn't and sends a 503 HTTP error code. I would like to know if this situation will affect the traffic and ranking of the subdomains that are ready to go live. Is it preferable to wait and go live together with the main domain? Or is there nothing to "fear", and one doesn't affect the other? Thank you
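
    Whatever the ranking effect, a 503 is the right signal for a temporarily unavailable page; a sketch of serving it explicitly from PHP, with a Retry-After hint so crawlers treat the outage as temporary (the delay value is an example):

        <?php
        header('HTTP/1.1 503 Service Unavailable');
        header('Retry-After: 86400'); // seconds, i.e. "try again tomorrow"
        echo 'This site is launching soon.';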

    Read the article

  • Google Analytics: visitor path to a specific destination, setup and monitoring?

    - by Joshc
    I have a website on which I am using Google Analytics to track visitors and our banner campaigns. We're promoting 'Purchase Ticket' buttons on our website, which push visitors to a third-party website that sells and distributes our tickets. The URL on all the 'Purchase Ticket' buttons is the same throughout the site, for example: http://ticketmaestro.com/events/my-event-2012. In the Analytics control panel, is it possible to set something up where I create a path-to-destination using the above example URL? And once that is set up, I want to be able to monitor the path visitors take from when they reach the site to when they click the 'Purchase Ticket' button. Graphs would show the start destination, the path to the final destination, and the final destination (http://ticketmaestro.com/events/my-event-2012). Any help, suggestions, or terminology would be great. Thanks, Josh
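
    Since the destination is on a third-party domain, one common approach is to track the button click itself as an event and use that event as the goal; a sketch assuming the classic ga.js snippet is installed (category, action, and label names are placeholders):

        <a href="http://ticketmaestro.com/events/my-event-2012"
           onclick="_gaq.push(['_trackEvent', 'Tickets', 'PurchaseClick', 'my-event-2012']);">
          Purchase Ticket
        </a>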

    Read the article

  • Does Google count backlinks from the homepage to inside pages?

    - by SharkTheDark
    I have a site with good PR, and my inside pages are gaining PR even though no external links point to them, only links from my homepage. Does that mean that Google counts ALL links on my homepage, including links to inside pages? Does it calculate an inside page's PR from links coming from my own domain, i.e. my homepage, too? Also, if inside pages that got high PR from the homepage link back to the homepage, will that increase the homepage's PR further, since those links should count too? By the Google PR algorithm formula, by the calculations on Wikipedia, and by the Stanford PageRank explanation (Stanford being where the algorithm was originally developed), it counts those links, and it also counts the increased backlink again, going around the circle a few times (it converges because of the d (0.85) damping factor). Does anyone know if this is correct?
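
    A toy sketch of the circular calculation described above, iterating the classic formula PR(p) = (1 - d) + d * sum over linking pages q of PR(q)/outlinks(q), on a three-page site where the homepage links to two inside pages and they link back:

        <?php
        // Toy PageRank: home links to a and b; a and b link back to home.
        $d = 0.85;
        $links = array(
            'home' => array('a', 'b'),
            'a'    => array('home'),
            'b'    => array('home'),
        );
        $pr = array('home' => 1.0, 'a' => 1.0, 'b' => 1.0);
        for ($i = 0; $i < 20; $i++) {
            $next = array();
            foreach ($pr as $page => $unused) {
                $sum = 0.0;
                foreach ($links as $src => $targets) {
                    if (in_array($page, $targets)) {
                        $sum += $pr[$src] / count($targets);
                    }
                }
                $next[$page] = (1 - $d) + $d * $sum;
            }
            $pr = $next;
        }
        print_r($pr); // the internal links do feed PR back to the homepage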

    Read the article

  • Website creation preparation [closed]

    - by Loki
    I am in the pre-coding phase of creating a website. I know that it will be account-based (users have to register/log in to use the features). I also know that the server will have to perform certain timer-based operations; that is to say, users will have events that trigger at a point chosen by the user and do something. I am searching for a good choice of server-side technology and was wondering what my options are and what the best choice is. I would prefer open technology, and something that avoids Java and .NET. My first thought is PHP + PostgreSQL for the server side and HTML+CSS+JS for clients, but I am still looking at my options.
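
    For the timer-based part, the usual pattern in a PHP + PostgreSQL stack is a cron job that fires due events; a minimal sketch, assuming a hypothetical events table with a fires_at timestamp and a fired flag (all paths, credentials, and schema are placeholders):

        # crontab: run the worker every minute
        * * * * * php /var/www/example/cron/process_events.php

        <?php
        // process_events.php - fire any events whose trigger time has passed.
        $db = new PDO('pgsql:host=localhost;dbname=app', 'user', 'pass');
        $due = $db->query("SELECT id FROM events WHERE fires_at <= now() AND NOT fired");
        foreach ($due as $row) {
            // ... perform the user-defined action for this event ...
            $db->exec("UPDATE events SET fired = true WHERE id = " . (int)$row['id']);
        }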

    Read the article

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article, "In Subfolder & File Names, Use Dashes, Not Underscores":

        Good: http://www.domain.com/sub-folder/file-name.htm
        Bad:  http://www.domain.com/sub_folder/file_name.htm

    In my URLs, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules are currently defined such that everything after the dash is the province/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/. I thought of placing the province/state after another "/"; however, in some cases the province/state name may not exist, leaving ~/Notre_Dame_de_Grace/, so the first URL section after the domain name contains the geo location {city, city_name-state}. I am now revisiting this, and wondering if this rule set should change, and if so, what is the recommended way of implementing it?

    UPDATE: After reviewing this video, I see that I should be using dashes rather than underscores. However, since I still want to have my geo locations in the first URL section, is there anything wrong with using a double-dash separator, i.e. /city-name--state/?
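
    For what it's worth, the double-dash scheme is easy to parse server-side; a sketch with Apache mod_rewrite (the script name and parameters are hypothetical):

        RewriteEngine On
        # /notre-dame-de-grace--qc/some-term -> search.php?city=...&state=...&term=...
        RewriteRule ^([a-z-]+)--([a-z]{2})/(.*)$ /search.php?city=$1&state=$2&term=$3 [L,QSA]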

    Read the article

  • Should I add rel="nofollow" to internal links whose target pages already have meta noindex?

    - by CamSpy
    Let's say I have a products page listing products, and the page has pagination. I would like the first page to have all the search-engine ranking weight, so I decided to put meta noindex on the rest of the paginated pages (from page 2 to N). My common sense says that if I don't want those pages to get indexed, I also shouldn't pass link/PR juice to them. (Is that smart?) What happens if I set rel="nofollow" on all pagination links from page 2 to page N?
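
    For reference, the usual pattern keeps the links followable while dropping the pages from the index, i.e. on pages 2..N (the URLs below are placeholders):

        <!-- stay out of the index but keep passing link value -->
        <meta name="robots" content="noindex, follow">
        <!-- optional pagination hints Google documents for paginated series -->
        <link rel="prev" href="/products?page=1">
        <link rel="next" href="/products?page=3">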

    Read the article

  • Rails backend: debugging [closed]

    - by banditKing
    I have a Rails API app using RABL. I'm trying to build a photo-sharing app, and I'm getting status 500 codes when my client communicates with the server. The client is an iOS app I wrote. Where should I begin the debugging process, and what are the best tools for debugging Rails API backend apps? I'm new to server development, so I'm trying to learn the tricks of the trade. Any help would be appreciated. Thanks
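
    A first step that costs nothing, assuming a standard Rails app layout: run the server in development mode and watch the log while the iOS client makes its request; every 500 leaves a stack trace there.

        # from the app root
        tail -f log/development.log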

    Read the article

  • Google indexed my home page incorrectly: How can I fix it?

    - by louis_coetzee
    I finished my website and launched it. I think I had a problem with my robots.txt, so on 03/08/2012 I changed it to look like this:

        # Allows all bots
        Sitemap: http://www.mysite.co.za/sitemap.xml

        User-agent: *
        Disallow: /dashboard/

    When I Google my domain.co.za, I get this back:

        Home
        A description for this result is not available because of this site's robots.txt – learn more.
        You've visited this page 3 times. Last visit: 2012/08/15

    Now that I have fixed this, and added a 301 redirect to send mysite.co.za to www.mysite.co.za, I would love it if Googlebot would come for a visit. Is there anything I can do to get this fixed?
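
    For reference, a sketch of the non-www to www 301 in .htaccess form, assuming Apache with mod_rewrite (the domain is the asker's placeholder):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^mysite\.co\.za$ [NC]
        RewriteRule ^(.*)$ http://www.mysite.co.za/$1 [R=301,L]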

    Read the article

  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files, and the .htaccess file has had its equivalent code inserted. I have, on many occasions, removed the malicious code, replaced files, changed the FTP password in my FTP client (CoreFTP), and changed the connection method to FTPS for more secure handling of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, which found no malware that might have been stealing my stored FTP passwords. I used Sucuri SiteCheck to check my website, and it says my website is clean of malware, which is bizarre because I just attempted to click one of the links on the site a minute ago and it sent me to another one of these random stats.php sites, even though it appears I have gotten rid of the #c3284d# code again (it will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here, and the problem still persists. Currently, when I click on a link in the site's navigation menu in Google Chrome, I get Google's malware warning page:

        Warning: Something's Not Right Here!
        oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site.
        Google has found that malicious software may be installed onto your computer if you proceed. If you've visited this site in the past or you trust this site, it's possible that it has just recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go somewhere else?
        We have already notified oxsanasiberians.com that we found malware on the site. For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing diagnostic page.

    I'm wondering if it is possible that the Google Chrome browser I am using has itself been hacked? Does anyone else get redirected when clicking links on the website?
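
    One way to enumerate every infected file at once, assuming shell access and that this infection follows the usual pattern of an eval(base64_decode(...)) blob alongside the #c3284d# marker (the docroot path is a placeholder):

        # list files containing the visible marker
        grep -rl "#c3284d#" /path/to/docroot
        # list PHP files containing the usual obfuscated payload
        grep -rl "eval(base64_decode" /path/to/docroot --include="*.php"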

    Read the article

  • Redirecting from Blogger to a custom domain [closed]

    - by mdhar9e
    Possible Duplicate: How to have a blogspot blog in my domain? I have a Blogger blog at www.myclipta.blogspot.com, which I update regularly. I bought a custom domain, myclipta.com, and now I want to redirect from the Blogger domain to my custom domain. I don't know how to do this. I have heard that I need to set DNS nameservers and a CNAME, but I am not able to do this. Can anyone guide me please?
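
    For orientation, the DNS side of Blogger's custom-domain setup is a single CNAME pointing www at Google's server (ghs.google.com was Blogger's documented target at the time); in zone-file notation:

        ; at your domain's DNS provider
        www.myclipta.com.   IN   CNAME   ghs.google.com.

    The redirect from myclipta.blogspot.com itself is then switched on from Blogger's publishing settings rather than via DNS.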

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it all as a good load of great advice, the information is confusing me.

    Problem: I already had "prettified" URLs on one of my sites. I had taken out the query strings and rewritten the URLs, and the links were short enough for me, but had a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to see a clue of the page content in the URL.

    Solution: With this in mind, I am trying it on a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).

    Problems after the solution: As I see it, the URL of those blog articles is now very descriptive for sure, but it is also impossible to remember. So this brings me back to the issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer http://example.com/my-news/1/ to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.

    Open questions: I still have some open questions and don't seem to be able to find an answer, because answers tend to contradict one another. 1) How many characters should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure. 2) Is this really good for SEO? One of those blog articles I redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems consistent with what I think (URL slug and SEO - structure), but see this other question with the opposite opinion. 3) To make it concrete with a specific example, would this URL risk being penalized? Is it acceptable? Is it too long? Stack Overflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just wanted to help my users without running into Google's algorithms.
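
    A sketch of a compromise between the two extremes: keep the ID for uniqueness but append a slug trimmed at a word boundary (the 60-character cap is an arbitrary choice, not a rule):

        <?php
        // Build "/my-news/1/terribly-boring-title" style URLs: the ID keeps
        // the link short and unique; the slug adds the human-readable clue.
        function make_slug($title, $max = 60) {
            $slug = strtolower(trim($title));
            $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);
            $slug = trim($slug, '-');
            if (strlen($slug) > $max) {
                $slug = substr($slug, 0, $max);
                $slug = preg_replace('/-[^-]*$/', '', $slug); // cut at a word boundary
            }
            return $slug;
        }
        echo '/my-news/1/' . make_slug('Terribly boring and long URL that replaces the number I liked so much');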

    Read the article

  • Why does Bing not show my adCenter ads even though there is enough space?

    - by gamma
    I created several campaigns using Microsoft adCenter, targeting the whole world at any time, with 2-3 ad texts per keyword group. The bids I placed are sometimes quite high, so the ads should get displayed. But when I search for my keywords on Bing, nothing is displayed, even though there is plenty of space for it: Bing mostly shows 2-3 ads, and the ones on the right side rather seldom. I'd like to know how I can get my ads displayed without increasing the bids any further.

    Read the article

  • Domain name made of keywords redirecting to main website's page

    - by ivanivan
    Let's say I have a website called books.com where I sell books. I've read in "Redirecting different domains to your main site" that it's not a bad idea to register another domain that 301-redirects to my website, like booksforsale.com. Now, say I want to target only a specific category within my website, like books.com/sci-fi/, so I register sci-fi-books.com and 301-redirect it there. Would this improve my search rankings? Thanks.
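
    Whatever the ranking effect, the mechanics are simple; a sketch of the category-level 301 as an Apache virtual host (the domains are the question's own examples):

        <VirtualHost *:80>
            ServerName sci-fi-books.com
            ServerAlias www.sci-fi-books.com
            Redirect 301 / http://books.com/sci-fi/
        </VirtualHost>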

    Read the article

  • Visual sitemap generator

    - by rugbert
    I'm looking for something to visually create a sitemap for one of my websites. I'd like something in a tree structure, so I have a hierarchical view of my site. I have a couple of requirements, though: the ability to map password-protected pages, and (not really a requirement) the ability to integrate Google Analytics data. I'm trying an evaluation version of PowerMapper, but the version that includes analytics integration is like $300, so I'm looking for something cheaper.

    Read the article
